# Interpretable Machine Learning Methods Applied to Jet Background Subtraction in Heavy Ion Collisions

Tanner Mengel, Patrick Steffanic, Charles Hughes, Antonio Carlos Oliveira da Silva, Christine Nattrass

arXiv:2303.08275v2 | Published 2023-03-14 | [http://arxiv.org/abs/2303.08275v2](http://arxiv.org/abs/2303.08275v2)
###### Abstract
Jet measurements in heavy ion collisions can provide constraints on the properties of the quark gluon plasma, but the kinematic reach is limited by a large, fluctuating background. We present a novel application of symbolic regression to extract a functional representation of a deep neural network trained to subtract background from jets in heavy ion collisions. We show that the deep neural network is approximately the same as a method using the particle multiplicity in a jet. This demonstrates that interpretable machine learning methods can provide insight into underlying physical processes.
## I Introduction
The Quark Gluon Plasma (QGP) is a hot, dense, strongly interacting liquid of quarks and gluons that is created briefly in high energy heavy ion collisions [1, 2, 3, 4]. Measurements of jets produced by hard scatterings between partons in heavy ion collisions can be used to investigate the properties of the QGP [5]. Quantitative comparisons between jet measurements and physics models can provide further constraints on these properties [6, 7]. However, heavy ion events are dominated by a fluctuating background of soft particles not due to hard scatterings. The details of these fluctuations are sensitive to correlations from hydrodynamical flow and the shape of the single particle spectra [8], and as such are unlikely to be exactly the same in data and models. Mixed events are able to successfully describe the background in measurements of hadron-jet correlations by the STAR collaboration [9] at the Relativistic Heavy Ion Collider (RHIC). Studies of the background at the Large Hadron Collider (LHC) by the ALICE Collaboration found that the distribution of background energy density in random cones is well described by a random background with correlations due to hydrodynamical flow and Poissonian fluctuations [10]. A better understanding of this background will facilitate more precise jet measurements for comparisons between data and models.
Measurement precision and kinematic range are limited by the ability to correct for this background and its fluctuations. Background correction in jet measurements requires subtraction of contributions from soft particles within the jet, and suppression of fluctuations which have been reconstructed as combinatorial jets. At low momenta, combinatorial jets limit the kinematic reach of the measurement. Improved background subtraction methods would increase measurements' sensitivity to partonic energy loss. Measurements of jet spectra which extend to low momenta primarily use the area method [11] for background subtraction. This method was initially proposed to correct for the underlying event in \(p+p\) collisions in high pile-up conditions [11] and has also been applied to heavy ion collisions [12, 13, 14, 15].
The complexity of jet background subtraction makes it an interesting environment to apply machine learning techniques. However, application of machine learning methods to background subtraction should be handled with care since models are not able to fully reproduce background fluctuations in heavy ion collisions [8]. Nuclear physics has prioritized the continued advancement in machine learning analysis techniques with a focus on interpretable methods that are robust, provide clear uncertainty quantification, and are explainable [16]. Applications of non-interpretable machine learning methods are insufficient when models available for training may be inaccurate, when it may be necessary to understand the method to interpret the results, or when a result is needed outside of the training space.
Application of a deep neural network, i.e. a neural network with multiple hidden layers, to jet background subtraction in heavy ion collisions has demonstrated significant improvements compared to the area method, particularly at low jet momenta [17, 18]. Deep neural networks are susceptible to model bias because their predictions risk being unreliable outside the domain of their training space. These methods may break down when they are extrapolated beyond this space, and due to their opaque nature, offer little indication of where and why this breakdown occurs. In addition, one cannot validate the technique against data because we do not know the true jet momenta in data.
Increased performance of machine learning methods over traditional methods is an indication that there is information accessible to the machine learning that accounts for this improvement. We present an interpretable machine learning technique that allows us to understand why a deep neural network improves the jet momentum resolution in heavy ion collisions. We empirically derive an alternate method based on the background described in [19, 10], which we call the multiplicity method. We compare the widths of the fluctuations of the jet momenta for this method to the area and neural network methods and estimate the impact of the methods on the kinematic range. We apply symbolic regression to determine a functional form describing the mapping learned by the neural network, which was trained using TennGen [8] for the background and PYTHIA [20] for the signal. We compare this functional description of the neural network to the form of the multiplicity method.
## II Method
### Simulation
TennGen [21; 22] generates heavy ion collisions with \(\pi^{\pm}\), K\({}^{\pm}\), p and \(\bar{p}\) hadrons with yields [23], momentum distributions [24; 25], and azimuthal anisotropies [26; 27] matched to published data. TennGen was updated to simulate collisions at center-of-mass energies per nucleon of \(\sqrt{s_{\rm NN}}=200\) GeV and \(\sqrt{s_{\rm NN}}=2.76\) TeV, to include multiplicity fluctuations, and to improve computational efficiency. Proton-proton collisions at \(\sqrt{s}=200\) GeV were simulated with the PYTHIA 8.307 [20] Monash 2013 tune [28] in 25 \(p_{T}^{hard}\) bins starting at 5 GeV, with 1 million \(p\)+\(p\) events in each bin. Only final state charged particles from PYTHIA are mixed with a TennGen background event. Charged particles from both PYTHIA and TennGen are required to have a minimum \(p_{T}\) of 150 MeV and be within pseudo-rapidity \(|\eta|<0.9\).
Jets are clustered using the anti-\(k_{T}\) algorithm with FastJet 3.4.0 [29] with jet resolution parameters \(R=0.2\), 0.4, and 0.6. To determine the true momentum, jets are reconstructed separately in both PYTHIA and the combined event. Jets in the combined PYTHIA and TennGen event are geometrically matched to a PYTHIA jet if \(\Delta R=\sqrt{\Delta\eta^{2}+\Delta\phi^{2}}<0.1\), where \(\Delta\eta\) and \(\Delta\phi\) are the differences in \(\eta\) and \(\phi\) between the jets, and there is a bijective match. Reconstructed jets are required to have \(p_{T}\)\(>\) 5 GeV and be within pseudo-rapidity \(|\eta_{jet}|<0.9-R\). The momentum of the PYTHIA jet is taken as the truth momentum, \(p_{T,jet}^{Truth}\equiv p_{T,jet}^{PYTHIA}\).
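The bijective-match requirement can be made concrete with a short sketch; this is our illustrative reading of the criterion above, not the analysis code, and the dict-based jet representation is assumed for the example:

```
import math

def delta_r(j1, j2):
    """Geometric distance in (eta, phi); jets are dicts with 'eta' and 'phi'."""
    deta = j1['eta'] - j2['eta']
    dphi = math.remainder(j1['phi'] - j2['phi'], 2 * math.pi)  # wrap to [-pi, pi]
    return math.hypot(deta, dphi)

def match_jets(pythia_jets, combined_jets, max_dr=0.1):
    """Bijective matching: keep a pair only if each jet is the other's
    nearest neighbour and they lie within max_dr of each other."""
    pairs = []
    for c in combined_jets:
        p = min(pythia_jets, key=lambda j: delta_r(c, j))
        if delta_r(c, p) < max_dr and \
                min(combined_jets, key=lambda j: delta_r(p, j)) is c:
            pairs.append((p, c))  # (truth jet, reconstructed jet)
    return pairs
```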
The improvement in jet momentum resolution extends the kinematic range of the measurement to lower jet momenta. Increased precision in jet momentum allows for combinatorial jets to be reconstructed in jet momentum bins closer to zero. The lower threshold for unfolding is typically set to be between 2-5 times the width of the jet momentum resolution to suppress effects of combinatorial jets on the unfolded results [12; 18]. The suppression of combinatorial jets for each method is demonstrated by taking the ratio of unmatched reconstructed jet spectra including combinatorial background to matched reconstructed PYTHIA jet spectra.
### Area and multiplicity methods
For area-based background subtraction, the jet area [30] is estimated through the use of "ghost" particles, and jets are reconstructed using the \(k_{T}\) jet finder [31]. The corrected jet momentum is then estimated as
\[p_{T,Jet}^{Corr,A}=p_{T,Jet}^{tot}-\rho A, \tag{1}\]
where \(A\) is the jet area, \(\rho\) is the background momentum density per unit area, and \(p_{T,Jet}^{tot}\) is the total momentum in the jet. The \(\rho\) in an event is approximated as the median \(p_{T,Jet}^{tot}/A\) for \(k_{T}\) jets because \(k_{T}\) jets are dominated by background.
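A minimal sketch of this correction, assuming each event provides its \(k_{T}\) jets as (momentum, area) pairs; the numbers below are toy values, not from this analysis:

```
import statistics

def estimate_rho(kt_jets):
    """Median pT/A over the event's kT jets; kT jets are dominated by
    background, so the median is robust against the few true hard jets."""
    return statistics.median(pt / area for pt, area in kt_jets)

def correct_jet_pt_area(pt_tot, area, rho):
    """Eq. (1): subtract the background density rho times the jet area."""
    return pt_tot - rho * area

# Toy numbers only: (total pT, area) pairs for the kT jets of one event.
kt_jets = [(12.1, 0.50), (11.4, 0.48), (25.0, 0.52), (13.0, 0.51)]
rho = estimate_rho(kt_jets)
print(correct_jet_pt_area(pt_tot=40.0, area=0.50, rho=rho))
```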
To a good approximation, the width \(\sigma_{\delta p_{T}}\) of the momentum residual \(\delta p_{T}=p_{T}^{Corr}-p_{T}^{Truth}\) with the area method is given by
\[\sigma_{\delta p_{T}}=\sqrt{N\sigma_{p_{T}}^{2}+(N+2N^{2}\sum_{n=1}^{\infty}v_{n}^{2})\langle p_{T}\rangle^{2}}, \tag{2}\]
where \(N\) is the number of background particles in the jet, \(\sigma_{p_{T}}\) is the standard deviation of the single track momentum distribution, \(v_{n}\) are the coefficients of the azimuthal anisotropies of the single particle distributions, and \(\langle p_{T}\rangle\) is the average momentum of background particles [10]. This is derived by assuming each of the \(N\) particles is drawn from a single track momentum distribution which is approximately a Gamma distribution, giving rise to the first term [19]. The second term is from Poissonian fluctuations in the number of background particles and the third term is from fluctuations in the number of particles due to hydrodynamical flow. Deviations of the single track momentum distribution from a Gamma distribution and momentum dependence of the \(v_{n}\) lead to slightly larger widths [8].
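Since eq. 2 is a closed-form expression, the relative size of its terms can be evaluated directly; the following sketch uses illustrative placeholder values rather than inputs from this analysis:

```
import math

def sigma_delta_pt(N, sigma_pt, mean_pt, vn):
    """Eq. (2): width of the momentum residual for the area method.
    First term: single-particle momentum fluctuations.
    Second term: Poissonian fluctuations in particle number.
    Third term: number fluctuations from hydrodynamical flow."""
    flow = 2 * N**2 * sum(v**2 for v in vn)
    return math.sqrt(N * sigma_pt**2 + (N + flow) * mean_pt**2)

# Illustrative values only: 30 background particles in the jet,
# <pT> ~ 0.6 GeV, sigma_pT ~ 0.5 GeV, and two flow coefficients.
print(sigma_delta_pt(N=30, sigma_pt=0.5, mean_pt=0.6, vn=[0.1, 0.05]))
```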
The area method is usually used instead of iterative background subtraction methods [32; 33; 34] for measurements of jets at lower momenta. Iterative methods may suppress the fluctuations described in eq. 2 by estimating the local background and suppress combinatorial jets by requiring high momentum or energy constituents. At low momenta, these requirements may impose a bias on the surviving jets. Fluctuations and the contribution from combinatorial jets are generally higher with the area method, but with less bias.
We propose a multiplicity-based method as an alternative to the area method:
\[p_{T,Jet}^{Corr,N}=p_{T,Jet}^{tot}-\rho_{Mult}(N_{tot}-N_{signal}), \tag{3}\]
where \(N_{tot}\) is the total number of particles in the jet, \(N_{signal}\) is the number of particles in the signal, and \(N=N_{tot}-N_{signal}\). This leverages the fact that the natural variable in the width is the number of background particles, largely eliminating the second and third terms in eq. 2. The \(\rho_{Mult}\) in an event is the mean transverse momentum per background particle, which is approximated as the median \(p_{T,Jet}^{tot}/N_{tot}\) for \(k_{T}\) jets. \(N_{signal}\) is roughly described by models [35] and therefore can be estimated. This method would introduce an additional systematic uncertainty, which can be estimated using measurements of fragmentation functions to be around one additional particle in \(N_{signal}\) [36]. The width of the momentum residual in eq. 2 is then generally smaller for the multiplicity method than the area method for 0-80% central collisions.
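A companion sketch of eq. 3, mirroring the area-method sketch above; again the numbers are toy values, and in practice \(N_{signal}\) would be estimated from models [35]:

```
import statistics

def estimate_rho_mult(kt_jets):
    """Median pT/N over the event's kT jets: the mean background
    transverse momentum per particle."""
    return statistics.median(pt / n for pt, n in kt_jets)

def correct_jet_pt_mult(pt_tot, n_tot, n_signal, rho_mult):
    """Eq. (3): subtract rho_mult for each estimated background particle."""
    return pt_tot - rho_mult * (n_tot - n_signal)

# Toy numbers only: (total pT, constituent count) pairs for kT jets.
kt_jets = [(12.1, 21), (11.4, 19), (25.0, 24), (13.0, 22)]
rho_mult = estimate_rho_mult(kt_jets)
print(correct_jet_pt_mult(pt_tot=40.0, n_tot=25, n_signal=6, rho_mult=rho_mult))
```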
### Machine learning methods
A sufficiently complex neural network can interpolate any function, at the cost of transparency to the user. This poses an obstacle to the application of deep neural networks in physics, where understanding predictions and identifying their potential biases is crucial. Our approach to addressing this challenge is symbolic regression, one example of interpretable machine learning, which we use to extract mathematical expressions from trained deep neural networks. The resulting equations provide an effective description of the neural network's mapping between the input and output. By constraining the types of operations available, we can impose complexity and smoothness requirements.
We train a deep neural network to predict the corrected jet momentum from the following input features: the uncorrected jet momentum, jet area, jet angularity, number of jet constituents, and seven leading constituent momenta. The architecture and input features of the network are motivated by previous application of neural networks to proton-proton jets with a thermal background [17]. The deep neural network is implemented with TensorFlow 2.10.0 [37]. The deep neural network has three hidden layers consisting of 100, 100 and 50 nodes, each activated by a rectified linear unit (ReLU) [38] function. The model is optimized with ADAM [38] and the loss function is a modified mean squared error
\[\mathcal{L}=\langle||p_{T,Jet}^{Truth}-p_{T,Jet}^{DNN}||^{2}\rangle+\lambda\sum_{l=1}^{L}||\mathbf{W}_{l}||^{2}, \tag{4}\]
where \(p_{T,Jet}^{DNN}\) is the predicted jet momentum, \(p_{T,Jet}^{Truth}\) is the truth momentum, the last term is an \(L^{2}\) regularization with \(\lambda=0.001\), \(\mathbf{W}_{l}\) is the weight matrix of layer \(l\), and the sum is over the \(L\) layers. The regularization term penalizes redundancy and encourages sparsity in the final trained network. The network is trained using 50% of the simulated jets while the remaining 50% are reserved for testing.
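A minimal Keras sketch consistent with the stated architecture; this is a reconstruction from the description above, not the authors' training code, and note that kernel_regularizer adds the \(L^{2}\) penalty to the loss:

```
import tensorflow as tf

# 11 input features: uncorrected jet pT, area, angularity, number of
# constituents, and the momenta of the 7 leading constituents.
l2 = tf.keras.regularizers.l2(0.001)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation='relu', kernel_regularizer=l2,
                          input_shape=(11,)),
    tf.keras.layers.Dense(100, activation='relu', kernel_regularizer=l2),
    tf.keras.layers.Dense(50, activation='relu', kernel_regularizer=l2),
    tf.keras.layers.Dense(1),  # predicted corrected jet pT
])
model.compile(optimizer='adam', loss='mse')  # L2 penalty is added to the MSE
# model.fit(x_train, y_train)  # 50/50 train/test split of simulated jets
```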
Once the neural network is trained, it represents an approximate mapping between the input jet features and the truth jet momentum. We apply a genetic algorithm to symbolically regress a functional form which describes this mapping using the PySR 0.11.11 [39] package. The PySR model samples the phase space of analytic expressions defined by operators, input features, and constants, minimizing the loss through genetic programming. The input features are comparable to those of the neural network, and the pool of operations comprises arithmetic, exponential, and trigonometric functions, as well as exponentiation. The model mutates over 50 generations of 20 different population samples, with each population containing 33 individuals. The loss function for the PySR model
\[\mathcal{L}=\langle||p_{T,Jet}^{DNN}-p_{T,Jet}^{PySR}||^{2}\rangle, \tag{5}\]
is the mean squared error between the prediction from PySR \(p_{T,Jet}^{PySR}\) and the corrected jet momentum predicted by the neural network. PySR evaluates expressions based on a score \(S\) that rewards minimizing the loss function \(\mathcal{L}\) and penalizes equation complexity \(C\)
\[S=-\frac{\delta\ln\mathcal{L}}{\delta C}, \tag{6}\]
where the equation complexity \(C\) is defined as the total number of operations, variables, and constants used in an equation [40]. The simulated jets, designated for testing, are used to sufficiently sample the neural network outputs throughout the possible input feature space. The highest scoring PySR expression is a functional representation of the mapping from input jet features to corrected jet momentum learned by the deep neural network.
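A PySR configuration along these lines might look as follows; this is a sketch, and the parameter names follow recent PySR releases rather than the exact 0.11.11 API:

```
from pysr import PySRRegressor

# Sketch only: 50 generations over 20 populations of 33 individuals,
# with arithmetic, exponentiation, exponential and trigonometric operators.
model = PySRRegressor(
    niterations=50,
    populations=20,
    population_size=33,       # called npop in some older releases
    binary_operators=["+", "-", "*", "/", "pow"],
    unary_operators=["exp", "sin", "cos"],
    model_selection="best",   # keep the highest-scoring expression (eq. 6)
)
# X: the same features fed to the network; y_dnn: the network's predictions,
# so the regression targets the DNN mapping itself (eq. 5), not the truth.
# model.fit(X, y_dnn)
# print(model.sympy())
```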
## III Results
Figure 1 shows the width of the jet momentum residual distributions as a function of jet momentum for each background subtraction method in both Au+Au collisions at \(\sqrt{s_{\rm NN}}\) = 200 GeV and Pb+Pb collisions at \(\sqrt{s_{\rm NN}}\) = 2.76 TeV. The \(\sigma_{\delta p_{T}}\) increases with increasing jet resolution parameter, as expected because there is more background when the jet is larger. The \(\sigma_{\delta p_{T}}\) also increases with \(\sqrt{s_{\rm NN}}\) because the particle multiplicity increases. As seen in [17], the deep neural network reconstructs the momentum considerably more accurately than the area method. The performance of the multiplicity method is comparable to that of the deep neural network in Au+Au collisions at small jet resolution parameters. The ability of each method to sufficiently suppress contributions from combinatorial jets at low \(p_{T}\) is demonstrated with the ratios of the reconstructed jet spectra to the true jet spectra, shown in Fig. 2. The contributions from combinatorial jets decrease with increasing jet momentum for all methods, with all jet resolution parameters, and for both collision energies. The ratios for the deep neural network and multiplicity methods are both lower than those of the area method. For all jet resolution parameters and both collision energies, the symbolic regression found that the best description of the deep neural network has the functional form
\[p_{T,Jet}^{Corr,PySR}=p_{T,Jet}^{tot}-C_{1}(N_{tot}-C_{2}), \tag{7}\]
where the two parameters, \(C_{1}\) and \(C_{2}\), are optimization constants defined by PySR. These parameters are plotted in Fig. 3 and compared to the average value of the parameters used in the multiplicity method. We find that the symbolic regression parameters \(C_{1}\) and \(C_{2}\) are comparable to the averages of those for the multiplicity method, \(\langle\rho_{Mult}\rangle\) and \(\langle N_{signal}\rangle\), respectively, with greater deviations at LHC energies and larger \(R\). This indicates that the deep neural network is using a relationship similar to the multiplicity method to predict jet momenta.
Figure 3: PySR optimization constants compared to average value of multiplicity method parameters versus jet resolution parameter for Au+Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV and Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=2.76\) TeV for jet resolution parameters \(R=0.2,0.4\), and \(0.6\).
Figure 2: Ratio of the reconstructed jet spectrum over the truth spectrum for Au+Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV and Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=2.76\) TeV for jet resolution parameters \(R=0.2,0.4\), and \(0.6\). Low momentum points for the area method for LHC energies at \(R=0.4\) and \(R=0.6\) are off scale.
Figure 1: Comparisons of jet \(p_{T}\) residual width for each background subtraction method as a function of reconstructed jet momentum for Au+Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV and Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=2.76\) TeV for jet resolution parameters \(R=0.2,0.4\), and \(0.6\).
This approach to machine learning enables use of domain knowledge. The optimization parameters from PySR would otherwise not have a clear physical interpretation. Since these parameters are understood in the multiplicity method, it is possible to assign a physically motivated uncertainty to them. Assumptions inherent in the method can then be understood.
## IV Conclusions
We have shown that interpretable machine learning methods can learn an underlying physical correlation, such as the multiplicity dependence for jet background, that was previously overlooked. We previously showed that when we used a random forest to classify jets as either combinatorial or signal, the optimal selection was on the leading hadron momentum [41], already used as a standard technique [12; 13; 14; 15]. We argue that applying machine learning to scientific problems requires methods that are interpretable. The definition of interpretability is often ambiguous or under-specified, but [42] presents several definitions of interpretability to guide our selection of machine learning methods. We argue that for a machine learning method to be interpretable (1) it should be applicable equivalently to data and simulation, (2) its output should be understandable outside the range of the training set, and (3) a measurement uncertainty should be calculable. We argue that an uncertainty on the method is not a proxy for a measurement uncertainty. These stricter criteria are consistent with those outlined in [16]. Symbolic regression satisfies these requirements because the output is a formula. The convergence of the empirically-based multiplicity method and the formula produced through symbolic regression is a clear indication of the usefulness of an interpretable method. Machine learning should be used to gain knowledge about the underlying physical processes that drive the relationships in our data. We must interpret the details of any method in terms of these underlying physical processes.
###### Acknowledgements.
We are grateful to Friederike Bock, Hannah Bossi, Adrian Del Maestro, Jamie Nagle, Ken Read, and Austin Schmier for useful discussions and feedback on the manuscript. This work was supported in part by funding from the Division of Nuclear Physics of the U.S. Department of Energy under Grant No. DE-FG02-96ER40982. This work was performed on the computational resources at the Infrastructure for Scientific Applications and Advanced Computing (ISAAC) supported by the University of Tennessee.
---

# NeuroQL: A Neuro-Symbolic Language and Dataset for Inter-Subjective Reasoning

Nick Papoulias

arXiv:2303.07146v1 | Published 2023-03-13 | [http://arxiv.org/abs/2303.07146v1](http://arxiv.org/abs/2303.07146v1)
###### Abstract
We present a new AI task and baseline solution for Inter-Subjective Reasoning. We define inter-subjective information to be a mixture of objective and subjective information possibly shared by different parties. Examples may include commodities and their objective properties as reported by IR (Information Retrieval) systems, which need to be cross-referenced with subjective user reviews from an online forum. For an AI system to successfully reason about both, it needs to be able to combine symbolic reasoning over objective facts with the shared consensus found in subjective user reviews. To this end we introduce the NeuroQL dataset and DSL (Domain-specific Language) as a baseline solution for this problem. NeuroQL is a neuro-symbolic language that extends logical unification with neural primitives for extraction and retrieval. It can function as a target for automatic translation of inter-subjective questions (posed in natural language) into the neuro-symbolic code that can answer them.
## 1 Introduction
Our digital world comprises information sources with varying degrees of objectivity and subjectivity. Structured information stored in databases or other IR systems (such as the physical properties and prices of products) is presented as objective facts to potential customers, while their accompanying free-form reviews fall by definition in the subjective part of this spectrum. Yet, as users of these systems we are interested in questions involving both aspects. These are questions where both the authoritative structured data of a product and the unstructured opinions of the general public are pertinent. Consider for example the search query: _"How is the bass for headphones at around 30 dollars having minimum 14K reviews that is not discontinued?"_ which we depict in Fig. 1:
Figure 1: An inter-subjective question _"How is the bass for headphones at around 30 dollars having minimum 14K reviews that is not discontinued?"_. Depicting both the subjective and the objective components of this question (_i.e. opinion, title, price, reviews, manufacturing etc._).

We call this type of question _inter-subjective_, given that it involves a mixture of subjective and objective information that may be shared by different parties. Currently, answers to such queries are treated with a semi-manual two-stage process by information systems. In the first step the user needs to pinpoint a single product of interest using a query involving only its objective properties and description (e.g. a query involving the title, price, number of reviews _etc._ as in the example above). Then in the second step, the user needs to scan the reviews of this product either manually or with a similarity search in order to find relevant subjective opinions regarding the quality of the product (e.g. the quality of the bass in the above example). With the advent of deep-learning [24; 17] for natural-language processing [48] and more specifically deep-learning Q&A (Question & Answering) models [46], this second step can now be supplemented with neural retrieval and neural comprehension of reviews.
In this work we investigate the possibility of automating and merging these two stages that involve both symbolic reasoning (over structured factual information) and neuronal reasoning (over unstructured subjective sources). The problem of interfacing symbolic and neuronal reasoning, is a known open problem in AI literature involving neuro-symbolic systems [8; 32]. Our contribution is to propose a new AI task and baseline solution for _inter-subjective_ queries and reasoning, as the one we saw above.
To this end we introduce the NeuroQL dataset and DSL (Domain-specific Language [14; 34]) as a baseline solution for this problem. Our dataset extends previous work on Q&A systems focused on metadata [31] and subjective [4] information to include inter-subjective questions and their translation into neuro-symbolic queries. Our queries are expressed in NeuroQL which is a neuro-symbolic language that we embed inside the Python [47] runtime environment. This embedded DSL implements and extends logical unification [41; 40] with neural primitives for extraction and retrieval. NeuroQL can function as a target for automatic translation of inter-subjective questions (posed in natural language) into the neuro-symbolic code that can answer them, as we show in Figure 2:
Figure 2: Equivalence between the inter-subjective question _“How is the bass for headphones at around 30 dollars having minimum 14K reviews that is not discontinued?”_ and its translation into a neuro-symbolic query in NeuroQL. Starting from the top we depict both the subjective and the objective components of the question and their corresponding categories (_i.e. opinion, title, price, reviews, manufacturing etc._). On the bottom we see the corresponding _search_ expression in NeuroQL with all sub-queries highlighted to match their equivalent category in natural language.
In this figure we show our goals and hypotheses regarding NeuroQL. Firstly, we hypothesize that:
**(_H1_): _It is possible to translate inter-subjective questions (as the one shown on top of Figure 2), comprising of different subjective and objective components, into neuro-symbolic queries expressed in NeuroQL._**
To test (_H1_) we experiment with a neural translation solution fine-tuned to this domain, aiming to distinguish not only between objective and subjective components, but also between different kinds of sub-query categories (_opinion, title, price, reviews, manufacturing etc._). These sub-query categories have different translations in NeuroQL (as shown in the bottom part of Figure 2) which are highlighted to match their equivalent categories in natural language.
Furthermore, we hypothesize that:
**(_H2_): _Symbolic reasoning through unification (covering structured objective facts), similar to the one found in Prolog [9, 11] and Datalog [6] systems, can be extended with neuronal reasoning (for unstructured subjective data) in a disciplined manner, producing satisfactory answers for inter-subjective queries._**
Meaning that the neuro-symbolic synthesis can be expressed in a concise syntactic and semantic form for NeuroQL users, while still being able to answer inter-subjective queries in a satisfactory way. To test (_H2_) we measure _recall, em and f1 scores_ [46] of translated NeuroQL queries against previously unseen inter-subjective questions.
The rest of this paper is organized as follows: Section 2 presents the results of our effort including: a hands-on walk-through of NeuroQL (sub-section 2.1), a description of the NeuroQL inference pipeline (sub-section 2.2), a description of the NeuroQL dataset (sub-section 2.3), our translation, recall, em and f1 experiments (sub-section 2.4) as well as a further review of related work (sub-section 2.5). Section 3 discusses the implications of our contribution, possible threats to validity and future perspectives. Finally, Section 4 concludes the paper with pertinent methodological comments.
All data and examples from this paper are available at: [https://orgdlabs.com/neuroQL](https://orgdlabs.com/neuroQL).
## 2 Results
### NeuroQL by Example
You can load the NeuroQL language inside a Python environment, such as the one we provide for our readers at [https://orgdlabs.com/neuroql_eg](https://orgdlabs.com/neuroql_eg), by running the following import statement (shown in line 1 of listing 1):
```
1 from NeuroQL import *
2 NeuroQL.load('asin_key_properties.csv', 'asin_reviews.csv')
```
Listing 1: Importing NeuroQL into Python and loading product properties and reviews
Then, as shown in line 2 of listing 1, you can load data (_e.g._ from csv files) to populate your knowledge base. In this case we load product properties and reviews, which have the following general format:
```
1  ...
2  B00001P4ZH,title,koss porta pro headphones with case
3  B00001P4ZH,price,39.36
4  B00001P4ZH,brand,koss
5  B00001P4ZH,stars,4.7
6  B00001P4ZH,total_reviews,14549
7  B00001P4ZH,item_weight,2.79
8  B00001P4ZH,item_weight_units,ounces
9  B00001P4ZH,item_model_number,6303157
10 B00001P4ZH,is_discontinued_by_manufacturer,no
11 ...
```
Listing 2: Product properties excerpt
This data consists of a product identifier (like _B00001P4ZH_), which is called an _asin_ (Amazon Standard Identification Number) in the amazon.com product catalog [2]. The identifier is followed by the name of a property (such as _title_, or _price_) together with that property's value (_e.g._ _39.36_ in the case of the _price_ property on line 3 above). NeuroQL is able to load arbitrarily nested n-tuples of any length, but for the sake of simplicity we use here a flat representation of the form _object.property = value_, familiar to programmers from OO paradigms as well as engineers working with triple-based knowledge systems _(subject, predicate, object)_ [3]. If we need to load new items explicitly (instead of from a pre-saved format) we can invoke the NeuroQL _fact_ primitive as follows: _fact(('B00001P4ZH', 'price', 39.36))_. Alternatively we can declare the id: _B00001P4ZH = ids()_ and simply state _B00001P4ZH.price == 39.36_.
After having loaded the data, we can perform our first simple query using NeuroQL as follows:
```
1 B00001P4ZH = ids()
2 _property, _value = vars()
3 search(B00001P4ZH._property == _value)
```
Listing 3: Retrieving all available product properties by id
Here in listing 3 we define an id of interest on line 1 and declare our variables of interest on line 2. In this case we would like to retrieve all available properties and their values regarding a particular id. To do so, on line 3 we invoke the _search_ NeuroQL primitive, which takes as input a NeuroQL expression, and returns all possible combinations of values from our knowledge base that satisfy our input expression.
Thus our search results always consist of a list of mappings (such as dictionaries) that bind our unknown variables (seen on listing 4) to the values that satisfy our expression:
```
{'?property': 'stars', '?value': 4.7},
{'?property': 'item_weight', '?value': 2.79},
{'?property': 'total_reviews', '?value': 14549},
{'?property': 'title', '?value': 'koss porta pro...'},
{'?property': 'price', '?value': 39.36},
{'?property': 'review', '?value': '882b1e2745a47...'},
{'?property': 'brand', '?value': 'koss'},
{'?property': 'is_discontinued_by_manufacturer', '?value': 'no'},
{'?property': 'item_model_number', '?value': 6303157},
{'?property': 'review', '?value': 'ce76793f036494...'},
{'?property': 'item_weight_units', '?value': 'ounces'},
{'?property': 'review', '?value': 'd040f2713caa2...'}
```
Listing 4: Product properties retrieved by id
This type of binding is achieved through symbolic unification [41, 40], which we will be extending later on with a set of neuronal primitives.
Continuing with our examples, NeuroQL also allows for the creation of more complex queries from simpler ones, with additional means of combination as shown in listing 5:
```
1 _asin, _total_reviews, _price, _title, \
2 _review, _review_text, _answers = vars()
3
4 search(
5     bm25_match(_asin.title == _title, 'headphones', 80),
6     _asin.price == _price,
7     op_filter(lambda e: abs(e['?price'] - 30) < 10),
8 )
```
Listing 5: Searching for headphones with a price around $30
Here on lines 1 and 2 we define a set of variables that we want to find the bindings of (for this and subsequent examples). Then on lines 4 through 8 we are asking for the ids of headphones whose price is around 30 dollars. To express this query in NeuroQL we form a conjunction of expressions (by passing each expression as a separate argument to our _search_ primitive). The first
such expression performs a _bm25_[39] matching against all product titles, returning the top 80 results (line 5). Similarly, on line 6 we are asking for the _price_ variable to be bound to the price of the products that we have found thus far. Finally on line 7, we are filtering our results to only include products whose price is within 10 dollars of our target price.
Notice here that the id (_i.e._ the _asin_), the product _title_, as well as the product _price_, are all variables. These are the variables which we are asking NeuroQL to bind for us. Each binding will correspond to specific values from our knowledge base that jointly satisfy our expressions, getting us the following results (listing 6):
```
...
{'?asin': 'B00001P4ZH',
 '?title': 'koss porta pro headphones with case',
 '?price': 39.36},
...
{'?asin': 'B000AJIF4E',
 '?title': 'sony mdr7506 professional large diaphragm headphone',
 '?price': 29.99}
...
```
Listing 6: Output example for a simple query (showing 2 sample results out of a total of 10)
Building towards our complete example from Figures 1 and 2 we can now ask (listing 7) for _headphones at around 30 dollars, having minimum 14K reviews, that are not discontinued by the manufacturer_:
```
1 search(
2     bm25_match(_asin.title == _title, 'headphones', 80),
3     _asin.price == _price,
4     op_filter(lambda e: abs(e['?price'] - 30) < 10),
5     _asin.total_reviews == _total_reviews,
6     op_filter(lambda e: e['?total_reviews'] >= 14000),
7     _asin.is_discontinued_by_manufacturer == 'no',
8 )
```
Listing 7: Refining our search to include total number of reviews and manufacturing details
This is achieved by extending our previous query (of listing 5) with lines 5 through 7 of listing 7: on line 5 we describe a _total_reviews_ variable to be bound (for our already retrieved products); on line 6 we filter the results further depending on the total number of reviews; and finally, on line 7 we add the additional constraint that the products we are looking for should not have been discontinued. The results from this extended query are seen in listing 8:
```
{'?asin': 'B00001P4ZH',
 '?title': 'koss porta pro headphones with case',
 '?price': 39.36,
 '?total_reviews': 14549},
{'?asin': 'B0007XJSQC',
 '?title': 'sennheiser hd201 lightweight over-ear ... headphones',
 '?price': 24.95,
 '?total_reviews': 14980},
{'?asin': 'B000AJIF4E',
 '?title': 'sony mdr7506 professional large diaphragm headphone',
 '?price': 29.99,
 '?total_reviews': 22071}
```
Listing 8: Output example for a refined query (showing all 3 results)
#### 2.1.1 Neuro-Symbolic Composition
Up to this point we have seen how NeuroQL can handle the objective components of natural language questions. Now we will extend our previous example (listing 7) to handle the entirety of an inter-subjective question such as the one we described in Figures 1 and 2. Namely, _"How is the bass for headphones at around 30 dollars having minimum 14K reviews that is not discontinued?"_. To do so, we start by extending our previous example on line 8 of listing 9, where we bind a _review_ variable to the reviews of the products we have found so far.
Then on line 9 we invoke the neural primitive neural_match, taking the following algorithmic steps to extend classical unification:
1. neural_match will receive the sub-query _review.text == _review_text as its first argument. It will use this sub-query to create a set of (id, document) pairs (in this case (_review, _review_text) pairs), binding the variable _review_text in the process.
2. It will then try to match the bindings of _review_text against the subjective component of the initial query, in this case: 'how is the bass?', which is the second argument we passed to our neural_match primitive. With default settings this match will be performed by a DPR (Dense Passage Retriever) [21], using question & context encoders trained with the Natural Questions dataset [25, 23].
3. Finally, neural_match will filter all query bindings thus far (up to line 11 of listing 9) to include only the top 5 results of our DPR _review_text match (using the third argument on line 10).
```
1  search(
2      bm25_match(_asin.title == _title, 'headphones', 80),
3      _asin.price == _price,
4      op_filter(lambda e: abs(e['?price'] - 30) < 10),
5      _asin.total_reviews == _total_reviews,
6      op_filter(lambda e: e['?total_reviews'] >= 14000),
7      _asin.is_discontinued_by_manufacturer == 'no',
8      _asin.review == _review,
9      neural_match(
10         _review.text == _review_text, 'how is the bass?', 5
11     ),
12     neural_extract(
13         _answers, _review.text == _review_text, 'how is the bass?', 2
14     ) )
```
Listing 9: Creating an inter-subjective query by filtering and scanning reviews to answer a particular question
Thus far we have the top 5 reviews that match our question for the exact subset of products satisfying our objective constraints (up to line 11). We can now proceed to extract relevant opinions for our inter-subjective question using the neural_extract primitive (lines 12 to 14 of listing 9), as follows:
1. neural_extract will first receive the name of a new variable to bind (in this case the variable _answers) with the extracted text that matches its targeted question.
2. It will then use the sub-query _review.text == _review_text passed as a second argument, to create a new set of (id, document) pairs matching the sub-query (as we did before).
3. Then with the third argument (the subjective sub-component 'how is the bass?') it will try to extract the answer to this question from the _review_text documents. It will do so using a Reader model, such as MiniLM [49] initially trained on the SQuAD 2.0 dataset [37] and further fine-tuned on the NeuroQL training set to improve its performance (as we describe in Section 2.4).
4. Finally, neural_extract will filter all query bindings to include only the top 2 results extracted by the Reader (using the fourth argument on line 13).
The final results of our inter-subjective query are seen in listing 10 with a sample of the final variable bindings. These now include both objective (_title, price, total_reviews, manufacturing_) and subjective (_review, answers_) components that jointly satisfied our query's constraints. As an example the most pertinent opinion for product B000AJIF4E (satisfying our constraints regarding _title, price,
total_reviews, manufacturing etc.)_ is that the 'Bass is amazing', while for product B00001P4ZH it is that the 'Bass is weak as expected'.
```
{'?asin': 'B000AJIF4E',
 '?title': 'sony mdr7506 professional large diaphragm headphone',
 '?price': 29.99,
 '?total_reviews': 22071,
 '?review': '5e96b0052898fe67cf622888fc5af69',
 ...
 '?answers': {'answer': 'Bass is amazing', ...}}
...
{'?asin': 'B00001P4ZH',
 '?title': 'koss porta pro headphones with case',
 '?price': 39.36,
 '?total_reviews': 14549,
 '?review': 'd040f2713caa2aff0ce95affb40e12c2',
 ...
 '?answers': {'answer': 'Bass is weak as expected', ...}}
```
Listing 10: Output example for an inter-subjective query
We complete our examples by showing how to define and use inference rules in NeuroQL 1. Lines 1 through 6 of listing 11 define an inference rule using the primitive rule. The first argument on line 1 is the sub-query (_asin.well_ranked == True) that defines the conclusion the rule can infer _if_ the rest of the arguments are satisfied; the rest of the arguments are sub-queries themselves, forming a conjunction.
Footnote 1: A formal model of the syntax and semantics of inference rules in NeuroQL will be part of a follow up paper.
Whenever the head of this rule unifies with a sub-query that we are currently looking to bind (like the sub-query on line 11 of our search primitive) the rule will be tested. This means that there will be a nested search that will try to satisfy the rule's conjunction given the current bindings and return new bindings if needed.
```
1  rule(_asin.well_ranked == True,
2      _asin.total_reviews == _total_reviews,
3      op_filter(lambda e: e['?total_reviews'] >= 20000),
4      _asin.stars == _stars,
5      op_filter(lambda e: e['?stars'] >= 4.0)
6  )
7  search(
8      bm25_match(_asin.title == _title, 'headphones', 80),
9      _asin.price == _price,
10     op_filter(lambda e: abs(e['?price'] - 30) < 10),
11     _asin.well_ranked == True,
12     _asin.is_discontinued_by_manufacturer == 'no',
13     _asin.review == _review,
14     neural_match(
15         _review.text == _review_text, 'how is the bass?', 5
16     ),
17     neural_extract(
18         _answers, _review.text == _review_text, 'how is the bass?', 2
19     ) )
```
Listing 11: Incorporating a rule to infer well ranked products during search
In plain English this rule states that a product is well ranked if it has at least 20K reviews and a rating of at least 4.0 stars. By using this rule on line 11 of listing 11, instead of our previous less strict criteria (on lines 5 and 6 of listing 9), we get a more constrained result (seen in listing 12).
```
...
{'?asin': 'B000AJIF4E',
 '?title': 'sony mdr7506 professional large diaphragm headphone',
 '?price': 29.99,
 '?total_reviews': 22071,
 '?review': '5e96b0052898fe667cf622888fc5af69',
 ...
 '?answers': {'answer': 'Bass is amazing', ...}}
```
Listing 12: Inter-subjective query results using rules for well ranked products during inference
### The NeuroQL Architecture
Our two main hypotheses (as detailed in Section 1) are that (_H1_) it is possible to automatically translate inter-subjective questions from natural language into NeuroQL and (_H2_) that NeuroQL can concisely extend unification with neural reasoning to produce satisfactory answers for inter-subjective questions. To test (_H1_) and (_H2_) we devised the following architecture (seen in Figure 3) that serves as a baseline solution for our inter-subjective Q&A task. Starting at the top left corner of Figure 3, a user submits an inter-subjective question in natural language. During inference this question is passed to a translation model (in our case a fine-tuned CodeT5 model [51]) that attempts to translate the question into a NeuroQL query. The translation model is trained for this downstream task using the NeuroQL question/query dataset (detailed in Sections 2.3 and 2.4). Subsequently, our architecture attempts to execute this query, which (as we saw in Section 2.1) means finding all possible bindings within the NeuroQL database that satisfy the query's main conjunction. The NeuroQL database itself is an in-memory n-tuple store upon which our extended unification algorithm is applied. These extensions include the neural_match and neural_extract primitives. These primitives are based on a retriever model (in our case a DPR [21] trained with the Natural Questions dataset [25, 23]) and a reader model (a MiniLM [49] initially trained on the SQuAD 2.0 dataset [37]). Using these models our unification engine attempts to further bind and filter the neuronal sub-queries of a logical conjunction (as we saw in Section 2.1.1). During training our reader has been further fine-tuned on the NeuroQL training set (using our review/answer pairs) to improve its performance (as we further describe in Sections 2.3 and 2.4). Finally, when all possible bindings have been found, an answer is returned to our user.

Figure 3: NeuroQL Architecture: Starting from an inter-subjective question the NeuroQL architecture first infers the equivalent NeuroQL query using a neural translation network that has been fine-tuned for this task. Subsequently the query is executed against the NeuroQL database, which employs both symbolic (through unification) and neural reasoning (through a retriever and a reader network) to incrementally bind the query variables and return an answer to the user.
Listings 13 and 14 show step by step how we can first translate a natural language question into NeuroQL (lines 1 to 3 of listing 13), invoking our primitive NeuroQL.translate, and then dynamically evaluate the resulting query (in line 1 of listing 14) to get our results (using NeuroQL.eval). This is done with both cases matching our previous results from listings 9 and 10. Finally, listing 15 shows how both translation and evaluation can be expressed in a single step (line 1 of listing 15) by simply invoking our primitive NeuroQL.answer.
```
1 query = NeuroQL.translate(
2     'How is the bass for headphones at around [...]?'
3 )
4 print(query)
```
Listing 13: Translating a natural language question into a NeuroQL query

```
1 NeuroQL.eval(query)
2
3 Output: ####################################################
4 ...
5  '?price': 39.36,
6  '?total_reviews': 14549,
7  '?review': 'd040f2713caa2aff0ce95affb40e12c2',
8  ...
9  '?answers': {'answer': 'Bass is weak as expected', ...}}
10 ####################################################
```
Listing 14: Dynamically evaluating a generated NeuroQL query
```
1  NeuroQL.answer(
2      'How is the bass for headphones at around [...]?'
3  )
4
5  Output: ####################################################
6  {'?asin': 'B000AJIF4E',
7   '?title': 'sony mdr7506 professional large diaphragm headphone',
8   '?price': 29.99,
9   '?total_reviews': 22071,
10  '?review': '5e96b0052898fe667cf622888fc5af69',
11  ...
12  '?answers': {'answer': 'Bass is amazing', ...}}
13  ...
14  {'?asin': 'B00001P4ZH',
15  '?title': 'koss porta pro headphones with case',
16  '?price': 39.36,
17  '?total_reviews': 14549,
18  '?review': 'd040f2713caa2aff0ce95affb40e12c2',
19  ...
20  '?answers': {'answer': 'Bass is weak as expected', ...}}
21  ####################################################
```
Listing 15: Combining translation and evaluation of inter-subjective questions in a single call
### The NeuroQL Dataset
The NeuroQL dataset extends previous work on Q&A systems focused on metadata [31] and subjective [4] information to include _1505_ inter-subjective questions and their translation into neuro-symbolic queries (_e.g._ in listing 16). These _(question, query)_ pairs are coupled with a detailed knowledge base of _4250_ properties (_e.g._ in listing 2) for _500_ different products, including _1583_ reviews and _1627_ ground truths for Q&A extraction.
```
B00001P4ZH,question,0514ee34...
0514ee34...,text,For headphones model number 6303157 ...
0514ee34...,query,"search(
    bm25_match(...
    ...
    neural_match(...
    neural_extract(...
)"
```
Listing 16: A _(question, query)_ pair from the NeuroQL dataset
In Table 1 we list all the categories of objective properties that we included in our dataset with a short description for each. Each category is followed by either its domain or a sample value for reference. Finally, in Figure 4 we present a breakdown of our dataset in terms of questions involving a specific objective property (on the left of Figure 4). On the right (similarly to [46]) we give statistics regarding the first word of the subjective component present in our inter-subjective questions.

Table 1: NeuroQL Dataset: Type, Description and Sample Values of Objective Properties

| Name | Description | Sample Values |
| --- | --- | --- |
| title | The product's title as reported online | e.g. _apple magic mouse_ |
| brand | The product's brand | e.g. _audio-technica_ |
| item_model_number | Manufacturer's serial no. or other id | e.g. _6229a003aa_ |
| price | The product's price as reported online | in range \([1.95, 999.0]\) |
| stars | The product's average rating | in range \([2.8, 5]\) |
| total_reviews | Total number of reviews of a product | in range \([2, 134717]\) |
| color | The product's color as reported online | e.g. _amethyst gray_ |
| item_weight | The product's weight as reported online | in range \([0.01, 73.2]\) |
| item_weight_units | Units of item_weight | \(\in\{kilograms, ounces, pounds\}\) |
| batteries | Number of batteries included | in range \([1, 3]\) |
| is_discontinued | Continued manufacturing | \(\in\{no, yes\}\) |

Figure 4: Breakdown of properties and subjective components in the NeuroQL Dataset
### Experiments and Validation
There are two distinct experimental tasks that the NeuroQL dataset makes possible. The first is the _NeuroQL Translation Task_ where the goal is to translate inter-subjective questions posed in natural
language into the neuro-symbolic code that can answer them. The task takes into consideration _(question, query)_ pairs and can be evaluated using metrics such as _sacreBleu_[36] and four-gram or tri-gram precision, common in machine translation tasks [46].
The second task is the _NeuroQL Query Task_ where the goal is to fine-tune neural primitives such as the neural_extract sub-query to produce as many accurate results as possible. The task takes into consideration _(query, answers)_ pairs and can be evaluated using metrics such as _recall_, _em_ and _f1_ common in Q&A extraction tasks [46].
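For reference, the _em_ and _f1_ scores can be sketched as follows; these are the standard extractive Q&A definitions, shown here without the usual article and punctuation normalization:

```
import collections

def em_score(prediction, truth):
    """Exact match after trivial normalization."""
    return int(prediction.strip().lower() == truth.strip().lower())

def f1_score(prediction, truth):
    """Token-overlap F1, as commonly used in extractive Q&A evaluation."""
    pred, gold = prediction.lower().split(), truth.lower().split()
    common = collections.Counter(pred) & collections.Counter(gold)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(f1_score('bass is amazing', 'the bass is amazing'))  # ~0.857
```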
For our _NeuroQL Translation Task_ we fine-tune a CodeT5 model [51] for _20 epochs_, aiming to translate inter-subjective questions into NeuroQL code. We use a maximum input/output length of _512_ tokens that fits our _(question, query)_ pairs, a batch size of _64_, and a _1e-4_ learning rate with a _linear_ scheduler and an _AdamW_ optimizer [28]. As we can see in the left part of Figure 5, the model improves significantly up until epoch 11 (without over-fitting), after which point the returns are diminishing, leading to a near perfect sacreBleu score at epoch 20 for both training and validation sets. The test set evaluation after epoch 20 confirms our model's performance. We split our dataset between training _(80%)_, validation _(10%)_ and test _(10%)_ sets using unique product ids for each slice to ensure no leakage. The four-gram and tri-gram precisions reported on the right side of Figure 5 follow our sacreBleu observations as expected. Given these results our initial hypothesis _(H1)_ regarding the feasibility of translating inter-subjective questions into neuro-symbolic queries has been validated.
For our _NeuroQL Query Task_, we first measure the impact our DPR [21] model (trained with the Natural Questions dataset [25, 23]) has on our test set, for different values of _top_k_ results returned by the retriever. We then fine-tune our reader model (a MiniLM [49] initially trained on the SQuAD 2.0 dataset [37]) to improve its performance for different values of _top_k_ results returned by the reader. On the left part of Figure 6 we can see that our DPR model performs quite well at 70% recall even in the case of only the _top_2_ results returned. Yet, it does not surpass the _90%_ mark before _top_13_ results are returned, reaching a 94% recall score at _top_20_. Using this last recall score from our retriever, we then evaluate our fine-tuned reader model, observing as expected an increase in _em_ and _f1_ scores as _top_k_ increases. Yet, the best values observed were _0.20_ for _em_ and _0.33_ for _f1_. These were both reached at _top_8_ results returned, and are comparable to scores for extractive Q&A on user reviews reported by [46], using the same model.
Prior to this evaluation we fine-tuned our MiniLM [49] reader model for _3 epochs_ in order to improve its extraction accuracy. We used a _384_-token sequence length with a _128_-token document stride and a batch size of _16_, including the ability to return no answers when predicting results. We set the learning rate at _1e-5_ with a _0.2_ warmup, using the same dataset split as before. Given the above results our initial hypothesis _(H2)_ regarding the feasibility of extending unification with neural primitives for answering inter-subjective questions has only been partially validated. Neuro-symbolic unification as presented in this work, can indeed be used to execute inter-subjective neuro-symbolic queries. Yet these results present only a first baseline. There is more work needed to improve the accuracy of extracted answers, particularly regarding the neural_extract primitive.
### Related Work
There is a significant body of work dedicated to translating natural language questions to queries for established symbolic languages such as SQL [53, 52, 22, 5] and SPARQL [54, 42]. These solutions have only one neuronal component for translation, going from: _neural translation \(\rhd\) symbolic code \(\rhd\) symbolic reasoning_. Our contribution takes a step further by translating natural language into neuro-symbolic code, closing the loop between symbolic and neuronal reasoning: _neural translation \(\rhd\) neuro-symbolic code \(\rhd\) neuro-symbolic reasoning_.

Figure 5: Our Experimental Results for the NeuroQL Translation Task including sacreBleu, four-gram and tri-gram precision metrics.
DeepProbLog [30] and TensorLog [10] are two other neuro-symbolic and logic-oriented programming languages that aim to integrate neuronal and symbolic reasoning. Both rely on probabilistic logic and thus try to integrate neural primitives as predicates with probabilities that can be parameterized by neural networks. Similarly, Scallop [19] takes a weighted graph approach for knowledge graphs, where the weights between entities are learned. Finally, SQLFlow [50] focuses not on execution but on orchestration (using Kubernetes) for ML workloads using an SQL-like language. In our case the integration between neuronal and symbolic reasoning targets execution and occurs at a higher semantic level than probabilistic approaches. In NeuroQL, reasoning tasks such as neuronal extraction and matching are expressed as filtering and binding operations for unification [41; 40]. This allows us to create a concise neuro-symbolic query language that can act as a target for automatic translation of natural language questions.
A number of DSLs for neuro-symbolic programming [8] target specific domains such as symbolic regression [12], behavior analysis [45] and program synthesis [13]. Ours is the first to address the problem of inter-subjective reasoning using neural translation, extending prior work on metadata [31] and subjectivity [4].
Finally, our work is related to the wider domain of retrieval augmented [27; 44; 16] and tool augmented [43; 7; 26] extraction and generation of answers. In our case though we proposed a rich intermediate representation for our task that takes the form of neuro-symbolic code. This allowed us to integrate neuronal and symbolic reasoning in an interpretable and explicit way.
## 3 Discussion
We believe that the translation of natural language into neuro-symbolic instead of simply symbolic code is a step forward in the direction of closing the loop between neuronal and symbolic reasoning. The tasks, dataset and DSL for inter-subjective reasoning that we have introduced in this work provide a case-study close to real-world needs with clear neuro-symbolic characteristics. As such we believe it can be used as a baseline to evaluate future work.
Figure 6: Our Experimental Results for the NeuroQL Query Task including recall, em and f1 scores for our DPR and Reader models.
Our results regarding hypothesis (_H1_) for the _NeuroQL Translation Task_ were surprising. For this task the network needs to learn not only to distinguish between objective and subjective components, but also between different kinds of sub-query categories (_i.e._ _opinion, title, price, reviews, manufacturing etc._) as shown in Figure 2. Given the near perfect sacreBleu score for this task, we believe that the closed-world assumption (_i.e._ the fact that all queries concerned products and their objective or subjective properties) needs to be relaxed in future iterations of the dataset. For example we can consider extending the dataset with sufficiently different domains of inquiry, such as news articles and their commentary. Moreover, automatically paraphrasing [15] our questions to create a larger, more diverse dataset is a promising direction for future work.
Our results regarding hypothesis (_H2_) for the _NeuroQL Query Task_ were closer to our expectations, given that they were comparable to those of a simpler extractive Q&A task on user reviews reported by [46]. While these results can serve as a baseline, there is clearly more work needed to improve accuracy.
Regarding possible threats to validity, we note that the language used in parts of our questions comes directly from real-world usage and is unedited. At first glance this is a strength of the dataset (_i.e._ models need to learn to adapt to real-world noise). Nevertheless, examples with syntactic or semantic mistakes (_e.g._ from non-native speakers) can also affect the quality of labeling, making it harder to pinpoint valid answers. Here also, an automatic paraphrasing [15] and auto-correction approach might help us control for and measure the impact of linguistic coherence in our dataset.
## 4 Methods
Domain-Specific Language: We implemented NeuroQL as a DSL [14; 34] using reflection and meta-programming [29; 35], allowing us to change the normal semantics of Python only for NeuroQL expressions (leaving the rest of the runtime unchanged). These changes include _(i)_ the redefinition of variable declaration (when using the vars() and ids() primitives) and _(ii)_ the redefinition of the dot, assignment and equality operators for NeuroQL variables and NeuroQL ids. These changes make NeuroQL an _embedded pidgin_ language according to the categorization of DSLs provided in [38].
Unification: Our base unification algorithm [41; 40] is implemented as a generalization of pattern matching [1; 18], where both the pattern and the target may contain variables. The generalization receives a set of bindings as input (usually referred to as _frames_) and returns possible augmentations that contain matched values of - as yet - unresolved variables. By allowing sub-queries to be matched either jointly or in parallel we can implement logical operations such as conjunction and disjunction. Unification with inference rules can then be achieved through nested pattern matching that tries to satisfy a rule's body given the current bindings.
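The following is a minimal, self-contained sketch of this idea in plain Python; the names (`Var`, `walk`, `unify`) and the triple representation are illustrative, not the actual NeuroQL implementation.

```python
class Var:
    """A logic variable, e.g. Var('x')."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return f"?{self.name}"

def walk(term, frame):
    # Resolve a variable through the current bindings (the frame).
    while isinstance(term, Var) and term.name in frame:
        term = frame[term.name]
    return term

def unify(pattern, target, frame):
    """Return an augmented frame if pattern and target unify, else None.
    Both sides may contain variables."""
    pattern, target = walk(pattern, frame), walk(target, frame)
    if isinstance(pattern, Var):
        return {**frame, pattern.name: target}
    if isinstance(target, Var):
        return {**frame, target.name: pattern}
    if isinstance(pattern, tuple) and isinstance(target, tuple) \
            and len(pattern) == len(target):
        for p, t in zip(pattern, target):  # conjunction: thread frames through
            frame = unify(p, t, frame)
            if frame is None:
                return None
        return frame
    return frame if pattern == target else None

frame = unify((Var("p"), "has_price", Var("v")),
              ("prod42", "has_price", "9.99"), {})
print(frame)  # {'p': 'prod42', 'v': '9.99'}
```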
BM25: We use a slight variation of the BM25 (Best Match 25) algorithm in our work to compensate for the most common cases of the "exact overlap" problem of the initial algorithm [20]. BM25 is itself an improvement over TF-IDF (Term Frequency-Inverse Document Frequency) [20], with both algorithms being used as ranking methods for sparse retrievers. In our case we pre-process both the documents and the query to be matched by tokenizing and stemming the inputs. We don't provide a further solution for synonyms using BM25, since in this case the more accurate DPR primitive can be used (see below). The two methods (BM25 and DPR) provide a trade-off for our users between speed and accuracy, with BM25 being the fastest and DPR being the most accurate.
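For concreteness, here is a self-contained sketch of BM25 scoring with the pre-processing described above; the toy suffix-stripping stemmer stands in for a real one (e.g. Porter), and all names are ours rather than NeuroQL's.

```python
import math, re
from collections import Counter

def preprocess(text):
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [t[:-1] if t.endswith("s") else t for t in tokens]  # toy stemming

def bm25_scores(query, docs, k1=1.5, b=0.75):
    docs = [preprocess(d) for d in docs]
    q = preprocess(query)
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in q:
            if t in tf:
                idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
                s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores  # rank the documents by descending score
```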
Dense Passage Retrieval: DPR (Dense Passage Retrieval) is a method that uses dense text embeddings for both the query and the documents that need to be matched. It is based on a BERT bi-encoder architecture that computes a dot product similarity between a document and a query. Our DPR is based on [21], using question & context encoders trained with the Natural Questions dataset [25; 23]. This DPR is then used as a backend for our _neural_match_ primitive. The _neural_match_ primitive receives a sub-query whose results are used to create a set of _(id, document)_ pairs. It then tries to match the documents against a target query text, using our DPR to return the top_k results that matched (see full example of _neural_match_ in Section 2.1.1).
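A sketch of such a backend is given below, assuming the HuggingFace transformers library and the public NQ-trained DPR checkpoints; `neural_match` here is our illustrative wrapper, not the actual NeuroQL code.

```python
import torch
from transformers import (DPRContextEncoder, DPRContextEncoderTokenizer,
                          DPRQuestionEncoder, DPRQuestionEncoderTokenizer)

q_name = "facebook/dpr-question_encoder-single-nq-base"
c_name = "facebook/dpr-ctx_encoder-single-nq-base"
q_enc, q_tok = DPRQuestionEncoder.from_pretrained(q_name), DPRQuestionEncoderTokenizer.from_pretrained(q_name)
c_enc, c_tok = DPRContextEncoder.from_pretrained(c_name), DPRContextEncoderTokenizer.from_pretrained(c_name)

def neural_match(pairs, query, top_k=3):
    ids, docs = zip(*pairs)  # (id, document) pairs from a sub-query
    with torch.no_grad():
        q = q_enc(**q_tok(query, return_tensors="pt")).pooler_output
        c = c_enc(**c_tok(list(docs), return_tensors="pt",
                          padding=True, truncation=True)).pooler_output
        sims = (q @ c.T).squeeze(0)  # dot-product similarity
    best = sims.topk(min(top_k, len(docs))).indices.tolist()
    return [(ids[i], docs[i], sims[i].item()) for i in best]
```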
Reader Model: A Reader, or reading comprehension model, is a neural network that can perform extractive Q&A by returning relevant text intervals of documents. In our work we use the MiniLM [49] model, initially trained on the SQuAD 2.0 dataset [37] and further fine-tuned on the NeuroQL training set to improve its performance (as we describe in Section 2.4). This Reader is then used as a backend for our _neural_extract_ primitive. The _neural_extract_ primitive receives the name of a new variable to bind for extracted answers, a query to create _(id, document)_ pairs and finally a target query text. It then tries to extract relevant text intervals from our documents using the Reader to return the top_k results found. We fine-tuned our reader model for _3 epochs_, using a _384_-token sequence length with a _128_-token document stride. We used a batch size of _16_, a learning rate of _1e-5_ with a _0.2_ warmup and included the ability to return no answers when predicting results.
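The sketch below shows the shape of such a backend, assuming the transformers pipeline API and the public `deepset/minilm-uncased-squad2` checkpoint as a stand-in for our fine-tuned reader; `neural_extract` is again an illustrative wrapper.

```python
from transformers import pipeline

reader = pipeline("question-answering", model="deepset/minilm-uncased-squad2")

def neural_extract(pairs, query, top_k=3):
    answers = []
    for doc_id, doc in pairs:  # (id, document) pairs from a sub-query
        pred = reader(question=query, context=doc,
                      handle_impossible_answer=True)  # may predict "no answer"
        if pred["answer"]:
            answers.append((doc_id, pred["answer"], pred["score"]))
    answers.sort(key=lambda a: a[2], reverse=True)
    return answers[:top_k]
```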
Translation Model: A translation model is a sequence-to-sequence neural network trained over pairs of input and target sequences. In our case we fine-tuned a CodeT5 model [51] using the NeuroQL question/query dataset in order to translate inter-subjective questions posed in natural language into the NeuroQL query that can answer them (as detailed in Sections 2.3 and 2.4). This translation model is then used as a backend for our NeuroQL.translate and NeuroQL.answer primitives, which, as their names suggest, can translate questions into NeuroQL code and attempt to answer them. Our translation model was fine-tuned for _20 epochs_ with a maximum input/output length of _512_ tokens, a batch size of _64_ and a _1e-4_ learning rate using a _linear_ scheduler and an _AdamW_ optimizer [28].
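A minimal sketch of this fine-tuning setup follows, wiring up the hyper-parameters reported above; the `Salesforce/codet5-base` checkpoint, the `step` helper and the dataset handling are our assumptions for illustration.

```python
import torch
from transformers import (AutoTokenizer, T5ForConditionalGeneration,
                          get_linear_schedule_with_warmup)

tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
epochs, steps_per_epoch = 20, 100  # steps_per_epoch depends on the dataset size
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=epochs * steps_per_epoch)

def step(questions, queries):
    # One training step on a batch of (question, NeuroQL query) pairs.
    enc = tok(questions, max_length=512, truncation=True,
              padding=True, return_tensors="pt")
    labels = tok(queries, max_length=512, truncation=True,
                 padding=True, return_tensors="pt").input_ids
    labels[labels == tok.pad_token_id] = -100  # ignore padding in the loss
    loss = model(**enc, labels=labels).loss
    loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
    return loss.item()
```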
SacreBleu, Recall, EM & F1 Scores: In machine translation tasks the _sacreBleu_ [36] score is used for the evaluation of generated translations, focusing on reproducibility and comparability of reported results. It is itself a standardization of the Bleu [33] score, which compares the n-grams of the generated sequences to those of the reference translations. In our work we use _sacreBleu_ to evaluate the quality of our translation model (as detailed in Section 2.4). _Recall_ is a metric used to evaluate retrieval methods such as our _neural_match_ primitive that is using a DPR model. Recall represents the percentage of relevant documents retrieved among the top_k results returned by a retriever (as we report in Section 2.4). _EM & F1 Scores_ are used to evaluate Q&A extraction methods such as our _neural_extract_ primitive that is using a Reader model. EM represents the percentage of exact extracted matches, while F1 measures the harmonic mean of precision and recall (reported in Section 2.4).
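As a reference point, a simplified sketch of the EM and F1 computations for extracted answers (SQuAD-style, with a minimal normalisation of our own) is:

```python
from collections import Counter

def normalize(s):
    return " ".join(s.lower().split())

def exact_match(pred, gold):
    return float(normalize(pred) == normalize(gold))

def f1(pred, gold):
    p, g = normalize(pred).split(), normalize(gold).split()
    common = sum((Counter(p) & Counter(g)).values())  # overlapping tokens
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)
```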
## Code & Data Availability
All data and examples from this paper are available at: [https://orgdlabs.com/neuroQL](https://orgdlabs.com/neuroQL)
|
2308.05423 | On the Stability and Convergence of Physics Informed Neural Networks | Physics Informed Neural Networks is a numerical method which uses neural
networks to approximate solutions of partial differential equations. It has
received a lot of attention and is currently used in numerous physical and
engineering problems. The mathematical understanding of these methods is
limited, and in particular, it seems that, a consistent notion of stability is
missing. Towards addressing this issue we consider model problems of partial
differential equations, namely linear elliptic and parabolic PDEs. We consider
problems with different stability properties, and problems with time discrete
training. Motivated by tools of nonlinear calculus of variations we
systematically show that coercivity of the energies and associated compactness
provide the right framework for stability. For time discrete training we show
that if these properties fail to hold then methods may become unstable.
Furthermore, using tools of $\Gamma-$convergence we provide new convergence
results for weak solutions by only requiring that the neural network spaces are
chosen to have suitable approximation properties. | Dimitrios Gazoulis, Ioannis Gkanis, Charalambos G. Makridakis | 2023-08-10T08:35:55Z | http://arxiv.org/abs/2308.05423v1 | # On the stability and convergence of Physics Informed Neural Networks
###### Abstract
Physics Informed Neural Networks is a numerical method which uses neural networks to approximate solutions of partial differential equations. It has received a lot of attention and is currently used in numerous physical and engineering problems. The mathematical understanding of these methods is limited, and in particular, it seems that a consistent notion of stability is missing. Towards addressing this issue we consider model problems of partial differential equations, namely linear elliptic and parabolic PDEs. We consider problems with different stability properties, and problems with time discrete training. Motivated by tools of nonlinear calculus of variations, we systematically show that coercivity of the energies and associated compactness provide the right framework for stability. For time discrete training we show that if these properties fail to hold then methods may become unstable. Furthermore, using tools of \(\Gamma-\)convergence we provide new convergence results for weak solutions by only requiring that the neural network spaces are chosen to have suitable approximation properties.
## 1 Introduction
### PDEs and Neural Networks
In this work we consider model problems of partial differential equations (PDEs) approximated by deep neural learning (DNN) algorithms. In particular we focus on linear elliptic and parabolic PDEs and Physics Informed Neural Networks, i.e., algorithms where the discretisation is based on the minimisation of the \(L^{2}\) norm of the residual over a set of neural networks with a given architecture. Standard tools of numerical analysis assessing the quality and performance of an algorithm are based on the notions of stability and approximability. Typically, in problems arising in scientific applications another important algorithmic characteristic is the preservation of key qualitative properties of the simulating system at the discrete level. In important classes of problems, stability and structural consistency are often linked. Our aim is to introduce a novel notion of stability for the above DNN algorithms approximating solutions of PDEs. In addition, we show convergence provided that the set of DNNs has the right approximability properties and the training of the algorithm produces stable approximations.
In the area of machine learning for models described by partial differential equations, at present, there is intense activity at multiple fronts: developing new methods for solving differential equations using neural networks, designing special neural architectures to approximate families of differential operators (operator learning), combination of statistical and machine learning techniques for related problems in uncertainty quantification and statistical functional inference. Despite the
progress on all these problems in recent years, basic mathematical, and hence algorithmic, understanding is still under development.
Partial Differential Equations (PDEs) have proven to be an area of very important impact in science and engineering, not only because many physical models are described by PDEs, but crucially because methods and techniques developed in this field have contributed to scientific development in several areas where few would have guessed it possible. The numerical solution of PDEs utilising neural networks is at an early stage and has received a lot of attention. Such methods have significantly different characteristics compared to more traditional methods, and have proved quite effective, e.g., in solving problems in high dimensions, or when methods combining statistical approaches and PDEs are needed. Physics Informed Neural Networks is one of the most successful numerical methods which uses neural networks to approximate solutions of PDEs, see e.g., [39], [33]. Residual based methods were considered in [29], [6], [40], [46] and their references. Other neural network methods for differential equations and related problems include, for example, [41], [18], [27], [48], [12], [20], [23]. The term _Physics Informed Neural Networks_ was introduced in the highly influential paper [39]. It was then used extensively in numerous physical and engineering problems; for a broader perspective of the related methodologies and the importance of the NN methods for scientific applications, see e.g., [26]. Despite progress on some fronts, see [46], [3], [44], [45], [35], [36], the mathematical understanding of these methods is limited. In particular, it seems that a consistent notion of stability is missing. Stability is an essential tool in a priori error analysis and convergence of the algorithms, [30]. It provides valuable information for fixed values of the discretisation parameters, i.e., in the pre-asymptotic regime, and it is well known that unstable methods have poor algorithmic performance. On the other hand, stability is a problem dependent notion and not always easy to identify. Towards addressing this issue we consider model problems of partial differential equations, namely linear elliptic and parabolic PDEs. We consider PDEs with different stability properties, and parabolic problems with time discrete training. Since the training procedure influences the behaviour of the method in an essential manner, but, on the other hand, complicates the analysis considerably, we have chosen as a first step in this work to consider only time discrete training. Motivated by tools of nonlinear calculus of variations, we systematically show that coercivity of the energies and associated compactness provide the right framework for stability. For time discrete training we show that if these properties fail to hold then the methods become unstable and, it seems, they do not converge. Furthermore, using tools of \(\Gamma-\)convergence we provide new convergence results for weak solutions by only requiring that the neural network spaces are chosen to have suitable approximation properties.
### Model problems and their Machine Learning approximations
In this work we consider linear elliptic and parabolic PDEs. To fix notation, we consider simple boundary value problems of the form,
\[\begin{cases}L\,u=f&\text{in}\ \ \Omega\\ u=0&\text{on}\ \partial\Omega\end{cases} \tag{1}\]
where \(u:\Omega\subset\mathbb{R}^{d}\to\mathbb{R},\ \Omega\) is an open, bounded set with smooth enough boundary, \(f\in L^{2}(\Omega)\) and \(L\) a self-adjoint elliptic operator of the form
\[\begin{split} Lu:=-\sum_{1\leq i,j\leq d}\big{(}a_{ij}u_{x_{i}} \big{)}_{x_{j}}+cu\\ \text{where}\ \ \sum_{i,j}a_{ij}(x)\xi_{i}\xi_{j}\geq\theta|\xi|^{2} \ \ \text{for any}\ \ x\in\Omega\ \ \text{and any}\ \ \xi\in\mathbb{R}^{n},\ \ \ \text{for some}\ \ \theta>0\end{split} \tag{2}\]
also, \(a_{ij}=a_{ji}\in C^{1}(\overline{\Omega})\) and \(c\in L^{\infty}(\Omega),\) hence bounded in \(\overline{\Omega}.\) Further assumptions on \(L\) will be discussed in the next sections. Dirichlet boundary conditions were selected for simplicity.
The results of this work can be extended to other boundary conditions with appropriate technical modifications.
We shall study the corresponding parabolic problem as well. We use the compact notation \(\Omega_{T}=\Omega\times(0,T],\)\(\partial\Omega_{T}=\partial\Omega\times(0,T]\) for some fixed time \(T>0.\) We consider the initial-boundary value problem
\[\begin{cases}u_{t}+Lu=f,&\text{in}\;\;\Omega_{T},\\ u=0,&\text{on}\;\;\partial\Omega\times(0,T],\\ u=u^{0},&\text{in}\;\;\Omega\,,\end{cases} \tag{3}\]
where \(f\in L^{2}(\Omega_{T}),\;u^{0}\in H^{1}_{0}(\Omega)\) and \(L\) is as in (2). In the sequel we shall use the compact operator notation \(\mathscr{L}\) for either \(u_{t}+Lu\) or \(Lu\) for the parabolic or the elliptic case correspondingly. The associated energies used will be the \(L^{2}-\)residuals
\[\mathcal{E}(v)=\int_{\Omega_{D}}|\mathscr{L}v-f|^{2}\mathrm{d}\overline{x}+\, \mu\int_{\Omega}|v-u^{0}|^{2}\,\mathrm{d}x+\tau\,\int_{\partial\Omega_{T}}|v| ^{2}\,\mathrm{d}\overline{S} \tag{4}\]
defined over smooth enough functions and domains \(\Omega_{D}\) being \(\Omega_{T}\) or \(\Omega\) (with measures \(d\overline{x}\) ) for the parabolic or the elliptic case correspondingly. Clearly, the coefficient \(\mu\geq 0\) of the initial condition is set to zero in the elliptic case.
It is typical to consider regularised versions of \(\mathcal{E}(v)\) as well. Such functionals have the form
\[\mathcal{E}_{reg}(v)=\mathcal{E}(v)+\lambda\mathcal{J}(v)\,, \tag{5}\]
where the regularisation parameter \(\lambda=\lambda_{reg}>0\) is in principle small and \(\mathcal{J}(v)\) is an appropriate functional (often a power of a semi-norm) reflecting the qualitative properties of the regularisation. The formulation of the method extends naturally to nonlinear versions of the generic operator \(\mathscr{L}v-f,\) whereby in principle both \(\mathscr{L}\) and \(f\) might depend on \(v.\)
### Discrete Spaces generated by Neural Networks
We consider functions \(u_{\theta}\) defined through neural networks. Notice that the structure described is indicative and is presented in order to fix ideas. Our results do not depend on particular neural network architectures but only on their approximation ability. A deep neural network maps every point \(\overline{x}\in\Omega_{D}\) to a number \(u_{\theta}(\overline{x})\in\mathbb{R},\) through
\[u_{\theta}(\overline{x})=C_{L}\circ\sigma\circ C_{L-1}\cdots\circ\sigma\circ C _{1}(\overline{x})\quad\forall\overline{x}\in\Omega_{D}. \tag{6}\]
The process
\[\mathcal{C}_{L}:=C_{L}\circ\sigma\circ C_{L-1}\cdots\circ\sigma\circ C_{1} \tag{7}\]
is in principle a map \(\mathcal{C}_{L}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m^{\prime}}\); in our particular application, \(m=d\) (elliptic case) or \(m=d+1\) (parabolic case) and \(m^{\prime}=1.\) The map \(\mathcal{C}_{L}\) is a neural network with \(L\) layers and activation function \(\sigma.\) Notice that to define \(u_{\theta}(\overline{x})\) for all \(\overline{x}\in\Omega_{D}\) we use the same \(\mathcal{C}_{L},\) thus \(u_{\theta}(\cdot)=\mathcal{C}_{L}(\cdot).\) Any such map \(\mathcal{C}_{L}\) is characterised by the intermediate (hidden) layers \(C_{k},\) which are affine maps of the form
\[C_{k}y=W_{k}y+b_{k},\qquad\text{where }W_{k}\in\mathbb{R}^{d_{k+1}\times d_{k}},b_{k}\in\mathbb{R}^{d_{k+1}}. \tag{8}\]
Here the dimensions \(d_{k}\) may vary with each layer \(k\) and \(\sigma(y)\) denotes the vector with the same number of components as \(y,\) where \(\sigma(y)_{i}=\sigma(y_{i})\,.\) The index \(\theta\) represents collectively all the parameters of the network \(\mathcal{C}_{L},\) namely \(W_{k},b_{k},\)\(k=1,\dots,L.\) The set of all networks \(\mathcal{C}_{L}\) with a given structure (fixed \(L,d_{k},k=1,\dots,L\) ) of the form (6), (8) is called \(\mathcal{N}.\) The total dimension
(total number of degrees of freedom) of \(\mathcal{N}\) is \(\dim\mathcal{N}=\sum_{k=1}^{L}d_{k+1}(d_{k}+1)\,.\) We now define the space of functions
\[V_{\mathcal{N}}=\{u_{\theta}:\Omega_{D}\to\mathbb{R},\text{ where }u_{\theta}( \overline{x})=\mathcal{C}_{L}(\overline{x}),\text{ for some }\mathcal{C}_{L}\in \mathcal{N}\,\}\,. \tag{9}\]
It is important to observe that \(V_{\mathcal{N}}\) is not a linear space. We denote by
\[\Theta=\{\theta\,:u_{\theta}\in V_{\mathcal{N}}\}. \tag{10}\]
Clearly, \(\Theta\) is a linear subspace of \(\mathbb{R}^{\dim\mathcal{N}}.\)
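For concreteness, here is a minimal sketch of an element of \(V_{\mathcal{N}}\) as in (6)-(8), assuming PyTorch and \(\sigma=\tanh\); the widths are illustrative only.

```python
import torch
import torch.nn as nn

class CL(nn.Module):
    """The map C_L of (7): affine layers C_k composed with sigma = tanh."""
    def __init__(self, dims=(2, 32, 32, 1)):  # d_1 = m, ..., d_{L+1} = m'
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(dims[k], dims[k + 1]) for k in range(len(dims) - 1))

    def forward(self, x):
        for C in self.layers[:-1]:
            x = torch.tanh(C(x))   # sigma composed with C_k, as in (6)
        return self.layers[-1](x)  # the last layer C_L is affine only

u_theta = CL()
# dim N = sum_k d_{k+1} (d_k + 1), i.e. all weights W_k and biases b_k
dim_N = sum(p.numel() for p in u_theta.parameters())
```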
### Discrete minimisation on \(V_{\mathcal{N}}\)
Physics Informed Neural networks are based on the minimisation of residual-type functionals of the form (5) over the discrete set \(V_{\mathcal{N}}\,:\)
**Definition 1**: _Assume that the problem_
\[\min_{v\in V_{\mathcal{N}}}\mathcal{E}(v) \tag{11}\]
_has a solution \(v^{\star}\in V_{\mathcal{N}}.\) We call \(v^{\star}\,\) a deep-\(V_{\mathcal{N}}\) minimiser of \(\mathcal{E}\,.\)_
A key difficulty in studying this problem lies in the fact that \(V_{\mathcal{N}}\) is not a linear space. Computationally, this problem can be equivalently formulated as a minimisation problem in \(\mathbb{R}^{\dim\mathcal{N}}\) by considering \(\theta\) as the parameter vector to be identified through
\[\min_{\theta\in\Theta}\mathcal{E}(u_{\theta}). \tag{12}\]
Notice that although (12) is well defined as a discrete minimisation problem, in general, this is non-convex with respect to \(\theta\) even though the functional \(\mathcal{E}(v)\) is convex with respect to \(v.\) This is the source of one of the main technical difficulties in machine learning algorithms.
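In practice, (12) is attacked with gradient-based optimisation over \(\theta\); the following sketch, assuming PyTorch and a computable discrete loss `E_h` (obtained by quadrature, as in the next subsection), shows the generic loop and is not a prescription of the paper.

```python
import torch

def minimise(u_theta, E_h, iters=5000, lr=1e-3):
    # E_h: callable mapping the network to a scalar discrete loss E_h(u_theta).
    opt = torch.optim.Adam(u_theta.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = E_h(u_theta)  # non-convex in theta even when E is convex in v
        loss.backward()
        opt.step()
    return u_theta
```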
### Time discrete Training
To implement such a scheme we shall need computable discrete versions of the energy \(\mathcal{E}(u_{\theta}).\) This can be achieved in different ways; a common one is to use appropriate quadrature for the integrals over \(\Omega_{D}\) (training through quadrature). Just to fix ideas, such a quadrature requires a set \(K_{h}\) of discrete points \(z\in K_{h}\) and corresponding nonnegative weights \(w_{z}\) such that
\[\sum_{z\in K_{h}}\,w_{z}\,g(z)\approx\int_{\Omega_{D}}\,g(\overline{x})\, \mathrm{d}\overline{x}. \tag{13}\]
Then one can define the discrete functional
\[\mathcal{E}_{Q,h}(v)=\sum_{z\in K_{h}}\,w_{z}\,|\mathscr{L}v(z)-f(z)|^{2}\,\,. \tag{14}\]
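As an illustration only, a Monte-Carlo version of (13)-(14) for \(Lv=-\Delta v+v\) on \(\Omega=(0,1)^{2}\) might look as follows, assuming PyTorch; the boundary penalty mirrors the \(\tau\)-term of (4), and every name here is ours.

```python
import torch

def laplacian(u_theta, z):
    # u and its Laplacian at the points z, via automatic differentiation.
    z = z.requires_grad_(True)
    u = u_theta(z)
    grad = torch.autograd.grad(u.sum(), z, create_graph=True)[0]
    lap = 0.0
    for i in range(z.shape[1]):
        lap = lap + torch.autograd.grad(grad[:, i].sum(), z, create_graph=True)[0][:, i]
    return u.squeeze(-1), lap

def E_Qh(u_theta, f, n_interior=1024, n_boundary=256, tau=10.0):
    # K_h: uniformly sampled interior points; weights w_z = |Omega|/|K_h| = 1/|K_h|.
    z = torch.rand(n_interior, 2)
    u, lap = laplacian(u_theta, z)
    residual = (-lap + u - f(z)) ** 2  # |L v(z) - f(z)|^2 with c = 1
    # points on the boundary of the unit square for the penalty term
    zb = torch.rand(n_boundary, 2)
    side = torch.randint(0, 4, (n_boundary,))
    zb[side == 0, 0] = 0.0; zb[side == 1, 0] = 1.0
    zb[side == 2, 1] = 0.0; zb[side == 3, 1] = 1.0
    return residual.mean() + tau * (u_theta(zb) ** 2).mean()
```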
In the case of the parabolic problem a similar treatment should be done for the term corresponding to the initial condition \(\int_{\Omega}|v-u^{0}|^{2}dx\,.\) Notice that both deterministic and probabilistic (Monte-Carlo, Quasi-Monte-Carlo) quadrature rules are possible, yielding different final algorithms. In this work we shall not consider in detail the influence of the quadrature (and hence of the training) on the stability and convergence of the algorithms. This requires a much more involved technical analysis and it will be the subject of future research. However, it will be instrumental for studying the notion of stability introduced herein to consider a hybrid algorithm where quadrature (and discretisation) is applied only to the time variable of the parabolic problem. This approach is standard in the design and analysis of time-discrete methods for evolution problems, and we believe that it is quite useful in the present setting.
To apply a quadrature in the time integral only we proceed as follows: Let \(0=t^{0}<t^{1}<\cdots<t^{N}=T\) define a partition of \([0,T]\) and \(I_{n}:=(t^{n-1},t^{n}]\), \(k_{n}:=t^{n}-t^{n-1}\). We shall denote by \(v^{m}(\cdot)\) and \(f^{m}(\cdot)\) the values \(v(\cdot,t^{m})\) and \(f(\cdot,t^{m})\). Then we define the discrete in time quadrature by
\[\sum_{n=1}^{N}\,k_{n}\,g(t^{n})\approx\int_{0}^{T}\,g(t)\,\mathrm{d}t. \tag{15}\]
We proceed to define the time-discrete version of the functional (5) as follows
\[\mathcal{G}_{k,IE}(v)=\sum_{n=1}^{N}\,k_{n}\,\int_{\Omega}\big{|}\frac{v^{n}-v^{n-1}}{k_{n}}+Lv^{n}-f^{n}\big{|}^{2}\,\,\mathrm{d}x+\,\int_{\Omega}|v^{0}-u^{0}|^{2}\,\mathrm{d}x \tag{16}\]
We shall study the stability and convergence properties of the minimisers of the problems:
\[\min_{v\in V_{\mathcal{N}}}\mathcal{G}_{k,IE}(v)\,. \tag{17}\]
It will be interesting to consider a seemingly similar (from the point of view of quadrature and approximation) discrete functional:
\[\mathcal{G}_{k,EE}(v)=\sum_{n=1}^{N}\,k_{n}\,\int_{\Omega}\big{|}\frac{v^{n}-v^{n-1}}{k_{n}}+Lv^{n-1}-f^{n-1}\big{|}^{2}\,\,\mathrm{d}x+\,\sigma\int_{\Omega}|v^{0}-u^{0}|^{2}\,\mathrm{d}x, \tag{18}\]
and compare its properties to the functional \(\mathcal{G}_{k,IE}\) and to the corresponding \(V_{\mathcal{N}}\) minimisers.
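To see the two functionals side by side, here is a minimal sketch for the one-dimensional heat equation (\(Lu=-u_{xx}\)), assuming PyTorch, with the spatial integrals replaced by quadrature means; all names and discretisation choices are ours, not the paper's.

```python
import torch

def uxx_at(u_theta, x, t):
    # Second space derivative of u_theta(., t) at the quadrature points x.
    xs = x.clone().requires_grad_(True)
    u = u_theta(torch.cat([xs, t * torch.ones_like(xs)], dim=1))
    ux = torch.autograd.grad(u.sum(), xs, create_graph=True)[0]
    return torch.autograd.grad(ux.sum(), xs, create_graph=True)[0]

def G_k(u_theta, f, u0, times, x, implicit=True, mu=1.0):
    # x: quadrature points in Omega, shape (M, 1); times: 0 = t^0 < ... < t^N = T.
    U = lambda t: u_theta(torch.cat([x, t * torch.ones_like(x)], dim=1))
    loss = 0.0
    for n in range(1, len(times)):
        tn, tp = times[n], times[n - 1]
        kn = tn - tp
        t_res = tn if implicit else tp  # (16) evaluates L and f at t^n, (18) at t^{n-1}
        res = (U(tn) - U(tp)) / kn - uxx_at(u_theta, x, t_res) - f(x, t_res)
        loss = loss + kn * (res ** 2).mean()
    # initial-condition term (L^2 version for simplicity)
    return loss + mu * ((U(times[0]) - u0(x)) ** 2).mean()
```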
## 2 Our results
In this section we discuss our main contributions. Our goal is twofold: to suggest a consistent notion of stability and a corresponding convergence framework for the methods considered.
_Equi-Coercivity and Stability._
Equi-Coercivity is a key notion in the \(\Gamma-\)convergence analysis which drives compactness and the convergence of minimisers of the approximate functionals. Especially, in the case of discrete functionals (denoted below by \(\mathcal{E}_{\ell}\), \(\ell\) stands for a discretisation parameter) stability is a prerequisite for compactness and convergence. Our analysis is driven by two key properties which are roughly stated as follows:
1. If energies \(\mathcal{E}_{\ell}\) are uniformly bounded \[\mathcal{E}_{\ell}[u_{\ell}]\leq C,\] then there exists a constant \(C_{1}>0\) and \(\ell-\)dependent norms \(V_{\ell}\) such that \[\|u_{\ell}\|_{V_{\ell}}\leq C_{1}.\] (19)
2. Uniformly bounded sequences in \(\|u_{\ell}\|_{V_{\ell}}\) have convergent subsequences in \(H,\) where \(H\) is a normed space (typically a Sobolev space) which depends on the form of the discrete energy considered.

Property [S1] requires that \(\mathcal{E}_{\ell}[v_{\ell}]\) is coercive with respect to (possibly \(\ell\)-dependent) norms (or semi-norms). Further, [S2] implies that, although \(\|\cdot\|_{V_{\ell}}\) are \(\ell\)-dependent, they should be such that, from uniformly bounded sequences in these norms, it is possible to extract convergent subsequences in a weaker topology (induced by the space \(H\)).
We argue that these properties provide the right framework for stability. Although, in principle, the use of discrete norms is motivated by nonlinear theory, [21], [9], [22], in order to focus on ideas rather than on technical tools we start our study in this work with simple linear problems. To this end, we consider four different problems where [S1] and [S2] are relevant: two elliptic problems with distinct regularity properties, namely elliptic operators posed on convex and non-convex Lipschitz domains. In addition, we study linear parabolic problems and their time-discrete only version. The last example highlights that training is a key factor in algorithmic design, since it influences not only the accuracy, but crucially, the stability properties of the algorithm. In fact, we provide evidence that functionals related to time discrete training of the form (81), which fail to satisfy the stability criteria [S1] and [S2], produce approximations with unstable behaviour.
Section 3 is devoted to elliptic problems and Section 4 to parabolic. In Section 3.1 and Section 3.2 we consider the same elliptic operator but posed on convex and non-convex Lipschitz domains respectively. It is interesting to compare the corresponding stability results, Propositions 3 and 7 where in the second case the stability is in a weaker norm as expected. Similar considerations apply to the continuous formulation (without training) of the parabolic problem, Proposition 10. Here an interesting feature appears to be that a maximal regularity estimate is required for the parabolic problem. In the case of time-discrete training, Proposition 13, [S1] holds with an \(\ell-\) dependent norm. Again it is interesting to observe that a discrete maximal regularity estimate is required in the proof of Proposition 13. Although we do not use previous results, it is interesting to compare to [28], [31], [2].
Let us mention that for simplicity in the exposition we assume that the discrete energies are defined on spaces where homogeneous Dirichlet conditions are satisfied. This is done only to highlight the ideas presented herein without extra technical complications. It is clear that all results can be extended when these conditions are imposed weakly through the loss functional. It is interesting to note, however, that in certain cases the choice of the form of the boundary terms in the discrete functional might affect how strong the norm of the underlying space \(H\) in [S1], [S2] is, see Remark 4.
_Convergence_ - \(\liminf-\limsup\) _framework._
We show convergence of the discrete minimisers to the solutions of the underlying PDE under minimal regularity assumptions. For certain cases, see Theorem 5 for example, it is possible, by utilising the stability of the energies and the linearity of the problem, to show direct bounds for the errors and convergence. This is in particular doable in the absence of training. In the case of regularised functionals, or when time discrete training is considered, one has to use the liminf-limsup framework of De Giorgi, see Section 2.3.4 of [14], and e.g., [10], used in the \(\Gamma-\)convergence of functionals arising in non-linear PDEs, see Theorems 6, 9, (regularised functionals) and Theorem 14 (time-discrete training). These results show that stable functionals in the sense of [S1], [S2], yield neural network approximations converging to the weak solutions of the PDEs, under no extra assumptions. This analytical framework combined with the stability notion introduced above provides a consistent and flexible toolbox for analysing neural network approximations to PDEs. It can be extended to various other, possibly nonlinear, problems. Furthermore, it provides a clear connection to PDE well posedness and discrete stability when training is taking place.
_Previous works._
Previous works on the analysis of methods based on residual minimisation over neural network spaces for PDEs include [46], [3], [44], [45], [35], [25], [36]. In [46] convergence was established for smooth enough classical solutions of a class of nonlinear parabolic PDEs, without considering training of the functional. Convergence results, under assumptions on the discrete minimisers or the NN space, when Monte-Carlo training was considered, were derived in [44], [45], [25]. In addition,
in [45], continuous stability of certain linear operators is used in the analysis. The results of [3], [35], [36] were based on estimates where the bounds are dependent on the discrete minimisers and their derivatives. These bounds imply convergence only under the assumption that these functions are uniformly bounded in appropriate Sobolev norms. The results in [25] with deterministic training, are related, in the sense that they are applicable to NN spaces where by construction high-order derivatives are uniformly bounded in appropriate norms. Conceptually related is the recent work on Variational PINNs (the residuals are evaluated in a weak-variational sense), [8], where the role of quadrature was proven crucial in the analysis of the method.
As mentioned, part of the analysis is based on \(\Gamma\)-convergence arguments. \(\Gamma\)-convergence is a very natural framework which is used in nonlinear energy minimisation. In [37]\(\Gamma\)-convergence was used in the analysis of deep Ritz methods without training. In the recent work [32], the \(\liminf-\limsup\,\) framework was used in general machine learning algorithms with probabilistic training to derive convergence results for global and local discrete minimisers. For recent applications to computational methods where the discrete energies are rather involved, see [5], [21], [9], [22]. It seems that these analytical tools coming from nonlinear PDEs provide very useful insight in the present neural network setting, while standard linear theory arguments are rarely applicable due to the nonlinear character of the spaces \(V_{\mathcal{N}}\).
## 3 Elliptic problems
We consider the problem
\[Lu=f \tag{20}\]
where \(u:\Omega\subset\mathbb{R}^{d}\to\mathbb{R}\), \(\Omega\) is an open, bounded set with Lipschitz boundary, \(f\in L^{2}(\Omega)\) and \(L\) the elliptic operator as in (2).
For smooth enough \(v\) now define the energy as follows
\[\mathcal{E}(v)=\int_{\Omega}|Lv-f|^{2}\,\mathrm{d}x+\int_{\partial\Omega}|v|^{2}\,\mathrm{d}S \tag{21}\]
Define now the linear space \(\mathcal{H}_{L}=\{v\in H^{1}(\Omega):\;Lv\in L^{2}(\Omega)\,\}.\) We consider now the minimisation problem:
\[\min_{u\in\mathcal{H}_{L}}\mathcal{E}(u)\,. \tag{22}\]
We show next that the (unique) solution of (22) is the weak solution of the PDE (20). The Euler-Lagrange equations for (22) are
\[\int_{\Omega}(Lu-f)\,Lv\,\,\mathrm{d}x+\int_{\partial\Omega}u\,v\,\mathrm{d}S=0\qquad\text{for all }v\in\mathcal{H}_{L}\,. \tag{23}\]
Let \(w\in H^{1}_{0}(\Omega)\) be given but arbitrary. Consider \(\overline{v}\) to be the solution of \(L\overline{v}=w\) with zero boundary conditions. Hence \(\overline{v}\in H^{1}_{0}(\Omega)\,.\) Then there holds,
\[\int_{\Omega}(Lu-f)\,w\,\,\mathrm{d}x+\int_{\partial\Omega}u\,\overline{v}\,\mathrm{d}S=\int_{\Omega}(Lu-f)\,w\,\,\mathrm{d}x=0\qquad\text{for all }w\in H^{1}_{0}(\Omega)\,. \tag{24}\]
Hence, \(Lu=f\) in the sense of distributions. We turn now to (23) and observe that \(\int_{\partial\Omega}u\,v\,\mathrm{d}S=0\) for all \(v\in\mathcal{H}_{L}\,.\) We conclude therefore that \(u=0\) on \(\partial\Omega\) and the claim is proved.
In this section we assume that if we select the networks appropriately, as we increase their complexity we may approximate any \(w\) in \(H^{2}\). To this end, we select a sequence of spaces
as follows: to each \(\ell\in\mathbb{N}\) we associate a DNN space \(V_{\mathcal{N}},\) which is denoted by \(V_{\ell},\) with the following property: For each \(w\in H_{0}^{2}(\Omega)\) there exists a \(w_{\ell}\in V_{\ell}\) such that,
\[\|w_{\ell}-w\|_{H^{2}(\Omega)}\leq\;\beta_{\ell}\left(w\right),\qquad\text{and }\;\beta_{\ell}\left(w\right)\to 0,\;\;\ell\to\infty\,. \tag{25}\]
If in addition, \(w\in H^{m}(\Omega)\cap H_{0}^{2}(\Omega)\) is in higher order Sobolev space then
\[\|w_{\ell}-w\|_{H^{2}(\Omega)}\leq\;\tilde{\beta}_{\ell}\,\|w\|_{H^{m}(\Omega )},\qquad\text{and }\;\tilde{\beta}_{\ell}\,\to 0,\;\;\ell\to\infty\,. \tag{26}\]
We do not need specific rates for \(\tilde{\beta}_{\ell}\,,\) but only the fact that the right-hand side of (26) has an explicit dependence on Sobolev norms of \(w.\) This assumption is a reasonable one in view of the available approximation results of neural network spaces, see for example [48], [13, 24, 43, 16, 7], and their references.
**Remark 2**: _Due to the higher regularity needed by the loss functional, one has to use smooth enough activation functions, such as \(\tanh\) or ReLU\({}^{k},\) that is, \(\sigma(y)=(\max\{0,y\})^{k},\) see e.g., [48], [15]. In general, the available results so far do not provide enough information on specific architectures required to achieve specific bounds with rates. Since the issue of the approximation properties is an important but independent problem, we have chosen to require minimal assumptions which can be used to prove convergence._
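For instance, a smooth-enough activation in the above sense can be sketched as follows (assuming PyTorch; the name `relu_k` is ours):

```python
import torch

def relu_k(y, k=3):
    # sigma(y) = (max{0, y})^k; for k = 3 this is C^2, so second-order
    # operators applied to u_theta are well defined.
    return torch.clamp(y, min=0.0) ** k
```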
### Convex domains
We first study the case where elliptic regularity bounds hold. Consider the sequence of energies
\[\mathcal{E}_{\ell}(u_{\ell})=\begin{cases}\mathcal{E}(u_{\ell})&,\;\;u_{\ell} \in V_{\ell}\cap H_{0}^{2}(\Omega)\\ +\infty&,\;\;\text{otherwise}\end{cases} \tag{27}\]
where \(V_{\ell}\) are chosen to satisfy (25).
#### 3.1.1 Stability
Now we have equicoercivity of \(\mathcal{E}_{\ell}\) as a corollary of the following result.
**Proposition 3** (Stability/Equi-coercivity): _Assume that \(\Omega\) is convex. Let \((u_{\ell})\) be a sequence of functions in \(V_{\ell}\) such that for a constant \(C>0\) independent of \(\ell\), it holds that_
\[\mathcal{E}_{\ell}(u_{\ell})\leq C. \tag{28}\]
_Then there exists a constant \(C_{1}>0\) such that_
\[||u_{\ell}||_{H^{2}(\Omega)}\leq C_{1}\,. \tag{29}\]
* Since \(\mathcal{E}_{\ell}(u_{\ell})\leq C,\) from the definition of \(\mathcal{E}_{\ell},\) it holds that \(\mathcal{E}(u_{\ell})\leq C.\) Writing \(u=u_{\ell}\) for brevity, we have that \[\mathcal{E}(u)=\int_{\Omega}(|Lu|^{2}-2f\;Lu+|f|^{2})\,\mathrm{d}x\leq C\,.\] (30) From Hölder's inequality we have, since \(f\in L^{2}(\Omega),\) \[||Lu||_{L^{2}(\Omega)}\leq C_{1}\,.\] (31)
Finally, since \(u|_{\partial\Omega}=0\), by the global elliptic regularity in \(H^{2}\) theorem (see Theorem 4, p.334 in [19]) we have
\[||u||_{H^{2}(\Omega)}\leq C_{2}(||Lu||_{L^{2}(\Omega)}+||u||_{L^{2}(\Omega)}) \tag{32}\]
where \(C_{2}\) depends only on \(\Omega\) and the coefficients of \(L\). Now since \(0\notin\Sigma\) (\(\Sigma\) is the spectrum of \(L\)), by Theorem 6 in [19] (p.324), we have
\[||u||_{L^{2}(\Omega)}\leq C_{3}||Lu||_{L^{2}(\Omega)} \tag{33}\]
where \(C_{3}\) depends only on \(\Omega\) and the coefficients of \(L\). Thus by (31), (32) and (33) we conclude
\[||u||_{H^{2}(\Omega)}\leq\tilde{C}\,. \tag{34}\]
\(\blacksquare\)
**Remark 4** (Boundary loss): _As mentioned in the introduction, in order to avoid the involved technical issues related to boundary conditions we have chosen to assume throughout that homogeneous Dirichlet conditions are satisfied. It is evident that our results remain valid when the boundary conditions are imposed weakly through the discrete loss functional, under appropriate technical modifications. In the case where the loss is_
\[\int_{\Omega}|Lv-f|^{2}\mathrm{d}x+\tau\,\int_{\partial\Omega}|v|^{2}\,\mathrm{ d}S \tag{35}\]
_the assumption \(\mathcal{E}_{\ell}(u_{\ell})\leq C\) provides control of \(\|v\|_{L^{2}(\partial\Omega)},\) which is not enough to guarantee that elliptic regularity estimates will hold up to the boundary, see e.g., [11], [42], for a detailed discussion of subtle issues related to the effect of the boundary conditions on the regularity. Since the choice of the loss is at our disposal during the algorithm design, it will be interesting to consider more balanced choices of the boundary loss, depending on the regularity of the boundary. This is beyond the scope of the present work. Alternatively, one might prefer to use the framework of [47] to satisfy the boundary conditions exactly. As noted in this paper, there are instances where the boundary loss of (35) is too weak to capture accurately the boundary behaviour of the approximations. The above observations are yet another indication that our stability framework is consistent and able to highlight possible imbalances at the algorithmic design level._
#### 3.1.2 Convergence of the minimisers
In this subsection, we discuss the convergence properties of the discrete minimisers. Given the regularity properties of the elliptic problem and in the absence of training, it is possible to show the following convergence result.
**Theorem 5** (Estimate in \(H^{2}\)): _Let \(\mathcal{E}_{\ell}\) be the energy functionals defined in (27) and let \((u_{\ell}),\,u_{\ell}\in V_{\ell},\) be a sequence of minimisers of \(\mathcal{E}_{\ell}.\) Then, if \(u\) is the exact solution of (1),_
\[\|u-u_{\ell}\|_{H^{2}(\Omega)}\leq C\,\inf_{\varphi\in V_{\ell}}\|u-\varphi\| _{H^{2}(\Omega)}\,. \tag{36}\]
_and furthermore,_
\[u_{\ell}\to u,\quad\text{in}\,\,\,\,H^{2}(\Omega)\,,\qquad\ell\to\infty\,. \tag{37}\]
**Proof** Let \(u\in H^{2}_{0}(\Omega)\) be the unique solution of (20). Consider the sequence of minimisers \((u_{\ell})\,.\)
Obviously,
\[\mathcal{E}_{\ell}(u_{\ell})\leq\mathcal{E}_{\ell}(v_{\ell}),\qquad\text{for all}\,\,v_{\ell}\in V_{\ell}\,.\]
Then,
\[\mathcal{E}_{\ell}(u_{\ell})=\int_{\Omega}|Lu_{\ell}-f|^{2}=\int_{\Omega}|L(u _{\ell}-u)|^{2}\geq\beta\|u-u_{\ell}\|_{H^{2}(\Omega)}^{2}, \tag{38}\]
by Proposition 3. On the other hand, \(\mathcal{E}_{\ell}(\varphi)=\|L(\varphi-u)\|_{L^{2}(\Omega)}^{2}\lesssim\|u-\varphi\|_{H^{2}(\Omega)}^{2}\) for any \(\varphi\in V_{\ell},\) and (36) follows, which proves the first claim. For the second, let \(u\in H^{2}_{0}(\Omega)\) be the unique solution of (20). Consider the sequence of minimisers \((u_{\ell})\,.\) Obviously,
\[\mathcal{E}_{\ell}(u_{\ell})\leq\mathcal{E}_{\ell}(v_{\ell}),\qquad\text{for all }v_{\ell}\in V_{\ell}\,.\]
In particular,
\[\mathcal{E}_{\ell}(u_{\ell})\leq\mathcal{E}_{\ell}(\tilde{u}_{\ell}),\]
where \(\tilde{u}_{\ell}\) is the recovery sequence corresponding to \(u\) by assumption (25). Then \(\tilde{u}_{\ell}\to u\) in \(H^{2}(\Omega)\) and
\[\mathcal{E}_{\ell}(\tilde{u}_{\ell})=||L\tilde{u}_{\ell}-f||^{2}_{L^{2}( \Omega)}=||L(\tilde{u}_{\ell}-u)||^{2}_{L^{2}(\Omega)}\,, \tag{39}\]
and the proof is complete in view of (38). \(\blacksquare\)
In the present smooth setting, the above proof hinges on the fact that \(\mathcal{E}(u)=0\) and on the linearity of the problem. In the case of the regularised functional
\[\mathcal{E}_{reg}(v)=\mathcal{E}(v)+\lambda\mathcal{J}(v)\,, \tag{40}\]
the proof is more involved. We need certain natural assumptions on the functional \(\mathcal{J}(v)\) to conclude the convergence. We shall work with convex functionals \(\mathcal{J}(v)\) that are \(\mathcal{H}\) consistent, i.e., they satisfy the properties:
\[(i) \mathcal{J}(v)\geq 0,\] \[(ii) \mathcal{J}(v)\leq\liminf_{\ell\to\infty}\mathcal{J}(v_{\ell}) \text{ for all weakly convergent sequences }v_{\ell}\rightharpoonup v\in\mathcal{H}, \tag{41}\] \[(iii) \mathcal{J}(w)=\lim_{\ell\to\infty}\mathcal{J}(w_{\ell})\text{ for all convergent sequences }w_{\ell}\to w\in\mathcal{H},\]
where \(\mathcal{H}\) is an appropriate Sobolev (sub)space which will be specified in each statement.
The proof of the next theorem is very similar to the (more complicated) proof of Theorem 9 and is omitted.
**Theorem 6** (Convergence for the regularised functional): _Let \(\mathcal{E}_{reg},\ \mathcal{E}_{reg,\ell}\) be the energy functionals defined in (40) and_
\[\mathcal{E}_{reg,\ell}(u_{\ell})=\begin{cases}\mathcal{E}_{reg}(u_{\ell}),&u_ {\ell}\in V_{\ell}\cap H^{2}_{0}(\Omega)\\ +\infty,&\text{otherwise}\,.\end{cases} \tag{42}\]
_Assume that the convex functional \(\mathcal{J}(v)\) is \(H^{2}(\Omega)\) consistent. Let \((u_{\ell}),\,u_{\ell}\in V_{\ell},\) be a sequence of minimisers of \(\mathcal{E}_{\ell}\), i.e._
\[\mathcal{E}_{reg,\ell}(u_{\ell})=\inf_{v_{\ell}\in V_{\ell}}\mathcal{E}_{reg, \ell}(v_{\ell})\,. \tag{43}\]
_Then,_
\[u_{\ell}\to u^{(\lambda)},\ \ \text{in}\ \ H^{1}(\Omega)\,,\qquad\ell\to\infty\,, \tag{44}\]
_where \(u^{(\lambda)}\) is the exact solution of the regularised problem_
\[\mathcal{E}_{reg}(u^{(\lambda)})=\min_{v\in H^{2}_{0}(\Omega)}\mathcal{E}_{ reg}(v)\,. \tag{45}\]
### Non-convex Lipschitz domains
In this subsection we discuss the case of non-convex Lipschitz domains, i.e., elliptic regularity bounds are no longer valid, and solutions might form singularities and in general do not belong to \(H^{2}(\Omega).\) We will see that the stability notion discussed in [S1] and [S2] is still relevant, but in a weaker topology than in the previous case.
In the analysis below we shall use the bilinear form associated to the elliptic operator \(L,\) denoted \(B:H^{1}_{0}(\Omega)\times H^{1}_{0}(\Omega)\rightarrow\mathbb{R}.\) In particular,
\[B(u,v)=\int_{\Omega}\Big{(}\sum_{i,j=1}^{n}a_{ij}u_{x_{i}}v_{x_{j}}+cuv\,\Big{)} \,\mathrm{d}x\,. \tag{46}\]
In the sequel, we shall assume that the coefficients \(a_{ij},\ c\) are smooth enough and satisfy the required positivity properties for our purposes. We have the following stability result:
**Proposition 7**: _The functional \(\mathcal{E}\) defined in (5) is stable with respect to the \(H^{1}\)-norm: Let \((u_{\ell})\) be a sequence of functions in \(V_{\ell}\) such that for a constant \(C>0\) independent of \(\ell,\) it holds that_
\[\mathcal{E}_{\ell}(u_{\ell})\leq C. \tag{47}\]
_Then there exists a constant \(C_{1}>0\) such that_
\[\|u_{\ell}\|_{H^{1}(\Omega)}\leq C_{1}\,. \tag{48}\]
* We show that, if \(\mathcal{E}_{\ell}(v)\leq C\) for some \(C>0,\) then \(\|v\|_{H^{1}(\Omega)}\leq\tilde{C}\) for some \(\tilde{C}>0.\) Indeed the positivity properties of the coefficients imply, for any \(v\in H^{1}_{0}(\Omega),\) \[\theta||\nabla v||^{2}_{L^{2}(\Omega)}\leq B(v,v)\,.\] (49) Also, if \(Lv\in L^{2}(\Omega)\,,\) \[B(v,v)=\int_{\Omega}vLv\,\mathrm{d}x\leq||v||_{L^{2}(\Omega)}||Lv||_{L^{2}(\Omega)}\,,\] (50) and the claim follows by applying the Hölder and Poincaré inequalities.
The convergence proof below relies on a crucial \(\limsup\) inequality, which is proved in Theorem 9 below.
**Theorem 8** (Convergence in \(H^{1}\)): _Let \(\mathcal{E}_{\ell}\) be the energy functionals defined in (27) and let \((u_{\ell}),\)\(u_{\ell}\in V_{\ell},\) be a sequence of minimisers of \(\mathcal{E}_{\ell}\), where \(\Omega\) is a possibly non-convex Lipschitz domain. Then, if \(u\) is the exact solution of (1),_
\[u_{\ell}\to u,\ \ \mbox{in}\ \ H^{1}(\Omega)\,,\qquad\ell\rightarrow\infty\,. \tag{51}\]
* Let \(u\in\mathcal{H}_{L}\) be the unique solution of (20). Consider the sequence of minimisers \((u_{\ell})\,.\) Obviously, \[\mathcal{E}_{\ell}(u_{\ell})\leq\mathcal{E}_{\ell}(v_{\ell}),\qquad\mbox{for all }v_{\ell}\in V_{\ell}\,.\] By the proof of Proposition 7, we have, for \(c_{0}>0,\) \[\mathcal{E}_{\ell}(u_{\ell})=\int_{\Omega}|Lu_{\ell}-f|^{2}=\int_{\Omega}|L(u_ {\ell}-u)|^{2}\geq c_{0}\|u-u_{\ell}\|^{2}_{H^{1}(\Omega)}\,.\] (52)
Furthermore, let \(\tilde{u}_{\ell}\) be the recovery sequence corresponding to \(u\) constructed in the proof of Theorem 9. Since
\[\mathcal{E}_{\ell}(u_{\ell})\leq\mathcal{E}_{\ell}(\tilde{u}_{\ell}),\]
and
\[\lim_{\ell\to\infty}\mathcal{E}_{\ell}(\tilde{u}_{\ell})=\mathcal{E}(u)=0,\]
the proof follows.
Next, we utilise the standard \(\liminf\)-\(\limsup\) framework of \(\Gamma\)-convergence, to prove that the sequence of discrete minimisers \((u_{\ell})\) of the regularised functionals converges to a global minimiser of the continuous regularised functional.
**Theorem 9** (Convergence of the regularised functionals ): _Let \(\mathcal{E}_{reg},\ \mathcal{E}_{reg,\ell}\) be the energy functionals defined in (40) and (42) respectively, where \(\Omega\) is a possibly non-convex Lipschitz domain. Assume that the convex functional \(\mathcal{J}(v)\) is \(\mathcal{H}_{L}\) consistent. Let \((u_{\ell}),\,u_{\ell}\in V_{\ell},\) be a sequence of minimisers of \(\mathcal{E}_{reg,\ell}.\) Then,_
\[u_{\ell}\to u^{(\lambda)},\ \ \ \mbox{in}\ \ L^{2}(\Omega),\ \ \ u_{\ell}\rightharpoonup u^{(\lambda)}\,,\ \ \ \mbox{in}\ \ H^{1}(\Omega),\qquad\ell\to\infty\,. \tag{53}\]
_where \(u^{(\lambda)}\) is the exact solution of the regularised problem_
\[\mathcal{E}_{reg}(u^{(\lambda)})=\min_{v\in\mathcal{H}_{L}(\Omega)}\mathcal{ E}_{reg}(v)\,. \tag{54}\]
**Proof** We start with a \(\liminf\) inequality: We assume there is a sequence, still denoted by \(u_{\ell}\), such that \(\mathcal{E}_{\ell}(u_{\ell})\leq C\) uniformly in \(\ell\), otherwise \(\mathcal{E}(u)\leq\liminf_{\ell\to\infty}\mathcal{E}_{\ell}(u_{\ell})=+\infty.\) The above stability result, Proposition 7, implies that \(||u_{\ell}||_{H^{1}(\Omega)}\) are uniformly bounded. Therefore, up to subsequences, there exists a \(u\in H^{1}(\Omega)\) such that \(u_{\ell}\rightharpoonup u\) in \(H^{1}\) and, by compact embedding, \(u_{\ell}\to u\) in \(L^{2}\). Also, from the energy bound we have that \(||Lu_{\ell}||_{L^{2}(\Omega)}\leq C\) and therefore \(Lu_{\ell}\rightharpoonup w\). Next we shall show that \(w=Lu\). Indeed, we have
\[\lim_{\ell\to\infty}\int_{\Omega}Lu_{\ell}\phi\,\mathrm{d}x=\int_{\Omega}w \phi\,\mathrm{d}x\ \ \,\ \forall\ \phi\in C_{0}^{\infty}(\Omega)\,, \tag{55}\]
and
\[\lim_{\ell\to\infty}\int_{\Omega}Lu_{\ell}\phi\,\mathrm{d}x=\lim_{\ell\to \infty}B(u_{\ell},\phi)=B(u,\phi),\ \ \ \mbox{since}\ \ u_{\ell}\rightharpoonup u\ \ \mbox{in}\ \ H^{1}(\Omega)\,, \tag{56}\]
hence,
\[B(u,\phi)=\int_{\Omega}w\phi\ \mathrm{d}x, \tag{57}\]
for all test functions. That is, \(Lu=w\) weakly. The convexity of \(\int_{\Omega}|Lu_{\ell}-f|^{2}\) implies weak lower semicontinuity, that is
\[\int_{\Omega}|Lv-f|^{2}\leq\liminf_{\ell\to\infty}\int_{\Omega}|Lv_{\ell}-f|^{2} \tag{58}\]
and since \(\mathcal{J}(v)\) is \(\mathcal{H}_{L}\) consistent, (ii) of (41) implies that \(\mathcal{E}_{reg}(v)\leq\liminf_{\ell\to\infty}\mathcal{E}_{reg,\ell}(v_{\ell})\) for each such sequence \((v_{\ell})\).
Let \(w\in\mathcal{H}_{L}\) be arbitrary; we will show the existence of a recovery sequence \((w_{\ell})\), such that \(\mathcal{E}(w)=\lim_{\ell\to\infty}\mathcal{E}_{\ell}(w_{\ell}).\) For each \(\delta>0\) we can select a smooth enough mollifier \(w_{\delta}\in H^{2}_{0}(\Omega)\cap C^{m}_{0}(\Omega),\,m>2,\) such that
\[\begin{split}&\|w-w_{\delta}\|_{H^{1}(\Omega)}+\|Lw-Lw_{\delta}\|_{L^{ 2}(\Omega)}\lesssim\delta\,,\ \ \ \mbox{and,}\\ &|w_{\delta}|_{H^{s}(\Omega)}\lesssim\frac{1}{\delta^{s}}|w|_{H^{ 1}(\Omega)}.\end{split} \tag{59}\]
By (26), for \(w_{\delta}\) there exists \(w_{\ell,\delta}\in V_{\ell}\) such that

\[\|w_{\ell,\delta}-w_{\delta}\|_{H^{2}(\Omega)}\leq\ \tilde{\beta}_{\ell}\,\|w_{\delta}\|_{H^{s}(\Omega)}\leq\ \tilde{\beta}_{\ell}\frac{1}{\delta^{s}}\,\|w\|_{H^{1}(\Omega)},\qquad\text{and}\ \ \tilde{\beta}_{\ell}\to 0,\ \ \ell\to\infty\,.\]
Choosing \(\delta\) appropriately as a function of \(\tilde{\beta}_{\ell}\) we can ensure that \(w_{\ell}=w_{\ell,\delta}\) satisfies,
\[||Lw_{\ell}-f||_{L^{2}(\Omega)}\to||Lw-f||_{L^{2}(\Omega)}\,, \tag{60}\]
since \(\mathcal{J}(v)\) is \(\mathcal{H}_{L}\) consistent, (iii) of (41) implies that \(\mathcal{J}(w_{\ell})\to\mathcal{J}(w)\) and hence
\[\mathcal{E}_{reg,\ell}(w_{\ell})\to\mathcal{E}_{reg}(w). \tag{61}\]
Next, let \(u^{(\lambda)}\in\mathcal{H}_{L}\) be the unique solution of (54) and consider the sequence of the discrete minimisers \((u_{\ell})\,.\) Clearly,
\[\mathcal{E}_{reg,\ell}(u_{\ell})\leq\mathcal{E}_{reg,\ell}(v_{\ell}),\qquad \text{for all}\ v_{\ell}\in V_{\ell}\,.\]
In particular, \(\mathcal{E}_{reg,\ell}(u_{\ell})\leq\mathcal{E}_{reg,\ell}(\tilde{u}_{\ell}),\) where \(\tilde{u}_{\ell}\) is the recovery sequence constructed above corresponding to \(w=u^{(\lambda)}.\) Thus the discrete energies are uniformly bounded. Then the stability result Proposition 7, implies that
\[\|u_{\ell}\|_{H^{1}(\Omega)}<C, \tag{62}\]
uniformly. By the Rellich-Kondrachov theorem, [19], and the \(\liminf\) argument above, there exists \(\tilde{u}\in\mathcal{H}_{L}\) such that \(u_{\ell}\to\tilde{u}\) in \(L^{2}(\Omega),\) up to a subsequence not re-labeled here. Next we show that \(\tilde{u}\) is a global minimiser of \(\mathcal{E}_{reg}.\) We combine the \(\liminf\) and \(\limsup\) inequalities as follows: Let \(w\in\mathcal{H}_{L},\) and \(w_{\ell}\in V_{\ell}\) be its recovery sequence such that \(||Lw_{\ell}-f||_{L^{2}(\Omega)}\to||Lw-f||_{L^{2}(\Omega)}\,.\) Therefore, the \(\liminf\) inequality and the fact that \(u_{\ell}\) are minimisers of the \(\mathcal{E}_{reg,\ell},\) imply that
\[\mathcal{E}_{reg}(\tilde{u})\leq\liminf_{\ell\to\infty}\mathcal{E}_{reg,\ell} (u_{\ell})\leq\limsup_{\ell\to\infty}\mathcal{E}_{reg,\ell}(u_{\ell})\leq \limsup_{\ell\to\infty}\mathcal{E}_{reg,\ell}(w_{\ell})=\mathcal{E}_{reg}(w), \tag{63}\]
for all \(w\in\mathcal{H}_{L}.\) Therefore \(\tilde{u}\) is a minimiser of \(\mathcal{E},\) and since \(u^{(\lambda)}\) is the unique global minimiser of \(\mathcal{E}_{reg}\) on \(\mathcal{H}_{L}\) we have that \(\tilde{u}=u^{(\lambda)}.\)
\(\blacksquare\)
## 4 Parabolic problems
Let as before \(\Omega\subset\mathbb{R}^{d},\) open, bounded and set \(\Omega_{T}=\Omega\times(0,T]\) for some fixed time \(T>0.\) We consider the parabolic problem
\[\begin{cases}u_{t}+Lu=f,&\text{in}\ \,\Omega_{T},\\ u=0,&\text{on}\ \,\partial\Omega\times(0,T],\\ u=u^{0},&\text{on}\ \,\Omega\times\{t=0\}\,.\end{cases} \tag{64}\]
In this section we discuss convergence properties of approximations of (64) obtained by minimisation of continuous and time-discrete energy functionals over appropriate sets of neural network functions. We shall assume that \(\Omega\) is a convex Lipschitz domain. The case of a non-convex domain can be treated with the appropriate modifications.
### Exact time integrals
We now define \(\mathcal{G}:H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}_{0}(\Omega))\to\overline{\mathbb{R}}\) as follows
\[\mathcal{G}(v)=\int_{0}^{T}\|v_{t}(t)+Lv(t)-f(t)\|_{L^{2}(\Omega)}^{2}\mathrm{d }t+|v(0)-u^{0}|_{H^{1}(\Omega)}^{2}\,. \tag{65}\]
We use the \(H^{1}(\Omega)\) seminorm for the initial condition, since then the regularity properties of the functional are better. Of course, one can use the \(L^{2}(\Omega)\) norm instead, with appropriate modifications in the proofs.
As before, we select a sequence of DNN spaces as follows: to each \(\ell\in\mathbb{N}\) we associate a DNN space \(W_{\mathcal{N}},\) which is denoted by \(W_{\ell},\) such that: For each \(w\in H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))\) there exists a \(w_{\ell}\in W_{\ell}\) such that,
\[\|w_{\ell}-w\|_{H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))}\leq\; \beta_{\ell}\left(w\right),\qquad\text{and}\;\;\beta_{\ell}\left(w\right)\to 0,\;\;\ell\to\infty\,. \tag{66}\]
If in addition, \(w\) has higher regularity, we assume that
\[\|w_{\ell}-w\|_{H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))}\leq\;\tilde{\beta}_{\ell}\,\|w\|_{H^{m}(0,T;H^{2}(\Omega))},\qquad\text{and}\;\;\tilde{\beta}_{\ell}\;\to 0,\;\;\ell\to\infty\,. \tag{67}\]
As in the elliptic case, we do not need specific rates for \(\tilde{\beta}_{\ell}\,,\) but only the fact that the right-hand side of (67) has an explicit dependence of Sobolev norms of \(w\). See [1] and its references where space-time approximation properties of neural network spaces are derived, see also [48], [15] and Remark 2.
In the sequel we consider the sequence of energies
\[\mathcal{G}_{\ell}(u_{\ell})=\begin{cases}\mathcal{G}(u_{\ell}),&u_{\ell}\in W _{\ell}\cap L^{2}(0,T;H^{1}_{0}(\Omega))\\ +\infty,&\text{otherwise}\end{cases} \tag{68}\]
where \(W_{\ell}\) is chosen as before.
#### 4.1.1 Equi-coercivity
Now we have equicoercivity of \(\mathcal{G}_{\ell}\) as a corollary of the following result.
**Proposition 10**: _The functional \(\mathcal{G}\) defined in (65) is equicoercive with respect to the \(L^{2}(0,T;H^{2}(\Omega))\cap H^{1}(0,T;L^{2}(\Omega))\)-norm. That is,_
\[\begin{array}{l}\text{If}\;\;\mathcal{G}(u)\leq C\;\text{ for some }\;C>0\;,\;\text{we have}\\ ||u||_{L^{2}(0,T;H^{2}(\Omega))}+||u^{\prime}||_{L^{2}(0,T;L^{2}(\Omega))}\leq C _{1}\end{array} \tag{69}\]
* As in the proof of equicoercivity for (5), we have \[\mathcal{G}(u)=\int_{\Omega_{T}}(|u_{t}+Lu|^{2}-2f\left(u_{t}+Lu\right)+|f|^{ 2})\leq C\] (70) Hence, one can conclude that since \(f\in L^{2}(\Omega_{T})\), \[||u_{t}+Lu||_{L^{2}(0,T;L^{2}(\Omega))}\leq C_{1}\] (71)
From regularity theory for parabolic equations (see for example Theorem 5, p.382 in [19]) we have
\[\begin{array}{c}\mbox{ess sup}_{0\leq t\leq T}||u(t)||_{H^{1}_{0}(\Omega)}+||u||_ {L^{2}(0,T;H^{2}(\Omega))}+||u^{\prime}||_{L^{2}(0,T;L^{2}(\Omega))}\\ \qquad\qquad\leq\tilde{C}(||u_{t}+Lu||_{L^{2}(0,T;L^{2}(\Omega))}+||u(0)||_{H^ {1}_{0}(\Omega)})\end{array} \tag{72}\]
the constant \(\tilde{C}\) depending only on \(\Omega,\ T\) and the coefficients of \(L\). Notice that (72) is a maximal parabolic regularity estimate in \(L^{2}(0,T;L^{2}(\Omega))\,.\) This completes the proof.
#### 4.1.2 Compactness and Convergence of Discrete Minimizers
As in the previous section, from standard arguments in the theory of \(\Gamma\)-convergence, we will prove that under some boundedness hypothesis on \(u_{\ell}\), the sequence of discrete minimizers \((u_{\ell})\) converges in \(L^{2}(0,T;H^{1}(\Omega))\) to a global minimiser of the continuous functional. We will also need the well-known Aubin-Lions theorem as an analog of the Rellich-Kondrachov theorem in the parabolic case, that can be found, for example, in [49].
**Theorem 11** (Aubin-Lions): _Let \(B_{0},B,B_{1}\) be three Banach spaces where \(B_{0},B_{1}\) are reflexive. Suppose that \(B_{0}\) is continuously imbedded into \(B\), which is also continuously imbedded into \(B_{1}\), and the imbedding from \(B_{0}\) into \(B\) is compact. For any given \(p_{0},p_{1}\) with \(1<p_{0},p_{1}<\infty\), let_
\[W=\{v\,|\,\,v\in L^{p_{0}}([0,T],B_{0})\,\ v_{t}\in L^{p_{1}}([0,T],B_{1})\}. \tag{73}\]
_Then the imbedding from \(W\) into \(L^{p_{0}}([0,T],B)\) is compact._
**Theorem 12** (Convergence of discrete minimisers): _Let \((u_{\ell})\subset W_{\ell}\) be a sequence of minimizers of \({\cal G}_{\ell}\), i.e.,_
\[{\cal G}_{\ell}(u_{\ell})=\inf_{w_{\ell}\in W_{\ell}}{\cal G}_{\ell}(w_{\ell}) \tag{74}\]
_then_
\[u_{\ell}\to u,\ \ \mbox{in}\ \ L^{2}(0,T;H^{1}(\Omega)) \tag{75}\]
_where \(u\) is the solution of (64)._
* We begin with the liminf inequality. We assume there is a sequence, still denoted by \(u_{\ell}\), such that \({\cal G}_{\ell}(u_{\ell})\leq C\) uniformly in \(\ell\), otherwise \({\cal G}(u)\leq\liminf_{\ell\to\infty}{\cal G}_{\ell}(u_{\ell})=+\infty.\) From Proposition 10, the uniform bound \({\cal G}_{\ell}(u_{\ell})\leq C\) implies that \(||u_{\ell}||_{L^{2}(0,T;H^{2}(\Omega))}+||u^{\prime}_{\ell}||_{L^{2}(0,T;L^{2 }(\Omega))}\) are uniformly bounded. This implies (we denote \(u^{\prime}:=u_{t}\)) \[\nabla^{2}u_{\ell}\rightharpoonup\nabla^{2}u\ \ \mbox{and}\ \ u^{\prime}_{\ell} \rightharpoonup u^{\prime}\ \ \mbox{weakly in}\ \ L^{2}(0,T;L^{2}(\Omega)),\] (76) and hence \(u^{\prime}_{\ell}+Lu_{\ell}-f\rightharpoonup u^{\prime}+Lu-f\,.\) The convexity of \(\int_{\Omega_{T}}|u^{\prime}_{\ell}+Lu_{\ell}-f|^{2}\) implies weak lower semicontinuity, that is \[\int_{\Omega_{T}}|u^{\prime}+Lu-f|^{2}\leq\liminf_{\ell\to\infty}\int_{\Omega_ {T}}|u^{\prime}_{\ell}+Lu_{\ell}-f|^{2}\] (77)
and therefore we conclude that \(\mathcal{G}(u)\leq\liminf_{\ell\to\infty}\mathcal{G}_{\ell}(u_{\ell})\).
Let \(w\in H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))\); by (66) there exists \(w_{\ell}\in W_{\ell}\) such that \(w_{\ell}\to w\) in \(H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))\). We can conclude that \(w_{\ell}^{\prime}+Lw_{\ell}\to w^{\prime}+Lw\) in \(L^{2}(0,T;L^{2}(\Omega)),\) and hence
\[||w_{\ell}^{\prime}+Lw_{\ell}-f||_{L^{2}(0,T;L^{2}(\Omega))}\to||w^{\prime}+Lw -f||_{L^{2}(0,T;L^{2}(\Omega))} \tag{78}\]
That is, \(\mathcal{G}_{\ell}(w_{\ell})\to\mathcal{G}(w)\). We argue as in Theorem 9 and conclude the proof. The only difference is that we utilise Theorem 11 instead of the Rellich-Kondrachov theorem, with \(B_{0}=H^{2}(\Omega)\), \(B=H^{1}(\Omega)\) and \(B_{1}=L^{2}(\Omega)\).
### Time discrete training
To apply a quadrature in the time integral only, we proceed as follows: Let \(0=t^{0}<t^{1}<\cdots<t^{N}=T\) define a partition of \([0,T]\) and \(I_{n}:=(t^{n-1},t^{n}]\), \(k_{n}:=t^{n}-t^{n-1}\). We shall denote by \(v^{m}(\cdot)\) and \(f^{m}(\cdot)\) the values \(v(\cdot,t^{m})\) and \(f(\cdot,t^{m})\). Then we define the discrete-in-time quadrature by
\[\sum_{n=1}^{N}\,k_{n}\,g(t^{n})\approx\int_{0}^{T}\,g(t)\,\mathrm{d}t. \tag{79}\]
We proceed to define the time-discrete version of the functional (5) as follows
\[\mathcal{G}_{IE,k}(v)=\sum_{n=1}^{N}\,k_{n}\,\int_{\Omega}\big{|}\frac{v^{n}-v^{n-1}}{k_{n}}+Lv^{n}-f^{n}\big{|}^{2}\,\mathrm{d}x+\,\|v^{0}-u^{0}\|_{H^{1}(\Omega)}^{2}\,. \tag{80}\]
We shall study the stability and convergence properties of the minimisers of the problems:
\[\min_{v\in V_{\mathcal{N}}}\mathcal{G}_{IE,k}(v)\,. \tag{81}\]
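For concreteness, the following minimal Python sketch (our own illustration, not the implementation used for the experiments; the uniform grid, the finite-difference Laplacian standing in for \(L\), and all parameter values are assumptions) evaluates the discrete energy (80) for the one-dimensional heat equation, with the trial function represented by its nodal values in place of a neural network.

```python
import numpy as np

# Evaluate the implicit-Euler quadrature energy (80) for u_t - u_xx = f on
# (0,1) with zero Dirichlet data.  Here L = -d^2/dx^2 is replaced by a
# second-order finite difference, and the trial function v is stored as
# nodal values V[n, j] ~ v(x_j, t^n); a network would supply these instead.
T, N, J = 1.0, 50, 64
k = T / N                                   # uniform time step k_n = k
x, h = np.linspace(0.0, 1.0, J + 2, retstep=True)
t = np.linspace(0.0, T, N + 1)
f = lambda x, t: 0.0 * x                    # source term (zero here)
u0 = np.sin(np.pi * x)                      # initial datum

def v_xx(v):                                # discrete second derivative
    w = np.zeros_like(v)
    w[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / h**2
    return w

def G_IE(V):
    """sum_n k ||(V^n - V^{n-1})/k + L V^n - f^n||^2 + ||V^0 - u0||_{H^1}^2."""
    total = 0.0
    for n in range(1, N + 1):
        res = (V[n] - V[n - 1]) / k - v_xx(V[n]) - f(x, t[n])
        total += k * h * np.sum(res[1:-1] ** 2)
    d = V[0] - u0                           # initial-condition penalty
    return total + h * np.sum(d**2) + h * np.sum((np.diff(d) / h) ** 2)

# the exact solution exp(-pi^2 t) sin(pi x) gives a small (pure
# discretisation) value, while a generic trial function gives a large one
V = np.exp(-np.pi**2 * t)[:, None] * np.sin(np.pi * x)[None, :]
print(G_IE(V))
```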
Next we introduce the _time reconstruction_\(\widehat{U}\) of a time dependent function \(U\) to be the piecewise linear approximation of \(U\) defined by linearly interpolating between the nodal values \(U^{n-1}\) and \(U^{n}\):
\[\widehat{U}(t):=\ell_{0}^{n}(t)U^{n-1}+\ell_{1}^{n}(t)U^{n},\quad t\in I_{n}, \tag{82}\]
with \(\ell_{0}^{n}(t):=(t^{n}-t)/k_{n}\) and \(\ell_{1}^{n}(t):=(t-t^{n-1})/k_{n}\). This reconstruction of the discrete solution has proven useful in various instances; see [4], [17], [38] and, for higher-order versions, [34].
Correspondingly, the piecewise constant interpolant of \(U^{j}\) is denoted by \(\overline{U},\)
\[\overline{U}(t):=U^{n},\quad t\in I_{n}\,. \tag{83}\]
The discrete energy \(\mathcal{G}_{IE,k}\) can now be written as follows
\[\mathcal{G}_{IE,k}(U)= \|\widehat{U}_{t}+L\overline{U}-\overline{f}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\,\|\widehat{U}^{0}-u^{0}\|_{H^{1}(\Omega)}^{2} \tag{84}\] \[= \int_{0}^{T}\|\widehat{U}_{t}+L\overline{U}-\overline{f}\|_{L^{2}(\Omega)}^{2}\,\mathrm{d}t+\,\|\widehat{U}^{0}-u^{0}\|_{H^{1}(\Omega)}^{2}\,.\]
#### 4.2.1 Stability-Equi-coercivity
Now we have equicoercivity of \({\cal G}_{IE,k}\) as a corollary of the following result.
**Proposition 13**: _The functional \({\cal G}_{IE,k}\) defined in (84) is equicoercive with respect to \(\widehat{U},\overline{U}\). That is,_
\[\begin{array}{l}\mbox{If }\ {\cal G}_{IE,k}(U)\leq C\ \mbox{ for some }\ C>0\,\mbox{ we have}\\ \|\overline{U}\|_{L^{2}(0,T;H^{2}(\Omega))}+\|\widehat{U}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}\leq C_{1}\,.\end{array} \tag{85}\]
* As in the proof of equicoercivity for (5), we have \[\int_{\Omega_{T}}(|\widehat{U}_{t}+L\overline{U}|^{2}-2\overline{f}\,(\widehat{U}_{t}+L\overline{U})+|\overline{f}|^{2})\leq C\,.\] (86) Thus, since \(f\in L^{2}(\Omega_{T})\), we can conclude the uniform bound \[\|\widehat{U}_{t}+L\overline{U}\|_{L^{2}(0,T;L^{2}(\Omega))}\leq C_{1}\,.\] (87) We shall need a discrete maximal regularity estimate in the present Hilbert-space setting. To this end, we observe \[\|\widehat{U}_{t}+L\overline{U}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2} =\|\widehat{U}_{t}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\|L\overline{U}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+2\sum_{n=1}^{N}\,\int_{I_{n}}\,\left\langle\widehat{U}_{t},L\overline{U}\right\rangle\ dt\] \[=\|\widehat{U}_{t}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\|L\overline{U}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}\] \[\qquad+2\sum_{n=1}^{N}\,\int_{I_{n}}\,\left\langle\big{[}\frac{U^{n}-U^{n-1}}{k_{n}}\big{]},LU^{n}\right\rangle\ dt\] \[=\|\widehat{U}_{t}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\|L\overline{U}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}\] (88) \[\qquad+2\sum_{n=1}^{N}\,\left\langle\big{[}U^{n}-U^{n-1}\big{]},LU^{n}\right\rangle\] \[=\|\widehat{U}_{t}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\|L\overline{U}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\left\langle L\,U^{N},U^{N}\right\rangle\] \[\qquad+\sum_{n=1}^{N}\,\left\langle L\,\big{[}U^{n}-U^{n-1}\big{]},U^{n}-U^{n-1}\right\rangle\,-\left\langle L\,U^{0},U^{0}\right\rangle.\] Since all terms except the last one, \(\left\langle L\,U^{0},U^{0}\right\rangle\), are nonnegative, we conclude \[\|\widehat{U}_{t}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\|L\overline{U}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}\leq\|\widehat{U}_{t}+L\overline{U}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\left\langle L\,U^{0},U^{0}\right\rangle,\] (89) and the proof is complete.
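The summation-by-parts argument behind (88)-(89) can be checked directly in finite dimensions. The following sketch (our own check; the matrix standing in for \(L\) and all sizes are assumptions) verifies the identity for an arbitrary discrete trajectory and a symmetric positive-definite \(L\).

```python
import numpy as np

# Check of the identity in (88): for symmetric positive-definite L and any
# nodal vectors U^0,...,U^N with uniform step k,
#   ||Uhat_t + L Ubar||^2 = ||Uhat_t||^2 + ||L Ubar||^2
#     + <L U^N, U^N> + sum_n <L dU^n, dU^n> - <L U^0, U^0>.
rng = np.random.default_rng(0)
J, N, k = 16, 10, 0.1
L = (np.diag(2.0 * np.ones(J)) - np.diag(np.ones(J - 1), 1)
     - np.diag(np.ones(J - 1), -1)) / 0.01          # SPD (scaled 1-D Laplacian)
U = rng.standard_normal((N + 1, J))                  # arbitrary trajectory

lhs = sum(k * np.sum(((U[n] - U[n - 1]) / k + L @ U[n]) ** 2)
          for n in range(1, N + 1))
rhs = (sum(k * np.sum(((U[n] - U[n - 1]) / k) ** 2) for n in range(1, N + 1))
       + sum(k * np.sum((L @ U[n]) ** 2) for n in range(1, N + 1))
       + U[N] @ L @ U[N]
       + sum((U[n] - U[n - 1]) @ L @ (U[n] - U[n - 1]) for n in range(1, N + 1))
       - U[0] @ L @ U[0])
print(np.isclose(lhs, rhs))                          # True
```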
#### 4.2.2 \(\liminf\) inequality
We assume there is a sequence, still denoted by \(U_{\ell}\), such that \({\cal G}_{IE,\ell}(U_{\ell})\leq C\) uniformly in \(\ell\); otherwise \(\liminf_{\ell\to\infty}{\cal G}_{IE,\ell}(U_{\ell})=+\infty\) and there is nothing to prove. From the discrete stability estimate, the uniform bound \({\cal G}_{IE,\ell}(U_{\ell})\leq C\) implies that \(\|\overline{U}_{\ell}\|_{L^{2}(0,T;H^{2}(\Omega))}+\|\widehat{U}_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}\) is uniformly bounded.
By weak compactness in \(L^{2}(0,T;L^{2}(\Omega))\), we have (up to a subsequence, not re-labeled) the existence of \(u_{(1)}\) and \(u_{(2)}\) such that
\[L\overline{U}_{\ell}\rightharpoonup Lu_{(1)}\ \ \text{and}\ \ \widehat{U}_{\ell}^{ \prime}\rightharpoonup u_{(2)}^{\prime}\ \ \text{weakly in}\ \ L^{2}(0,T;L^{2}(\Omega))\,. \tag{90}\]
Notice that, for any space-time test function \(\varphi\in C_{0}^{\infty}\), there holds (we have set \(\tilde{\varphi}^{n}:=\frac{1}{k_{n}}\int_{I_{n}}\varphi\ \,dt\))
\[-\int_{0}^{T}\langle\widehat{U}_{\ell},\varphi^{\prime}\rangle \mathrm{d}t=\int_{0}^{T}\langle\widehat{U}_{\ell}^{\prime},\varphi\rangle \mathrm{d}t \tag{91}\] \[=\sum_{n=1}^{N}\,\int_{I_{n}}\,\langle\big{[}\frac{U_{\ell}^{n}-U _{\ell}^{n-1}}{k_{n}}\big{]},\varphi\rangle\,\ dt=\sum_{n=1}^{N}\,\langle U_{ \ell}^{n},\tilde{\varphi}^{n}\rangle-\langle U_{\ell}^{n-1},\tilde{\varphi}^{ n}\rangle\] \[=\sum_{n=1}^{N}\,\langle U_{\ell}^{n},\varphi^{n-1}\rangle- \langle U_{\ell}^{n-1},\varphi^{n-1}\rangle+\sum_{n=1}^{N}\,\langle U_{\ell}^ {n},\big{[}\tilde{\varphi}^{n}-\varphi^{n-1}\big{]}\rangle-\langle U_{\ell}^{ n-1},\big{[}\tilde{\varphi}^{n}-\varphi^{n-1}\big{]}\rangle\] \[=-\sum_{n=1}^{N}\,\langle U_{\ell}^{n},\varphi^{n}-\varphi^{n-1} \rangle+\sum_{n=1}^{N}\,\langle\big{[}U_{\ell}^{n}-U_{\ell}^{n-1}\big{]},\big{[} \tilde{\varphi}^{n}-\varphi^{n-1}\big{]}\rangle\] \[=-\int_{0}^{T}\langle\overline{U}_{\ell},\varphi^{\prime}\rangle \mathrm{d}t+\sum_{n=1}^{N}\,\langle\big{[}U_{\ell}^{n}-U_{\ell}^{n-1}\big{]}, \big{[}\tilde{\varphi}^{n}-\varphi^{n-1}\big{]}\rangle\,.\]
By the uniform bound,
\[\|\widehat{U}_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}=\sum_{n=1}^{N} \,\frac{1}{k_{n}}\|U_{\ell}^{n}-U_{\ell}^{n-1}\|_{L^{2}(\Omega)}^{2}\leq C_{1} ^{2}\,,\]
and standard approximation properties for \(\tilde{\varphi}^{n}-\varphi^{n-1}\) we conclude that for any fixed test function,
\[\int_{0}^{T}\,\langle\widehat{U}_{\ell},\varphi^{\prime}\rangle \mathrm{d}t-\int_{0}^{T}\,\langle\overline{U}_{\ell},\varphi^{\prime}\rangle \mathrm{d}t\to 0,\qquad\ell\to\infty\,. \tag{92}\]
We can therefore conclude that \(u_{(1)}=u_{(2)}=:u\) and thus,
\[\widehat{U}_{\ell}^{\prime}+L\overline{U}_{\ell}-\overline{f}\rightharpoonup u ^{\prime}+Lu-f,\qquad\ell\to\infty\,. \tag{93}\]
The convexity of \(\int_{\Omega_{T}}|\cdot|^{2}\) implies weak lower semicontinuity, that is
\[\int_{\Omega_{T}}|u^{\prime}+Lu-f|^{2}\leq\liminf_{\ell\to\infty}\int_{ \Omega_{T}}|\widehat{U}_{\ell}^{\prime}+L\overline{U}_{\ell}-\overline{f}|^{2} \tag{94}\]
and therefore we conclude that \(\mathcal{G}(u)\leq\liminf_{\ell\to\infty}\mathcal{G}_{IE,\ell}(U_{\ell})\).
#### 4.2.3 \(\limsup\) inequality
Let \(w\in H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))\). We will now show the existence of a recovery sequence \((w_{\ell})\) such that \(w_{\ell}\to w\) and \(\mathcal{G}(w)=\lim_{\ell\to\infty}\mathcal{G}_{IE,\ell}(w_{\ell})\). Since \(C^{\infty}(0,T;H^{2}(\Omega))\) is dense in \(L^{2}(0,T;H^{2}(\Omega))\), we can select a sequence \((w_{\delta})\subset C^{\infty}(0,T;H^{2}(\Omega))\) with the properties
\[\begin{split}&\|w-w_{\delta}\|_{H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^ {2}(\Omega))}\lesssim\delta\,,\quad\text{and,}\\ &|w_{\delta}^{\prime}|_{H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^ {2}(\Omega))}\lesssim\frac{1}{\delta}|w|_{H^{1}(0,T;L^{2}(\Omega))\cap L^{2}( 0,T;H^{2}(\Omega))}\,.\end{split} \tag{95}\]
If \(w_{\delta,\ell}\in W_{\ell}\) is a neural network function satisfying (66), (67), we would like to show
\[||\widehat{w}_{\delta,\ell}^{\prime}+L\,\overline{w}_{\delta,\ell}-\overline{f} ||_{L^{2}(0,T;L^{2}(\Omega))}\rightarrow||w^{\prime}+Lw-f||_{L^{2}(0,T;L^{2}( \Omega))} \tag{96}\]
where \(\delta=\delta(\ell)\) is appropriately selected. Then,
\[\mathcal{G}_{IE,\ell}(w_{\delta,\ell})\rightarrow\mathcal{G}(w)\,. \tag{97}\]
To this end it suffices to consider the difference
\[\|\widehat{w}_{\delta,\ell}^{\prime}+L\,\overline{w}_{\delta,\ell}-w^{\prime} -Lw\|_{L^{2}(0,T;L^{2}(\Omega))}\,. \tag{98}\]
We have
\[\|\widehat{w}_{\delta,\ell}^{\prime}+L\,\overline{w}_{\delta,\ell }-w^{\prime}-Lw\|_{L^{2}(0,T;L^{2}(\Omega))}\leq \|\widehat{w}_{\delta,\ell}^{\prime}+L\,\overline{w}_{\delta,\ell }-\widehat{w}_{\delta}^{\prime}-L\,\overline{w}_{\delta}\|_{L^{2}(0,T;L^{2}( \Omega))} \tag{99}\] \[+\|\widehat{w}_{\delta}^{\prime}+L\,\overline{w}_{\delta}-w^{ \prime}-Lw\|_{L^{2}(0,T;L^{2}(\Omega))}\] \[=:A_{1}+A_{2}\,.\]
To estimate \(A_{1}\) we proceed as follows: Let \(\theta_{\ell}(t):=w_{\delta,\ell}(t)-w_{\delta}(t)\). Then,
\[\|\widehat{w}_{\delta,\ell}^{\prime}-\widehat{w}_{\delta}^{\prime }\|_{L^{2}(0,T;L^{2}(\Omega))}^{2} =\sum_{n=1}^{N}\,\int_{I_{n}}\,\big{\|}\,\frac{\theta_{\ell}^{n}- \theta_{\ell}^{n-1}}{k_{n}}\big{\|}_{L^{2}(\Omega)}^{2}\,\ dt \tag{100}\] \[=\sum_{n=1}^{N}\,\frac{1}{k_{n}}\big{\|}\theta_{\ell}^{n}-\theta_ {\ell}^{n-1}\big{\|}_{L^{2}(\Omega)}^{2}\] \[=\sum_{n=1}^{N}\,\frac{1}{k_{n}}\big{\|}\int_{I_{n}}\,\theta_{ \ell}^{\prime}(t)\,\ dt\big{\|}_{L^{2}(\Omega)}^{2}\] \[\leq\sum_{n=1}^{N}\,\frac{1}{k_{n}}\int_{I_{n}}\big{\|}\theta_{ \ell}^{\prime}(t)\big{\|}_{L^{2}(\Omega)}^{2}\,\ dt\,k_{n}\] \[=\|\theta_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}\,.\]
Similarly,
\[\|L\,\overline{w}_{\delta,\ell} -L\,\overline{w}_{\delta}\|_{L^{2}(0,T;L^{2}(\Omega))}=\Big{\{} \sum_{n=1}^{N}\,\int_{I_{n}}\,\big{\|}L\,\theta_{\ell}^{n}\big{\|}_{L^{2}( \Omega)}^{2}\,\ dt\Big{\}}^{1/2} \tag{101}\] \[\leq\Big{\{}\sum_{n=1}^{N}\,k_{n}\,\big{\|}L\,\theta_{\ell}^{n}- \frac{1}{k_{n}}\int_{I_{n}}L\,\theta_{\ell}(t)\mathrm{d}t\big{\|}_{L^{2}( \Omega)}^{2}\,\Big{\}}^{1/2}+\Big{\{}\sum_{n=1}^{N}\,\int_{I_{n}}\,\big{\|}L \,\theta_{\ell}(t)\big{\|}_{L^{2}(\Omega)}^{2}\,\ dt\Big{\}}^{1/2}\] \[=\Big{\{}\sum_{n=1}^{N}\,k_{n}\,\big{\|}L\,\theta_{\ell}^{n}- \frac{1}{k_{n}}\int_{I_{n}}L\,\theta_{\ell}(t)\mathrm{d}t\big{\|}_{L^{2}( \Omega)}^{2}\,\Big{\}}^{1/2}+\|L\,\theta_{\ell}\|_{L^{2}(0,T;L^{2}(\Omega))}\,.\]
It remains to estimate,
\[\begin{split}\Big{\{}\sum_{n=1}^{N}\,k_{n}\left\|L\,\theta_{\ell}^{n}-\frac{1}{k_{n}}\int_{I_{n}}L\,\theta_{\ell}(t)\,\mathrm{d}t\right\|_{L^{2}(\Omega)}^{2}\Big{\}}^{1/2}&=\Big{\{}\sum_{n=1}^{N}\,\frac{1}{k_{n}}\left\|\int_{I_{n}}\left[L\,\theta_{\ell}^{n}-L\,\theta_{\ell}(t)\right]\mathrm{d}t\right\|_{L^{2}(\Omega)}^{2}\Big{\}}^{1/2}\\ &\leq\Big{\{}\sum_{n=1}^{N}\,k_{n}\left[\,\int_{I_{n}}\,\left\|L\,\theta_{\ell}^{\prime}(s)\right\|_{L^{2}(\Omega)}\mathrm{d}s\right]^{2}\Big{\}}^{1/2}\\ &\leq k\,\|L\,\theta_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}\,.\end{split} \tag{102}\]
We conclude therefore that, with \(k=\max_{n}k_{n}\),
\[A_{1}\leq\|\theta_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}+\|L\,\theta_{\ell}\|_{L^{2}(0,T;L^{2}(\Omega))}+k\,\|L\,\theta_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}\,. \tag{103}\]
On the other hand, standard time interpolation estimates yield,
\[A_{2}\leq C\,k\left[\|w_{\delta}^{\prime\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}+\|L\,w_{\delta}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}\right]. \tag{104}\]
Hence, using (66), (67) and (95), we have
\[A_{1}+A_{2}\leq\beta_{\ell}(w_{\delta})+\frac{k}{\delta^{m+1}}\tilde{\beta}_{ \ell}\|w\|_{L^{2}(0,T;H^{2}(\Omega))}+C\frac{k}{\delta}\|w\|_{H^{1}(0,T;L^{2}( \Omega))\cap L^{2}(0,T;H^{2}(\Omega))}\,. \tag{105}\]
Therefore, we conclude that (96) holds upon selecting \(\delta=\delta(\ell,k)\) appropriately.
#### 4.2.4 Convergence of the minimisers
In this subsection, we conclude the proof that the sequence of discrete minimisers \((u_{\ell})\) converges in \(L^{2}(0,T;H^{1}(\Omega))\) to the minimiser of the continuous problem.
**Theorem 14** (Convergence): _Let \(\mathcal{G},\ \mathcal{G}_{IE,\ell}\) be the energy functionals defined in (65) and (80) respectively. Let \(u\) be the exact solution of (64) and let \((u_{\ell})\), \(u_{\ell}\in W_{\ell}\), be a sequence of minimisers of \(\mathcal{G}_{IE,\ell}\), i.e._
\[\mathcal{G}_{IE,\ell}(u_{\ell})=\inf_{v_{\ell}\in W_{\ell}}\mathcal{G}_{IE, \ell}(v_{\ell})\,. \tag{106}\]
_Then,_
\[\hat{u}_{\ell}\to u,\ \ \text{in}\ \ L^{2}(0,T;H^{1}(\Omega)), \tag{107}\]
_where \(\hat{u}_{\ell}\) is defined by (82)._
* **Proof** Let \(u\in L^{2}(0,T;H^{2}(\Omega))\cap H^{1}(0,T;L^{2}(\Omega))\) be the solution of (64). Consider the sequence of minimisers \((u_{\ell})\). Obviously, \[\mathcal{G}_{IE,\ell}(u_{\ell})\leq\mathcal{G}_{IE,\ell}(v_{\ell}),\qquad\text{for all }v_{\ell}\in W_{\ell}\,.\] In particular, \[\mathcal{G}_{IE,\ell}(u_{\ell})\leq\mathcal{G}_{IE,\ell}(\tilde{u}_{\ell}),\]
where \(\tilde{u}_{\ell}\) is the recovery sequence \(w_{\delta,\ell}\) corresponding to \(w=u\) constructed above. Hence, we conclude that the sequence \(\mathcal{G}_{IE,\ell}(u_{\ell})\) is uniformly bounded. The stability-equi-coercivity of the discrete functional, see Proposition 13, implies that
\[\|\overline{u}_{\ell}\|_{L^{2}(0,T;H^{2}(\Omega))}+\|\widehat{u}_{\ell}\|_{L^{ 2}(0,T;H^{2}(\Omega))}+\|\widehat{u}_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}( \Omega))}\leq C\,. \tag{108}\]
The Aubin-Lions theorem ensures that there exists \(\tilde{u}\in L^{2}(0,T;H^{1}(\Omega))\) such that \(\widehat{u}_{\ell}\rightarrow\tilde{u}\) in \(L^{2}(0,T;H^{1}(\Omega))\), up to a subsequence not re-labeled. Furthermore, the previous analysis shows that \(L\tilde{u}\in L^{2}(0,T;L^{2}(\Omega))\). To prove that \(\tilde{u}\) is the minimiser of \(\mathcal{G}\), and hence \(\tilde{u}=u\), we combine the results of Sections 4.2.2 and 4.2.3: Let \(w\in H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))\). We showed the existence of a recovery sequence \((w_{\ell})\) such that \(w_{\ell}\to w\) and
\[\mathcal{G}(w)=\lim_{\ell\rightarrow\infty}\mathcal{G}_{IE,\ell}(w_{\ell}).\]
Therefore, the \(\liminf\) inequality and the fact that \(u_{\ell}\) are minimisers of the discrete problems imply that
\[\mathcal{G}(\tilde{u})\leq\liminf_{\ell\rightarrow\infty}\mathcal{G}_{IE,\ell }(u_{\ell})\leq\limsup_{\ell\rightarrow\infty}\mathcal{G}_{IE,\ell}(u_{\ell}) \leq\limsup_{\ell\rightarrow\infty}\mathcal{G}_{IE,\ell}(w_{\ell})=\mathcal{G }(w), \tag{109}\]
for all \(w\in H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega)).\) Therefore \(\tilde{u}\) is the minimiser of \(\mathcal{G},\) hence \(\tilde{u}=u\) and the entire sequence satisfies
\[\hat{u}_{\ell}\to u,\;\;\;\text{in}\;\;L^{2}(0,T;H^{1}(\Omega)).\]
Therefore the proof is complete.
#### 4.2.5 Explicit time discrete training
It is interesting to consider a seemingly similar (from the point of view of quadrature and approximation) discrete functional:
\[\mathcal{G}_{EE,k}(v)=\sum_{n=1}^{N}\,k_{n}\,\int_{\Omega}\big{|}\frac{v^{n}-v^{n-1}}{k_{n}}+Lv^{n-1}-f^{n-1}\big{|}^{2}\;\mathrm{d}x+\,\|v^{0}-u^{0}\|_{H^{1}(\Omega)}^{2}\,, \tag{110}\]
and compare its properties to those of the functional \(\mathcal{G}_{IE,k}(v)\) and the corresponding \(V_{\mathcal{N}}\) minimisers. The functional (110) is related to an _explicit_ Euler discretisation in time, as opposed to the _implicit_ Euler discretisation in time for \(\mathcal{G}_{IE,k}(v)\). Clearly, in the discrete minimisation framework, both energies are fully implicit, since the evaluation of the minimisers involves the solution of global space-time problems. It is therefore rather interesting that these two energies result in completely different stability properties.
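The contrast has a familiar finite-difference counterpart, illustrated by the following minimal sketch (our own illustration, separate from the training experiments reported at the end of this subsection; all parameters are assumptions): forward Euler for the one-dimensional heat equation blows up once \(k\) exceeds the CFL threshold \(h^{2}/2\), while backward Euler, which underlies \(\mathcal{G}_{IE,k}\), remains stable for any \(k\).

```python
import numpy as np

# Forward vs backward Euler for u_t = u_xx on (0,1) with zero Dirichlet data.
J = 100
h = 1.0 / J
x = np.linspace(0.0, 1.0, J + 1)
u_fe = np.sin(np.pi * x)          # forward (explicit) Euler state
u_be = u_fe.copy()                # backward (implicit) Euler state

A = (np.diag(2.0 * np.ones(J - 1)) - np.diag(np.ones(J - 2), 1)
     - np.diag(np.ones(J - 2), -1)) / h**2           # -Laplacian, interior nodes
k = 0.51 * h**2                   # just above the CFL threshold h^2/2
M_be = np.linalg.inv(np.eye(J - 1) + k * A)          # dense solve, for brevity

for _ in range(2000):
    u_fe[1:-1] = u_fe[1:-1] - k * (A @ u_fe[1:-1])   # explicit step: unstable
    u_be[1:-1] = M_be @ u_be[1:-1]                   # implicit step: stable

print("forward Euler  max|u| =", np.abs(u_fe).max())  # huge (blow-up)
print("backward Euler max|u| =", np.abs(u_be).max())  # decays toward zero
```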
Let us first note that it does not appear possible to prove a discrete coercivity bound such as (85). Indeed, an argument similar to (88) is possible, but with the crucial difference that the second-to-last term of that relation will be negative instead of positive. This is a fundamental point, directly related to the (in)stability of the forward Euler method. Typically, for finite difference forward Euler schemes, one is required to assume a strong CFL condition of the form \(k\leq Ch^{2}\), where \(h\) is the spatial discretisation parameter, to preserve stability. It appears that a phenomenon of a similar nature is present in our case as well. Although we do not show stability bounds when spatial training is taking place, the numerical experiments show that the stability behaviour of the explicit training method deteriorates when we increase the number of spatial training points while keeping \(k\) constant. These stability considerations are verified by the numerical experiments we present below. Indeed, these computations provide convincing evidence that coercivity bounds similar to (85) are necessary for stable behaviour of the approximations. In the computations, we solve the one-dimensional heat equation with zero boundary conditions and two different initial values, plotted in black. All runs were performed using the package _DeepXDE_, [33], with random spatial training and constant time step. |
2301.09712 | Black holes as frozen stars: Regular interior geometry | We have proposed a model geometry for the interior of a regular black hole
mimicker, the frozen star, whose most startling feature is that each spherical
shell in its interior is a surface of infinite redshift. The geometry is a
solution of the Einstein equations which is sourced by an exotic matter with
maximally negative radial pressure. The frozen star geometry was previously
presented in singular coordinates for which $-g_{tt}$ and $g^{rr}$ vanish in
the bulk and connect smoothly to the Schwarzschild exterior. Additionally, the
geometry was mildly singular in the center of the star. Here, we present
regular coordinates for the entirety of the frozen star. Each zero in the
metric is replaced with a small, dimensionless parameter $\varepsilon$; the
same parameter in both $-g_{tt}$ and $g^{rr}$ so as to maintain maximally
negative radial pressure. We also regularize the geometry, energy density and
pressure in the center of the star in a smooth way. Our initial analysis uses
Schwarzschild-like coordinates and applies the Killing equations to show that
an infalling, point-like object will move very slowly, effectively sticking to
the surface of the star and never coming out. If one nevertheless follows the
trajectory of the object into the interior of the star, it moves along an
almost-radial trajectory until it comes within a small distance from the star's
center. Once there, if the object has any amount of angular momentum at all, it
will be reflected outwards by a potential barrier onto a different
almost-radial trajectory. Finally, using Kruskal-like coordinates, we consider
the causal structure of the regularized frozen star and discuss its
$\varepsilon\to 0$ limit, for which the geometry degenerates and becomes
effectively two dimensional. | Ram Brustein, A. J. M. Medved, Tom Shindelman, Tamar Simhon | 2023-01-23T20:37:52Z | http://arxiv.org/abs/2301.09712v1 | # Black holes as frozen stars: Regular interior geometry
###### Abstract
We have proposed a model geometry for the interior of a regular black hole mimicker, the frozen star, whose most startling feature is that each spherical shell in its interior is a surface of infinite redshift. The geometry is a solution of the Einstein equations which is sourced by an exotic matter with maximally negative radial pressure. The frozen star geometry was previously presented in singular coordinates for which \(-g_{tt}\) and \(g^{rr}\) vanish in the bulk and connect smoothly to the Schwarzschild exterior. Additionally, the geometry was mildly singular in the center of the star. Here, we present regular coordinates for the entirety of the frozen star. Each zero in the metric is replaced with a small, dimensionless parameter \(\varepsilon\); the same parameter in both \(-g_{tt}\) and \(g^{rr}\) so as to maintain maximally negative radial pressure. We also regularize the geometry, energy density and pressure in the center of the star in a smooth way. Our initial analysis uses Schwarzschild-like coordinates and applies the Killing equations to show that an infalling, point-like object will move very slowly, effectively sticking to the surface of the star and never coming out. If one nevertheless follows the trajectory of the object into the interior of the star, it moves along an almost-radial trajectory until it comes within a small distance from the star's center. Once there, if the object has any amount of angular momentum at all, it will be reflected outwards by a potential barrier onto a different almost-radial trajectory. Finally, using Kruskal-like coordinates, we consider the causal structure of the regularized frozen star and discuss its \(\varepsilon\to 0\) limit, for which the geometry degenerates and becomes effectively two dimensional.
## 1 Introduction
The term black hole (BH) has been universally adopted to describe the final state of matter after it gravitationally collapses and is meant to reflect the singular nature of its classical solution; something that had long been suspected [1, 2, 3, 4, 5, 6] but only proven in the classic works of Penrose and Hawking [7, 8]. On the other hand, given that quantum theory is expected to resolve all singularities, a more appropriate term for the final state might be a frozen star, as was first coined in [9]. This is because of the infinite time for gravitational collapse to transpire from the perspective of an external observer and because deviations away from the static Schwarzschild geometry decay exponentially fast on a scale that is set by the light-crossing time. In other words, the collapsing matter configuration can, for all practical purposes, be regarded as frozen in time. In this spirit, we have adopted the name "frozen star" for our own model of the final state of matter a long time after it collapsed.
As for quantum mechanics' role as the guardian of regularity, a common expectation is that quantum effects at the Planck scale will be sufficient for this purpose. Although this idea cannot be ruled out in general, there are strong indications to the contrary in the context of BH singularities. First, a seemingly necessary condition for evading the singularity theorems [7, 8] and the closely related "Buchdahl-like" bounds [3, 4, 5, 6] is that the geometry is sourced by matter having the most negative radial pressure that is permitted by causality, \(\ p_{r}=-\rho\), all the way out to the surface of the star [10]. This property was an essential ingredient in the black star model [11], the gravastar model [12, 13] and a hybrid of the two [14]. Furthermore, if one also
considers the emitted Hawking radiation from a regularized BH mimicker, what is found is an untenable violation of energy conservation when the scale of resolution is parametrically smaller than that of the Schwarzschild radius \(R_{S}\). Indeed, in this case, the emitted energy of Hawking particles will greatly exceed the original mass of the collapsing matter [15, 16]. The natural conclusion is that a regularized BH mimicker is required to have deviations from classical general relativity that extend throughout the object's interior. For a comprehensive list and extensive discussion on compact objects that are meant to mimic BHs, see [17].
One such BH mimicker, known as the collapsed polymer model, was proposed by two of the current authors [18] on the basis that the object's interior should be filled up with a maximally entropic fluid [19], a state that is best described by a Hagedorn phase of highly entropic stringy matter [20, 21, 22, 23, 24]. Utilizing, in particular, a collection of long, closed, interacting strings, we were able to replicate all known features of Schwarzschild BHs [25] and make a number of novel predictions about the non-equilibrium physics [26] that could possibly be tested via the observation of gravitational-wave emissions during binary-BH mergers [27, 28, 29, 30].
The problem with the polymer model is that its highly quantum interior cannot be described by a semiclassical metric. The way out of this conundrum is to identify a classical geometry that maintains many of the same characteristics as the polymer model [10] or, put differently, understand how the polymer BH would be viewed by someone who is ignorant about the microscopic nature of its interior or, more so, someone who is determined -- by hook or by crook -- to forgo quantum mechanics in her picture of
gravitational collapse [31].
The polymer's geometric proxy, the frozen star, was assumed initially to have the following prominent features in terms of the energy density \(\rho\), the radial pressure \(p_{r}\) and the transverse pressure \(p_{\perp}\):
1. It has maximally negative radial pressure, \(p_{r}=-\rho\), which implies a specific geometry, \(f(r)\equiv-g_{tt}=g^{rr}\).
2. It has vanishing transverse pressure, \(p_{\perp}=0\).
3. The interior metric, which is defined for \(r\leq R\) with \(R\simeq R_{S}\), has the same form as that of the Schwarzschild horizon, \(f(r)=0\) everywhere except for a thin layer at the outer surface [32] and a small region surrounding the center (see below). In spite of this, it is regular throughout the interior.
4. It is ultra-stable against perturbations [10; 32].
In previous papers, the condition \(f(r)=0\) has been strictly enforced in the bulk of the interior. However, this geometry has apparently singular coordinates, which makes it hard to deduce its physical consequences. The main objective of our current paper is to study a more accessible geometry by relaxing this condition, setting \(f(r<R)=1-v^{2}\) for \(v^{2}\lesssim 1\), with \(\varepsilon=1-v^{2}\ll 1\) as the small parameter in this model. Such a geometry has been referred to as a hedgehog compactification elsewhere in a cosmological context [33; 34]. Notice that we are not relaxing the condition \(-g_{tt}=g^{rr}\) (including in the outer layer and the central region), as it is this choice that ensures \(p_{r}=-\rho\). Also note that, as long as \(f(r)\) is a constant, the relaxing of (3) has no bearing on (2): \(p_{\perp}=0\) still holds.
The remainder of the paper proceeds as follows: First, we utilize the Schwarzschild-like (or hedgehog) coordinate system along with the Killing equations to show that, from an external perspective, an infalling particle would take a very long time, \(R/\varepsilon\gg R\), to re-emerge from the frozen star. Here, it is also shown that the gravitational potential forms an infinite angular-momentum barrier near the center of the frozen star at \(r\sim R\sqrt{\varepsilon}\) for any particle with non-zero angular momentum. The barrier implies that almost all of the infalling particles avoid the origin.
Next, in Section 5, we discuss the causal structure of the frozen star by introducing Kruskal-like coordinates, which cast the metric in a form that is manifestly regular even in the \(\varepsilon\to 0\) limit. This form is useful for better understanding this limiting case, as well as for understanding the corresponding Penrose diagrams.
As a prologue to discussing Section 4, let us first mention that the energy density and most curvature invariants formally diverge in the combined \(r\to 0\) and \(\varepsilon\to 0\) limits. The divergence is rather mild as the total mass in the central region is finite and small. Nevertheless, as a goal of this paper is to present a completely regular metric everywhere in space, we need to regularize the geometry in the region close to the center. The regularization procedure is summarized in Section 4 of the main text and closely follows our analysis in [32] for the outer layer. Technical details about the regularization are presented in an appendix.
The paper ends with a short comment about stability, a brief overview and the aforementioned Appendix.
### Conventions
We assume a spherically symmetric and static background spacetime with \(D=3+1\) spacetime dimensions, but similar results will persist for any \(D>3\). All fundamental constants besides Newton's constant \(G\) are set to unity throughout, except when included for clarity; \(8\pi G=1\) is used in Section 4 and the Appendix. A prime indicates a radial derivative.
## 2 The interior geometry
Let us begin with a review and then present the regularized version of the interior geometry of the frozen star. For now, we will ignore the thin layer near the outer surface, which must be modified to ensure that the solution can be matched smoothly to the Schwarzschild exterior [32], and also the small region near the center, which must be regularized to ensure that all densities remain finite (see Section 4).
A static and spherically symmetric line element is assumed,
\[ds^{2}\ =\ -f(r)dt^{2}+\frac{1}{\tilde{f}(r)}dr^{2}+r^{2}(d\theta^{2}+sin^{2} \theta d\phi^{2}). \tag{1}\]
It is further assumed that the radial pressure is maximally negative, \(p_{r}=-\rho\), the transverse components \(p_{\perp}\) are initially unspecified and all of the off-diagonal components are vanishing. Under these conditions, Einstein's
equations reduce to
\[\left(r\widetilde{f}\right)^{\prime} = 1-8\pi G\rho r^{2}\;, \tag{2}\] \[\left(rf\right)^{\prime\prime} = 16\pi Grp_{\perp}\;. \tag{3}\]
where \(\;f=\widetilde{f}\;\) due to the maximally negative pressure. This can all be combined into a single equation of the form
\[\left(\rho r^{2}\right)^{\prime}\;=\;-2rp_{\perp}\;, \tag{4}\]
which also follows from the stress-tensor conservation equation.
Next, defining the mass function,
\[m(r)\;=\;4\pi G\int\limits_{0}^{r}dx\,x^{2}\rho(x)\;\;\;{\rm for}\;\;\;r\leq R\;, \tag{5}\]
we find that
\[f(r)\;=\;\widetilde{f}(r)\;=\;1-\frac{2Gm(r)}{r}\;. \tag{6}\]
The functional form of \(m(r)\) determines the geometry. For example, if \(m\) is chosen so that \(\rho\) is constant, the result is the gravastar model.
The frozen star corresponds to the choice
\[m(r)=r/2G \tag{7}\]
throughout the interior. This in turn implies that \(\;f=0\;\) and the matter
densities take the forms
\[8\pi G\rho = \frac{1-(rf)^{\prime}}{r^{2}}\ =\ \frac{1}{r^{2}}\, \tag{8}\] \[8\pi Gp_{r} = -\frac{1-(rf)^{\prime}}{r^{2}}\ =\ -\frac{1}{r^{2}}\,\] (9) \[8\pi Gp_{\perp} = \frac{(rf)^{\prime\prime}}{2r}\ =\ 0. \tag{10}\]
The radius of the star \(R\) in this case is exactly the Schwarzschild radius \(R_{S}=2GM=2Gm(R)\) and the exterior geometry is exactly the Schwarzschild geometry. 1
Footnote 1: Recall that we are ignoring the thin region near the surface of the star where its density decreases smoothly to zero to match the Schwarzschild exterior.
In this paper, we follow a somewhat different route and rather set \(2Gm(r)=rv^{2}\), so that \(f=1-\upsilon^{2}=\varepsilon\ll 1\). Then, in addition to having a regular geometry almost everywhere in the bulk 2, the coordinates are also regular.
Footnote 2: The exception is a small region near the center, which will be discussed separately.
The line element inside the star is then the following:
\[ds^{2}\ =\ -\varepsilon dt^{2}+\frac{1}{\varepsilon}dr^{2}+r^{2}d\Omega_{2}^{2}. \tag{11}\]
As already mentioned, this metric was first introduced in [33, 34] in a cosmological context. The quantity \(\upsilon\) has the dimensionality of a velocity and, indeed, we will find that this is the proper radial velocity, inside the frozen star, of massive objects that start at rest at infinity (see Eq. (23)).
The previous densities in Eqs. (8)-(10) are only slightly modified,
\[8\pi G\rho = \frac{\upsilon^{2}}{r^{2}}\;, \tag{12}\] \[8\pi Gp_{r} = -\frac{\upsilon^{2}}{r^{2}}\;,\] (13) \[8\pi Gp_{\perp} = 0\;, \tag{14}\]
and are recovered in the limit \(\upsilon^{2}\to 1\) or, equivalently, \(\varepsilon\to 0\;\).
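These reductions are elementary, but they can also be verified symbolically. A minimal sketch (our own check, in units with \(8\pi G=1\)), applying the general expressions (8)-(10) to a constant metric function:

```python
import sympy as sp

# With f(r) = 1 - v^2 constant, Eqs. (8)-(10) should reduce to Eqs. (12)-(14):
# rho = v^2/r^2, p_r = -rho, p_perp = 0 (in units 8*pi*G = 1).
r, v = sp.symbols('r v', positive=True)
f = 1 - v**2                                   # constant metric function

rho    = (1 - sp.diff(r * f, r)) / r**2        # Eq. (8)
p_r    = -(1 - sp.diff(r * f, r)) / r**2       # Eq. (9)
p_perp = sp.diff(r * f, r, 2) / (2 * r)        # Eq. (10)

print(sp.simplify(rho - v**2 / r**2))          # 0
print(sp.simplify(p_r + rho))                  # 0  (maximally negative p_r)
print(sp.simplify(p_perp))                     # 0
```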
The radial position of the surface of the star \(R\) for a star of mass \(M\) shifts out by a parametrically small amount from its Schwarzschild value,
\[R\;=\;\frac{2GM}{\upsilon^{2}}\;=\;\frac{2GM}{1-\varepsilon}\;\approx\;2GM\left( 1+\varepsilon\right)\;. \tag{15}\]
Again, for \(r>R\), the geometry is exactly that of Schwarzschild for a star of mass \(M\).
Here, we do not discuss explicitly the transitional layer near the surface of the star, in which the density continuously decreases to zero to match the outer Schwarzschild geometry. We have verified that the relevant analysis in [32] can be extended in a straightforward manner to the current model. In both cases, the position of the outer surface depends on the width of the transitional layer and, for the current case, the outermost surface is shifted further out by a small amount from Eq. (15).
As observed in [34], the geometry (11) can be viewed as a spherically symmetric collection of straight, rigid, constant-tension, one-dimensional, radially pointing rods (or strings). If \(1/\alpha^{\prime}\) denotes the tension, the total mass of the strings inside a ball of radius \(r\) is given by
\(4\pi/\alpha^{\prime}\ r\), and so the mass function is indeed linear in \(r\). Comparing to Eq. (7), one finds that \(\alpha^{\prime}\simeq 8\pi G\). This interpretation of the geometry will also become clear from analyzing the trajectories of infalling objects, which comes up next. We find, to a very good approximation, that objects move on these radial strings in the bulk of the frozen star.
The just-discussed geometry is depicted in Fig. 1. As is clear from the left panel of the figure, the energy density is formally divergent at the center of the star where all the strings meet. But this divergence is less severe than it looks because the total mass in this region is small, \(m(r)=r/2G\). Nevertheless, a solution that smoothes out this divergence is presented in Section 4 and depicted in the right panel of Fig. 1. The smoothed-out solution can be thought of as allowing the strings to bend a little when they reach a certain small distance from the center, and so they do not all meet at \(r=0\).
Figure 1: The frozen star geometry: On the left, unregularized and, on the right, regularized.
## 3 The fate of infalling objects
Here, we are interested in characterizing the trajectories, both time-like and null, of objects after falling into the frozen star. We follow the standard textbook discussion to find the effective gravitational potential that the objects encounter and use it to understand how they move through the star. Our starting point is the temporal and azimuthal Killing equations.
The temporal Killing equation is
\[f\frac{dt}{d\tau}\;=\;{\cal E}\;, \tag{16}\]
where \({\cal E}\), being a conserved quantity, is equal to the asymptotic energy per unit mass. The azimuthal Killing equation is
\[r^{2}\sin^{2}\theta\frac{d\phi}{d\tau}\;=\;L\;, \tag{17}\]
such that \(L\), also a conserved quantity, is the asymptotic angular momentum per unit mass.
Figure 2: The gravitational potential of the frozen star for various values of asymptotic angular momenta per unit mass. The potentials are plotted for both null (left panel) and timelike trajectories (right panel).
One can use the Killing equations to rewrite the velocity-normalization equation for both light (\(k=0\)) and matter (\(k=1\)),
\[-k\;=\;-f\left(u^{t}\right)^{2}+f^{-1}\left(u^{r}\right)^{2}+r^{2}\left(u^{ \Omega}\right)^{2}\;, \tag{18}\]
where \(u^{i}\) is the 4-velocity component related to the \(i\)th coordinate. For null trajectories (\(k=0\)), \(u^{i}\) denotes \(dx^{i}/d\lambda\), with \(\lambda\) being an affine parameter along the trajectory. As in the Schwarzschild geometry, angular-momentum conservation allows one to consider purely equatorial trajectories.
For an equatorial trajectory, one can rearrange the previous expression into
\[\left(u^{r}\right)^{2} = f\left[f\left(u^{t}\right)^{2}-r^{2}\left(u^{\Omega}\right)^{2 }-k\right] \tag{19}\] \[= {\cal E}^{2}-f\left(\frac{L^{2}}{r^{2}}+k\right)\]
and introduce the gravitational potential,
\[\frac{1}{2}{\cal E}^{2}\;=\;\frac{1}{2}\left(u^{r}\right)^{2}+V\left(r\right)\;, \tag{20}\]
with
\[V\left(r\right)\;=\;\frac{1}{2}f\left(\frac{L^{2}}{r^{2}}+k\right)\;. \tag{21}\]
This potential is depicted in Fig. (2).
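A minimal numerical sketch of this potential (our own illustration; the values of \(M\), \(\varepsilon\) and \(L\) are hypothetical and chosen only for display) is the following:

```python
import numpy as np

# Effective potential, Eq. (21): V(r) = f(r) (L^2/r^2 + k) / 2, with f = eps
# inside the star and the Schwarzschild value outside (units G = 1).
M, eps = 1.0, 1e-3
R = 2.0 * M / (1.0 - eps)                     # surface of the star, Eq. (15)

def f(r):
    return np.where(r < R, eps, 1.0 - 2.0 * M / r)

def V(r, L, k):
    return 0.5 * f(r) * (L**2 / r**2 + k)

r = np.linspace(0.05, 10.0, 400)
for L in (1.0, 4.0, 8.0):                     # null case, k = 0
    inside = r[r < R]
    print(f"L = {L}: max V inside star = {V(inside, L, 0.0).max():.2e}")
# inside the star V ~ eps L^2 / (2 r^2): negligible except near r -> 0,
# where the centrifugal barrier grows and reflects the trajectory
```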
We first discuss in a qualitative manner the properties of trajectories in the frozen star spacetime and then consider some specific examples.
The trajectories outside a frozen star of mass \(M\) are, by design, identical
to the trajectories in a Schwarzschild geometry of the same mass. This means that trajectories with a large impact parameter do not enter the star, a photon sphere exists at \(r=3GM\) and the redshift factor increases towards the surface of the star.
Trajectories entering the star are of course different but, as argued shortly, the difference is quite subtle from the perspective of an outside observer. Inside the star, except very close to \(r=0\), the potential scales as \(V\sim f\simeq\varepsilon\ll 1\). It follows that the interior trajectories, whether null or timelike, are almost purely radial. The potential only becomes significant near the center of the star.
In both cases, the 3-velocities are suppressed inside the star by a factor of \(\varepsilon\ll 1\). It follows that, to a very good approximation, the light-crossing time is \(\Delta t\simeq 2R/\varepsilon\) and so extremely long. Hence, the asymptotic coordinate time for any object to move through the star is parametrically much larger than both the corresponding proper time and the Schwarzschild light-crossing time \(4MG/c^{3}\). To get an idea, the light-crossing time of a frozen star of solar mass, in the case that \(\varepsilon\sim\frac{1}{100}l_{P}/R_{S}\) (\(l_{P}\) being the Planck length), is \(\Delta t\sim 10^{32}\) seconds, much larger than the age of the Universe!
In fact, if one also includes quantum-mechanical considerations, the light-crossing time greatly exceeds the so-called scrambling time of \(R\ln S_{BH}\) (\(S_{BH}\) is the Bekenstein-Hawking entropy) [35, 36]. From an external observer's perspective, this is the time it takes for the object to lose its identity, as it becomes chaotically mixed with the other degrees of freedom inside the BH. This means that, from this perspective and for all practical purposes, an infalling object will adhere onto the surface of the star and never really fall in.
We now turn to discuss some explicit examples, starting with radial infall. For light ( \({\cal E}=1\), \(k=0\)), one obtains for the 4-velocity \(u\) and 3-velocity \(v\), respectively,
\[u^{r}\left(r<R\right) = \frac{dr}{d\tau}\ =1\;,\] \[v^{r}\left(r<R\right) = \frac{dr}{dt}\ =\ 1-v^{2}\;. \tag{22}\]
For a radially infalling, massive object that is asymptotically at rest at \(r\rightarrow\infty\) (_i.e._, those with \({\cal E}=1\)), one obtains for the 4-velocity and 3-velocity, respectively,
\[u^{r}\left(r<R\right) = \frac{dr}{d\tau}\ =\ \sqrt{1-f}\ =\ \left|\upsilon\right|\;, \tag{23}\] \[v^{r}\left(r<R\right) = \frac{dr}{dt}\ =\ \left(1-v^{2}\right)\left|\upsilon\right|\;. \tag{24}\]
On the other hand, for massive objects that do have some initial (asymptotic) radial 3-velocity \(\upsilon_{0}\), the corresponding results are
\[u^{r}\left(r<R\right)\ =\ \frac{dr}{d\tau}\ =\ \gamma\sqrt{\upsilon_{0}^{2}+ \left(1-\upsilon_{0}^{2}\right)\upsilon^{2}}\;, \tag{25}\]
\[v^{r}\left(r<R\right)\ =\ \frac{dr}{dt}\ =\ \left(1-\upsilon^{2}\right)\sqrt{ \upsilon_{0}^{2}+\left(1-\upsilon_{0}^{2}\right)\upsilon^{2}}\;, \tag{26}\]
where \(\gamma\) is the corresponding Lorentz factor. One can readily verify that the radial 3-velocity \(v^{r}\) cannot exceed the speed of light for any initial \(\upsilon_{0}\). Note that Eq. (25) follows from rewriting the energy-conservation equation (20)
at infinity as \(\ \frac{1}{2}\mathcal{E}^{2}=\frac{1}{2}\left(\upsilon_{0}\mathcal{E}\right)^{2}+ \frac{1}{2}\), solving it for \(\mathcal{E}\) and then substituting into Eq. (19) with \(\ L=0\) and \(\ k=1\). Equation (26) requires the standard conversion from proper to coordinate time.
The crossing time can be calculated for the two types of radial trajectories, but they both lead to approximately the same outcome. In terms of proper time,
\[\Delta\tau\ =\ \frac{2\left(2M/\upsilon^{2}\right)}{\sqrt{\mathcal{E}^{2}-kf}} \ =\ \frac{2R}{\sqrt{\mathcal{E}^{2}-\varepsilon k}}\ \simeq\ \frac{2R}{\mathcal{E}}\, \tag{27}\]
and, in terms of asymptotic coordinate time,
\[\Delta t\ =\ \frac{4M\mathcal{E}}{\upsilon^{2}\left(1-\upsilon^{2}\right) \sqrt{\mathcal{E}^{2}-k\left(1-\upsilon^{2}\right)}}\ =\ \frac{2R}{\varepsilon}\frac{1}{\sqrt{1-\varepsilon k/ \mathcal{E}^{2}}}\ \simeq\ \frac{2R}{\varepsilon}. \tag{28}\]
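For illustration, Eqs. (27) and (28) can be evaluated directly. In the following sketch (our own; the value of \(\varepsilon\) is hypothetical and chosen only to exhibit the hierarchy), the coordinate time exceeds the proper time by the expected factor of \(1/\varepsilon\):

```python
import numpy as np

# Crossing times for a radial timelike trajectory, Eqs. (27)-(28),
# in units G = c = 1 and with illustrative parameter values.
M, eps = 1.0, 1e-6          # star mass and eps = 1 - v^2
E, kk = 1.0, 1.0            # energy per unit mass; kk = 1 for matter
R = 2.0 * M / (1.0 - eps)   # surface radius, Eq. (15)

dtau = 2.0 * R / np.sqrt(E**2 - eps * kk)                 # Eq. (27)
dt   = (2.0 * R / eps) / np.sqrt(1.0 - eps * kk / E**2)   # Eq. (28)

print(f"proper crossing time     ~ {dtau:.3e}   (order R)")
print(f"coordinate crossing time ~ {dt:.3e}   (order R/eps)")
```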
We now discuss non-radial trajectories. Because of the \(\varepsilon\)-suppression of the \(L^{2}\) term in Eq. (19), all the trajectories are practically radial for any object until it reaches the proximity of its turning point at a parametrically small distance from the origin. The discussion is therefore, to some extent, moot because, after the frozen star has formed, light and matter will, for all practical purposes, never reach anywhere close to the center. The discussion may still have some relevance to the trajectories of matter and/or light that are already trapped inside the frozen star when it is formed and are extremely close to the center.
All objects with non-zero angular momentum (no matter how little) will be deflected by the centrally located potential barrier. The turning point \(r_{\mathit{TP}}\) for a given choice of conserved quantities can be worked out by setting the
left-hand side of Eq. (19) to zero,
\[r_{{}_{TP}}\ =\ L\sqrt{\frac{f}{{\cal E}^{2}-kf}}. \tag{29}\]
Moreover, since \(\ f=\varepsilon\ll 1\) and \(\ {\cal E}\gtrsim 1\),
\[r_{{}_{TP}}\ \approx\ \sqrt{\varepsilon}\frac{L}{{\cal E}}\, \tag{30}\]
where \(|L/{\cal E}|\) is identifiable as the impact parameter of the trajectory.
It is interesting to ask about the deflection angle in the current scenario. To address this, we start by using Eq. (17) to solve for \(\frac{d\phi}{d\tau}\) and Eq. (19) to solve for \(\ u^{r}=\frac{dr}{d\tau}\). We then divide the former by the latter.
Let us specifically discuss light deflection, so that \(\ {\cal E}=1\) and \(\ k=0\). In this case,
\[r_{{}_{TP}}\ =\ \sqrt{\varepsilon}L\, \tag{31}\]
Figure 3: Trajectory deflecting from the frozen star geometry. As in the Schwarzschild geometry, trajectories become radial as they approach the surface of the frozen star. They stay almost radial until they reach the core of the star and are then deflected into another almost-radial trajectory.
and then, on the equatorial plane,
\[\frac{d\phi}{dr}\;=\;\frac{L}{r^{2}\sqrt{1-\frac{r_{TP}^{2}}{r^{2}}}}\;. \tag{32}\]
The total angular deflection inside of the star is given by the integral
\[\Delta\phi = 2L\int_{r_{TP}}^{R}\frac{dr}{r^{2}\sqrt{1-\frac{r_{TP}^{2}}{r^{2 }}}}\;. \tag{33}\]
In terms of the dimensionless variable \(\;x=\frac{r}{r_{{}_{TP}}}\;\), this becomes
\[\Delta\phi\;=\;\frac{2}{\sqrt{\varepsilon}}\int_{1}^{R/r_{TP}}\frac{dx}{x\sqrt{x^{2}-1}}\;, \tag{34}\]
with the result that
\[\Delta\phi\;=\;\frac{\pi}{\sqrt{\varepsilon}}+\ldots\;, \tag{35}\]
where relative corrections of order \(r_{{}_{TP}}/R\ll 1\) were neglected. The resulting angular deflection is very large and mostly accumulated near the turning point of the trajectory. The photon spins around the core of the BH and emerges onto another almost-radial trajectory.
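The limiting value of the \(x\)-integral can be confirmed numerically. The following sketch (our own check) shows that the integral approaches \(\pi/2\) with a correction of order \(r_{TP}/R\), so that the total deflection indeed scales as \(1/\sqrt{\varepsilon}\):

```python
import numpy as np
from scipy.integrate import quad

# int_1^X dx / (x sqrt(x^2 - 1)) = arccos(1/X) -> pi/2 as X -> infinity.
for X in (10.0, 1e2, 1e4):                    # X plays the role of R/r_TP
    val, _ = quad(lambda x: 1.0 / (x * np.sqrt(x**2 - 1.0)), 1.0, X)
    print(X, val, np.pi / 2 - val)            # deficit ~ 1/X ~ r_TP/R

eps = 1e-6                                    # illustrative value
print("Delta phi ~", (2.0 / np.sqrt(eps)) * (np.pi / 2))   # cf. Eq. (35)
```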
## 4 Smoothing the core
We have now seen that the frozen star metric can be made regular throughout the spacetime while maintaining most of the essential features of the model. Nevertheless, even with \(\;f=|g_{tt}|=g^{rr}>0\;\), the density profile for the interior is \(\rho\sim\frac{1}{r^{2}}\), which diverges at \(\;r=0\;\). The divergence is rather mild, as the
mass in the core region is parametrically small. Nevertheless, our objective is to complete the frozen star model such that it is regular everywhere, and we accomplish this by applying a suitable regularization scheme at small values of \(r\). General considerations suggest that the regularization scale is some fundamental scale such as the string length, which is small but larger than the Planck length.
Our scheme will entail examining a sphere of radius \(2\eta\) centered around \(r=0\), where \(\eta\ll R\). The sphere consists of an inner sphere of radius \(\eta\), surrounded by a spherical shell of width \(\eta\). The metric function \(f(r)\) of the shell connects smoothly at \(r=2\eta\) to that of the bulk of the frozen star, \(f=\varepsilon\), _and_ smoothly at \(r=\eta\) to that of the inner sphere, which will be discussed later. The regularization prescription is not unique. We present here the simplest procedure that works, allowing us to verify that such a regularization does not change the physics of the frozen star and that it is consistent with all the fundamental constraints, such as the positivity of \(\rho\), the integrity of the null energy condition and so on.
We will assume the same form of line element as presented in Section 2,
\[ds^{2}\;=\;-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d \phi^{2})\;, \tag{36}\]
Figure 4: Smoothed core of the frozen star. The interior of the core \(r\leq\eta\) is denoted by \(I\), the transitional layer \(\eta\leq r\leq 2\eta\) by \(T\) and the frozen star bulk \(r\geq 2\eta\) by \(S\). The metric function \(f(r)\) is depicted by the thick black line. The value of \(f\) at the center is \(f(0)=1\) and it decreases to \(\varepsilon\) at \(r=2\eta\).
along with the same form of stress tensor, \(T^{\mu}_{\ \nu}=\text{diag}(-\rho,-\rho,p_{\perp},p_{\perp})\). Also as in Section 2, \(f=\varepsilon\ll 1\) is the metric function for the star interior, extending from the outer surface at about \(R=2M(1+\varepsilon)\) down to the surface \(r=2\eta\).
Let us write the metric function for the region of interest, \(\ 0<r<2\eta\\), as
\[f(B,\eta,r)\ =\ \begin{cases}f_{I}(B,\eta,r)\;,\quad r<\eta\;,\\ f_{T}(B,\eta,r)\;,\quad\eta<r<2\eta\;,\\ f_{S}\;,\quad r>2\eta\;,\end{cases} \tag{37}\]
where \(f_{S}\) is the metric function of the frozen star interior, \(f_{T}\) is the metric function of the transitional layer, \(f_{I}\) is the metric function within the inner sphere of radius \(\eta\) and \(B\) is the energy density at the center, \(\rho(r=0)=B\). The constant \(B\) then has dimensions of inverse length squared and is assumed to be positive. As shown in the Appendix, the self-consistency of our model requires the dimensionless combination \(B\eta^{2}\) to be a number of order unity, making it convenient to redefine \(B\eta^{2}\to B\).
In what follows, we will simplify notation by working in units with \(\ 8\pi G=1\\) and all other dimensional quantities will be in units of \(R\). Equivalently, we are setting \(\ R=1\\) and rescaling other quantities appropriately, so that the parameter \(\eta\) is a small constant number and \(r\) is similarly small in the region of interest.
It is useful to define the energy density and transverse pressure in the
same way as done in Eq. (37),
\[\rho(B,\eta,r)\ =\ \begin{cases}\rho_{I}(B,\eta,r)\,\quad r<\eta\,\\ \rho_{T}(B,\eta,r)\,\quad\eta<r<2\eta\,\\ \rho_{S}\,\quad r>2\eta\,\end{cases} \tag{38}\]
\[p_{\perp}(B,\eta,r)\ =\ \begin{cases}p_{I\perp}(B,\eta,r)\,\quad r<\eta\,\\ p_{T\perp}(B,\eta,r)\,\quad\eta<r<2\eta\,\\ p_{S\perp}\,\quad r>2\eta\.\end{cases} \tag{39}\]
We can then derive \(f_{I}\) by solving the Einstein equations and, in turn, \(f_{T}(B,\eta,r)\) by matching smoothly to \(f_{I}\) and to \(f_{S}\) at their respective connecting surfaces.
### Matching conditions and the function \(f_{I}\)
Continuing with our (non-unique) regularization procedure, we require that \(\rho\) decreases monotonically in the region \(\ 0<r<\eta\\) from its maximal value of \(B\) at the center. Let us then consider the function \(\ \rho_{I}(B,\eta,r)=-r+\frac{B}{\eta^{2}}\.\) Using this form and Einstein's equations, we can derive the metric function and its first two derivatives,
\[f_{I}(B,\eta,r)\ =\ 1-\frac{r^{2}}{3}\frac{B}{\eta^{2}}+\frac{r^{3}}{4}\, \tag{40}\]
\[f_{I}^{\prime}(B,\eta,r)\ =\ -\frac{2r}{3}\frac{B}{\eta^{2}}+\frac{3r^{2}}{4}\, \tag{41}\]
\[f_{I}^{\prime\prime}(B,\eta,r)\ =\ -\frac{2}{3}\frac{B}{\eta^{2}}+\frac{6r}{4}. \tag{42}\]
The last term in each of these equations is subleading, but we need to retain these terms to ensure the continuity of \(f\) and of the energy density.
Equations (37), (38) and (39) can now be rewritten as follows:
\[f(B,\eta,r)\ =\ \begin{cases} 1-\dfrac{r^{2}}{3}\dfrac{B}{\eta^{2}}+ \dfrac{r^{3}}{4}\;,\quad r<\eta\;,\\ f_{T}(B,\eta,r)\;,\quad\eta<r<2\eta\;,\\ \varepsilon\;,\quad r>2\eta\;,\end{cases} \tag{43}\]
\[\rho(B,\eta,r)\ =\ \begin{cases}-r+\dfrac{B}{\eta^{2}}\;,\quad r<\eta\;,\\ \rho_{T}(B,\eta,r)\;,\quad\eta<r<2\eta\;,\\ \dfrac{1-\varepsilon}{r^{2}}\;,\quad r>2\eta\;,\\ \end{cases} \tag{44}\]
\[p_{\perp}(B,\eta,r)\ =\ \begin{cases}\dfrac{3r}{2}-\dfrac{B}{\eta^{2}}\;,\quad r <\eta\;,\\ p_{T\perp}(B,\eta,r)\;,\quad\eta<r<2\eta\;,\\ 0\;,\quad r>2\eta\;.\end{cases} \tag{45}\]
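As a consistency check (our own, in units with \(8\pi G=1\)), one can verify symbolically that the metric function \(f_{I}\) of Eq. (40) reproduces the density and transverse pressure quoted above through the Einstein relations of Section 2:

```python
import sympy as sp

# f_I from Eq. (40) should yield rho_I = B/eta^2 - r and
# p_perp = 3r/2 - B/eta^2, via rho = (1 - (r f)')/r^2, p_perp = (r f)''/(2 r).
r, B, eta = sp.symbols('r B eta', positive=True)
f_I = 1 - (r**2 / 3) * B / eta**2 + r**3 / 4            # Eq. (40)

rho    = (1 - sp.diff(r * f_I, r)) / r**2
p_perp = sp.diff(r * f_I, r, 2) / (2 * r)

print(sp.simplify(rho - (B / eta**2 - r)))              # 0
print(sp.simplify(p_perp - (3 * r / 2 - B / eta**2)))   # 0
```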
It should be kept in mind that the validity of our model depends on \(B\) being of order unity. In fact, the allowable range of \(B\) values happens to be \(\frac{6}{5}<B\leq\frac{30}{19}\).
The form of the metric function \(f_{T}(B,\eta,r)\) is found by adopting a polynomial ansatz for it in terms of \((r-\eta)\). The order of the polynomial and, thus, the number of adjustable parameters in the ansatz is determined by the number of relevant matching conditions. Here, we are requiring that \(f(B,\eta,r)\) and its first two derivatives be continuous at both ends of the interpolating layer, \(r=\eta\) and \(r=2\eta\), which necessitates a fifth-order polynomial. These matching conditions and the restriction of \(B\) as discussed are enough to ensure that the energy density, pressure and their first and second derivatives are all continuous. Additionally, these conditions guarantee that \(f(B,\eta,r)\) is positive, \(\rho\) is positive and \(\rho^{\prime}\) is negative. Interestingly, they also result in a negative value of \(p_{\perp}\); however, the null energy condition is not violated.
The details of the matching procedure, the results and the detailed discussion of the constraints have been relegated to the Appendix.
## 5 Causal structure
For any finite \(\varepsilon\), the Penrose coordinates for a frozen star can be defined in the standard way. One starts with the coordinates
\[X(r,t) = -e^{-\frac{1}{2}[t-r_{*}(r)]}\;, \tag{46}\] \[Y(r,t) = -e^{-\frac{1}{2}[t+r_{*}(r)]}\;, \tag{47}\]
where \(r_{*}\) is the usual tortoise coordinate,
\[dr_{*}\ =\ \frac{dr}{f(r)}\;. \tag{48}\]
For \(\ r>R\), this is the standard Schwarzschild form, \(\ r_{*}=r+R_{S}\ln(r-R_{S})\). However, for \(\ r\leq R\) and so inside of the frozen star,
\[r_{*}\ =\ \frac{r}{\varepsilon}\;. \tag{49}\]
In the transitional region near the surface of the frozen star and in the
smoothed central region, the functional form of \(r_{*}\) changes, but the numerical values do not differ much. For the purpose of the discussion of the causal structure, we can ignore these differences.
The Penrose coordinates can now be defined as
\[U(r,t)\;=\;\arctan(X(r,t))\;, \tag{50}\] \[V(r,t)\;=\;\arctan(Y(r,t))\;. \tag{51}\]
The Penrose diagram of the frozen star is depicted in Fig. 5. For any finite \(\varepsilon\), it is similar to that of the whole of Minkowski space. The thick blue line marks the position of the surface of the star. For small enough \(\varepsilon\), the whole interior of the star is null.
The causal structure at \(\varepsilon=0\) is more subtle. To help visualize this rather unorthodox geometry, it is useful to recast the discussion in terms of a set of adapted Kruskal-like coordinates. To do this, we introduce the coordinate transformation
\[U_{0} =\;\exp\left[-\frac{r}{\sqrt{\varepsilon}}+\sqrt{\varepsilon}\ t \right]\;, \tag{52}\] \[V_{0} =\;\exp\left[-\frac{r}{\sqrt{\varepsilon}}-\sqrt{\varepsilon}\ t \right]\;, \tag{53}\]
which results in a Kruskal-like form of line element for the interior of the frozen star,
\[ds^{2}\;=\;\frac{dU_{0}dV_{0}}{U_{0}V_{0}}+r^{2}\left(U_{0},V_{0}\right)d\Omega^{2}\;. \tag{54}\]
Radially null light cones (\(ds^{2}=0\) at constant \(\theta,\;\phi\)) appear as horizontal
Figure 5: The Penrose diagram of a frozen star for finite \(\varepsilon\) (left) and for \(\varepsilon\to 0\) (right). The thick, blue line marks the position of the surface of the star. The diagram for \(\varepsilon=0\) is degenerate: all radial surfaces in the interior of the star collapse to a single null surface.
and vertical lines of constant \(U_{0},\ V_{0}\). The following ratio,
\[\frac{U_{0}}{V_{0}}\ =\ \exp\left[2\sqrt{\varepsilon}\ t\right]\, \tag{55}\]
reveals that trajectories of constant \(t\) appear in the form of straight lines. Similarly, the product \(U_{0}V_{0}\) shows that lines of constant \(r\) have the form
\[U_{0}V_{0}\ =\ \exp\left[-\frac{2r}{\sqrt{\varepsilon}}\right]. \tag{56}\]
In the limit \(\varepsilon\to 0\), the product \(U_{0}V_{0}\) vanishes unless \(r\lesssim\sqrt{\varepsilon}\) and the ratio \(U_{0}/V_{0}\) approaches 1 unless \(t\gtrsim 1/\sqrt{\varepsilon}\). It follows that, when \(\varepsilon=0\), the entire interior geometry is defined by the lines \(U_{0}=0\) and \(V_{0}=0\). This implies, in turn, that the \(\varepsilon\to 0\) limiting case is effectively 1+1 dimensional, as each 2-sphere collapses to a point. This is interesting in that the polymer model has the effective thermodynamics of a 1+1-dimensional radiative matter system.
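A quick numerical illustration of this degeneration (our own sketch; the chosen values of \(r\), \(t\) and \(\varepsilon\) are arbitrary):

```python
import numpy as np

# Eqs. (55)-(56): as eps -> 0, U0*V0 = exp(-2r/sqrt(eps)) collapses to zero
# for any fixed interior r > 0, while U0/V0 = exp(2 sqrt(eps) t) -> 1 for
# any fixed t, so the interior is squeezed onto the lines U0 = 0 and V0 = 0.
r, t = 0.5, 1.0                      # fixed interior point (units of R)
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, np.exp(-2.0 * r / np.sqrt(eps)), np.exp(2.0 * np.sqrt(eps) * t))
```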
## 6 Briefly on stability
One can repeat the analysis on stability in [32] to show that a choice of \(f=1-v^{2}\\) different from zero has no bearing on the ultra-stability of the frozen star, nor does the inclusion of the regularized core as discussed in Section 4 and the Appendix. As it turns out, the exact value of \(f\) has essentially no role in stabilizing the star against perturbations; this really falls under the purview of the equation-of-state condition \(\ \rho+p_{r}=0\\). In fact, repeating the analysis, one finds that the only difference between a vanishing
\(f\) and a constant \(f\) is that the perturbation of \(g_{tt}\) -- what we called \(H_{0}\) in [32] -- need not be set to zero _a priori_; it could just as well be set to a non-zero constant. However, any choice but zero would violate the equation of state over macroscopic time scales, meaning that zero is the uniquely correct choice after all. In short, the thawing out of a frozen star requires deviations from \(\ \rho+p_{r}=0\\) and thus from \(\ \widetilde{f}=f\).
As for the geometry of the regularized core, this can be incorporated into our stability analysis just as was done for the transitional layer at the outermost surface.
### Overview
We have furthered our investigations into a classical but regular model for a BH interior that is known as the frozen star. This classical version of our highly quantum polymer model has two prominent features: maximally negative pressure throughout and each spherical slice of the interior is a surface of infinite redshift. Here, we have relaxed the latter (but not the former) on the basis that the idealized situation is probably not physically realistic and that the metric is now regular throughout the spacetime. We used this softened picture to understand the fate of infalling matter and found that a particle with any amount of angular momentum will be reflected before reaching the center of the star. We described a regularization procedure at the core of the frozen star resulting in a completely regular metric. The technical details of the regularization procedure were presented in the Appendix. Meanwhile, Kruskal-like coordinates were introduced so as to provide us with a metric that remains regular even when the limiting case of slice-by-slice infinite
redshift is restored.
Now having a regular and classical metric to work with, we are well positioned to compare and contrast our model with that of a general-relativistic BH. This should soon be possible from the analysis of gravitational waves emerging from BH mergers, as out-of-equilibrium physics is the key to understanding how the "true" theory of gravity may distinguish itself from Einstein's. For further reading on this perspective, see [26, 27, 28, 29, 30].
## Acknowledgments
We thank Eran Palti for pointing out the necessity of smoothing the core of the frozen star. The research is supported by the German Research Foundation through a German-Israeli Project Cooperation (DIP) grant "Holography and the Swampland." The research of AJMM received support from an NRF Evaluation and Rating Grant 119411 and a Rhodes Discretionary Grant SD07/2022. AJMM thanks Ben Gurion University for their hospitality during his visit.
## Appendix A Regularizing the core
Here, we present the details of the calculation for determining \(f_{T}\), the metric function for the transitional layer in the core, and the allowed range of values for \(B=\rho(r=0)\). One should have already read Section 4 before proceeding any further.
As discussed in the main text, the continuity of the metric and its first two derivatives require six matching conditions for \(f_{T}\) (three at each of the two connecting surfaces). These conditions are
\[f_{T}(B,\eta,r=2\eta)\;=\;\varepsilon\;, \tag{57}\]
\[f_{T}(B,\eta,r=\eta)\;=\;f_{I}(B,r=\eta)\;, \tag{58}\]
\[f_{T}^{\prime}(B,\eta,r=2\eta)\;=\;0\;, \tag{59}\]
\[f_{T}^{\prime}(B,\eta,r=\eta)\;=\;f_{I}^{\prime}(B,r=\eta)\;, \tag{60}\]
\[f_{T}^{\prime\prime}(B,\eta,r=2\eta)\;=\;0\;, \tag{61}\]
\[f_{T}^{\prime\prime}(B,\eta,r=\eta)\;=\;f_{I}^{\prime\prime}(B,r=\eta)\;, \tag{62}\]
where \(f_{I}\) is the metric function for the innermost region of the core. Conditions (57)-(62) also ensure the continuity of the energy density, its first derivative and the transverse pressure. (It should be kept in mind that the radial pressure follows automatically as the negative of \(\rho\).)
Assuming that \(f_{T}(B,\eta,r)\) adopts the polynomial form (the alphabetically
ordered letters are yet-to-be-determined coefficients)
\[f_{T}(B,\eta,r)\ =\ a+b(r-\eta)+c(r-\eta)^{2}+d(r-\eta)^{3}+e(r-\eta)^{4}+g(r-\eta )^{5} \tag{63}\]
and using the matching conditions (57)-(62), one can find the coefficients of the polynomial and, thus, also derive \(\rho_{T}\) and \(p_{T\perp}\) by way of Einstein's equations.
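The solve itself is mechanical; the following SymPy sketch carries it out with the inner-region boundary data kept symbolic (\(f_{I}(\eta)\), \(f_{I}^{\prime}(\eta)\), \(f_{I}^{\prime\prime}(\eta)\) appear as the placeholders fI0, fI1, fI2, since \(f_{I}\) is specified in Section 4 rather than here).

```python
import sympy as sp

# Solve the six matching conditions (57)-(62) for the coefficients of the
# quintic ansatz (63). The inner-region boundary values f_I(eta), f_I'(eta)
# and f_I''(eta) are kept symbolic as fI0, fI1, fI2.
r, eta, eps = sp.symbols('r eta epsilon', positive=True)
a, b, c, d, e, g = sp.symbols('a b c d e g')
fI0, fI1, fI2 = sp.symbols('fI0 fI1 fI2')

fT = a + b*(r - eta) + c*(r - eta)**2 + d*(r - eta)**3 \
     + e*(r - eta)**4 + g*(r - eta)**5
conditions = [
    sp.Eq(fT.subs(r, 2*eta), eps),                   # (57)
    sp.Eq(fT.subs(r, eta), fI0),                     # (58)
    sp.Eq(sp.diff(fT, r).subs(r, 2*eta), 0),         # (59)
    sp.Eq(sp.diff(fT, r).subs(r, eta), fI1),         # (60)
    sp.Eq(sp.diff(fT, r, 2).subs(r, 2*eta), 0),      # (61)
    sp.Eq(sp.diff(fT, r, 2).subs(r, eta), fI2),      # (62)
]
coefficients = sp.solve(conditions, [a, b, c, d, e, g])
sp.pprint(coefficients)
```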
### Results for \(f\), \(\rho\) and \(p_{\perp}\) as functions of \(B,\eta,r\)
In this section of the Appendix, we present the expressions for the metric function, energy density and transverse pressure, respectively. Let us recall the conventions; namely, that \(8\pi G=1\), \(B\) really means the dimensionless product \(B\eta^{2}\) (which happens to be of order unity) and all other dimensional parameters are expressed in units of \(R=1\).
For simplicity, we have neglected subleading terms; meaning that all of the terms in a given equation are of the same order in the small parameters \(\varepsilon\), \(\eta\), \(r\) and \((r-\eta)\) (the last two being small specifically in the region of interest). The resulting expressions go as
\[f_{T}(B,\eta,r)\ =\ \frac{1}{12}\left(-4B+12\right)+\frac{(26B-36) \left(r-\eta\right)^{5}}{6\eta^{5}}+\frac{\left(-34B+45\right)\left(r-\eta \right)^{4}}{3\eta^{4}}\] \[+\frac{\left(100B-120\right)\left(r-\eta\right)^{3}}{12\eta^{3}} +\frac{1}{12}\left(-\frac{4B}{\eta^{2}}\right)\left(r-\eta\right)^{2}+\frac{ 1}{12}\left(-\frac{8B}{\eta}\right)\left(r-\eta\right), \tag{64}\]
\[r^{2}\rho_{T}(B,\eta,r)\ =\ \frac{1}{12}\left(4B-12\right)-\frac{\left(26B-36\right)\left(r-\eta\right)^{5}}{6\eta^{5}}-\frac{\left(-34B+45\right)\left(r-\eta\right)^{4}}{3\eta^{4}}\]
\[-\ \frac{(100B-120)\left(r-\eta\right)^{3}}{12\eta^{3}}-\frac{1}{12}\left(-\frac{4B}{\eta^{2}}\right)\left(r-\eta\right)^{2}-\frac{1}{12}\left(-\frac{8B}{\eta}\right)\left(r-\eta\right)\]
\[-\ r\bigg{(}\frac{1}{12}\left(-\frac{8B}{\eta}\right)+\frac{5\left(26B-36\right)\left(r-\eta\right)^{4}}{6\eta^{5}}+\frac{4\left(-34B+45\right)\left(r-\eta\right)^{3}}{3\eta^{4}}\]
\[+\ \frac{(100B-120)\left(r-\eta\right)^{2}}{4\eta^{3}}+\frac{1}{6}\left(-\frac{4B}{\eta^{2}}\right)\left(r-\eta\right)\bigg{)}+1\;, \tag{65}\]
\[2rp_{T\perp}(B,\eta,r)\ =\ r\Bigg{(}\frac{1}{6}\left(-\frac{4B}{\eta^{2}}\right)+\frac{10\left(26B-36\right)\left(r-\eta\right)^{3}}{3\eta^{5}}+\frac{4\left(-34B+45\right)\left(r-\eta\right)^{2}}{\eta^{4}}\]
\[+\ \frac{\left(100B-120\right)\left(r-\eta\right)}{2\eta^{3}}\Bigg{)}+2\bigg{(}\frac{1}{12}\left(-\frac{8B}{\eta}\right)+\frac{5\left(26B-36\right)\left(r-\eta\right)^{4}}{6\eta^{5}}\]
\[+\ \frac{4\left(-34B+45\right)\left(r-\eta\right)^{3}}{3\eta^{4}}+\frac{\left(100B-120\right)\left(r-\eta\right)^{2}}{4\eta^{3}}+\frac{1}{6}\left(-\frac{4B}{\eta^{2}}\right)\left(r-\eta\right)\bigg{)}\;. \tag{66}\]
One can see that these results do not depend on \(\varepsilon\) since terms involving it are subleading.
By applying some additional physical constraints, one can find a range of values for \(B\), which we do next.
### Non-negativity of \(f_{T}\)
As already seen in Section 4, the metric function is non-negative in the innermost region \(0<r<\eta\). We need to verify that the same is true for the transitional region \(\eta<r<2\eta\).
One can simplify Eq. (64) to obtain
\[f_{T}(B,\eta,r)\ =\ \frac{(r-2\eta)^{3}\left(3\eta^{2}\left(3B-4\right)+r^{2} \left(13B-18\right)-3\eta r\left(7B-9\right)\right)}{3\eta^{5}}. \tag{67}\]
The denominator is clearly positive. As for the numerator, the factor \((r-2\eta)^{3}\)
is negative for \(\eta<r<2\eta\). Thus, in order for \(f_{T}(B,\eta,r)\) to be positive, the other factor in the numerator must then be negative. This will happen for \(0<B\leq\frac{30}{19}\).
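Both claims are straightforward to verify symbolically. The SymPy sketch below checks that the truncated form (64) indeed equals the factored form (67), and that the second factor, evaluated at \(r=2\eta\), is proportional to \(19B-30\), which is where the upper bound \(B\leq\frac{30}{19}\) originates.

```python
import sympy as sp

# Verify that Eq. (64) is identical to the factored form (67), and that
# the quadratic factor at r = 2*eta is proportional to (19B - 30).
B, r, eta = sp.symbols('B r eta', positive=True)
f64 = (sp.Rational(1, 12)*(-4*B + 12)
       + (26*B - 36)*(r - eta)**5/(6*eta**5)
       + (-34*B + 45)*(r - eta)**4/(3*eta**4)
       + (100*B - 120)*(r - eta)**3/(12*eta**3)
       + sp.Rational(1, 12)*(-4*B/eta**2)*(r - eta)**2
       + sp.Rational(1, 12)*(-8*B/eta)*(r - eta))
quad = 3*eta**2*(3*B - 4) + r**2*(13*B - 18) - 3*eta*r*(7*B - 9)
f67 = (r - 2*eta)**3 * quad / (3*eta**5)

print(sp.simplify(f64 - f67))           # -> 0
print(sp.factor(quad.subs(r, 2*eta)))   # -> eta**2*(19*B - 30)
```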
### Positivity of \(\rho_{T}\)
Simplifying Eq. (65), we have
\[r^{2}\rho_{T}(B,\eta,r)\ =\ \frac{r^{5}\left(36-26B\right)+15\eta r^{4} \left(11B-15\right)-8\eta^{4}r\left(23B-30\right)}{\eta^{5}r^{2}}\] \[+\frac{\eta^{5}\left(24B-31\right)+10\eta^{3}r^{2}\left(41B-54 \right)+r^{3}\left(520\eta^{2}-388B\eta^{2}\right)}{\eta^{5}r^{2}}. \tag{68}\]
The denominator is clearly positive. As for the numerator, it is positive for \(0<B<2.53\).
### Negativity of \((r^{2}\rho_{T})^{\prime}\)
Differentiating Eq. (68) with respect to \(r\) and then simplifying, one can show that
\[(r^{2}\rho_{T})^{\prime}(B,\eta,r)\ =\ \frac{-48B\eta^{5}+30\eta r^{4}\left(11B-15\right)+r^{5}\left(108-78B\right)+62\eta^{5}}{\eta^{5}r^{3}}\]
\[+\ \frac{r^{3}\left(520\eta^{2}-388B\eta^{2}\right)+8\eta^{4}r\left(23B-30\right)}{\eta^{5}r^{3}}\;. \tag{69}\]
The denominator is positive, whereas the numerator is negative as long as \(\frac{6}{5}<B<1.84\) is satisfied.
Combining the results of Sections A.2, A.3 and A.4, we find that \(B\) is
restricted to the advertised range of values,
\[\frac{6}{5}\;<\;B\;\leq\;\frac{30}{19}\;. \tag{70}\]
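As a consistency check, the following numeric scan confirms that the conditions of Sections A.2-A.4 all hold simultaneously across the range (70). It exploits the fact that the sign conditions depend only on the ratio \(r/\eta\), since the relevant numerators are homogeneous of degree five in \(r\) and \(\eta\); the grid resolutions are arbitrary.

```python
import numpy as np

# Spot-check the combined bound (70): for 6/5 < B <= 30/19 and
# eta < r < 2*eta, Eq. (67) gives f_T >= 0, the numerator of Eq. (68)
# stays positive and the numerator of Eq. (69) stays negative.
eta = 1.0   # only r/eta matters, by homogeneity
r = np.linspace(1.001*eta, 1.999*eta, 500)
for B in np.linspace(6/5 + 1e-3, 30/19, 50):
    fT = (r - 2*eta)**3 * (3*eta**2*(3*B - 4) + r**2*(13*B - 18)
                           - 3*eta*r*(7*B - 9)) / (3*eta**5)
    num68 = (r**5*(36 - 26*B) + 15*eta*r**4*(11*B - 15)
             - 8*eta**4*r*(23*B - 30) + eta**5*(24*B - 31)
             + 10*eta**3*r**2*(41*B - 54) + r**3*(520 - 388*B)*eta**2)
    num69 = (-48*B*eta**5 + 30*eta*r**4*(11*B - 15) + r**5*(108 - 78*B)
             + 62*eta**5 + r**3*(520 - 388*B)*eta**2
             + 8*eta**4*r*(23*B - 30))
    assert fT.min() >= 0 and num68.min() > 0 and num69.max() < 0
print("f_T >= 0, rho_T > 0 and (r^2 rho_T)' < 0 throughout the range (70)")
```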
### Limited negativity of \(p_{\perp}\)
The transverse pressure \(p_{T\perp}(B,\eta,r)\) has to connect to a negative function at \(\;r=\eta\;\) and to \(0\) at \(\;r=2\eta\;\), necessitating a sign change. However, the transverse form of the null energy condition, \(\;\rho_{T}(B,\eta,r)+p_{T\perp}(B,\eta,r)\geq 0\;\), is never violated for the allowable range (70) of \(B\) values.
### Conditions for continuity of first and second derivative of \(\rho_{T}\) and \(p_{T\perp}\)
In this section, we will use the exact expressions of the energy density and transverse pressure, since the truncated forms in Eq. (65) and Eq. (66) will lead to an apparent discontinuity in their respective derivatives. To start, the first derivative of the energy density \(\rho(B,\eta,r)\) can be expressed as
\[\rho^{\prime}(B,\eta,r)\;=\;\begin{cases}-1\;,\quad r<\eta\;,\\ \rho^{\prime}_{T}(B,\eta,r)\;,\quad\eta<r<2\eta\;,\\ -\frac{2(1-\varepsilon)}{r^{3}}\;,\quad r>2\eta\;.\end{cases} \tag{71}\]
For the first derivative of the energy density to be continuous, it has to satisfy the matching conditions \(\rho^{\prime}_{T}(B,\eta,r\;=\;\eta)\;=\;-1\quad\text{and}\quad\rho^{\prime}_{T}(B,\eta,r\;=\;2\eta)\;=\;-\frac{2(1-\varepsilon)}{(2\eta)^{3}}\;\), both of which can be readily verified using our previous results for the metric function and
Einstein's equations.
The same basic procedure can be repeated for \(\rho^{\prime\prime}(B,\eta,r)\) and \(p^{\prime}_{\perp}(B,\eta,r)\),
\[\rho^{\prime\prime}(B,\eta,r)\ =\ \begin{cases}0\;,\quad r<\eta\;,\\ \rho^{\prime\prime}_{T}(B,\eta,r)\;,\quad\eta<r<2\eta\;,\\ \frac{6(1-\varepsilon)}{r^{4}}\;,\quad r>2\eta\;,\end{cases} \tag{72}\]
\[p^{\prime}_{\perp}(B,\eta,r)\ =\ \begin{cases}\frac{3}{2}\;,\quad r<\eta\;,\\ p^{\prime}_{T\perp}(B,\eta,r)\;,\quad\eta<r<2\eta\;,\\ 0\;,\quad r>2\eta\;.\end{cases} \tag{73}\]
The continuity of \(\rho^{\prime\prime}(B,\eta,r)\) and \(p^{\prime}_{\perp}(B,\eta,r)\) at \(r=\eta\) under the condition (70) forces us to fix \(B=\frac{1}{50}(60-60\varepsilon+57\eta^{3})\) and further restricts \(0<\eta<0.69\) and \(0<\varepsilon<\frac{19}{20}\eta^{3}\). However, continuity at \(r=2\eta\) rather requires that \(B=\frac{1}{76}(120-120\varepsilon+75\eta^{3})\) and \(\frac{5}{8}\eta^{3}<\varepsilon<\frac{1}{200}(48+125\eta^{3})\;.\)
Our conclusion is that satisfying this amount of continuity on both ends simultaneously necessitates the inclusion of two more matching conditions, \(f^{\prime\prime\prime}_{T}(B,\eta,r=\eta)=\frac{6}{4}\) and \(f^{\prime\prime\prime}_{T}(B,\eta,r=2\eta)=0\;;\) meaning that our polynomial ansatz (63) would have to be extended to one of seventh order.
Clearly, enforcing additional constraints on the continuity of even higher-order derivatives of the energy density and the transverse pressure would require a yet higher-order polynomial ansatz.
### Graphs
Here, we present graphs for the metric function, energy density, first derivative of the density and transverse pressure in the core of the frozen star and its vicinity, \(\ r<4\eta\), as a function of the radius \(r\) in units of \(R\). The values for \(B\), \(\eta\) and \(\varepsilon\) are meant as reasonable estimates but otherwise chosen arbitrarily.
Figure 6: The metric function (upper left), the energy density (lower left), the first derivative of the energy density (upper right) and the transverse pressure (lower right) in the smoothed core of the frozen star for \(B=1.3\), \(\eta=0.0001\) and \(\varepsilon=0.0001\). |
2302.06171 | An ANN Approach in Predicting Solar and Geophysical Indices from
Ionospheric TEC Over Indore | In this paper, preliminary results from the artificial neural network (ANN)
based model developed at IIT Indore has been presented. One year's hourly total
electron content (TEC) database has been created from the International
Reference Ionosphere (IRI) 2016 model. For the first time, a reverse problem
has been addressed, wherein the training has been performed for predicting the
three indices: 13-month running sunspot number, ionospheric index, and daily
solar radio flux also called targets to the network when hourly TEC values are
the inputs. The root mean square errors (RMSEs) of these targets have been
compared and minimized after several training of the dataset using different
sets of combinations. Unknown data fed to the network yielded 0.99%, 3.12%, and
0.90% errors for Rz12, IG12, and F10.7 radio flux, respectively, thus
signifying ~97% prediction accuracy of the model. | Sumanjit Chakraborty, Abhirup Datta | 2023-02-13T08:13:07Z | http://arxiv.org/abs/2302.06171v1 | # An ANN Approach in Predicting Solar and Geophysical Indices from Ionospheric TEC Over Indore
###### Abstract
In this paper, preliminary results from the artificial neural network (ANN)-based model developed at IIT Indore have been presented. A one-year hourly total electron content (TEC) database has been created from the International Reference Ionosphere (IRI)-2016 model. For the first time, a reverse problem has been addressed, wherein the training has been performed for predicting the three indices: 13-month running sunspot number, ionospheric index and daily solar radio flux, also called targets to the network, when hourly TEC values are the inputs. The root mean square errors (RMSEs) of these targets have been compared and minimized after several trainings of the dataset using different sets of combinations. Unknown data fed to the network yielded 0.99%, 3.12% and 0.90% errors for Rz12, IG12 and F10.7 radio flux, respectively, thus signifying \(\sim\)97% prediction accuracy of the model.
Keywords: Solar indices · Geophysical indices · Ionospheric TEC · IRI · Machine learning · ANN
## 1 Introduction
The Earth's ionosphere is formed as a consequence of ionization by solar radiation. It extends from about 60-1000 km above the Earth's surface. Since nearly 70% of the global ionization is concentrated in and around the \(\pm\)15\({}^{\circ}\) magnetic latitude crests due to the equatorial ionization anomaly (EIA) [1], it becomes essential to characterize the dynamism of the variable ionosphere over this region, where a sharp latitudinal gradient exists. The total electron content (TEC), which is the columnar number density of electrons expressed in TEC units (1 TEC unit = 10\({}^{16}\) electrons/m\({}^{2}\)), is a fundamental observable parameter [6] which plays an important role in the characterization of the ionosphere. The location chosen for the analysis, Indore (22.52\({}^{\circ}\) N, 75.92\({}^{\circ}\) E geographic), falls near the northern crest of the EIA and as a result is a suitable location to study the variability of the ionosphere. The complexity of the spatial and temporal variations of the ionosphere makes it difficult to characterize or model the ionosphere and accurately forecast its impact on global navigation satellite system (GNSS) signals. Therefore, a requirement arises
for the development of an artificial neural network (ANN)-based model that would be able to predict ionospheric behavior over the regions where physical data is unavailable.
ANNs, inspired by the biological neural networks that constitute animal brains, consist of collections of connected nodes known as artificial neurons. Each connection, similar to a synapse in a biological brain, can transmit a signal from one artificial neuron to another [8]. These connections between neurons are called edges. Artificial neurons and edges have weights that self-adjust as learning proceeds. These weights may increase or decrease the strength of the signal at a connection. The neurons have a threshold above which the signal is sent, and are aggregated into different layers which perform various transformations on their inputs. The signals travel from the input (first layer), traverse multiple layers (hidden layers) and arrive at the output (last layer) [4]. The ANN discussed in this paper has been developed by preparing a dataset from the empirical International Reference Ionosphere (IRI) model. The sources of data to this model are the dense global network of ionosondes, incoherent scatter radars, Alouette topside sounders and in situ instruments on board satellites. Inputs to this model are the date, latitude, longitude and topside electron boundary, while the outputs are electron temperature and density, ion temperature and composition, and the TEC from the 50 to 2000 km altitude range [7].
Studies have been made by several researchers [2, 9, 10] to predict TEC models of the ionosphere by auto-regressive methods. Studies have also been made [3, 5] in the development of ANN-based TEC models, where the solar and geophysical indices are fed as inputs to obtain predicted TEC as output, but for the first time, to the best of our knowledge, the TEC data has been fed as network inputs to obtain the solar and geophysical indices. This work thus addresses a reverse problem, wherein by having the knowledge of TEC variation, one could infer the indices that are vital in understanding space weather and ensure flawless service to GNSS users.
## 2 Methodology
A feed-forward network has been used, where the signal gets propagated from the input layer to the hidden layer and then to the output layer. The present model is generated by using a single hidden layer of 25 neurons. The model inputs are the hourly vertical TEC values over Indore obtained from the IRI-2016 Web model for the entire year of 2017, which was in the declining phase of solar cycle 24. The targets set for this model are the 13-month running mean of sunspot number (Rz12), the ionospheric index (IG12) and the F10.7 daily radio flux (sfu). Connections between the nodes are such that they represent the feeding of the output from one node to the other, multiplied by a weight. The weights given to the hidden layer are appropriately modified to obtain a relatively small prediction error between the targets and predicted indices. The optimized architecture for the network is obtained by trial and error, while the biases and weights are adjusted according to the Levenberg-Marquardt algorithm. The architecture of an ANN with an input layer, a single hidden layer and an output layer is depicted in Fig. 1.
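For illustration, a minimal Python sketch of such a network is given below, assuming 24 hourly TEC values per day as inputs and the three indices as outputs. Note that scikit-learn does not provide the Levenberg-Marquardt optimizer used in the paper, so 'lbfgs' stands in here (and 'tanh' is mathematically the tan-sigmoid); the random arrays merely fix the data shapes.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Sketch of the described network: 24 hourly TEC values in, the three
# indices (Rz12, IG12, F10.7) out, one hidden layer of 25 neurons.
X = np.random.rand(300, 24)   # 300 training days of hourly TEC (placeholder)
y = np.random.rand(300, 3)    # targets: Rz12, IG12, F10.7 (placeholder)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3)
net = MLPRegressor(hidden_layer_sizes=(25,), activation='tanh',
                   solver='lbfgs', max_iter=1000)
net.fit(X_tr, y_tr)
print(net.score(X_val, y_val))
```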
## 3 Results
For training the neural network, 300 days were randomly selected from the one-year database. The splitting of data for training, validation and testing was randomly selected as 70%, 15% and 15%, respectively. The activation function found to be suitable was the tan sigmoid, given by:
\[\tan\mathrm{sig}(x)=\frac{1-\mathrm{e}^{-2x}}{1+\mathrm{e}^{-2x}} \tag{1}\]
This activation function is used to introduce nonlinearity to the network. This helps the network to capture the complexity of the data and give accurate results. The error function at the end of one feed-forward process to check training performance was the mean squared error (MSE), given by the mean of the squared error, where the error is defined as the difference between the predictions and the targets. The idea is to minimize this error function by assigning suitable weights and biases at every step. The network was then trained several times by changing the number of neurons until the cost function reached a minimum for a network that would also perform well when subjected to unknown data. The remaining 65 days, which were unknown to the trained network, were used in order to check the model performance. Figure 2 shows the scatter plot of predictions and targets, with the 1:1 red line signifying an accuracy of 100%. The normalized RMSE values obtained are 0.0099, 0.0312 and 0.0090, translating to percentage errors of 0.99%, 3.12% and 0.90% for the indices Rz12, IG12 and F10.7 radio flux, respectively. These RMSE values are computed by using:
Figure 1: Typical ANN structure showing the input, hidden and output layers ([https://www.analyticsindiamag.com/artificial-neural-networks-101/](https://www.analyticsindiamag.com/artificial-neural-networks-101/))
\[\sqrt{\frac{1}{N}\sum\biggl{(}\frac{\mbox{targets}-\mbox{predictions}}{\mbox{targets}} \biggr{)}^{2}} \tag{2}\]
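In code, Eq. (2) amounts to the following short NumPy function (the target and prediction arrays are assumed to have matching shapes):

```python
import numpy as np

# Normalized RMSE of Eq. (2), used for the quoted 0.99%, 3.12% and 0.90%
# errors of Rz12, IG12 and F10.7, respectively.
def normalized_rmse(targets, predictions):
    return np.sqrt(np.mean(((targets - predictions) / targets) ** 2))
```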
## 4 Conclusions
The paper presents initial results from the ANN model developed over Indore, which is near the anomaly crest, using the IRI-model-derived database to predict solar and geophysical indices. A single hidden layer has been used and the number of neurons has been varied to obtain the optimized results of this model. The normalized RMSE gave about 0.99, 0.90 and 3.12% errors in the Rz12, F10.7 and IG12 indices, respectively. Thus, ~97% accuracy has been achieved when unknown data was fed to the trained network. A reverse problem approach is addressed, wherein with the knowledge of TEC, predictions of various indices can be obtained even if real-time indices are not available. A path forward for this work would be to use this approach in training the network with real data and to compare with the presented work for validation. This model could help in understanding the variable ionosphere where real data is unavailable.
|
2304.02099 | Coarse Grained FLS-based Processor with Prognostic Malfunction Feature
for UAM Drones using FPGA | Many overall safety factors need to be considered in the next generation of
Urban Air Mobility (UAM) systems and addressing these can become the anchor
point for such technology to reach consent for worldwide application. On the
other hand, fulfilling the safety requirements from an exponential increase of
prolific UAM systems, is extremely complicated, and requires careful
consideration of a variety of issues. One of the key goals of these Unmanned
Air Systems (UAS) is the requirement to support the launch and control of
hundreds of thousands of these advanced drones in the air simultaneously. Given
the impracticalities of training the corresponding number of expert pilots,
achieving this goal can only be realized through safe operation in either
fullautonomous or semi-autonomous modes. According to many recent studies, the
majority of flight accidents are concentrated on the last three stages of a
flight trip, which include the Initial Approach, Final Approach, and Landing
Phases of an airplane trip. Therefore, this paper proposes a novel
decentralized processing system for enhancing the safety factors during the
critical phases of Vertical and/or Short Take-Off and Landing (V/STOL) drones.
This has been achieved by adopting several processing and control algorithms
such as an Open Fuzzy Logic System (FLS) integrated with a Flight Rules Unit
(FRU), FIR filters, and a novel Prognostic Malfunction processing unit. After
applying several optimization techniques, this novel coarse-grained Autonomous
Landing Guidance Assistance System (ALGAS3) processing architecture has been
optimized to achieve a maximum computational processing performance of 70.82
Giga Operations per Second (GOPS). Also, the proposed ALGAS3 system shows an
ultra-low dynamic thermal power dissipation (I/O and core) of 145.4 mW which is
ideal for mobile avionic systems using INTEL 5CGXFC9D6F27C7 FPGA chip. | Hossam O. Ahmed | 2023-04-04T19:55:15Z | http://arxiv.org/abs/2304.02099v1 | # Coarse Grained FLS-based Processor with Prognostic Malfunction Feature for UAM Drones using FPGA
###### Abstract
Many overall safety factors need to be considered in the next generation of Urban Air Mobility (UAM) systems and addressing these can become the anchor point for such technology to reach consent for worldwide application. On the other hand, fulfilling the safety requirements from an exponential increase of prolific UAM systems is extremely complicated and requires careful consideration of a variety of issues. One of the key goals of these Unmanned Air Systems (UAS) is the requirement to support the launch and control of hundreds of thousands of these advanced drones in the air simultaneously. Given the impracticalities of training the corresponding number of expert pilots, achieving this goal can only be realized through safe operation in either full-autonomous or semi-autonomous modes. According to many recent studies, the majority of flight accidents are concentrated on the last three stages of a flight trip, which include the Initial Approach, Final Approach, and Landing Phases of an airplane trip. Therefore, this paper proposes a novel decentralized processing system for enhancing the safety factors during the critical phases of Vertical and/or Short Take-Off and Landing (V/STOL) drones. This has been achieved by adopting several processing and control algorithms such as an Open Fuzzy Logic System (FLS) integrated with a Flight Rules Unit (FRU), FIR filters, and a novel Prognostic Malfunction processing unit. After applying several optimization techniques, this novel coarse-grained Autonomous Landing Guidance Assistance System (ALGAS3) processing architecture has been optimized to achieve a maximum computational processing performance of 70.82 Giga Operations per Second (GOPS). Also, the proposed ALGAS3 system shows an ultra-low dynamic thermal power dissipation (I/O and core) of 145.4 mW which is ideal for mobile avionic systems using INTEL 5CGXFC9D6F27C7 FPGA chip.
_distributed systems, fault tolerant systems, parallel circuits, prognostics, register transfer level implementation, urban air mobility, FPGA, flight rules, artificial intelligence, fuzzy-neuro, neuromorphic, robotics, automation._
## I Introduction
Sustainability for future generations also depends on finding innovative solutions to critical issues that we face nowadays, such as the rapidly increasing mobile population and the concentration of migration into cities. Even though the current population has already added undeniable pressure to our existing infrastructure, the projections for the future show an even more devastating degradation of the quality of the most essential services, such as transportation. Subsequently, research into innovative solutions for the expected issues in the transportation sector is a very active field of development. One of the promising solutions proposed is via the Urban Air Mobility (UAM) initiatives [1-4]. One common thread in UAM proposals depends on creating a well-defined, multi-level spatial network of virtual air routes, allowing the usage of autonomous taxi drones in under-utilized airspace domains, as a promising solution to the predicted surge of transportation demands of the future. On the other hand, there are enormous challenges to making such technology dependable. Importantly, these technology solutions must be even safer than conventional air-flight systems, given an implicitly wider proliferation. Thus, reducing the accidents of UAM drones is a very high priority [5].
Breaking down the bigger challenges into smaller solvable problems can facilitate forward progress by taking into consideration the current key factors. Firstly, around 53.85% of the total current flight accidents occur during the last three stages of a flight trip, which include the Initial Approach, Final Approach, and Landing phases of an airplane [6, 7]. Secondly, we must consider that the proliferation of UAM technology must necessarily be based on semi- or fully-autonomous flight systems, given the impractical alternative of training expert pilots for such an enormous number of UAM drones per city. Thirdly, the planned UAM technology needs to consider that the routing of these trips will be dynamically assigned, and must be adaptable based on the variety of factors and scenarios that could occur spontaneously [8].
These factors could include natural or human-made ones, in addition to vehicle malfunctions, that might require immediately actionable emergency precautionary control
Fig. 1: The Graphical illustration of the proposed constellation of the two pairs of the differential HOA units.
procedures, referred to herein as the "rules of flight". Considering these limitations leads us towards a better vision of how to make such UAM drones safer. Also, it should be considered that most of these taxi drones will depend on the Vertical and/or Short Take-Off and Landing (V/STOL) mechanism, which will increase the complexity of the electronic elements in the systems needed to guarantee the targeted safety factors [9]. Subsequently, we assume that evolving research will contribute to improving the safety measures for most of the flight stages through the adoption of new high-tech approaches such as Machine Learning (ML), Deep Neural Networks (DNN), Neuromorphic Spiking Neural Networks (SNN), and other advanced control systems. However, we focused our efforts on enhancing the safety precautions during the landing phases of a trip [10].
The proposed vision can be achieved through open-architecture development of decentralized and collaborative processing cores to control the autonomous landing mechanism by receiving sensory data from the distributed Hybrid Obstacle Avoidance (HOA) sensory nodes allocated on the bottom side of a drone as illustrated in Fig 1. The proposed Autonomous Landing Guidance Assistance System (ALGAS3) processing architecture provides a real-time solution for avoiding landing accidents in different types of aircraft and automated drones. The proposed ALGAS3 processing unit depends on coarse-grained open-architecture Fuzzy Logic System (FLS) processing cores, and a configurable Flight Rules Unit (FRU). Also, other Digital Signal Processing (DSP) units are assumed to augment the proposed prognostic malfunction feature as will be explained in the upcoming sections. The proposed ALGAS3 processing architecture has been designed using the Very High-Speed Integrated Circuit Hardware Description Language (VHDL) in which approximate computing and a few other optimization techniques have been adopted to improve the dynamic power dissipation level and the computational speed performance. In section II of this paper, we review the related works, similar research, and contributions. In section III of this paper, we discuss the various architectural elements and techniques that have been used to boost the computational speed performance of the proposed ALGAS3 processing unit. In section IV of this paper, we discuss the results achieved. These were obtained after synthesis of the ALGAS3 architecture using Intel Quartus Prime tools.
The achieved computational speed performance of the ALGAS3 processing unit is about 69.89 Giga Operations per Second (GOPS) using the INTEL Cyclone V 5CGXFC9D6F27C7 Field Programmable Gate Array
Fig. 4: The proposed block diagram of the coarse-grained architecture of the moving average FIR filter.
Fig. 3: The detailed structure of one spatial corner of the proposed ALGAS3 system.
Fig. 2: The proposed distributed processing units of a complete ALGAS3 system.
(FPGA) chip. Also, we discussed the coding techniques that allowed the proposed ALGAS3 system to have an ultra-low dynamic thermal power dissipation (I/O and core) of 131.96 mW. And finally, in section V of this paper, we present our conclusions, and proposed future works.
## II Related Work
The next generation of UAM needs more complex electronics systems to adhere to flight safety regulations such as those of the Federal Aviation Administration (FAA) [11], the Single European Sky ATM Research (SESAR), and the Next Generation (NextGen) air transportation systems [12, 13, 14].
Many interesting contributions and research outcomes have been proposed to find innovative solutions to this issue by applying Artificial Intelligence techniques such as Deep Neural Networks, Neuromorphic Systems, and Machine Learning, and also by using advanced control algorithms such as Fuzzy Logic Systems (FLS) [15-18]. Unlike Neural Networks, Fuzzy Logic Systems offer the advantage of applying pre-determined rules to imprecise data over variable conditions, without pre-training for every potential scenario. The usage of the FLS algorithm has proved its trustworthiness as a reliable solution to different issues related to UAM. The domains in which FLS could contribute substantially to the UAM field include autopilot dynamics, visual human-tracking drone systems, etc. However, using FLS to reduce the self-landing and altitude failure dilemma could be the most requested demand for the UAM market in the near future [19, 20].
Also, depending on a reconfigurable and powerful processing unit such as the FPGA could be considered the anchor point for the development of such systems, due to its capability to adapt to the various changes that might be required based on, for example, a change of the flight terrain. Importantly, the rules of flight can be reviewed and encoded in human-readable Fuzzy-Rule format, simplifying regional-authority governance and rapid updates, in sync with evolving compliance requirements. In general, the Fuzzy-Rules of any flight control system could be exemplified using fuzzy qualifiers (_Italics_) and logical semantics (in CAPS) as shown below, followed by a minimal sketch of how such a rule might be encoded:
* IF (Not Landing-Mode) AND IF (Region-Beacon-Signal is _Weak_) AND (Direction is _Away-From-Region-Beacon_) THEN (_further-reduce_ Speed, signal Range-Limit-Error).
* IF (Landing-Mode) AND IF (Optical-Sensor is _Very Noisy_) AND (uWave-Sensor _is Very Noisy_) AND (UWB-Sensor _is Very Noisy_) THEN (Stop Landing-Mode, Enter Hover-Mode, signal Sensor-Error, enable Manual-Control).
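The following Python sketch illustrates how the first rule above might be evaluated in a Mamdani-style FLS; the membership shapes, thresholds, and signal units are illustrative assumptions, not values from the ALGAS3 design.

```python
# Illustrative encoding of the first flight rule: AND is realized as min()
# over membership grades, as in a standard Mamdani FLS. All shapes,
# thresholds and units below are assumptions for demonstration only.
def weak_beacon(signal_dbm):
    # Membership of "Region-Beacon-Signal is Weak" (ramps up below -70 dBm)
    return min(1.0, max(0.0, (-70.0 - signal_dbm) / 20.0))

def away_from_beacon(heading_error_deg):
    # Membership of "Direction is Away-From-Region-Beacon"
    return min(1.0, max(0.0, (abs(heading_error_deg) - 90.0) / 90.0))

def rule_range_limit(signal_dbm, heading_error_deg, landing_mode):
    if landing_mode:            # this rule only fires outside Landing-Mode
        return 0.0
    firing = min(weak_beacon(signal_dbm), away_from_beacon(heading_error_deg))
    return firing  # drives "further-reduce Speed" and the Range-Limit-Error flag

print(rule_range_limit(-85.0, 160.0, landing_mode=False))  # -> 0.75
```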
Fig. 5: The block diagram of the Adder and Accumulate (AAC)
Fig. 6: The MATLAB validation of the Systolic FIR filter for a descending altitude scenario during a landing stage of a drone.
Fig. 7: The Questa Simulation validation of the Systolic FIR filter for a descending altitude scenario during a landing stage of a drone.
## III The Proposed ALGAS3 Processing Unit
The main contribution of this paper is to enhance the safety factors during the autonomous landing stages of V/STOL taxi drones. This could be achieved by creating an autonomous system that guarantees a stable landing procedure through continuous distance measurements between the taxi drone and the landing area from four segregated HOA units, and by processing these aggregated data using the proposed ALGAS3 system. The proposed ALGAS3 system is a distributed and collaborative processing system that consists of four ALGAS3 cores, as illustrated in Fig. 2. Each pair of spatially opposite ALGAS3 cores forms a differential processing pair that increases the safety factors by continuously double-confirming that the measured distances at these opposite sides of the taxi drone match each other within the acceptable preset margin.
The communications and data exchange between these ALGAS3 cores are carried out via the High-Speed Differential Comm Interface (HSDCI) unit. A detailed structure of one spatial corner of the proposed ALGAS3 system is depicted in Fig. 3. Each ALGAS3 processing corner consists of four main subsections: the HOA unit, the ALGAS3 processing core, the HSDCI, and the Differential Inclination Control (DIC) unit. However, the focus of this paper is only on the ALGAS3 processing core. In general, the HOA unit is responsible for the data acquisition from the short-range 24 GHz radar sensor and the 840 nm lidar sensor via the Sensor Interface Unit (SIU). The selection of these two different sensors to perform the same distance-measuring task is not only to increase the reliability of the measured distance but also to enhance the safety factor by applying the frequency spectrum separation concept. Hence, in case there is any spectrum attack in one of the bandwidths, the ALGAS3 system could give priority to the other sensor's readings.
The ALGAS3 core consists of two systolic moving-average Finite Impulse Response (FIR) filters that remove the noise from the sensory data of the two sensors in the HOA unit before it is processed in the next stages. Each FIR filter has 15 taps. The proposed FIR filter consists of an Adder and Accumulate (AAC) unit and a coefficient storage stage, as shown in Fig. 4 and Fig. 5. The reason for this proposed coarse-grained architecture of the FIR filter is to be aligned with the other processing blocks in order to boost the computational speed of the next ALGAS3 version, as will be elaborated in upcoming papers. The output signals of the two systolic FIR filters are fed to both the localized systolic FLS processing node and the Prognostic Malfunction Unit (PMU). The localized systolic FLS processing node is responsible for the data fusion task of the two sensory data streams, giving output signals to the actuator control unit(s) for adjusting the altitude position of the drone itself during a landing mission.
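A behavioral Python model of one such filter stage is sketched below; it reproduces the smoothing effect of a 15-tap moving average (equal coefficients of 1/15) on a noisy descending-altitude stream, while abstracting away the hardware specifics such as bus widths and the systolic AAC scheduling.

```python
import numpy as np

# Behavioral model of a 15-tap moving-average FIR stage: all coefficients
# equal 1/15, matching the smoothing applied to each HOA sensor stream.
def moving_average_fir(samples, taps=15):
    coefficients = np.full(taps, 1.0 / taps)
    return np.convolve(samples, coefficients, mode='valid')

# Noisy descending-altitude scenario, loosely mirroring the validation
# signal of Figs. 6-7 (values are illustrative).
altitude = 10.0 - 0.05 * np.arange(200) + 0.1 * np.random.randn(200)
print(moving_average_fir(altitude)[:5])
```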
The FRU is responsible for directing the Open FLS with the results of evaluating the IF-THEN-ELSE rule conditions. More details about the structure, simulations, and the analysis of this FLS unit could be found in [21, 22, 23]. The PMU is a novel processing element in the ALGAS3 core that could predict
whether there is a significant drop in the sensory reading quality during the landing mission of the drone. Simply put, it reads the sensory information of the two sensors and monitors the difference between these two signals over a time frame of 16 samples. Based on the outcomes of this process, the module can indicate any urgent precautions that should be taken by the pilot directly. These precautions indicate either that one of the sensors has failed to provide reasonable data in comparison with the other sensor, or that there is intentional hacking (a cyberattack) within the bandwidth of one of the sensors.
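A minimal sketch of this monitoring logic is given below; the discrepancy margin is an illustrative assumption, as the paper does not quote a numeric threshold.

```python
import numpy as np

# Sketch of the PMU logic: compare the radar and lidar streams over a
# sliding window of 16 samples and raise a flag when their mean
# discrepancy exceeds a preset margin (margin value is illustrative).
def pmu_flag(radar, lidar, window=16, margin=0.5):
    diff = np.abs(np.asarray(radar[-window:]) - np.asarray(lidar[-window:]))
    return diff.mean() > margin   # True -> possible sensor fault or jamming
```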
## IV The Results
To verify the proposed design, we divided the proposed system into two parts. The first part is related to the FLS unit, which has been explicitly verified and analyzed in [20-22] as mentioned in Section III. The second part is related to the newly proposed processing blocks that have been added to the ALGAS3 processing core, such as the PMU and the systolic moving-average FIR filters, as illustrated in Fig. 3. The validation of the systolic FIR filter design went through two stages. First, the design was validated using MATLAB, as shown in Fig. 6, by assuming a descending altitude scenario during a landing stage of a drone. The same signal from MATLAB was then fed as the input signal for simulating the VHDL version of the systolic FIR filter design using the Questa Simulation tool. As shown in Fig. 7, we achieved very satisfactory results in comparison with the desired assumptions. The validation of the PMU was completed using a different approach, in which we compared the simulation results of the VHDL version of this block, using the Questa Simulation tool, to the targeted mathematical model and conditions that have been assumed.
The achieved results closely matched the expectations. To have a clear vision of the overall performance of the ALGAS3 system, we synthesized the entire system using the INTEL Quartus Prime tool. For a better understanding of the newly introduced computational elements in the proposed ALGAS3 architecture, Table I elaborates the computational performance for both the single FIR module and the single PMU. Similar to all the other modules in the proposed ALGAS3 architecture, the FIR and the PMU are independent of external memory usage or the BRAM of the FPGA chip. Numerous logic improvements have been performed to decrease the amount of Adaptive Logic Module (ALM) resources required to implement the FIR and PMU on the FPGA device, as depicted in Table I.
Consequently, both the FIR module and the PMU module have a low dynamic power consumption of 16.45 mW and 11.44 mW, respectively. As shown in Table II, the additional features that have been added to the ALGAS3 system cause it to utilize a total logic resource of 4,336 ALMs from the logic resources of the INTEL 5CGXFC9D6F27C7 FPGA chip. This increase of around 24.31% in the logic resources in comparison with the previous ALGAS2 system is reasonable, since the computational speed performance escalated to 3.3x and 2.77x higher than the ALGAS2 system and ALGAS1 system, respectively. Furthermore, the strategy of using different optimization techniques leads to achieving these promising results while new features have been added to the system, such as the prognostic malfunction feature and the Flight Rule Unit (FRU). The design of all the processing elements for the proposed ALGAS3 system has been accomplished at the gate level and RTL level using VHDL.
Moreover, the entire ALGAS3 system architecture has been optimized by depending on what we can call "based-on-the-need" widths of all the buses, which has been briefly illustrated in the design of the systolic FIR filter in Fig. 4 and Fig. 5. This even helped us to enhance the total dynamic thermal power dissipation (I/O and core) to only 131.96 mW for the ALGAS3 system. Also, the focus on the dynamic power consumption in this paper is due to it being the meaningful quantity for future comparisons with other similar ASIC-based systems, since the static power consumption of FPGA-based designs represents the entire static power consumption per chip and not per implemented system. Furthermore, Table II shows more interesting features that have been taken into consideration for protecting the ALGAS3 system from any kind of fault injection attack by eliminating the need to exchange data with any type of external memory; only the BRAM has been used for storing the coefficients.
## V The Conclusion
The paper introduced a new prognostic malfunction feature and an Open FLS processing core to the ALGAS3 system, providing the flexibility and transparent governance that we assert are necessary for prolifically safer air travel, in accordance with the rapidly evolving UAM guidelines from the FAA and the SESAR and NextGen safety measures during the autonomous landing of taxi drones. The ALGAS3 system presents a generic Cyber-Physical System (CPS-5) architecture that could enhance the safety of taxi drones during landing operations. The current modifications to the ALGAS3 architecture will be the anchor point for achieving a more tangible processing performance in future versions of the ALGAS3 architecture.
## VI Acknowledgment
Many thanks to David Wyatt, IEEE CAS/DSASC member, for technical support in the completion of this research project.
|
2310.09053 | DATT: Deep Adaptive Trajectory Tracking for Quadrotor Control | Precise arbitrary trajectory tracking for quadrotors is challenging due to
unknown nonlinear dynamics, trajectory infeasibility, and actuation limits. To
tackle these challenges, we present Deep Adaptive Trajectory Tracking (DATT), a
learning-based approach that can precisely track arbitrary, potentially
infeasible trajectories in the presence of large disturbances in the real
world. DATT builds on a novel feedforward-feedback-adaptive control structure
trained in simulation using reinforcement learning. When deployed on real
hardware, DATT is augmented with a disturbance estimator using L1 adaptive
control in closed-loop, without any fine-tuning. DATT significantly outperforms
competitive adaptive nonlinear and model predictive controllers for both
feasible smooth and infeasible trajectories in unsteady wind fields, including
challenging scenarios where baselines completely fail. Moreover, DATT can
efficiently run online with an inference time less than 3.2 ms, less than 1/4
of the adaptive nonlinear model predictive control baseline | Kevin Huang, Rwik Rana, Alexander Spitzer, Guanya Shi, Byron Boots | 2023-10-13T12:22:31Z | http://arxiv.org/abs/2310.09053v3 | # DATT: Deep Adaptive Trajectory Tracking for Quadrotor Control
###### Abstract
Precise arbitrary trajectory tracking for quadrotors is challenging due to unknown nonlinear dynamics, trajectory infeasibility, and actuation limits. To tackle these challenges, we present Deep Adaptive Trajectory Tracking (DATT), a learning-based approach that can precisely track arbitrary, potentially infeasible trajectories in the presence of large disturbances in the real world. DATT builds on a novel feedforward-feedback-adaptive control structure trained in simulation using reinforcement learning. When deployed on real hardware, DATT is augmented with a disturbance estimator using \(\mathcal{L}_{1}\) adaptive control in closed-loop, without any fine-tuning. DATT significantly outperforms competitive adaptive nonlinear and model predictive controllers for both feasible smooth and infeasible trajectories in unsteady wind fields, including challenging scenarios where baselines completely fail. Moreover, DATT can efficiently run online with an inference time less than \(3.2\,\mathrm{ms}\), less than 1/4 of the adaptive nonlinear model predictive control baseline1.
Footnote 1: Videos and demonstrations in [https://sites.google.com/view/deep-adaptive-traj-tracking](https://sites.google.com/view/deep-adaptive-traj-tracking) and code in [https://github.com/KevinHuang/DATT](https://github.com/KevinHuang/DATT).
Quadrotor, Reinforcement Learning, Adaptive Control
## 1 Introduction
Executing precise and agile flight maneuvers is important for the ongoing commoditization of unmanned aerial vehicles (UAVs), in applications such as drone delivery, rescue and search, and urban air mobility. In particular, accurately following _arbitrary trajectories_ with quadrotors is among the most notable challenges to precise flight control for the following reasons. First, quadrotor dynamics are highly nonlinear and underactuated, and often hard to model due to unknown system parameters (e.g., motor characteristics) and uncertain environments (e.g., complex aerodynamics from unknown wind gusts). Second, aggressive trajectories demand operating at the limits of system performance, requiring awareness and proper handling of actuation constraints, especially for quadrotors with small thrust-to-weight ratios. Finally, the arbitrary desired trajectory might not be _dynamically feasible_ (i.e., impossible to stay on such a trajectory), which necessitates long-horizon reasoning and optimization in real-time. For instance, to stay close to the five-star trajectory in Fig. 1, which is infeasible due to the sharp changes of direction, the quadrotor must predict, plan, and react online before the sharp turns.
Traditionally, there are two commonly deployed control strategies for accurate trajectory following with quadrotors: nonlinear control based on differential flatness and model predictive control
(MPC). However, nonlinear control methods, despite their proven stability and efficiency, are constrained to differentially flat trajectories (i.e., smooth trajectories with bounded velocity, acceleration, jerk, and snap) satisfying actuation constraints [1; 2; 3]. On the other hand, MPC approaches can potentially incorporate constraints and non-smooth arbitrary trajectories [4; 5], but their performances heavily rely on the accuracy of the model and the optimality of the solver for the underlying nonconvex optimization problems, which could also be expensive to run online.
Reinforcement learning (RL) has shown its potential flexibility and efficiency in trajectory tracking problems [6; 7; 8]. However, most existing works focus on tracking smooth trajectories in stationary environments. In this work, we aim to design an RL-based flight controller that can (1) follow feasible trajectories as accurately as traditional nonlinear controllers and MPC approaches; (2) accurately follow arbitrary infeasible and dynamic trajectories to the limits of the hardware platform; and (3) adapt to unknown system parameters and uncertain environments online. Our contributions are:
* We propose DATT, a novel feedforward-feedback-adaptive policy architecture and training pipeline for RL-based controllers to track arbitrary trajectories. In training, this policy is conditioned on ground-truth translational disturbance in a simulator, and such a disturbance is estimated in real using \(\mathcal{L}_{1}\) adaptive control in closed-loop;
* On a real, commercially available, lightweight, and open-sourced quadrotor platform (Crazyflie 2.1 with upgraded motors), we show that our approach can track feasible smooth trajectories with 27%-38% smaller errors than adaptive nonlinear or adaptive MPC baselines. Moreover, our approach can effectively track infeasible trajectories where the nonlinear baseline completely fails, with a 39% smaller error than MPC and 1/4th the computational time;
* On the real quadrotor platform, we show that our approach can adapt zero-shot to unseen turbulent wind fields with an extra cardboard drag plate for both smooth desired trajectories and infeasible trajectories. Specifically, for smooth trajectories, our method achieves up to 22% smaller errors than the state-of-the-art adaptive nonlinear control method. In the most challenging scenario (infeasible trajectories with wind and drag plate), our method significantly outperforms the adaptive MPC approach with 15% less error and 1/4th of the computation time.
Figure 1: Trajectory visualizations for example infeasible trajectories. (a-c) Long-exposure photos of different methods for an equilateral triangle reference trajectory. (d) Long-exposure photo of our method for a five-pointed star reference trajectory. (e) Quantitative comparisons between our approach and baselines for the five-pointed star. Numbers indicate the tracking error in meters.
## 2 Problem Statement and Related Work
### Problem Statement
In this paper, we let \(\dot{\mathbf{x}}\) denote the derivative of a continuous variable \(\mathbf{x}\) with respect to time. We consider the following quadrotor dynamics:
\[\dot{\mathbf{p}} =\mathbf{v}, m\dot{\mathbf{v}} =m\mathbf{g}+\mathbf{R}\mathbf{e}_{3}f_{\Sigma}+\mathbf{d} \tag{1a}\] \[\dot{\mathbf{R}} =\mathbf{R}S(\mathbf{\omega}), J\dot{\mathbf{\omega}} =\mathbf{J}\mathbf{\omega}\times\mathbf{\omega}+\mathbf{\tau}, \tag{1b}\]
where \(\mathbf{p},\mathbf{v},\mathbf{g}\in\mathbb{R}^{3}\) are position, velocity, and gravity vectors in the world frame, \(\mathbf{R}\in\mathrm{SO}(3)\) is the attitude rotation matrix, \(\mathbf{\omega}\in\mathbb{R}^{3}\) is the angular velocity in the body frame, \(m,\mathbf{J}\) are mass and inertia matrix, \(\mathbf{e}_{3}=[0;0;1]\), and \(S(\cdot):\mathbb{R}^{3}\rightarrow\mathrm{so}(3)\) maps a vector to its skew-symmetric matrix form. Moreover, \(\mathbf{d}\) is the time-variant translational disturbance, which includes parameter mismatch (e.g., mass error) and environmental perturbation (e.g., wind perturbation) [9; 10; 11; 12]. The control input is the total thrust \(f_{\Sigma}\) and the torque \(\mathbf{\tau}\) in the body frame. For quadrotors, there is a linear invertible actuation matrix between \([f_{\Sigma};\mathbf{\tau}]\) and four motor speeds.
We let \(\mathbf{x}_{t}\) denote the temporal discretization of \(\mathbf{x}\) at time step \(t\in\mathbb{Z}_{+}\). In this work, we focus on the 3-D trajectory tracking problem with the desired trajectory \(\mathbf{p}_{1}^{d},\mathbf{p}_{2}^{d},\cdots,\mathbf{p}_{T}^{d}\), with average tracking error as the performance metric: \(\frac{1}{T}\sum_{t=1}^{T}\|\mathbf{p}_{t}-\mathbf{p}_{t}^{d}\|\). We do not have any assumptions on the desired trajectory \(\mathbf{p}^{d}\). In particular, \(\mathbf{p}^{d}\) is not necessarily differentiable or smooth.
### Differential Flatness
The differential flatness property of quadrotors allows efficient generation of control inputs to follow smooth trajectories [1; 5]. Differential flatness has been extended to account for unknown linear disturbances [3], learned nonlinear disturbances [13], and also to deal with the singularities associated with pitching and rolling past 90 degrees [14]. While differential-flatness-based methods can show impressive performance for smooth and aggressive trajectories, they struggle with nondifferentiable trajectories or trajectories that require reasoning about actuation constraints.
### Model Predictive Control (MPC)
MPC is a widely used optimal control approach that online optimizes control inputs over a finite time horizon, considering system dynamics and constraints [15; 16].
Model Predictive Path Integral Control (MPPI) [4; 17] is a sampling-based MPC incorporating path integral control formulation and stochastic sampling. Unlike deterministic optimization, MPPI employs a stochastic optimization approach where control sequences are sampled from a distribution. These samples are then evaluated based on a cost function, and the distribution is iteratively updated to improve control performance. Recently MPPI has been applied to quadrotor control [18; 19].
Gradient-based nonlinear MPC techniques have been widely used for rotary-winged-based flying robots or drones. Hanover et al. [12] and Sun et al. [5] have shown good performance of nonlinear MPC in agile trajectory tracking of drones and adaptation to external perturbations. Moreover, these techniques are being used for vision-based agile maneuvers of drones [20; 7].
However, for either sampling-based or gradient-based MPC, the control performance heavily relies on the optimality of the optimizer for the underlying nonconvex problems. Generally speaking, MPC-based approaches require much more computing than differential-flatness-based methods [5]. Moreover, MPC's robustness and adaptability for infeasible trajectories remain unclear since existing works consider smooth trajectory tracking. In this paper, we implemented MPPI [4] and \(\mathcal{L}_{1}\) augmented MPPI [18] for our baselines.
### Adaptive Control and Disturbance Estimation
Adaptive controllers aim to improve control performance through online estimation of unknown system parameters in closed-loop. For quadrotors, adaptive controllers typically estimate a three-dimensional force disturbance \(\mathbf{d}\)[21; 10; 22; 23; 18]. Most recently, \(\mathcal{L}_{1}\) adaptive control for quadrotors [11] has been shown to improve trajectory tracking performance in the presence of complex and time-varying disturbances such as sloshing payloads and mismatched propellers. Recently, deep-learning-based adaptive flight controllers have also emerged [10; 24; 25].
Learning dynamical models is a common technique to improve quadrotor trajectory tracking performance [9; 26; 27; 28] and can provide more accurate disturbance estimates than purely reactive adaptive control, due to the model of the disturbance over the state and control space. In this work, we use the disturbance estimation from \(\mathcal{L}_{1}\) adaptive control, but we note that our method can leverage any disturbance estimation or model learning techniques.
In particular, Rapid Motor Adaptation (RMA) is a supervised learning-based approach that aims to predict environmental parameters using a history of state-action pairs, which are then inputted to the controller [29]. This approach has been shown to work for real legged robots, but we find that it can be susceptible to domain shift during sim2real transfer on drones.
### Reinforcement Learning for Quadrotor Control
Reinforcement learning for quadrotor stabilization is studied in [6; 30; 24]. Molchanov et al. [30] uses domain randomization to show policy transfer between multiple quadrotors. Kaufmann et al. [31] compares three different policy formulations for quadrotor trajectory tracking and finds that outputting body thrust and body rates outperforms outputting desired linear velocities and individual rotor thrusts. [31] only focuses on feasible trajectories while in this work, we aim to track infeasible trajectories as accurately as possible. Simulation-based learning with imitation learning to an expert MPC controller is used to generate acrobatic maneuvers in [7]. In this work, we focus on trajectories and environments for which obtaining an accurate expert even in simulation is difficult or expensive and thus use reinforcement learning to learn the controller.
## 3 Methods
### Algorithm Overview
A high-level overview of DATT is given in Fig. 2. Using model-free RL, DATT learns a neural network quadrotor controller \(\mathbf{\pi}\) capable of tracking arbitrary reference trajectories, including infeasible trajectories, while being able to adapt to various environmental disturbances, even those unseen during training. We condition our policy on a learned _feedforward embedding_\(\mathbf{h}\), which encodes the desired reference trajectory, in the body frame, over a fixed time horizon, as well as the force disturbance \(\mathbf{d}\) in Eq. (1).
The state \(\mathbf{x}_{t}\) consists of the position \(\mathbf{p}\), the velocity \(\mathbf{v}\), and the orientation \(\mathbf{R}\), represented as a quaternion \(\mathbf{q}\). We convert \(\mathbf{p},\mathbf{v}\) to the body frame and input them to \(\mathbf{\pi}\). Our policy controller outputs \(\mathbf{u}\) which includes the desired total thrust \(f_{\Sigma,\text{des}}\), and the desired body rates \(\mathbf{\omega}_{\text{des}}\). In summary, our controller functions as follows:
\[\mathbf{h}_{t} =\mathbf{\phi}(\mathbf{R}_{t}^{\top}(\mathbf{p}_{t}-\mathbf{p}_{t}^{d})),\dots,\bm {R}_{t}^{\top}(\mathbf{p}_{t}-\mathbf{p}_{t+H}^{d})) \tag{2a}\] \[\mathbf{u}_{t} =\mathbf{\pi}(\mathbf{R}_{t}^{\top}\mathbf{p}_{t},\mathbf{R}_{t}^{\top}\mathbf{v}_{t},\mathbf{q}_{t},\mathbf{h}_{t},\mathbf{R}_{t}^{\top}(\mathbf{p}_{t}-\mathbf{p}_{t}^{d}),\mathbf{d}_{t}) \tag{2b}\]
Figure 2: Algorithm Overview. Blue, yellow, and green blocks represent feedforward, feedback, and adaptation modules respectively. In training the policy has access to the true disturbance \(\mathbf{d}\) whereas in real we use \(\mathcal{L}_{1}\) adaptive control to get the disturbance estimation \(\hat{\mathbf{d}}\) in closed-loop.
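As a concrete reading of Eqs. (2a)-(2b), the sketch below assembles the policy inputs in Python; the rotation handling uses SciPy (x, y, z, w quaternion convention), and \(\mathbf{\phi}\) and \(\mathbf{\pi}\) themselves are the learned networks, which are omitted here.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Assemble the observation of Eqs. (2a)-(2b): the H future reference
# positions and the state are rotated into the body frame before being
# passed to the encoder phi and the policy pi (both omitted here).
def build_policy_inputs(p, v, q_xyzw, p_ref_horizon, d_hat):
    R_T = Rotation.from_quat(q_xyzw).as_matrix().T   # world -> body
    ff_inputs = [R_T @ (p - p_ref) for p_ref in p_ref_horizon]  # into phi
    return (R_T @ p, R_T @ v, q_xyzw, ff_inputs,
            R_T @ (p - p_ref_horizon[0]), d_hat)
```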
We define the expected reward for our policy conditioned on the reference trajectory as follows:
\[J(\mathbf{\pi}|\mathbf{p}_{t:t+H}^{d}) =\mathbb{E}_{(\mathbf{x},\mathbf{u})\sim\mathbf{\pi}}\left[\sum_{t=0}^{\infty}r (\mathbf{x}_{t},\mathbf{u}_{t}|\mathbf{p}_{t:t+H}^{d})\right] \tag{3a}\] \[r(\mathbf{x}_{t},\mathbf{u}_{t}|\mathbf{p}_{t:t+H}^{d}) =\|\mathbf{p}_{t}-\mathbf{p}_{t}^{d}\|+0.5\|\psi_{t}\|+0.1\|\mathbf{v}_{t}\| \tag{3b}\]
\(\psi_{t}\) denotes the yaw of the drone. The reward function optimizes for accurate position and yaw tracking, with a small velocity regularization penalty. \(\mathbf{\pi}\) and \(\mathbf{\phi}\) are jointly optimized with respect to \(J\) using the Proximal Policy Optimization (PPO) algorithm [32].
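In code, the per-step reward of Eq. (3b) is a direct transcription (treating yaw as a scalar angle is a simplification):

```python
import numpy as np

def reward(p, p_ref, yaw, v):
    """Eq. (3b): negated position and yaw tracking errors plus a small
    velocity regularization penalty (weights 0.5 and 0.1)."""
    return -(np.linalg.norm(p - p_ref) + 0.5 * abs(yaw)
             + 0.1 * np.linalg.norm(v))
```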
### Arbitrary Trajectory Tracking
Classical controllers, such as differential-flatness controllers, rely on higher-order position derivatives of the reference trajectory for accurate tracking (velocity, acceleration, jerk, and snap), which are needed for incorporating future information about the reference, i.e., feedforward control. However, arbitrary trajectories can have undefined higher order derivatives, and exact tracking may not be feasible. With RL, a controller can be learned to optimally track an arbitrary reference trajectory, given just the desired future positions \(\mathbf{p}_{t}^{d}\). Thus, we input just the desired positions, in the body-frame, into a feedforward encoder \(\mathbf{\phi}\), which learns the feedforward embedding that contains the information of the desired future reference positions. For simplicity, we assume the desired yaw for all trajectories is zero. The reference positions are provided evenly spaced from the current time \(t\) to the feedforward horizon \(t+H\), and are transformed into the body frame.
### Adaptation to Disturbance
During training in simulation, we add a random time-varying force perturbation \(\mathbf{d}\) to the environment. We use \(\mathcal{L}_{1}\) adaptive control [11; 33] to estimate \(\mathbf{d}\), which is directly passed into our policy network during both training and inference. \(\mathcal{L}_{1}\) adaptive control first builds a closed-loop estimator to compute the difference between the predicted and true disturbance, and then uses a low pass filter to update the prediction. The adaptation law is given by:
\[\dot{\hat{\mathbf{v}}} =\mathbf{g}+\mathbf{R}\mathbf{e}_{3}f_{\Sigma}/m+\hat{\mathbf{d}}/m+\mathbf{A}_{s}(\hat{\mathbf{v}}-\mathbf{v}) \tag{4a}\] \[\hat{\mathbf{d}}_{\text{new}} =-(e^{\mathbf{A}_{s}dt}-\mathbf{I})^{-1}\mathbf{A}_{s}e^{\mathbf{A}_{s}dt}(\hat{\mathbf{v}}-\mathbf{v}) \tag{4b}\] \[\hat{\mathbf{d}} \leftarrow\text{low pass filter}(\hat{\mathbf{d}},\hat{\mathbf{d}}_{\text{new}}) \tag{4c}\]
where \(\mathbf{A}_{s}\) is a Hurwitz matrix, \(dt\) is the discretization step length and \(\hat{\mathbf{v}}\) is the velocity prediction. Generally speaking, (4a) is a velocity predictor using the estimated disturbance \(\hat{\mathbf{d}}\), and (4b) and (4c) update and filter \(\hat{\mathbf{d}}\). Compared to other sim-to-real techniques such as domain randomization [30] and student-teacher adaptation [24], the adaptive-control-based disturbance adaptation method in DATT tends to be more reactive and robust, thanks to the closed-loop nature and provable stability and convergence of \(\mathcal{L}_{1}\) adaptive control.
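A minimal sketch of the estimator in Eq. (4) is shown below; the diagonal Hurwitz \(\mathbf{A}_{s}\) and the first-order low-pass coefficient are illustrative choices, not the tuning used on the drone.

```python
import numpy as np
from scipy.linalg import expm

class L1Estimator:
    """L1 adaptive disturbance estimator implementing Eq. (4)."""

    def __init__(self, m, dt, alpha=0.9):
        self.m, self.dt, self.alpha = m, dt, alpha
        self.g = np.array([0.0, 0.0, -9.81])
        self.A_s = -10.0 * np.eye(3)      # Hurwitz matrix (assumed value)
        self.v_hat = np.zeros(3)          # velocity prediction
        self.d_hat = np.zeros(3)          # disturbance estimate (a force)

    def update(self, v, R, f_sigma):
        err = self.v_hat - v
        # (4a): velocity predictor driven by the current estimate.
        v_hat_dot = (self.g + R @ np.array([0.0, 0.0, 1.0]) * f_sigma / self.m
                     + self.d_hat / self.m + self.A_s @ err)
        self.v_hat += v_hat_dot * self.dt
        # (4b): piecewise-constant adaptation law.
        Phi = expm(self.A_s * self.dt)
        d_new = -np.linalg.inv(Phi - np.eye(3)) @ self.A_s @ Phi @ err
        # (4c): first-order low-pass filter on the estimate.
        self.d_hat = self.alpha * self.d_hat + (1.0 - self.alpha) * d_new
        return self.d_hat
```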
We note that DATT provides a general framework for adaptive control. Other methods to estimate \(\hat{\mathbf{d}}\), for example RMA, can easily be used instead, but we found them to be less robust than \(\mathcal{L}_{1}\) adaptive control. We compare against an RMA baseline in our experiments.
## 4 Experiments
### Simulation and Training
Training is done in a custom quadrotor simulator that implements (1) using on-manifold integration, with body thrust and angular velocity as the inputs to the system. In order to convert the desired body thrust \(f_{\Sigma,\text{des}}\) and body rate \(\mathbf{\omega}_{\text{des}}\) output from the controller to the actual thrust and body rate for the drone in simulation, we use a first-order time delay model:
\[\mathbf{\omega}_{t} =\mathbf{\omega}_{t-1}+k(\mathbf{\omega}_{\text{des}}-\mathbf{\omega}_{t-1}) \tag{5a}\] \[f_{\Sigma,t} =f_{\Sigma,t-1}+k(f_{\Sigma,\text{des}}-f_{\Sigma,t-1}) \tag{5b}\]
We set \(k\) to a fixed value of \(0.4\), which we found worked well on the real drone. In practice, the algorithm generalizes well to a large range of \(k\), even when training on fixed \(k\). Our simulator effectively runs at \(50\,\mathrm{Hz}\), with \(dt=0.02\) for each simulation step.
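Eq. (5) amounts to exponential smoothing applied once per control step; a minimal version:

```python
def actuator_delay(omega_prev, f_prev, omega_des, f_des, k=0.4):
    """First-order time-delay model of Eq. (5); k = 0.4 as in the paper."""
    omega = omega_prev + k * (omega_des - omega_prev)
    f_sigma = f_prev + k * (f_des - f_prev)
    return omega, f_sigma
```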
We train across a series of xy-planar smooth and infeasible reference trajectories. The smooth trajectories are randomized degree-five polynomials and series of degree-five polynomials chained together. The infeasible trajectories are what we refer to as _zigzag trajectories_: trajectories that linearly connect a series of random waypoints and have either zero or undefined acceleration. The average speed of the infeasible trajectories is approximately \(2\,\mathrm{m}/\mathrm{s}\). See Appendix C for more details on the reference trajectories.
At the start of each episode, we apply a force perturbation \(\mathbf{d}\) with randomized direction and strength in the range of \([-3.5\,\mathrm{m}/\mathrm{s}^{2},3.5\,\mathrm{m}/\mathrm{s}^{2}]\), representing translational disturbances. We then model time varying disturbance as Brownian motion; at each time step, we update \(\mathbf{d}\leftarrow\mathbf{d}+\epsilon\), with \(\epsilon\in\mathbb{R}^{3}\), \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma}dt)\). We chose \(\mathbf{\Sigma}=0.01\mathbf{I}\). This is meant to model potentially complex time and state-dependent disturbances during inference time, while having few modeling parameters as we wish to demonstrate zero-shot generalization to complex target domains without prior knowledge. We run each episode for a total of 500 steps, corresponding to 10 seconds. By default, we set \(H\) to \(0.6\,\mathrm{s}\) with 10 feedforward reference terms. In Appendix A, we show ablation results for various different horizons.
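The disturbance process above can be sketched as follows; drawing each axis independently is an assumption about the exact randomization scheme.

```python
import numpy as np

rng = np.random.default_rng()

def init_disturbance():
    """Constant force offset drawn at the start of each episode,
    in [-3.5, 3.5] m/s^2 (assumed per-axis)."""
    return rng.uniform(-3.5, 3.5, size=3)

def step_disturbance(d, dt=0.02, sigma2=0.01):
    """Brownian-motion update d <- d + eps with eps ~ N(0, Sigma*dt),
    Sigma = 0.01*I."""
    return d + rng.normal(0.0, np.sqrt(sigma2 * dt), size=3)
```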
We also note that stable training and best performance require fixing an initial trajectory for the first 2.5M steps of training (see Appendix A for more details). Only after that initial time period do we begin randomizing the trajectory. We train the policy using PPO for a total of 20M steps. Training takes slightly over 3 hours on an NVIDIA \(3080\) GPU.
### Hardware Setup and the Low-level Attitude Rate Controller
We conduct hardware experiments with the Bitcraze Crazyflie 2.1 equipped with the longer \(20\,\mathrm{mm}\) motors from the thrust upgrade bundle for more agility. The quadrotor as tested weighs \(40\,\mathrm{g}\) and has a thrust-to-weight ratio of slightly under 2.
Position and velocity state estimation feedback is provided by the OptiTrack motion capture system at \(50\,\mathrm{Hz}\) to an offboard computer that runs the controller. The Crazyflie quadrotor provides orientation estimates via a \(2.4\,\mathrm{GHz}\) radio and control commands are sent to the quadrotor over the same radio at \(50\,\mathrm{Hz}\). Communication with the drone is handled using the Crazyswarm API [34]. Body rate commands \(\mathbf{\omega}_{\text{des}}\) received by the drone are converted to torque commands \(\mathbf{\tau}\) using a custom low-level PI attitude rate controller on the firmware: \(\mathbf{\tau}=-K_{P}^{\mathbf{\omega}}(\mathbf{\omega}-\mathbf{\omega}_{\text{des}})-K_{I}^{ \mathbf{\omega}}\int(\mathbf{\omega}-\mathbf{\omega}_{\text{des}})\). Finally, this torque command and the desired total thrust \(f_{\Sigma,\text{des}}\) from the RL policy are converted to motor thrusts using the invertible actuation matrix.
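The firmware controller reduces to a stateful PI loop on the body-rate error; a sketch with placeholder gains (the actual \(K_{P}^{\mathbf{\omega}}\), \(K_{I}^{\mathbf{\omega}}\) values are not reproduced here):

```python
import numpy as np

class RateControllerPI:
    """Low-level PI attitude rate controller:
    tau = -Kp (w - w_des) - Ki * integral(w - w_des)."""

    def __init__(self, Kp, Ki, dt):
        self.Kp, self.Ki, self.dt = Kp, Ki, dt
        self.err_int = np.zeros(3)

    def torque(self, omega, omega_des):
        err = omega - omega_des
        self.err_int += err * self.dt  # discrete integral of the rate error
        return -self.Kp @ err - self.Ki @ self.err_int
```

The returned torque, together with \(f_{\Sigma,\text{des}}\), is then mapped to individual motor thrusts through the invertible actuation matrix.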
### Baselines
We compare our reinforcement learning approach against two nonlinear baselines: differential flatness-based feedback control and sampling-based Model Predictive Control (MPC) [4]. We also compare using \(\mathcal{L}_{1}\) adaptive control, which we propose, against RMA.
**Nonlinear Tracking Controller and \(\mathcal{L}_{1}\) Adaptive Control** The differential flatness-based controller baseline consists of a PID position controller, which computes a desired acceleration vector, and a tilt-prioritized nonlinear attitude controller, which computes the body thrust \(f_{\Sigma}\) and desired body angular velocity \(\mathbf{\omega}_{\text{des}}\).
\[\mathbf{a}_{\text{fb}} =-K_{P}(\mathbf{p}-\mathbf{p}^{d})-K_{D}(\mathbf{v}-\mathbf{v}^{d})-K_{I}\int(\mathbf{p}-\mathbf{p}^{d})+\mathbf{a}^{d}-\mathbf{g}-\hat{\mathbf{d}}/m, \tag{6a}\] \[\mathbf{z}_{\text{fb}} =\frac{\mathbf{a}_{\text{fb}}}{||\mathbf{a}_{\text{fb}}||},\quad\mathbf{z}=\mathbf{R}\mathbf{e}_{3},\quad f_{\Sigma}=\mathbf{a}_{\text{fb}}^{\top}\mathbf{z} \tag{6b}\] \[\mathbf{\omega}_{\text{des}} =-K_{R}\,\mathbf{z}_{\text{fb}}\times\mathbf{z}+\psi_{\text{fb}}\mathbf{z},\quad\psi_{\text{fb}}=-K_{\text{yaw}}(\psi\otimes\psi_{\text{ref}}) \tag{6c}\]
where \(\hat{\mathbf{d}}\) is the disturbance estimation. For the nonlinear baseline, we set \(\hat{\mathbf{d}}=0\), and for \(\mathcal{L}_{1}\) adaptive control we use (4) to compute \(\hat{\mathbf{d}}\) in real time [11]. For our experiments, we set \(K_{P}=\text{diag}([6\ 6\ 6])\), \(K_{I}=\text{diag}([1.5\ 1.5\ 1.5])\), \(K_{D}=\text{diag}([4\ 4\ 4])\), \(K_{R}=\text{diag}([120\ 120\ 0])\), and \(K_{\text{yaw}}=13.75\). PID gains were empirically tuned on the hardware platform to track both smooth and infeasible trajectories while minimizing crashes.
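A compact sketch of Eq. (6) follows; the scalar yaw difference stands in for the quaternion composition \(\psi\otimes\psi_{\text{ref}}\), and \(\hat{\mathbf{d}}\) is taken in acceleration units, both simplifying assumptions.

```python
import numpy as np

def tilt_prioritized_control(p, v, p_d, v_d, a_d, R, err_int, d_hat,
                             psi, psi_ref, K):
    """Differential-flatness baseline of Eq. (6). K is a dict of gains;
    err_int is the integrated position error, maintained by the caller."""
    g = np.array([0.0, 0.0, -9.81])
    a_fb = (-K["P"] @ (p - p_d) - K["D"] @ (v - v_d) - K["I"] @ err_int
            + a_d - g - d_hat)                              # (6a)
    z_fb = a_fb / np.linalg.norm(a_fb)
    z = R @ np.array([0.0, 0.0, 1.0])
    f_sigma = a_fb @ z                                      # (6b)
    psi_fb = -K["yaw"] * (psi - psi_ref)                    # scalar yaw error
    omega_des = -K["R"] @ np.cross(z_fb, z) + psi_fb * z    # (6c)
    return f_sigma, omega_des
```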
**Nonlinear MPC and Adaptive Nonlinear MPC** We use Model Predictive Path Integral (MPPI) [4] control as our second nonlinear baseline. MPPI is a sampling-based nonlinear optimal control
technique that computes the optimal control sequence w.r.t. a known dynamics model and specified cost function. In our implementation, we use (1) (\(\mathbf{d}=0\)) as the dynamics model with the body thrust \(f_{\Sigma}\) and angular velocity \(\mathbf{\omega}\) as the control input. The cost function is the sum of the position error norms along \(k=40\) horizon steps. We use 8192 samples, \(dt=0.02\), and a temperature of 0.05 for the softmax. For adaptive MPC, similar to prior works [18; 12], we augment the standard MPPI with the disturbance estimation \(\hat{\mathbf{d}}\) from \(\mathcal{L}_{1}\) adaptive control, which we refer to as \(\mathcal{L}_{1}\)-MPC.
**RMA** We compare against RMA for our adaptive control baseline. Instead of using \(\mathcal{L}_{1}\) to estimate \(\hat{\mathbf{d}}\), we train an adaptation neural network \(\psi\) that predicts \(\hat{\mathbf{d}}\) from a history of state-action pairs using the RMA method (denoted DATT-RMA), similar to prior works [29]. We first train our policy \(\pi\) in sim using PPO as usual, but conditioned on the ground truth \(\mathbf{d}\). To train \(\psi\), we then roll out \(\pi\) with \(\hat{\mathbf{d}}\) predicted by a randomly initialized \(\psi\) for 500 timesteps. \(\psi\) is then trained with supervised learning to minimize the loss \(\|\hat{\mathbf{d}}-\mathbf{d}\|\). We repeat this process for \(10000\) iterations, at which point the loss converges. Our adaptation network \(\psi\) takes as input the previously seen 50 state-action pairs, and the architecture consists of three 1D convolutional layers with 64 channels and a kernel size of 8 each, followed by three fully connected layers of size 32 and ReLU activations.
### Arbitrary Trajectory Tracking
We first evaluate the trajectory tracking performance of DATT compared to the baselines in the absence of disturbances. We test on both infeasible zigzag trajectories and smooth polynomial trajectories. Each controller is run twice on the same bank of 10 random zigzag trajectories and 10 random polynomials. Results are shown in Table 1. For completeness, we also compare with the tracking performance of adaptive controllers in the absence of any disturbances. We also compare our method to a version without adaptation, meaning that we enforce \(\hat{\mathbf{d}}=\mathbf{0}\).
We see that DATT achieves the most accurate tracking, with a fraction of the compute cost of MPC. With our current gains, the nonlinear and \(\mathcal{L}_{1}\) adaptive control baselines are unable to track the infeasible trajectory. With reduced controller gains, it is possible these controllers would not crash when tracking the infeasible trajectories, but doing so would greatly decrease their performance for smooth trajectories.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & Smooth trajectory & Infeasible trajectory & Inference time (\(\mathrm{ms}\)) \\ \hline Nonlinear tracking control & \(0.098\pm 0.012\) & _crash_ & 0.21 \\ \(\mathcal{L}_{1}\) adaptive control & \(0.091\pm 0.009\) & _crash_ & 0.93 \\ MPC & \(0.104\pm 0.009\) & \(0.183\pm 0.027\) & 12.62 \\ \(\mathcal{L}_{1}\)-MPC & \(0.088\pm 0.010\) & \(0.181\pm 0.031\) & 13.10 \\ DATT (w/ \(\mathbf{\hat{d}}=0\)) & **0.054**\(\pm 0.013\) & **0.089**\(\pm 0.026\) & 2.41 \\ DATT & **0.049**\(\pm 0.017\) & **0.083**\(\pm 0.023\) & 3.17 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Tracking error (in \(\mathrm{m}\)) of DATT vs. baselines, without any environmental disturbances (no wind or plate). _crash_ indicates a crash for all ten trajectory seeds.
Figure 3: **Left**: Crazyflie 2.1 with a swinging cardboard drag plate in an unsteady wind field. **Right**: Comparison between our methods with and without adaptation with the drag plate on a zigzag trajectory. With wind added, adaptation is needed; otherwise the drone crashes.
### Adaptation Performance in Unknown Wind Fields with a Drag Plate
To evaluate the ability of DATT to compensate for unknown disturbances, we test the Crazyflie in a high wind scenario with three fans and an attached soft cardboard plate hanging below the vehicle body. Figure 3 shows this experimental setup. We note that this setup differs significantly from simulation: the placement of the fans and the soft cardboard plate creates highly dynamic and state-dependent force disturbances, as well as torque disturbances, yet in simulation we model only the force disturbance as a simple random walk. However, our policy is able to generalize well zero-shot to this domain, as shown in Table 2.
In Table 2, we see that the baseline nonlinear adaptive controller is unable to track infeasible trajectories, similar to the experiment without adaptation. Our method with adaptation enabled is able to track all the trajectories tested, with the lowest tracking error. We also verify that using \(\mathcal{L}_{1}\) adaptive control results in better performance than using RMA. We note that this is due to a large sim2real gap with the adaptation network for RMA, which we discuss in the Appendix. Figure 3 shows the difference in tracking performance between our method using adaptive control and our method without, on an example zigzag trajectory with a drag plate. We see that our approach of integrating \(\mathcal{L}_{1}\) adaptive control with our policy controller is effective in correcting the error introduced by the presence of the turbulent wind field and plate. Our method performs better than \(\mathcal{L}_{1}\)-MPC without any knowledge of the target domain, and with a fraction of the compute cost. Figures 5 and 6 in the Appendix visualize the tracking performance of DATT vs. \(\mathcal{L}_{1}\)-MPC on an infeasible and a smooth trajectory, respectively.
#### Acknowledgments
We would like to acknowledge the Robot Learning Lab at the University of Washington for providing the resources for this paper. We would also like to thank the reviewers for their helpful and insightful comments.
|
2303.11724 | Task-based Generation of Optimized Projection Sets using Differentiable
Ranking | We present a method for selecting valuable projections in computed tomography
(CT) scans to enhance image reconstruction and diagnosis. The approach
integrates two important factors, projection-based detectability and data
completeness, into a single feed-forward neural network. The network evaluates
the value of projections, processes them through a differentiable ranking
function and makes the final selection using a straight-through estimator. Data
completeness is ensured through the label provided during training. The
approach eliminates the need for heuristically enforcing data completeness,
which may exclude valuable projections. The method is evaluated on simulated
data in a non-destructive testing scenario, where the aim is to maximize the
reconstruction quality within a specified region of interest. We achieve
comparable results to previous methods, laying the foundation for using
reconstruction-based loss functions to learn the selection of projections. | Linda-Sophie Schneider, Mareike Thies, Christopher Syben, Richard Schielein, Mathias Unberath, Andreas Maier | 2023-03-21T10:29:30Z | http://arxiv.org/abs/2303.11724v1 | # Task-based Generation of Optimized Projection Sets using Differentiable Ranking
###### Abstract
We present a method for selecting valuable projections in computed tomography (CT) scans to enhance image reconstruction and diagnosis. The approach integrates two important factors, projection-based detectability and data completeness, into a single feed-forward neural network. The network evaluates the value of projections, processes them through a differentiable ranking function and makes the final selection using a straight-through estimator. Data completeness is ensured through the label provided during training. The approach eliminates the need for heuristically enforcing data completeness, which may exclude valuable projections. The method is evaluated on simulated data in a non-destructive testing scenario, where the aim is to maximize the reconstruction quality within a specified region of interest. We achieve comparable results to previous methods, laying the foundation for using reconstruction-based loss functions to learn the selection of projections.
## 1 Introduction
In the field of computed tomography (CT), a series of projections are obtained to produce a three-dimensional representation of the object of interest. However, not all projections are equally essential for image reconstruction and diagnostic purposes [2]. A projection is considered more valuable when the amount of information gain in the chosen set of projections for reconstruction is higher. Selecting the most valuable projections can enhance the detection of anomalies for defects, improve imaging efficiency, and minimize noise and artefacts in the final reconstruction. In order to determine an optimized CT trajectory, it is necessary to balance the individual value of each projection with the overall value of the set of projections used for reconstruction.
One approach to selecting valuable projections is through task-based image quality metrics. These metrics quantify the performance of a given projection with respect to a specific imaging task, such as detecting small structures or preserving low contrast details. By evaluating each projection using a task-based image quality metric, the most valuable projections can be selected for inclusion in the image reconstruction process. As a secondary criterion, data completeness should be satisfied. Simultaneously optimizing both projection-based metrics and set-based metrics is essential for improving the quality of the reconstruction. However, being effective in projection-based metrics does not necessarily result in being valuable in the context of the set-based metrics.
In our recent work [1], we investigated the trainability of the projection-dependent detectability index (PDI) for a specific class of objects and its usability for CT trajectory optimization. Because this does not ensure data variability if computed per projection, we additionally introduced a haversine distance constraint in the optimization. The aim is to optimize the quality of the reconstruction in a predefined region of interest. The optimization problem is formulated to maximize the detectability index predicted by a neural network, while ensuring that the haversine distance constraint is met. However, this approach may have excluded valuable projections that enhance resolution in the region of interest.
In this work, we present an approach to address the issue of manually incorporating data completeness into the data analysis process. Our solution involves integrating this constraint into the neural network architecture. The network output directly indicates an optimized set of projections, which can be used to reconstruct the volume with high accuracy in the region of interest. To achieve this, we propose an adaptation of the ResNet-18 architecture, which outputs a hidden representation for each projection; these representations are processed through a differentiable ranking function to rank the projections. A straight-through estimator is used to make the final selection of projections. To connect this output to a set of projections, the integer program introduced in [1] is used to generate a label during training. Our approach is unique in that it directly balances individual projection value and overall set value, ensuring that the final selection of projections maximizes reconstruction quality in the region of interest. We demonstrate the utility of our method in a non-destructive testing scenario by maximizing the reconstruction quality within a specified region of interest using a limited number of projections.
To summarize, this paper makes the following contributions:
* We introduce a network architecture that integrates both projection-based and set-based evaluation metrics for selecting a pre-defined number of projections.
* Our approach demonstrates the capability of learning a heuristic that is not limited to individual projections, thereby establishing the basis for using reconstruction-based loss functions to guide the selection of projections in future studies.
## 2 Methods
We propose a combination of projection-based and set-based evaluation metrics using a three-step neural network approach. The first step involves reducing each projection to a single value using a modified ResNet-18 architecture. The regressed values are then collected in a single vector, which represents the valuability of each projection. The second step involves applying a differentiable ranking function (described in Section 2.2) to this vector, resulting in a ranking of the projections. Lastly, the threshold function of the Straight-Through Estimator (detailed in Section 2.3) is applied to the ranking to convert it into a binary vector that represents the selection of projections. An overview can be seen in Figure 1.
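A minimal PyTorch sketch of the ranking-and-selection head is given below; the softmax relaxation in the backward pass is a simplifying stand-in for the differentiable ranking function of Section 2.2, while the straight-through trick mirrors Section 2.3.

```python
import torch

def select_top_k(scores, k, temperature=0.1):
    """Straight-through top-k selection over per-projection scores.

    Forward pass: hard binary mask over the k highest-ranked projections.
    Backward pass: gradients flow through a softmax relaxation (an assumed
    surrogate for the differentiable ranking function used in the paper).
    """
    soft = torch.softmax(scores / temperature, dim=-1)
    hard = torch.zeros_like(scores)
    hard.scatter_(-1, scores.topk(k, dim=-1).indices, 1.0)
    return hard + soft - soft.detach()  # value of `hard`, gradient of `soft`

# One regressed value per projection; the resulting binary mask is trained
# against the integer-program label with a binary cross-entropy loss.
scores = torch.randn(1000, requires_grad=True)
mask = select_top_k(scores, k=100)
```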
### Projection-Dependent Detectability Index
The projection-dependent detectability index is a measure of the quality of a single projection and its ability to contribute to the observability of a signal in a reconstructed image. It is used to evaluate the performance of different CT projection angles. The PDI is calculated using the non-prewhitening matched filter observer model (NPWM) described in Stayman et al. [2, 3, 4]. The NPWM model defines the modulation transfer function (MTF) and noise power spectrum (NPS) as functions of the position of the target voxels in the volume, denoted by (x,y,z). The analytical equations for both the MTF and NPS in the context of iterative penalized-likelihood reconstruction were developed by Gang et al. [5]. The PDI is given by the equation
\[d^{2}(x,y,z)=\frac{\left[\iiint\left|\mathrm{MTF}(x,y,z)\cdot W_{\mathrm{task}}\right|^{2}df_{x}\,df_{y}\,df_{z}\right]^{2}}{\iiint\left|\mathrm{MTF}(x,y,z)\cdot W_{\mathrm{task}}\right|^{2}\,\mathrm{NPS}(x,y,z)\,df_{x}\,df_{y}\,df_{z}}, \tag{1}\]

where \(W_{\mathrm{task}}\) denotes the task function specifying the spatial-frequency content of the imaging task, and the integrals run over the spatial frequencies \((f_{x},f_{y},f_{z})\).
with a radius of \(1\,\mathrm{mm}\). As a result, \(18\) distinct test specimens were obtained.
We defined a geometry for the CT imaging setup by fixing the detector-isocentre distance and the source-detector distance to \(3\,\mathrm{m}\) and \(4\,\mathrm{m}\), respectively. The detector and source were positioned to face each other, which results in restricting the scan positions to a sphere. We parametrized the problem using azimuth angle \(\varphi\) and elevation angle \(\theta\) using spherical coordinates. Each CT trajectory consisted of a set of pairs \((\varphi_{i},\theta_{i})\) for \(i\in 0,\dots,N\), where \(N\) is the total number of projection images. Initially, we sampled \(N=1000\) scan positions on a sphere using a Fibonacci-based sampling for uniform surface coverage [9].
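Fibonacci sampling places points along a golden-angle spiral, uniform in \(\sin\theta\); the construction below is one standard variant and may differ in detail from the scheme of [9].

```python
import numpy as np

def fibonacci_sphere(n=1000):
    """Quasi-uniform scan positions on the unit sphere, returned as
    (azimuth phi, elevation theta) pairs in radians."""
    golden = (1 + 5 ** 0.5) / 2
    i = np.arange(n)
    phi = (2 * np.pi * i / golden) % (2 * np.pi)  # golden-angle azimuth steps
    theta = np.arcsin(2 * (i + 0.5) / n - 1)      # uniform in sin(theta)
    return phi, theta
```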
The projection data was simulated using the Fraunhofer EZRT simulation software XSimulation with a \(225\,\mathrm{keV}\) polychromatic spectrum. The test specimens were placed at the centre of the world coordinate system. The detector was chosen to have a size of \(375\times 375\) pixels and a pixel pitch of \(400\,\mathrm{\mu m}\times 400\,\mathrm{\mu m}\). The PDI was calculated analytically using Equation (1). This allowed us to determine a set of projections through the integer optimization problem including the haversine distance constraint introduced in [1], which resulted in a binary vector \(y\in\{0,1\}^{N}\) where \(1\) indicates a chosen projection. In this work, this vector is used as the label during training. In this study, we set the number of possible projections in the optimized CT trajectory to \(k=100\).
### Neural Network Architecture
The network was trained on \(1000\) projections of \(15\) distinct test specimens, such that \(3\) test specimens, one for each type of test specimen, remained to test the network performance. To predict the optimal set of projections for a specific task, we employed a Convolutional Neural Network (CNN) approach. The ResNet-18 architecture was adapted for regression by adding a single fully connected layer to perform the regression towards a scalar value. The network was trained using a Binary Cross Entropy loss function with the Adam optimizer and a learning rate of \(3\times 10^{-5}\). The pre-trained ResNet-18 was initialized with weights from the ImageNet dataset and the fully connected regression layer was initialized randomly. The final selection of projections was determined using the straight-through estimator as described in Section 2.3, yielding a binary representation of the ranking where a value of \(1\) indicates the projection belongs to the \(k\) highest-ranked projections. The training process utilized a batch size of \(1\) corresponding to a complete CT scan of a test specimen and was performed for a maximum of \(600\) epochs.
## 4 Results
To assess the performance of the proposed method, a comparison was conducted between the reconstructed volume obtained from the predicted optimal set of projections and the optimal set of projections according to the integer program in [1]. The reconstruction was performed using the Algebraic Reconstruction Technique (ART) with \(3\) iterations. The evaluation of the reconstructed images was based on two standard image quality metrics, the Structural Similarity Index (SSIM) and the Root Mean Squared Error (RMSE). The results of this comparison are presented in Table 1. Our analysis showed a positive correlation between a decrease in RMSE and an improvement in SSIM. Moreover, the location of the defect was crucial for both the reference method and the proposed method to perform effectively. Our proposed method was able to achieve results that were comparable to the projection set indicated by the label, especially in terms of reconstruction quality. This is shown in Figure 2, where the grey values of a specific slice of our heartgear test specimen were examined. It was found that the areas of the defect between the prediction and the label are very similar. In addition, it can be seen that the structure of the defect in particular was clearly depicted, although the reconstruction did not reach the resolution of the reference reconstruction. Even though there were more reconstruction artefacts in our proposed method, this did not affect the reconstruction quality in the region of interest, as shown by the results in Table 1.
Figure 1: Overview of the proposed approach. First, each projection is regressed to a single value. This value is used as a basis for the ranking. To generate a selection out of the ranking, we utilize a straight-through estimator.
## 5 Discussion and Conclusion
This paper presents a new approach for selecting valuable projections in computed tomography scans, with the aim of improving reconstruction quality in a region of interest. The proposed solution uses a neural network architecture that integrates the projection-based detectability index and data completeness into a single model. The network, based on a modified ResNet-18 architecture, evaluates the value of projections, passes them through a differentiable ranking function, and the final selection is made using a straight-through estimator.
The results suggest that our proposed method is capable of selecting projections for a predefined task and a known object structure. Its performance is comparable to that of the baseline method used for labelling, which demonstrates that the selection of projections can be learned by the neural network. This finding suggests that other set-based metrics, such as a reconstruction-based loss, can also be learned with our presented approach. One drawback of the proposed solution is that it is object-specific, meaning that it is trained on a specific object structure and does not generalize to structures not seen during training. Despite this, the approach is still suitable for non-destructive testing, as it is often necessary to examine similar or identical objects.
For future work, we plan to add an embedding of the projection positions, which may further guide the neural network toward data completeness.
## Acknowledgements
This research was financed by the "SmartCT - Methoden der Künstlichen Intelligenz für ein autonomes Roboter-CT System" project ("SmartCT - Methods of Artificial Intelligence for an Autonomous Robotic CT System", project no. DIK-2004-0009).
|
2301.10243 | Polarized signatures of adiabatically expanding hotspots in Sgr A*'s
accretion flow | We report 235 GHz linear and circular polarization (LP and CP) detections of
Sgr A* at levels of $\sim10\%$ and $\sim-1\%$, respectively, using ALMA. We
describe the first full-Stokes modeling of an observed submillimeter flare with
an adiabatically-expanding synchrotron hotspot using a polarized radiative
transfer prescription. Augmented with a simple full-Stokes model for the
quiescent emission, we jointly characterize properties of both the quiescent
and variable components by simultaneously fitting all four Stokes parameter
light curves. The hotspot has magnetic field strength $71$ G, radius $0.75$
Schwarzschild radii, and expands at speed $0.013$c assuming magnetic
equipartition. The magnetic field's position angle projected in the
plane-of-sky is $\approx55^\circ$ East of North, which previous analyses reveal
as the accretion flow's angular momentum axis and further supports Sgr A*
hosting a magnetically-arrested disk. The magnetic field is oriented
approximately perpendicular to the line of sight, which suggests repolarization
as the cause of the high circular-to-linear polarization ratio observed at
radio frequencies. We additionally recover several properties of the quiescent
emission, consistent with previous analyses of the accretion flow, such as a
rotation measure $\approx-4.22\times10^{5}$ rad m$^{-2}$. Our findings provide
critical constraints for interpreting and mitigating the polarized variable
emission in future Event Horizon Telescope images of Sgr A*. | Joseph M. Michail, Farhad Yusef-Zadeh, Mark Wardle, Devaky Kunneriath | 2023-01-24T18:59:58Z | http://arxiv.org/abs/2301.10243v1 | # Polarized signatures of adiabatically expanding hotspots in Sgr A*'s accretion flow
###### Abstract
We report 235 GHz linear and circular polarization (LP and CP) detections of Sgr A* at levels of \(\sim\) 10% and \(\sim-1\%\), respectively, using ALMA. We describe the first full-Stokes modeling of an observed submillimeter flare with an adiabatically-expanding synchrotron hotspot using a polarized radiative transfer prescription. Augmented with a simple full-Stokes model for the quiescent emission, we jointly characterize properties of both the quiescent and variable components by simultaneously fitting all four Stokes parameter light curves. The hotspot has magnetic field strength 71 G, radius 0.75 Schwarzschild radii, and expands at speed 0.013c assuming magnetic equipartition. The magnetic field's position angle projected in the plane-of-sky is \(\approx 55^{\circ}\) East of North, which previous analyses reveal as the accretion flow's angular momentum axis and further supports Sgr A* hosting a magnetically-arrested disk. The magnetic field is oriented approximately perpendicular to the line of sight, which suggests repolarization as the cause of the high circular-to-linear polarization ratio observed at radio frequencies. We additionally recover several properties of the quiescent emission, consistent with previous analyses of the accretion flow, such as a rotation measure \(\approx-4.22\times 10^{5}\) rad m\({}^{-2}\). Our findings provide critical constraints for interpreting and mitigating the polarized variable emission in future Event Horizon Telescope images of Sgr A*.
keywords: Galaxy: centre - techniques: photometric - techniques: polarimetric - stars: individual: Sgr A* - techniques: interferometric
## 1 Introduction
Sagittarius A* (Sgr A*) is the \((4.152\pm 0.014)\times 10^{6}\) M\({}_{\odot}\) supermassive black hole located at the Galactic Center at a distance of 8178 pc from the Earth (Gravity Collaboration et al., 2019). Sgr A* is well-known to vary across the electromagnetic spectrum (e.g., Yusef-Zadeh et al., 2011; Neilsen et al., 2013; Subroweit et al., 2017; Do et al., 2019; Witzel et al., 2021; Wielgus et al., 2022). In the radio and submillimeter (submm), the emission is dominated by two components, both of which originate from the accretion flow: (quasi-)quiescent and variable radiation. The accretion flow is turbulent, which causes variations in its flux on timescales \(\gtrsim\) minute. The accretion flow produces an overall net level of quiescent emission on top of low- and high-level amplitude changes owing to its variable nature. The large-amplitude variations are known as "flares" and dominate the variable emission. Previous work on Sgr A*'s variability in the radio/submm regimes has focused on total intensity observations. However, these earlier results largely ignore the polarization of Sgr A*, which helps uncover the accretion flow's magnetic properties.
Bower et al. (1999a, c, 2001) first searched for linear polarization (LP) between 5-112 GHz and found Sgr A* to be linearly unpolarized at these frequencies. However, circular polarization (CP) was detected at 5 and 8 GHz (Bower et al., 1999b; Sault and Macquart, 1999). Further observations extended polarimetric wavelength coverage from 1.5 GHz to 230 GHz (Bower et al., 2002; Tsuboi et al., 2003; Munoz et al., 2012). Aitken et al. (2000) first hinted at intrinsic LP from Sgr A* at 400 GHz; however, their measurements may have been contaminated by the abundance of polarized dust immediately surrounding Sgr A* in the circumnuclear disk (e.g., Hsieh et al., 2018).
The first interferometric observations detected LP at 230 and 340 GHz (Bower et al., 2003; Marrone et al., 2006). Later observations broadened this range from 86 to 700 GHz (Macquart et al., 2006; Liu et al., 2016, 2016). Presently, Sgr A* is known to be circularly polarized from 1.5-230 GHz and linearly polarized between 86-700 GHz. Liu et al. (2016, 2016) find Sgr A*'s LP percent to increase from \(\sim\) 0.5% at 93 GHz to \(\sim\) 8.5% at 500 GHz, which may decrease at higher frequencies. Munoz et al. (2012) compiled CP measurements of Sgr A* from 1.5
GHz to 345 GHz finding levels of \(\sim-0.2\%\) to \(\sim-1\%\), respectively. While the absolute CP amplitude is known to vary (see Munoz et al., 2012), the _sign_ is consistently negative in the radio and submm in all currently published data, which they suggest is caused by a highly coherent magnetic field configuration throughout the accretion flow.
In addition to studying Sgr A*'s long-term polarimetric trends (i.e., Bower et al., 2002, 2005; Macquart et al., 2006; Munoz et al., 2012; Bower et al., 2018), the detection of hourly-timescale variation of LP by Marrone et al. (2006) opened up an additional avenue by which to study the accretion flow via the variable emission. Several models describing the total intensity flaring emission have been proposed, such as jets/outflows (Falcke & Markoff, 2000; Brinkerink et al., 2015) and adiabatically-expanding synchrotron hotspots embedded in the accretion flow (van der Laan, 1966; Yusef-Zadeh et al., 2006). Yusef-Zadeh et al. (2007) first modeled the full-Stokes light curves of these hotspots using an analytic formalism for the transfer of polarized synchrotron radiation through a homogeneous medium (Jones & O'Dell, 1977a). Supplementing this simple picture with full-Stokes radiative transfer presents a new opportunity to study the magnetic field configuration in a localized region of the accretion flow. Previously, only the equipartition magnetic field strength could be estimated from this model. However, the observed polarization light curves are regulated by the orientation of the magnetic field relative to the observer. The orientation is a crucial physical parameter that could not previously be determined. Yusef-Zadeh et al. (2007) tested this full-polarization hotspot model at 22 and 43 GHz; however, their analysis was limited as the LP level was low (\(\sim 0.2-0.8\%\)), and the data were noisy. Sgr A* is brighter and more linearly and circularly polarized at submm frequencies, which decrease the overall uncertainty in the polarimetric properties.
In this paper, we present the first full-Stokes modeling of Sgr A*'s submm flaring emission using the adiabatically-expanding hotspot model. This paper is organized as follows. In Section 2, we discuss the observations and processing of the data and analyze possible systematic issues in the CP products. In Section 3, we describe the models adopted for the quiescent and variable components used to fit the full-Stokes light curves and present the best-fit values. For the first time, we determine the orientation of the hotspot's magnetic field on the plane-of-sky and along the line-of-sight. We find the projected magnetic field to be oriented along the Galactic Plane and approximately perpendicular to the line-of-sight. This has interesting implications for the accretion flow's magnetic field configuration, which we discuss in Section 4. Furthermore, in Section 4, we discuss the other results in the context of previous analyses in the literature and consider some limitations with our chosen data set. Finally, in Section 5, we present a summary of our findings and discuss future work.
## 2 Data
### Observations and Processing
The Atacama Large Millimeter/submillimeter Array (ALMA) observed Sgr A* on 16 July 2017 in band 6 (\(\approx 230\) GHz) in full polarization (project ID 2016.A.00037.T). These data, part of a multi-wavelength campaign of Sgr A* concurrent with the Chandra X-ray Observatory and the Spitzer Space Telescope, were taken with the 12-meter array in the C40-5 configuration (the baselines ranged from 17 to 1100 meters). For our analysis, we focus only on the submm data.
The observation consists of two line and two continuum spectral windows. The two continuum windows are centered on 233 and 235 GHz, each having a bandwidth of 2 GHz with 64 31.25-MHz bandwidth channels. The first spectral line window is centered on SiO (5-4) at \(\approx\)217 GHz with a 1.875 GHz bandwidth of 1920 0.976-MHz channels. The second spectral line window is centered on \({}^{13}\)CO (2-1) at 220.398 GHz with 1920 0.244-MHz channels for a total bandwidth of \(\approx 0.47\) GHz. In this analysis, we average over all of the channels per spectral window to obtain four frequency-averaged continuum windows.
Only one of five execution blocks was observed owing to technical issues which occurred during the observation. We used the ALMA pipeline (version 2020.1.0.40) with CASA 6.1.1.15 (McMullin et al., 2007) to generate the calibrated data. The following calibrators are used to generate the calibration tables: J1733-1304 (flux), J1517-2422 (bandpass), J1549+0237 (instrumental polarization), and J1744-3116 (phase). The QA2 team designated these data "semi-pass" since the parallactic coverage (\(\approx 46^{\circ}\)) was lower than recommended to determine the instrumental polarization terms (\(60^{\circ}\)). Despite this, we were able to calibrate the instrumental polarization. We imaged and phase self-calibrated the data starting with a solution interval of 30 seconds and stopping at a single integration time (6.05 seconds). After phase self-calibration, we flagged any obvious misbehaving baselines or antennas.
We developed a CASA script to autonomously determine the full polarization light curves for a point source located at the phase center using TCLEAN and IMFIT. Briefly, the code bins every scan on a single source to a user-defined value for imaging. TCLEAN images the visibilities in all four Stokes parameters for each time bin. IMFIT is used to fit a point source + zero-level offset at the phase center, where Sgr A* is located, in each image and polarization to determine the point source flux density and (statistical) error. We construct the point source light curves using the fitted IMFIT parameters and export them to a text file for further analysis, where we calculate the polarization product light curves (see Appendix A for our chosen conventions). Since Sgr A* is surrounded by diffuse emission which is not fully resolved out in the observed configuration, this method yields contamination-free light curves without restricting the projected baseline length, which would lower the overall sensitivity.
We use 30-second binning in our analysis. Each image is \(1024\times 1024\) (\(\approx 51^{\prime\prime}\times 51^{\prime\prime}\)) and uses a cell size of \(0.\!\!^{\prime\prime}05\). We apply the standard 20% primary beam cut to remove imaging artifacts toward the edge of the image, resulting in a final image size of \(\approx 37^{\prime\prime}\times 37^{\prime\prime}\). We do not primary-beam correct the image since Sgr A* is at the phase center. We restrict the maximum number of iterations to 1000 to properly clean any extended emission while not cleaning noise artifacts. We show a sample image of Sgr A* during a 30-second binning time in all four Stokes parameters in Figure 1. In Figure 2, we show the final Stokes I, Q, U, and V, LP percent (\(p_{l}\)), CP percent (\(p_{c}\)), and LP angle (\(\chi\)) light
curves used in our analysis. Overall, we find Sgr A* to be linearly and circularly polarized at levels of \(\sim 10\%\) and \(\sim-1\%\), respectively. The definitions of these parameters and their uncertainties are detailed in Appendix A. We discuss the absence of CP products for J1744-3116 in Section 2.2.
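For reference, the polarization products follow directly from the Stokes light curves; the sketch below uses first-order error propagation and approximates the conventions of Appendix A, whose exact formulas may differ.

```python
import numpy as np

def polarization_products(I, Q, U, V, sI, sQ, sU, sV):
    """LP percent, CP percent, and LP angle (deg East of North) from Stokes
    light curves, with first-order propagated uncertainties."""
    P = np.hypot(Q, U)                        # linearly polarized flux density
    p_l = 100.0 * P / I
    p_c = 100.0 * V / I
    chi = 0.5 * np.degrees(np.arctan2(U, Q))  # EVPA
    s_P = np.sqrt((Q * sQ) ** 2 + (U * sU) ** 2) / P
    s_pl = p_l * np.hypot(s_P / P, sI / I)
    s_pc = np.abs(p_c) * np.hypot(sV / V, sI / I)
    s_chi = 0.5 * np.degrees(np.hypot(Q * sU, U * sQ) / P ** 2)
    return p_l, p_c, chi, s_pl, s_pc, s_chi
```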
### Verifying Sgr A*'s Circular Polarization Detection
There has been great care taken in previous polarimetric analyses to rule out calibration-error-based CP detections. Goddi et al. (2021) present a detailed description of the issue (see their Appendix G). In short, the polarization calibrator is assumed to have Stokes \(V=0\), which can induce a false (and time-dependent) CP onto the target sources. To check for systematics, we focus on the CP characteristics of J1517-2422 and J1744-3116 following the prescription given in Munoz et al. (2012). J1517-2422 has a similar declination to Sgr A* (\(17^{\rm h}45^{\rm m}40.04^{\rm s}\), \(-29^{\circ}00^{\prime}28.17^{\prime\prime}\)), while J1744-3116 has a comparable right ascension. To check for intrinsic CP for the calibrators, we image each spectral window in Stokes I and V during the entire observing window using the same non-interactive process in Section 2.1. We obtain a higher sensitivity image to detect CP by imaging the entire observation. We fit a point source to the phase center using IMFIT and report the integrated flux density and statistical error. These results are shown in Table 1. While IMFIT returns converged flux densities and errors for J1744-3116 in Stokes V, the images do not show circularly polarized emission at or near the phase center. To quantify the 3\(\sigma\) upper limit on CP, we calculate 3\(\times\) the Stokes V root-mean-square (RMS) provided by IMSTAT.
For J1517-2422, we detect a statistically significant \(p_{c}\approx-0.1\%\). Since this source is bright (\(>3\) Jy), residual or uncalibrated instrumental polarization terms in V could lead to spurious CP measurements. Despite the unfortunately sparse coverage of this source, we compare our results to those in the literature. Bower et al. (2018) report \(p_{c}\approx 0.1\%\) for this source in August 2016 at \(\approx 240\) GHz, having the same magnitude (but opposite sign) as our result. Following Goddi et al. (2021), we use the AMAPOLA1 project, which tracks the flux density and polarization properties of several ALMA calibrators for more nearby observations to July 2017. At 233 GHz, the CP of J1517-2422 ranged between roughly \(-0.4\%\) and \(0.3\%\) during January-April 2017 and between \(-1.0\%\) to \(-0.4\%\) between October-December 2017. Given that our \(-0.1\%\) detection is well within the historical average and that Sgr A* is at least \(10\times\) more circularly polarized than either J1517-2422 or J1744-3116, we robustly detect intrinsic CP from Sgr A*.
Footnote 1: [http://www.alma.cl/~skameno/AMAPOLA/](http://www.alma.cl/~skameno/AMAPOLA/)
The final aspect to consider is a time-dependent Stokes V leakage. We cannot directly check for this as J1744-3116 is not circularly polarized, and no other gain calibrators were observed. Goddi et al. (2021, Appendix G) study the measured Stokes V as a function of feed angle (parallactic angle + receiver rotation relative to the antenna mount) in search for uncalibrated Stokes V terms. In their April 2017 data (near 230 GHz), they found a modulating \(\approx 0.1\%\) leakage in Stokes V for the nearest calibrators to Sgr A* (J1733-1304 and J1924-2914). This modulation occurs over a range of \(\gtrsim 100^{\circ}\) in the feed angle. In this observation, the feed angle changes by only \(\approx 8^{\circ}\). By our estimates, this induces a maximum of \(\approx 2\) mJy (absolute) variation in Stokes V. As Sgr A*'s Stokes V light curves vary by \(\approx 15\) mJy, these time-dependent variations are intrinsic to Sgr A* and are not caused by uncalibrated polarization terms.
## 3 Modeling the Light Curves
We adopt a two-component model consistent with previous work to account for the variable and (quasi-)quiescent components of Sgr A*'s light curves. In contrast to previous work, however, we incorporate a full-Stokes picture. The flaring component is modeled as a homogeneous, spherical synchrotron hotspot adiabatically expanding at a constant speed on a roughly one-hour timescale. This model is characterized by several physical parameters, such as the initial radius, expansion speed, magnetic field strength and orientation, and power-law population of relativistic electrons. Our model does not intrinsically include orbital motion (i.e., a varying magnetic field orientation), gravitational effects
Figure 1: A sample of example images of Sgr A* on 16 July 2017 at 00:28:07 UTC using a 30-second binning time for each Stokes parameter at 233.5 GHz. The full image is \(\approx 51^{\prime\prime}\) per side. We flag pixels below a normalized primary beam limit of 20%, resulting in an image that is approximately \(37^{\prime\prime}\) per side. The panels use the same gray scale to show the noise level. The inset is a \(2.^{\prime\prime}5\times 2.^{\prime\prime}5\) subregion centered on Sgr A*.
(i.e., lensing), non-symmetric geometric evolution (i.e., shearing), nor a sense of the hotspot's location in the accretion flow. We account for secular variations in the accretion flow by modeling the slowly-varying frequency-dependent quiescent component. At each frequency, the four Stokes parameters are assumed to rise or fall linearly during the observation and are characterized by phenomenological parameters, such as gradients with respect to time and reference flux densities. Additionally, the Stokes parameters are frequency-dependent, accounting for physical properties of the accretion flow (such as rotation measure, RM), which we model with spectral indices and gradients with respect to frequency. The two components are described in detail below.
### Polarization Model for Flaring Emission
The Stokes I temporal- and frequency-dependent flaring emission is well-modeled by an adiabatically-expanding synchrotron plasma (henceforth referred to as a "hotspot;" van der Laan 1966; Yusef-Zadeh et al. 2006). The hotspot is homogeneous and characterized by five parameters: \(I_{p}\), \(p\), \(v_{\rm exp}\), \(R_{0}\), and \(t_{0}\). \(I_{p}\) is the peak flare flux density at frequency \(\nu_{0}\) at time \(t_{0}\) having radius \(R_{0}\), \(p\) is the electron energy power-law index (\(N(E)\propto E^{-p}\)) valid between energies \(E_{\rm min}\) and \(E_{\rm max}\), and \(v_{\rm exp}\) is the (normalized) radial expansion velocity. The flux density
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(\nu\) & Stokes I & Stokes V & \(p_{c}\) \\ [GHz] & [mJy] & [mJy] & [\%] \\ \hline \multicolumn{4}{c}{J1517-2422} \\ \hline
217.1 & \(3099\pm 0.54\) & \(-3.33\pm 0.17\) & \(-0.11\pm 0.005\) \\
220.0 & \(3088\pm 0.59\) & \(-2.37\pm 0.17\) & \(-0.08\pm 0.006\) \\
233.5 & \(3059\pm 0.61\) & \(-2.75\pm 0.17\) & \(-0.09\pm 0.006\) \\
235.0 & \(3049\pm 0.56\) & \(-2.16\pm 0.14\) & \(-0.07\pm 0.005\) \\ \hline \multicolumn{4}{c}{J1744-3116} \\ \multicolumn{4}{c}{\(3\sigma\) Upper Limits} \\ \hline
217.1 & \(217.4\pm 0.17\) & \(<|0.27|\) & \(<|0.12|\) \\
220.0 & \(214.3\pm 0.29\) & \(<|0.51|\) & \(<|0.24|\) \\
233.5 & \(210.2\pm 0.15\) & \(<|0.27|\) & \(<|0.13|\) \\
235.0 & \(207.8\pm 0.17\) & \(<|0.27|\) & \(<|0.13|\) \\ \hline \end{tabular}
\end{table}
Table 1: The measured Stokes I and V properties for the two non-instrumental polarization calibrators on 16 July 2017. The errors quoted are only statistical. We do not detect CP in J1744-3116 but do detect it in J1517-2422 at a level \(\approx-0.1\%\).
Figure 2: Full Stokes and polarization light curves of Sgr A* (red, left) and the phase calibrator J1744-3116 (blue, right) on 16 July 2017. As J1744-3116 is not circularly polarized (see Section 2.2), not shown are Stokes V and \(p_{c}\) for this source. Error bars for both sources are shown and are often smaller than the marker size.
at any frequency and size is calculated via
\[I_{f}(R)=I_{p}\left(\frac{\nu}{\nu_{0}}\right)^{5/2}\left(\frac{R}{R_{0}}\right) ^{3}\frac{f(\tau_{\nu})}{f(\tau_{0})}. \tag{1}\]
\(f(...)\) is a non-trivial function encompassing the full-Stokes radiative transfer equations (briefly described below). \(\tau_{\nu}\), the frequency- and size-dependent optical depth, is given by,
\[\tau_{\nu}(R)=\tau_{0}\left(\frac{\nu}{\nu_{0}}\right)^{-(p+4)/2}\left(\frac{R }{R_{0}}\right)^{-(2p+3)}. \tag{2}\]
\(\tau_{0}\) is the critical optical depth where the hotspot becomes optically thin and is determined by
\[e^{\tau_{0}}-\left(\frac{2p}{3}+1\right)\tau_{0}-1=0. \tag{3}\]
Finally, we assume a uniform expansion to relate the time and radius:
\[R(t)=R_{0}\left(1+v_{\rm{exp}}\left(t-t_{0}\right)\right). \tag{4}\]
Assuming magnetic equipartition, we can determine the hotspot's physical radius and expansion velocity, mass, magnetic field strength, and electron number density.
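For total intensity alone, \(f(\tau)\) reduces to \(1-e^{-\tau}\) in the van der Laan limit, which makes Equations (1)-(4) straightforward to evaluate; a sketch is given below (the full-Stokes \(f\) is replaced by the Jones & O'Dell solutions in the actual fits).

```python
import numpy as np
from scipy.optimize import brentq

def tau_0(p):
    """Nontrivial root of Eq. (3): exp(t) - (2p/3 + 1) t - 1 = 0."""
    return brentq(lambda t: np.exp(t) - (2 * p / 3 + 1) * t - 1, 1e-6, 20.0)

def hotspot_stokes_i(t, nu, I_p, p, v_exp, t0, nu0):
    """Stokes I light curve of Eqs. (1)-(4), with R in units of R0 and
    f(tau) = 1 - exp(-tau) (total-intensity, van der Laan limit)."""
    tc = tau_0(p)
    R = 1.0 + v_exp * (t - t0)                                     # Eq. (4)
    tau = tc * (nu / nu0) ** (-(p + 4) / 2) * R ** (-(2 * p + 3))  # Eq. (2)
    return I_p * (nu / nu0) ** 2.5 * R ** 3 \
        * (1.0 - np.exp(-tau)) / (1.0 - np.exp(-tc))               # Eq. (1)
```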
Given the remarkable high-sensitivity light curves found in Section 2, we model them using a full-Stokes adiabatically-expanding synchrotron hotspot. To do so, we use the prescription given in Jones & O'Dell (1977a), which describes the transfer of full-Stokes synchrotron radiation for a static source through a homogeneous medium, i.e., where \(v_{\rm{exp}}\ll c\). The Stokes I model described above temporally evolves depending upon the size of the emitting region. Since this temporal evolution is secular, we can model the expanding source as a sequence of hotspots with varying parameters (such as radius, magnetic field strength, and electron population) to account for the time-dependent properties as it grows. Therefore, we convert a stationary solution into a dynamic one to produce full-Stokes light curves of such a source.
Supplementing this model with polarization adds only two parameters: \(\theta\) and \(\phi\), which are related to the orientation of the magnetic field. A schematic of these angles is shown in Figure 3. \(\phi\) is the intrinsic electric vector position angle (EVPA; see Equation A7) of the hotspot, measured East of North, projected in the plane-of-sky (POS). It is closely related to the projected magnetic field orientation in the POS, \(\phi_{B}=\phi+\pi/2\). \(\phi\) and \(\phi_{B}\) are pseudovectors and obey the \(\pi\)-ambiguity. \(\theta\) is the angle of the projected magnetic field vector to the line of sight (LOS). In this convention, \(\theta=0^{\circ}\) and \(\theta=90^{\circ}\) occur when the projected magnetic fields are along and perpendicular to the LOS, respectively.
Under the assumption of homogeneity, the radiative transfer equations as given by Jones & O'Dell (1977a) read
\[\begin{bmatrix}J_{\nu}\\ \epsilon_{Q}J_{\nu}\\ 0\\ \epsilon_{V}J_{\nu}\end{bmatrix}=\begin{bmatrix}\left(\frac{d}{d\tau_{\nu}}+1\right)&\zeta_{Q}&0&\zeta_{V}\\ \zeta_{Q}&\left(\frac{d}{d\tau_{\nu}}+1\right)&\zeta_{V}^{\star}&0\\ 0&-\zeta_{V}^{\star}&\left(\frac{d}{d\tau_{\nu}}+1\right)&\zeta_{Q}^{\star}\\ \zeta_{V}&0&-\zeta_{Q}^{\star}&\left(\frac{d}{d\tau_{\nu}}+1\right)\end{bmatrix}\begin{bmatrix}I_{f}/\Omega\\ Q_{f}/\Omega\\ U_{f}/\Omega\\ V_{f}/\Omega\end{bmatrix}. \tag{5}\]
Inside the dielectric tensor, \(\tau_{\nu}\) is the optical depth, \(\zeta_{Q,V}\) are the Stokes Q and V absorption coefficients, \(\zeta_{V}^{\star}\) is the rotativity coefficient (responsible for Faraday rotation), and \(\zeta_{Q}^{\star}\) is the convertibility (or repolarization; Pacholczyk 1973) coefficient between LP and CP. \(\epsilon_{(Q,V)}\) are the Stokes Q and V emissivity coefficients, and \(J_{\nu}\) is a source function. \(\Omega\) is the solid angle subtended by the source given by \(\Omega\equiv\pi R^{2}/d^{2}\), where \(R\) and \(d\) are the radius of the hotspot and its distance from the Earth, respectively. Jones & O'Dell (1977a) provide analytic solutions for the emergent Stokes flux densities (\(I_{f}\), \(Q_{f}\), \(U_{f}\), and \(V_{f}\)) integrated over the homogeneous source (see their equations C4-C17).
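For readers who prefer a numerical check over the analytic expressions of Jones & O'Dell (1977a), the sketch below integrates Equation 5 through a homogeneous slab by rewriting it as \(d\mathbf{S}/d\tau=\mathbf{E}-\mathbf{M}\mathbf{S}\) with \(\mathbf{S}=[I,Q,U,V]/\Omega\); the coefficient names and values are placeholders, not our fitted quantities.

```python
import numpy as np
from scipy.integrate import solve_ivp

def emergent_stokes(tau_nu, J, eps_Q, eps_V, z_Q, z_V, z_Vstar, z_Qstar):
    """Integrate Eq. 5 from S(0) = 0 to optical depth tau_nu and return the
    emergent [I, Q, U, V] / Omega for a homogeneous source."""
    E = np.array([J, eps_Q * J, 0.0, eps_V * J])  # emissivity vector (LHS of Eq. 5)
    M = np.array([                                # Eq. 5 transfer matrix (unit diagonal)
        [1.0,      z_Q,      0.0,      z_V],
        [z_Q,      1.0,      z_Vstar,  0.0],
        [0.0,     -z_Vstar,  1.0,      z_Qstar],
        [z_V,      0.0,     -z_Qstar,  1.0],
    ])
    sol = solve_ivp(lambda _, S: E - M @ S, (0.0, tau_nu), np.zeros(4),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]
```

Once the \(\theta\)-dependence of \(\epsilon_{V}\), \(\zeta_{V}\), and \(\zeta_{V}^{\star}\) is supplied, this reproduces, for instance, the suppression of Stokes V emission and absorption as \(\theta\to 90^{\circ}\) discussed in Section 4.1.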
We note the number of assumptions made in this picture, specifically that the hotspot undergoes only secular evolution, which is a limitation in our modeling and differs from other approaches (such as Tiede et al. 2020; Gelles et al. 2021) that include the hotspot's evolution as it orbits Sgr A*. Our aim in formally fitting these data is to describe the general nature of the expanding hotspot; had these secondary processes dominated over adiabatic expansion, we would not expect this modeling to be successful. This is due to the frequency- and polarization-dependent coupling of the polarized radiative transfer equations (Equation 5; see also Section 4.4). These secondary effects can be included and are planned for future work.
### Quiescent Frequency and Temporal Variations
The quiescent component is known to have frequency- and time-dependent baselines that must be accounted for while modeling the flaring emission. The time dependence likely arises from continual, longer-term variability within the accretion flow. For example, Dexter et al. (2014) find an \(\sim\) 8-hour characteristic timescale in Sgr A*'s submm light curves. While their analysis focuses only on Stokes I, we include time-dependent terms for the other three Stokes parameters for consistency. If there is no time dependence in the Stokes Q, U, and V light curves, we expect their time-dependent fitting parameters to be consistent with 0. Frequency-dependent variations emerge from processes like optical depth (Stokes I and V) or RM (Stokes Q and U).
To account for the frequency- and time-dependent nature of the quiescent component, we use the following model:
\[I_{q}(\nu,t) =\left(I_{0}+I_{1}\left(t-t_{0}\right)\right)\left(\frac{\nu}{\nu_{0}}\right)^{\alpha_{I}}, \tag{6}\] \[Q_{q}(\nu,t) =Q_{0}+Q_{1}\left(t-t_{0}\right)+Q_{2}\left(\nu-\nu_{0}\right), \tag{7}\] \[U_{q}(\nu,t) =U_{0}+U_{1}\left(t-t_{0}\right)+U_{2}\left(\nu-\nu_{0}\right), \tag{8}\] \[V_{q}(\nu,t) =\left(V_{0}+V_{1}\left(t-t_{0}\right)\right)\left(\frac{\nu}{\nu_{0}}\right)^{\alpha_{V}}. \tag{9}\]
We choose two different frequency dependencies based on the Stokes parameter. The sign of Stokes I cannot, and the sign of Stokes V does not, change across the 18 GHz of bandwidth. (The sign of Stokes V does not change from \(1.4-340\) GHz; Munoz et al., 2012). Therefore, we use the classic frequency power-law form. Equations 6 and 9 follow the time- and frequency-dependent model in Michail et al. (2021). Owing to Faraday rotation, the signs of Stokes Q and U can change across the bandpass. Therefore, we model frequency-dependent changes in Stokes Q and U using a linear form. Here, \(I_{i},Q_{i},U_{i}\), and \(V_{i}\) are all constants; parameters with a "0" subscript reflect reference flux densities for the four Stokes parameters at frequency \(\nu_{0}\) at time \(t_{0}\). The time- and frequency-dependent slopes are denoted by parameters with subscripts "1" and "2", respectively. The spectral indices for Stokes I and V are \(\alpha_{I}\) and \(\alpha_{V}\), respectively.
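A minimal sketch of Equations 6-9 follows, assuming frequencies in GHz and times in hr UTC; the dictionary keys are illustrative stand-ins for the Table 2 quiescent parameters, and all values must share consistent flux units.

```python
def quiescent_model(nu, t, pars, nu0=235.1, t0=0.455):
    """Quiescent full-Stokes baselines (Eqs. 6-9).

    nu in GHz, t in hr UTC; `pars` holds the quiescent parameters, e.g.
    pars = {"I0": ..., "I1": ..., "alpha_I": ..., "Q0": ..., "Q1": ...,
            "Q2": ..., "U0": ..., "U1": ..., "U2": ..., "V0": ...,
            "V1": ..., "alpha_V": ...}.
    Returns (I_q, Q_q, U_q, V_q).
    """
    dt, dnu = t - t0, nu - nu0
    I = (pars["I0"] + pars["I1"] * dt) * (nu / nu0) ** pars["alpha_I"]
    Q = pars["Q0"] + pars["Q1"] * dt + pars["Q2"] * dnu
    U = pars["U0"] + pars["U1"] * dt + pars["U2"] * dnu
    V = (pars["V0"] + pars["V1"] * dt) * (nu / nu0) ** pars["alpha_V"]
    return I, Q, U, V
```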
### Results of Model Fitting
We use LMFIT (Newville et al., 2014) to simultaneously fit the 16 light curves (4 spectral windows \(\times\) 4 Stokes parameters) by minimizing the \(\chi^{2}\) of the variable + quiescent models (i.e., \(I_{v}=I_{f}+I_{q}\), \(Q_{v}=Q_{f}+Q_{q}\), \(U_{v}=U_{f}+U_{q}\), \(V_{v}=V_{f}+V_{q}\)) discussed above. Due to the lack of time coverage, we only fit the data through 00:36 hrs UTC. While there appears to be a second flare beginning near 00:45 hrs UTC, the time coverage is insufficient to model it. Therefore, we do not include those data in the fit. We discuss the implications of this limited time coverage in Section 4.3. For this analysis, we set the reference frequency to \(\nu_{0}=235.1\) GHz. In Table 2, we present the fitted parameter values and errors. In Figure 4 (left), we show the best-fit model superimposed on Sgr A*'s light curves.
#### 3.3.1 Variable Component
Modeling the light curves gives the six variable component parameters, which characterize the hotspot and are listed in Table 2. To determine physical parameters, we assume the hotspot is in magnetic equipartition with the electrons responsible for the synchrotron emission between energies \(E_{\rm min}\) and \(E_{\rm max}\). In Table 3, we present the physical properties of the hotspot fixing \(E_{\rm min}\) and \(E_{\rm max}\) to 1 and 500 MeV (\(\gamma_{e}\sim 2-1000\)), respectively. We disregard contributions from protons and non-relativistic electrons in the magnetic field strength, so this is a lower limit on the true value. Overall, we find a 235.1 GHz peak flare flux density of 0.19 Jy produced by an electron energy spectrum \(N(E)\propto E^{-3.1}\). The hotspot expands at a speed of \(\approx 0.013c\) with an equipartition magnetic field strength of 71 G and a radius of 0.75 \(R_{\rm S}\) (1 \(R_{\rm S}=1.23\times 10^{12}\) cm for a \(4.152\times 10^{6}\)\(M_{\odot}\) Schwarzschild black hole). Our model robustly detects the two new parameters in this full-polarization fit: \(\theta\) and \(\phi\). For the intrinsic EVPA of
Figure 3: We show several perspectives of the various angles used in this analysis. (a) The general schematic setup, where “N,” “E,” and “\(\hat{k}\)” denote north, east (in equatorial coordinates), and the unit vector toward the observer, respectively. The hotspot (gray sphere) possesses a three-dimensional magnetic field vector (\(\hat{B}\), orange). The pink dashed line denotes the projected magnetic field orientation (\(\phi_{B}\)) in the North-East plane perpendicular to the LOS, measured East of North. The dot-dashed cyan arrow shows the angle between the projected magnetic field vector to the LOS (\(\theta\)). (b) The schematic along the observer’s LOS. The dashed pink line again shows the projected magnetic field orientation. In this analysis, we focus on the electric vector position angle (EVPA, \(\phi\)) shown as a solid green line. The EVPA is also measured East of North. \(\phi_{B}\) and \(\phi\) are related by \(\phi_{B}=\phi+\pi/2\) and is wrapped through the \(\pi\)-ambiguity. For clarity, we do not show the dot-dashed cyan vector. (c) A “side” view along the eastern direction to show the projected magnetic field vector along the LOS. Again, for clarity, we do not show the dashed pink or solid green lines.
the source, we find \(\phi\approx 145^{\circ}\), corresponding to \(\phi_{B}=55^{\circ}\) East of North (\(\phi+\pi/2\) wrapped through the \(\pi\)-ambiguity). Additionally, we determine \(\theta=90.09^{\circ}\), placing the projected magnetic field orientation approximately perpendicular to the LOS.
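As a quick check of this wrapping, a one-line sketch:

```python
phi = 144.8
phi_B = (phi + 90.0) % 180.0  # pi-ambiguity wrap: 234.8 -> 54.8 deg, i.e. ~55 deg East of North
```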
To compare the overall variability of the flaring component to the quiescent emission, we calculate the hotspot's mean LP and CP and their relative fractional change (\(\mathrm{RFC}\equiv(\mathrm{max}-\mathrm{min})/\mathrm{average}\)). During the modeled range, we find the flare to have average LP and CP of \(\approx 35\%\) and \(\approx-4.2\%\), respectively, at 235.1 GHz. The LP goes from a minimum of \(\approx 9.5\%\) to a maximum of \(\approx 81\%\), giving an \(\mathrm{RFC}=2.04\). The CP ranges from \(\approx-15\%\) to \(\approx-0.1\%\) with an \(\mathrm{RFC}=3.48\).
#### 3.3.2 Quiescent Component
We find statistically-significant time dependencies in the quiescent component's Stokes I, Q, and U light curves. In Figure 4 (right), we present the quiescent-only full-Stokes light curves during our modeled range. While the Stokes I time-dependence has been observed previously (e.g., Michail et al., 2021), this is the first detection of the quiescent component's Stokes Q and U time-variability. We do not find any changes in the quiescent emission's Stokes V properties, as the time-dependent term is not significant. An uncalibrated Stokes V polarization term (Section 2.2) would contribute only to the final fitted quiescent value, not to its variability; this is further proof that the variations in Stokes V are intrinsic to Sgr A*'s flaring emission.
We find the quiescent emission has an average LP of \(\approx 12\%\) and an average CP of \(\approx-0.7\%\). The LP ranges between \(\approx 9.9\%\) and \(\approx 14\%\), giving an \(\mathrm{RFC}=0.31\). Since we conclude above that the Stokes V quiescent emission is not time-dependent, we do not calculate its RFC.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Parameter & Description & Value & Unit \\ \hline \multicolumn{4}{c}{Hotspot} \\ \hline \(I_{p}\) & Peak Flare Flux Density at 235.1 GHz & \(0.19\pm 0.01\) & Jy \\ \(p\) & Electron Power-Law Index & \(3.10\pm 0.05\) & – \\ \(v_{\mathrm{exp}}\) & Relative Expansion Speed & \(1.48\pm 0.05\) & hr\({}^{-1}\) \\ \(\phi\) & Intrinsic EVPA projected in POS (East of North) & \(144.8\pm 1.5\) & degrees \\ \(\theta\) & Angle of projected magnetic field vector relative to LOS & \(90.09\pm 0.01\) & degrees \\ \(t_{0}\) & Time of Peak Flux at 235.1 GHz & \(0.455\pm 0.002\) & hr UTC \\ \hline \multicolumn{4}{c}{Quiescent Component} \\ \hline \(I_{0}\) & Stokes I flux density at \(t=t_{0}\) at 235.1 GHz & \(2.46\pm 0.01\) & Jy \\ \(Q_{0}\) & Stokes Q flux density at \(t=t_{0}\) at 235.1 GHz & \(-82.6\pm 3.7\) & mJy \\ \(U_{0}\) & Stokes U flux density at \(t=t_{0}\) at 235.1 GHz & \(-292.3\pm 4.7\) & mJy \\ \(V_{0}\) & Stokes V flux density at \(t=t_{0}\) at 235.1 GHz & \(-16.5\pm 1.2\) & mJy \\ \(I_{1}\) & Stokes I Time-Dependent Slope & \(-238.2\pm 13.0\) & mJy hr\({}^{-1}\) \\ \(Q_{1}\) & Stokes Q Time-Dependent Slope & \(-118.3\pm 14.2\) & mJy hr\({}^{-1}\) \\ \(U_{1}\) & Stokes U Time-Dependent Slope & \(-146.6\pm 16.5\) & mJy hr\({}^{-1}\) \\ \(V_{1}\) & Stokes V Time-Dependent Slope & \(6.5\pm 5.2\) & mJy hr\({}^{-1}\) \\ \(\alpha_{I}\) & Stokes I Spectral Index & \(0.16\pm 0.01\) & – \\ \(Q_{2}\) & Stokes Q Frequency-Dependent Slope & \(2.82\pm 0.07\) & mJy GHz\({}^{-1}\) \\ \(U_{2}\) & Stokes U Frequency-Dependent Slope & \(-2.88\pm 0.09\) & mJy GHz\({}^{-1}\) \\ \(\alpha_{V}\) & Stokes V Spectral Index & \(-1.31\pm 0.45\) & – \\ \hline \end{tabular}
\end{table}
Table 2: Fitted parameters (with errors) for the joint quiescent and variable model discussed in Section 3.
\begin{table}
\begin{tabular}{c c c} \hline \hline Parameter & Description & Value \\ \hline \multicolumn{3}{c}{Adopted Parameters} \\ \hline \(E_{\mathrm{min}}\) & Electron Lower Energy Bound & 1 MeV \\ \(E_{\mathrm{max}}\) & Electron Upper Energy Bound & 500 MeV \\ \hline \multicolumn{3}{c}{Derived Parameters} \\ \hline \(n_{c}\) & Electron density & \(6.5\times 10^{7}\) cm\({}^{-3}\) \\ \(R_{0}\) & Radius of flaring region at \(t=t_{0}\) & \(9.2\times 10^{11}\) cm \\ \(v_{\mathrm{exp}}\cdot R_{0}\) & Physical Expansion Speed & 0.013c \\ \(B_{\mathrm{eq}}\) & Equipartition magnetic field strength & 71 G \\ \(M\) & Mass of flaring region & \(3.62\times 10^{20}\) g \\ \hline \end{tabular}
\end{table}
Table 3: Adopted and derived hotspot properties from the variable parameters in Table 2. We do not account for non-relativistic electrons or protons while estimating the equipartition magnetic field strength. Therefore, this is a lower limit on the true value.
Additionally, we identify strong frequency-dependent terms in Stokes I, Q, and U, and a marginal dependence in Stokes V. We find the quiescent emission's Stokes I spectral index is \(\alpha_{I}=0.16\). The Stokes V spectral index value is much steeper at \(\alpha_{V}=-1.31\). The detection of frequency-dependent slopes in Stokes Q and U generates a non-zero RM in the quiescent emission, which has been found in previous analyses (e.g., Marrone et al., 2006; Bower et al., 2018).
## 4 Discussion
In the previous section, we presented the first-ever full-Stokes modeling of Sgr A*'s total intensity and polarized light curves to simultaneously characterize the quiescent and variable emission. Determining the fitted parameters for our two-component model allows us to study and derive additional physical properties of both components. We derived a few of these properties for the variable emission above by assuming magnetic equipartition. In this section, we compare our results for each component to previous analyses and broadly find them consistent with those in the literature. Finally, given our \(\sim\) 40 minute observation and the 18 free parameters in this fit, we examine their implications on our results.
### Variable Component
The electron power-law index responsible for the flaring emission is consistent with multi-wavelength constrained values, which broadly range from \(p\approx 1-3\) (e.g., Yusef-Zadeh et al., 2006, 2008; Eckart et al., 2009; Michail et al., 2021; GRAVITY Collaboration et al., 2021; Witzel et al., 2021). The calculated magnetic field strength is on the higher side of those previously reported, which typically average a few to tens of Gauss. However, Yusef-Zadeh et al. (2008) and Eckart et al. (2009) report magnetic field strengths \(\approx 80\) G. These are somewhat stronger than the average field strengths of stellar wind-fed simulations (e.g., Ressler et al., 2020), suggesting the hotspot may have formed near the inner accretion flow, where field strengths are stronger, and/or within a concentration of magnetic flux where particle acceleration drives the flaring. The \(\approx 7\times 10^{7}\) cm\({}^{-3}\) electron density is consistent with the Witzel et al. (2021) joint variability analysis of Sgr A* at submm, infrared, and X-ray frequencies.
Previous observations (i.e., Marrone et al., 2007; GRAVITY Collaboration et al., 2018; Wielgus et al., 2022) have detected variability in the LP angle (\(\chi\)) of Sgr A* caused by orbital motion. \(\chi\) changes by \(180^{\circ}\) over half of the orbital period of the hotspot (GRAVITY Collaboration et al., 2018). We only observe \(\Delta\chi\sim 15^{\circ}\) over \(\sim 40\) minutes, the latter of which roughly corresponds to the period at the innermost stable circular orbit (ISCO) for a non-spinning black hole with the mass of Sgr A* (\(P=31.5\) minutes). If the hotspot was near the ISCO, we expect \(\Delta\chi\sim 180^{\circ}\) from orbital motion during our modeling range, which would dominate over changes caused by adiabatic expansion. However, the observed change of \(\sim 15^{\circ}\) implies the hotspot is far outside the ISCO, and orbital motion-induced variations in \(\chi\) are subdominant to changes
Figure 4: _Left_: Sgr A*’s light curves are shown in red, and the best-fit model is superimposed in black. Due to the short time coverage, we only model the light curves before 00:36 hr UTC. _Right_: Light curves and linear polarimetric quantities for the quiescent component. The best-fit model is plotted in red, and the 1\(\sigma\) error range is shaded in gray. Unlike the left panel, these figures are plotted to 00:36 hr UTC.
from the adiabatic expansion. Therefore, the 71 G field we derive above, which places us towards the inner accretion flow when compared to Ressler et al. (2020), is likely overestimated. We discuss the possible cause in Section 4.3.
We find the magnetic field angle projected on the POS is \(\phi_{B}=55^{\circ}\) East of North. As a point probe of the conditions within the accretion flow, these results present the first _direct_ detection of the accretion flow's projected magnetic field orientation in the POS. Several analyses suggest this position angle is a favored orientation for the Sgr A* system. Near-infrared polarimetric observations of Sgr A*'s flaring emission over several years find a mean EVPA of approximately \(60^{\circ}\) East of North with a range of about \(45^{\circ}\) (Eckart et al., 2006; Meyer et al., 2007). Eckart et al. (2006) speculate this indicates the projected spin axis of a disk around Sgr A*, while Meyer et al. (2007) merely propose this as a preferred orientation for the Sgr A* system. Continuum and spectral observations near 1.5 GHz by Yusef-Zadeh et al. (2020) find a symmetric jet-like structure oriented along the Galactic plane at a position angle \(\sim 60^{\circ}\), which they attribute to evidence of a jet/outflow from Sgr A*. A more recent analysis by Wielgus et al. (2022) uses ALMA linear polarimetry at similar frequencies (\(\sim 220\) GHz) in the context of an orbiting, non-expanding hotspot. Their analysis again confirms a \(\sim 60^{\circ}\) EVPA, which they conclude is the hotspot's projected orbital angular momentum axis. Wielgus et al. (2022) conclude that the orbital motion of a near-infrared hotspot observed by GRAVITY Collaboration et al. (2018) is consistent with their results, as well. In the context of these previous observations, we conclude the magnetic and angular momentum axes of the accretion flow are oriented parallel to one another, a key signature of magnetically arrested disks (MAD; Narayan et al., 2003).
Of all 18 parameters required to fit our model, \(\theta\) (the angle between the LOS and magnetic field vector) is the most well-constrained. For self-absorbed synchrotron sources, Jones & O'Dell (1977a) show the Stokes V absorption, emission, and rotativity (\(\zeta_{V}\), \(\epsilon_{V}\), and \(\zeta_{V}^{*}\), respectively) coefficients depend on \(\theta\). As \(\theta\to 90^{\circ}\), the variations in Stokes V decrease as CP emission and absorption are suppressed. Additionally, the "strength" of internal Faraday rotation within the hotspot decreases as \(\theta\to 90^{\circ}\). The convertibility coefficient (\(\zeta_{Q}^{*}\)) is not a function of \(\theta\), causing the process of repolarization to be dominant over Faraday rotation. Repolarization has been suggested as one possible explanation for Sgr A*'s low LP but high CP detections at radio frequencies (e.g., Bower et al., 1999; Sault & Macquart, 1999). If the magnetic field configuration throughout the accretion flow is uniform and stable in time (as suggested by Munoz et al., 2012), then this result corroborates repolarization as the cause of the CP-only detections in the radio.
### Quiescent Component
The time- and frequency-dependent nature of the quiescent component is clear and leads us to search for secular variations in the RM and intrinsic EVPA (\(\chi_{0}\)). Using an error-weighted linear least-squares fit, we calculate these values by assuming the normal form for Faraday rotation (i.e., \(\chi=\mathrm{RM}\cdot\lambda^{2}+\chi_{0}\)). We show the fitted values over the modeled time range in Figure 5. Overall, we find the RM to vary between \(\approx-4.9\times 10^{5}\) rad m\({}^{-2}\) and \(\approx-3.8\times 10^{5}\) rad m\({}^{-2}\), while \(\chi_{0}\) ranges between \(\approx-4^{\circ}\) and \(\approx-19^{\circ}\). Variations in these two parameters are strongly anti-correlated, which Bower et al. (2018) also suggest. Goddi et al. (2021) found RMs in the range \(-4.84\times 10^{5}\) to \(-3.28\times 10^{5}\) rad m\({}^{-2}\) and \(\chi_{0}\) ranging from \(-18.8^{\circ}\) to \(-14.7^{\circ}\), which they averaged over the entire observation to determine the "quiescent" parameters. Our fitted RM and \(\chi_{0}\) match those from their analysis of April 2017 data.
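The sketch below illustrates this error-weighted fit for a single time bin; it assumes the \(n\pi\) EVPA ambiguity is negligible across our 18 GHz of bandwidth (as holds here), and the function name is illustrative.

```python
import numpy as np

C_M_S = 299792458.0  # speed of light [m/s]

def fit_rm(freqs_ghz, Q, U, sigma_chi):
    """Error-weighted linear least squares for chi = RM * lambda^2 + chi_0.

    Q, U are per-spectral-window flux densities; sigma_chi are EVPA errors
    in radians. Returns (RM [rad m^-2], chi_0 [rad]).
    """
    lam2 = (C_M_S / (np.asarray(freqs_ghz) * 1e9)) ** 2
    chi = 0.5 * np.arctan2(U, Q)                 # EVPA per spectral window
    w = 1.0 / np.asarray(sigma_chi) ** 2
    A = np.vstack([lam2, np.ones_like(lam2)]).T  # design matrix [lambda^2, 1]
    rm, chi0 = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * chi))
    return rm, chi0
```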
We calculate the average LP and RFC for both components in Sections 3.3.1 and 3.3.2. We find that the variable emission is \(\sim 3\times\) more linearly polarized than the quiescent component. Unsurprisingly, the flare's LP properties also vary \(\approx 6.6\times\) more than the quiescent emission. However, the quiescent emission's modeled LP does change by an appreciable amount (\(\mathrm{RFC}=\,0.31\)). Some of its variability may be caused by unmodeled hotspot evolution. The two dominant sources are likely from non-symmetric geometric evolution, such as shearing, and orbital motion. Shearing occurs when the hotspot size is \(\sim 0.5\)\(R_{\mathrm{S}}\)(Wielgus et al., 2022); in our modeling, we find \(R\sim 0.75\)\(R_{\mathrm{S}}\). However, the shearing timescale is similar to the orbital period (Tiede et al., 2020). As discussed in Section 4.1, the hotspot is not in the inner accretion flow, so the orbital period is longer than our modeling range. The hotspot's orbital motion will affect the measured Stokes Q and U light curves as \(\phi\) will vary in the POS and result in some level of hotspot-induced time variability in the quiescent emission's linear polarization properties. These are modelable effects (i.e., Jones & O'Dell, 1977b; Wielgus et al., 2022) that will be considered in future work.
Goddi et al. (2021) found the Stokes I spectral index (\(\alpha_{I}\)) consistent with 0, whereas we find 0.16. This is likely explained by a variable spectral index between April and July 2017, which varies on daily to weekly timescales at submm frequencies (see Wielgus et al., 2022). Goddi et al. (2021) accounted for the \(\approx 10\%\) absolute uncertainty in ALMA's flux calibration, whereas we only factor in statistical errors. While this tends to underestimate our uncertainty on \(\alpha_{I}\) by \(\approx 20\%\), it cannot fully account for the discrepancy.
We find \(\alpha_{V}\approx-1.3\), which implies weaker (less negative) Stokes V flux density at higher frequencies. This is in contrast to Munoz et al. (2012), who find Stokes V flux density \(\propto\nu^{0.35}\) (more negative) at increasingly higher frequencies. Bower et al. (2018) find epochs consistent with both positive and negative \(\alpha_{V}\), although they used a linear frequency term instead of a power law. Despite only having three epochs from which to draw conclusions, there seems to be a general trend in their data. When Sgr A* is brighter, Stokes V is stronger (more negative, \(\alpha_{V}>0\)) at higher frequencies. When Stokes I is lower, Stokes V is weaker (\(\alpha_{V}<0\)) at higher frequencies. In one of three epochs, they find \(\alpha_{V}<0\) when Sgr A* is 2.68 Jy at 233 GHz. Here, we find \(\alpha_{V}<0\) when Sgr A*'s quiescent component is 2.46 Jy at 235 GHz. Notably, \(\alpha_{V}>0\) when Sgr A*'s 227 GHz flux density was \(\approx 3.6\) Jy (Munoz et al., 2012). This may suggest a fundamental relationship between Sgr A*'s flux density and the CP spectrum. However, additional data and a more uniform analysis are required before such a correlation is proposed.
This phenomenological model predicts a frequency-dependent RM for the quiescent emission. Yusef-Zadeh et al. (2007) also find this trend from compiling published RM values. This suggests that classic Faraday rotation, where the RM is frequency-independent, is not valid for Sgr A*. In Figure 6, we plot the predicted (absolute) RM as a function of frequency using our model. We compare these values to those previously published in the literature. At lower frequencies, the model and published values are discrepant by more than an order of magnitude. However,
our model predicts the same general trend, where the (absolute) RM falls off at lower frequencies. Unfortunately, the lack of \(\approx 350\) GHz RM measurements (outside of Marrone et al. 2007) makes it difficult to determine if our model underpredicts the average RM, if Sgr A*'s RM significantly changes near this frequency, or a combination of both.
### Effect of Short Time Coverage
We only model about half of the light curve on this day (see Section 3.3). Data after 00:45 hr UTC appear to show the beginning of a new flare. It is impossible to fit this second flare with the hotspot model, as its peak, which is crucial for determining the physical parameters, was not observed. The short time coverage, compounded with not detecting the beginning of the first flare, complicates fitting the proper quiescent baseline. At submm, the flaring emission is historically \(\approx 20\%\) of the overall emission. Here, however, \(I_{p}/I_{0}\approx 8\%\). While \(I_{p}/I_{0}\) can vary, and \(I_{0}\) matches Sgr A*'s flux density in April 2017 (Goddi et al. 2021), this does seem somewhat low.
We explore the sensitivity to \(I_{p}/I_{0}\) by assuming \(I_{p}=0.5\) Jy, which is closer to the historic \(I_{p}/I_{0}\approx 20\%\) mentioned above. If \(I_{p}=0.5\) Jy, then the fixed-\(I_{p}\) model has \(p=2.53\), \(\phi=147.5^{\circ}\) (\(\phi_{B}=57.5^{\circ}\)), \(\theta=90.06^{\circ}\), and \(v_{\rm exp}=0.86\) hr\({}^{-1}\). The hotspot has an equipartition magnetic field strength 55 G, radius 1 \(R_{\rm S}\), electron density \(2.73\times 10^{7}\) cm\({}^{-3}\), and expansion velocity 0.01c. Noticeably, \(\theta\) and \(\phi\) are practically unchanged. For completeness, we list other physical properties of the quiescent emission: average RM \(=-3.60\times 10^{5}\) rad m\({}^{-2}\), average \(\chi_{0}=-12.96^{\circ}\), \(\alpha_{I}=-0.016\), average LP \(=18.4\%\), and average CP \(=-0.57\%\). While these values are within their nominal ranges, the reduced \(\chi^{2}\) of this fixed-\(I_{p}\) fit is 26% higher than for the model with our best-fitting parameters. We find the properties of the flaring and quiescent components are not extremely sensitive to the value of \(I_{p}\). \(\theta\) and \(\phi\), which regulate the variations in the Stokes Q, U, and V light curves, are virtually unaffected.
Figure 5: _Left_: A plot of the quiescent component’s RM during the modeled time range. Red depicts the best-fit RM value, and the shaded region is the \(1\sigma\) model range. _Right_: Similar to the left panel but for the intrinsic EVPA (\(\chi_{0}\)) of the quiescent emission.
Figure 6: Predicted absolute RM for the quiescent component as a function of frequency from our linear model is plotted as the black dashed line. We compare our predicted RM to those previously published in the literature. Goddi et al. (2021, 221 GHz), Marrone et al. (2007, 227/343 GHz), and Bower et al. (2018, 233 GHz) published multiple RM values in a single paper. In these cases, the marker shows the average value while the vertical bars denote the total published range.
### Concerning the Number of Fitting Parameters
We require fitting 18 parameters to model the quiescent and variable components. There is a concern that given the number of parameters, we may be able to fit any data regardless of its true nature. We have taken several steps throughout this analysis as a safeguard, which we detail below.
Of the 18 values fitted here, 12 are dedicated to characterizing the quiescent emission's frequency dependence and temporal evolution. Most of these parameters are phenomenological (i.e., the time-slopes, frequency-slopes, and reference flux densities) as we do not have a physical model for the quiescent emission. To limit over-fitting, we require the quiescent model (Equations 6-9) to have the same time-slope across all frequencies, but we do not couple these terms across the Stokes parameters. We fit three quiescent terms using four spectral windows for each Stokes parameter. By not pairing terms across Stokes parameters, we guarantee that any non-linear changes in the light curves are due to the variable component.
The final six fitting parameters for the flaring emission characterize these non-linear amplitude variations and link changes in the four Stokes parameters. The resulting full-Stokes light curves have unique patterns depending on the physical parameters of the hotspot. The power of ALMA's wide frequency bandwidths is that we can simultaneously observe Sgr A*'s time and frequency dependence. Since we jointly fit all 16 light curves, variations in any spectral window and/or Stokes parameter must occur in the other light curves following the full-polarization radiative transfer model. Therefore, we conclude that these simultaneous fits cannot model any light curves that do not follow this full-Stokes prescription.
## 5 Summary
We present the first full-Stokes analysis of the adiabatically-expanding synchrotron hotspot model using 230 GHz ALMA light curves of Sgr A* on 16 July 2017. This work is the most robust test of the hotspot model yet performed in any frequency regime, as it includes all four Stokes parameter light curves. The full-polarization modeling we complete provides evidence for the adiabatically-expanding hotspot model beyond time-delay measurements alone. By modeling the time- and frequency-dependent nature of the variable and quiescent components, we constrain the physical and magnetic properties of the hotspot located within Sgr A*'s accretion flow. Our results are fundamental to the Event Horizon Telescope's future efforts to untangle the full-Stokes variable emission from that of the underlying accretion flow (Broderick et al., 2022; Event Horizon Telescope Collaboration et al., 2022, 2022). Our analysis will benefit from past (i.e., Event Horizon Telescope Collaboration et al., 2022) and future simultaneous multi-wavelength observations, even those with only total intensity data. These additional data would further constrain the frequency-dependent nature of the variable emission. Our fitted parameters show remarkable consistency with those previously published in the literature. We describe several of our key findings below:
1. We observe average LP and CP detections of Sgr A* at levels \(\approx 10\%\) and \(\approx-1\%\) at 235 GHz, respectively, which are consistent with previous measurements.
2. We find the quiescent component's average RM and \(\chi_{0}\) as \(-4.22\times 10^{5}\) rad m\({}^{-2}\) and \(-13.3^{\circ}\), respectively. These values match well with a recent analysis of Sgr A*'s April 2017 average polarimetric properties (Goddi et al., 2021).
3. As a point probe of the accretion flow, this hotspot is likely located near the inner edge of the accretion flow owing to the inferred magnetic field strength and electron density being a few times larger than those typically found in MHD simulations (i.e., Ressler et al., 2020).
4. The hotspot magnetic field orientation projected on the POS is aligned parallel to the Galactic Plane, matching a previously discovered jet-like feature emanating from Sgr A*, and near-infrared/submm polarization results indicating the accretion flow's angular momentum axis. This reveals the first direct evidence that the accretion flow's magnetic and angular momentum axes are aligned parallel, a key signature of a magnetically-arrested disk.
5. The hotspot's magnetic field axis is aligned almost perpendicular to the LOS. This suggests repolarization is dominant over Faraday rotation and corroborates it as the cause of low-LP but high-CP in Sgr A* at radio frequencies.
6. We find the results of the variable component are not drastically (or at all) altered by the lack of long-duration time coverage provided by these data.
Several exciting prospects remain to test this model. There is a diverse dataset of full-track, long-duration ALMA and Submillimeter Array (SMA) observations of Sgr A*. Many of these are simultaneous observations at similar or vastly different frequencies. In the first case, these data provide full-Stokes light curves over a broader time range than a single array could provide. In the latter case, multiwavelength observations allow us to constrain the frequency-dependent total intensity and polarized nature of the quiescent emission and solidly test this full-Stokes hotspot model across a wide range of frequencies. Future analyses will benefit from fast-frequency switching or sub-array ALMA observations (for example, simultaneously using the 12-meter and 7-meter arrays at two separate frequencies). This expanded analysis will allow us to test for variability in the hotspot's physical parameters, such as the magnetic field orientation and strength. For example, if \(\phi\) is variable, as is suggested in the near-infrared (i.e., Eckart et al., 2006), its range may signify the opening angle of an outflow emanating from Sgr A*.
## Acknowledgements
We thank the anonymous referee for their very helpful and constructive comments, which strengthened the arguments and analysis in this work. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2016.A.00037.T. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This research was supported in part through the computational resources and staff contributions provided for the Quest high performance computing facility at Northwestern University which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology.
## Data Availability
The data used in this analysis are publicly available from the ALMA archive. The light curves and code used to model these data are available upon request to the first author.
|
2303.06258 | Probabilistic Guarantees for Nonlinear Safety-Critical Optimal Control | Leveraging recent developments in black-box risk-aware verification, we
provide three algorithms that generate probabilistic guarantees on (1)
optimality of solutions, (2) recursive feasibility, and (3) maximum controller
runtimes for general nonlinear safety-critical finite-time optimal controllers.
These methods forego the usual (perhaps) restrictive assumptions required for
typical theoretical guarantees, e.g. terminal set calculation for recursive
feasibility in Nonlinear Model Predictive Control, or convexification of
optimal controllers to ensure optimality. Furthermore, we show that these
methods can directly be applied to hardware systems to generate controller
guarantees on their respective systems. | Prithvi Akella, Wyatt Ubellacker, Aaron D. Ames | 2023-03-11T00:39:01Z | http://arxiv.org/abs/2303.06258v1 | # Probabilistic Guarantees for Nonlinear Safety-Critical Optimal Control
###### Abstract
Leveraging recent developments in black-box risk-aware verification, we provide three algorithms that generate probabilistic guarantees on (1) optimality of solutions, (2) recursive feasibility, and (3) maximum controller runtimes for general nonlinear safety-critical finite-time optimal controllers. These methods forgo the usual (perhaps) restrictive assumptions required for typical theoretical guarantees, _e.g._ terminal set calculation for recursive feasibility in Nonlinear Model Predictive Control, or convexification of optimal controllers to ensure optimality. Furthermore, we show that these methods can directly be applied to hardware systems to generate controller guarantees on their respective systems.
## I Introduction
From Kalman to the present day, the pursuit of theoretical guarantees for optimal controllers has fascinated the controls and robotics communities alike [1, 2, 3, 4]. This fascination arises because optimal controllers provide a natural way of expressing and segmenting disparate control objectives, as can readily be seen in works on model predictive control (MPC) [5, 6, 7], control barrier functions [8, 9, 10], and optimal path planning [11, 12, 13], among others. However, as optimization problems became central to controller synthesis, new problems arose: determining whether solutions exist, _e.g._ recursive feasibility in MPC; determining the efficiency with which solutions can be identified, to inform control loop rates; and determining the optimality of identified solutions in non-convex optimization settings.
Recent years have seen tremendous strides in answering these questions, but areas of improvement still exist. For example, advances in Nonlinear MPC still require assumptions on the existence of control invariant terminal sets and stabilizing controllers for recursive feasibility, though identification of such items for general nonlinear systems remains a difficult problem [14, 15, 16, 17, 18]. In general, determination of solution optimality for MPC problems is equivalent to solving the Hamilton-Jacobi-Bellman equation, which is known to be difficult [19]. For path-planning problems, RRT* and other sampling-based methods are known to be asymptotically optimal, _i.e._ they will produce the optimal solution given infinite runtime, though sample-complexity results for sub-optimal solutions are few [20, 12, 21]. Finally, there are similarly few theoretical results on the time complexity of these controllers on hardware systems, as such an analysis is heavily dependent on the specific hardware.
**Our Contribution:** Here, the authors believe recent results in black-box risk-aware verification might prove useful in generating theoretical statements on recursive feasibility, provable sub-optimality of results, and time complexity of the associated controllers on hardware systems, without the need for restrictive assumptions. Our results are threefold.
* We provide theoretical guarantees on the provable sub-optimality of percentile-based optimization procedures [22] on producing input sequences for general, finite-time optimal control problems.
* We provide an algorithm for determining the probability with which a black-box controller is successively feasible on existing system hardware.
* We provide an algorithm to determine a probabilistic upper bound on hardware-specific controller runtimes.
**Structure:** To start, Section II motivates and formally states the problems under study in this paper, and the introduction to Section III provides the general theorem employed throughout. Then, Section III-A details our algorithm that
Fig. 1: Finite-time optimal controllers and their guarantees can be expressed as optimization problems. We provide probabilistic guarantees on solutions to these problems using novel results in black-box risk-aware verification.
provides probabilistic guarantees on the optimality of outputted solutions to nonlinear safety-critical finite-time optimal control problems. Likewise, Section III-B details our algorithm that provides probabilistic guarantees on successive feasibility for the same type of optimal controllers. Finally, Section III-C details our algorithm that provides probabilistic guarantees on maximum controller runtimes. Lastly, we portray all our theoretical results on hardware, as described for the quadrupedal example in Section IV-A and for the Robotarium in Section IV-B [23].
## II General Motivation and Problem Statements
We assume the existence of a nonlinear discrete-time system whose dynamics \(f\) are (potentially) unknown:
\[x_{k+1}=f(x_{k},u_{k},d),\ x\in\mathcal{X},\ u\in\mathcal{U},\ d\in\mathcal{D}. \tag{1}\]
Here, \(\mathcal{X}\subseteq\mathbb{R}^{n}\) is the state space, \(\mathcal{U}\subseteq\mathbb{R}^{m}\) is the input space, and \(\mathcal{D}\subseteq\mathbb{R}^{p}\) is the space of variable objects in our environment that we can control, _e.g._ center locations of obstacles and goals for path-planning examples, variable wind-speeds for a drone, _etc._ Provided this dynamics information, a cost \(J\), state constraints, and input constraints, one could construct a Nonlinear Model Predictive Controller of the following form (with \(j\in[0,1,\ldots,H-1]\)):
\[\mathbf{u}^{*}=\operatorname*{argmin}_{\mathbf{u}=(u^{0},u^{1}, \ldots,u^{H-1})\in\mathcal{U}^{H}} J(\mathbf{u},x_{k},d),\] (NMPC) \[\operatorname*{subject\ to} x_{k}^{j+1}=f(x_{k}^{j},u^{j},d),\] \[x_{k}^{0}=x_{k},\] \[x_{k}^{j+1}\in\mathcal{X}_{k}^{j+1},\] \[u^{j}\in\mathcal{U}.\]
For the analysis to follow, however, we note that the general NMPC problem posed in (NMPC) can be posed as the following Finite-Time Optimal Control Problem.
\[\operatorname*{argmin}_{\mathbf{u}=(u^{0},u^{1},\ldots,u^{H-1}) \in\mathcal{U}^{H}} J(\mathbf{u},x_{k},d),\] (FTOCP) \[\operatorname*{subject\ to} \mathbf{u}\in\mathbb{U}(x_{k},d)\subseteq\mathcal{U}^{H}.\]
Here, \(J\) is a bounded (perhaps) nonlinear cost function, and \(\mathbb{U}\) is a set-valued function outputting a constraint space for input sequences that (potentially) depends on the initial system and environment states \((x_{k},d)\), respectively. Specific examples following this general form will be provided in Sections IV-A and IV-B. Finally, \(H>0\) is the horizon length for the finite-time optimal control problem. Then, the three problem statements predicated on this optimal controller (FTOCP) follow.
**Problem 1**.: _Develop a procedure to identify input sequences \(\mathbf{u}\) that are in the \(100(1-\epsilon)\%\)-ile for some \(\epsilon\in(0,1]\) with respect to solving (FTOCP)._
**Problem 2**.: _Develop a procedure to determine whether (FTOCP) is recursively feasible._
**Problem 3**.: _Develop a procedure to upper bound maximum controller runtimes for optimal controllers of the form in (FTOCP)._
## III Probabilistic Guarantees
To make progress on the aforementioned problem statements -- each will be addressed in a separate subsection to follow -- we will first state a general result combining existing results on black-box risk-aware verification. To that end, consider the following optimization problem:
\[\min_{\mathrm{s}\in\mathbb{S}}\ J(\mathrm{s}), \tag{2}\]
subject to the following assumption:
**Assumption 1**.: The decision space \(\mathbb{S}\) is a set with bounded volume, _i.e._\(\int_{\mathbb{S}}\ 1\ ds=V_{\mathbb{S}}<\infty\) or \(\mathbb{S}\) has a finite number of elements. Furthermore, the cost function \(J\) is bounded over \(\mathbb{S}\), _i.e._\(\exists\ m,M\in\mathbb{R},\ \mathrm{s.\ t.}\ \ m\leq J(\mathrm{s})\leq M,\ \forall\ \mathrm{s}\in\mathbb{S}.\)
This assumption permits us to define the functions \(\mathcal{V},F\) corresponding to the volume fraction occupied by a subset \(A\) of \(\mathbb{S}\) and the set of strictly better decisions for a provided decision \(\mathrm{s}^{\prime}\in\mathbb{S}\), respectively:
\[\mathcal{V}(A)=\frac{\int_{A}\ 1\ ds}{\int_{\mathbb{S}}\ 1\ ds}, \tag{3}\] \[F(\mathrm{s}^{\prime})=\{\mathrm{s}\in\mathbb{S}\ |\ J(\mathrm{s})<J( \mathrm{s}^{\prime})\}. \tag{4}\]
Naturally then, for a given decision \(\mathrm{s}^{\prime}\in\mathbb{S}\), were \(\mathcal{V}(F(\mathrm{s}^{\prime}))\leq\epsilon\) for some \(\epsilon\in(0,1]\), _i.e._\(\mathrm{s}^{\prime}\) is such that the volume fraction of strictly better decisions is no more than \(\epsilon\), then \(\mathrm{s}^{\prime}\) would be in the \(100(1-\epsilon)\%\)-ile with respect to minimizing \(J\). Likewise, the associated minimum cost of such a decision \(J(\mathrm{s}^{\prime})\) should also be a probabilistic lower bound on achievable costs. Both of these notions are expressed formally in the theorem below, which combines similar results from [22, 24].
**Theorem 1**.: _Let \(\{(\mathrm{s}_{i},J(\mathrm{s}_{i}))\}_{i=1}^{N}\) be a set of \(N\) decisions and costs for decisions \(\mathrm{s}_{i}\) sampled via \(\mathbb{U}[\mathbb{S}]\), with \(\zeta_{N}^{*}\) the minimum sampled cost and \(\mathrm{s}_{N}^{*}\) the (perhaps) non-unique decision with minimum cost. Then \(\forall\ \epsilon\in[0,1]\), the probability of sampling a decision whose cost is at least \(\zeta_{N}^{*}\) is at minimum \(1-\epsilon\) with confidence \(1-(1-\epsilon)^{N}\), i.e._
\[\mathbb{P}_{\mathbb{U}[\mathbb{S}]}^{N}\left[\mathbb{P}_{\mathbb{U}[\mathbb{S}]} \left[J(\mathrm{s})\geq\zeta_{N}^{*}\right]\geq 1-\epsilon\right]\geq 1-(1- \epsilon)^{N}. \tag{5}\]
_Furthermore, \(\forall\ \epsilon\in(0,1]\), \(\mathrm{s}_{N}^{*}\) is in the \(100(1-\epsilon)\%\)-ile with minimum confidence \(1-(1-\epsilon)^{N}\), i.e._
\[\mathbb{P}_{\mathbb{U}[\mathbb{S}]}^{N}\left[\mathcal{V}(F(\mathrm{s}_{N}^{*})) \leq\epsilon\right]\geq 1-(1-\epsilon)^{N}. \tag{6}\]
**Proof:** This is a direct application of Theorem 7 in [22] and Theorem 2 in [24].
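A minimal sketch of how Theorem 1 is used in practice follows: we pick the smallest \(N\) achieving a desired confidence and return the best of \(N\) uniform samples. Here `cost` and `sampler` are user-supplied placeholders for \(J\) and a uniform sampler over \(\mathbb{S}\).

```python
import math
import numpy as np

def samples_for_confidence(eps, conf):
    """Smallest N with 1 - (1 - eps)^N >= conf (Theorem 1)."""
    return math.ceil(math.log(1.0 - conf) / math.log(1.0 - eps))

def percentile_minimize(cost, sampler, eps=0.01, conf=0.99995, seed=None):
    """Best-of-N uniform sampling: the returned decision lies in the
    100(1 - eps)%-ile of `cost` with confidence at least `conf`."""
    rng = np.random.default_rng(seed)
    N = samples_for_confidence(eps, conf)  # e.g. eps=0.01, conf=0.99995 -> N=986
    decisions = [sampler(rng) for _ in range(N)]
    costs = [cost(s) for s in decisions]
    best = int(np.argmin(costs))
    return decisions[best], costs[best]
```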
To clarify then, this is the central result on probabilistic optimality -- derived from existing results on black-box risk-aware verification -- that we will exploit in the remainder of the paper to address the three aforementioned questions. Our efforts regarding the first problem statement will follow.
### _Percentile-Based Input Selection_
Problem 1 references the development of an efficient method to solve (FTOCP). To that end, we take a percentile approach that exploits equation (6) in Theorem 1.
As a result, our corollary in this vein stems directly from Theorem 1, though we will make one clarifying assumption.
**Assumption 2**.: Let \(J\) and \(\mathbb{U}\) be as per (FTOCP), let \(\mathcal{V}\) be as per (3) with respect to the decision space \(\mathbb{U}(x_{k},d)\), and let \(F\) be as per (4) with respect to this cost \(J\) and \(\mathbb{U}(x_{k},d)\). Furthermore, let \(J\) be bounded over \(\mathbb{U}(x_{k},d)\), and let \(\mathbb{U}(x_{k},d)\) be a set of bounded volume (or finitely many elements if a discrete set) for any choice of \((x_{k},d)\in\mathcal{X}\times\mathcal{D}\) (these sets defined in (1)). Finally, let \(\{(\mathbf{u}_{i},J(\mathbf{u}_{i},x_{k},d))\}_{i=1}^{N}\) be a set of \(N\) uniformly sampled sequences \(\mathbf{u}_{i}\) from \(\mathbb{U}(x_{k},d)\) with their corresponding costs, and let \(\mathbf{u}_{N}^{*}\) be the (potentially) non-unique sequence with minimum sampled cost.
**Corollary 1**.: _Let Assumption 2 hold and let \(\epsilon\in(0,1]\). Then, \(\mathbf{u}_{N}^{*}\) is in the \(100(1-\epsilon)\%\)-ile with respect to minimizing \(J\) at the current system and environment state \((x_{k},d)\) with minimum confidence \(1-(1-\epsilon)^{N}\), i.e.,_
\[\mathbb{P}_{\mathbb{U}[\mathbb{U}(x_{k},d)]}^{N}\left[\mathcal{V}(F(\mathbf{u }_{N}^{*}))\leq\epsilon\right]\geq 1-(1-\epsilon)^{N}.\]
**Proof:** Use equation (6) in Theorem 1.
In short, Corollary 1 tells us that if we have a finite-time optimal control problem of the form in (FTOCP), where for some system and environment state \((x_{k},d)\), the cost function \(J\) is bounded over a bounded decision space \(\mathbb{U}(x_{k},d)\), then we can take a percentile approach to identify input sequences that are better than a large fraction of the space of all feasible input sequences. Notably, this statement is made independent of the convexity, or lack thereof, of (FTOCP), making it especially useful for non-convex MPC. Furthermore, as is done in Section IV-A to follow, one can further optimize over the outputted percentile solution \(\mathbf{u}_{N}^{*}\) via gradient descent -- should gradient information be available. The resulting solution then retains the same confidence on existing within the same percentile, while also being efficient to calculate. This does introduce new questions, however. Namely, will a percentile solution always exist, and how efficient is the calculation of these sequences on a given hardware? These questions will be answered in the sections to follow.
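A minimal sketch of such a gradient refinement follows, assuming a user-supplied projection onto \(\mathbb{U}(x_{k},d)\) and gradient of \(J\); names and step sizes are illustrative.

```python
def refine(u, J, grad_J, project, steps=50, lr=1e-3):
    """Projected gradient descent from a percentile solution u. Steps are
    accepted only if they decrease J, so the returned sequence is no worse
    than u and thus remains in the same percentile (cf. Corollary 1)."""
    cost = J(u)
    for _ in range(steps):
        cand = project(u - lr * grad_J(u))
        cand_cost = J(cand)
        if cand_cost < cost:
            u, cost = cand, cand_cost
        else:
            lr *= 0.5  # backtrack on non-improving steps
    return u
```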
### _Determining Recursive Feasibility_
Problem 2 references the development of an algorithm to efficiently determine the recursive feasibility of (FTOCP). To ease the statement of the theoretical results to follow, we indicate via \(|\mathbb{U}(x_{k},d)|\) the "size" of the constraint space \(\mathbb{U}(x_{k},d)\) for (FTOCP), with \(|\varnothing|=0\). Additionally, we will assume that there exists some controller \(U\) that either utilizes the aforementioned percentile method in Section III-A or some other technique to produce (potentially approximate) solutions to (FTOCP), _i.e._
\[\exists\ U:\mathcal{X}\times\mathcal{D}\rightarrow\mathcal{U}\ \ \mathrm{s.t.}\ U(x,d)=u\in\mathcal{U} \tag{7}\]
Furthermore, we will indicate via the following notation, the evolution of our system under this controller \(U\), provided an initial system and environment state:
\[x^{+}[x,d]=f(x,U(x,d),d).\]
This allows us to formally define recursive feasibility.
**Definition 1**.: An optimal controller of the form in (FTOCP) is _recursively feasible_ if and only if for all system and environment states, the feasible space for (FTOCP) is non-empty for successive timesteps, _i.e._\(\forall\ (x,d)\in\mathcal{X}\times\mathcal{D},\ |\mathbb{U}(x,d)|>0\implies| \mathbb{U}(x^{+}[x,d],d)|>0\).
As motivated earlier, we can express recursive feasibility determination as an optimization problem. Specifically, let our cost function \(C\) be as follows:
\[\mathbb{T}(x,d)=|\mathbb{U}(x,d)|>0\ \mathrm{and}\ |\mathbb{U}(x^{+}[x,d],d)| >0,\] \[C(x,d)=\begin{cases}1&\text{if }\mathbb{T}(x,d)=\mathrm{True}, \\ 0&\text{else}.\end{cases} \tag{8}\]
We can generate a minimization problem provided this cost function \(C\) over the joint state space \(\mathcal{X}\times\mathcal{D}\):
\[\min_{x\in\mathcal{X},\ d\in\mathcal{D}}\ \ C(x,d). \tag{9}\]
If the solution to (9) were positive, then (FTOCP) is recursively feasible. Likewise, if the solution were negative, then there exists a counterexample. As a result, not only can we express recursive feasibility determination as an optimization problem, but this problem is also of the same form as in (2), permitting a probabilistic solution approach as expressed in the following assumption and corollary.
**Assumption 3**.: Let \(C\) be as per (8), let \(\mathcal{X},\mathcal{D}\) be as per (1) and also be spaces of bounded volume, let \(\{C(x_{i},d_{i})\}_{i=1}^{N}\) be a set of \(N\) cost evaluations of decision tuples \((x_{i},d_{i})\)
Fig. 3: Experimental setup for Quadruped reach-avoid tests.
Fig. 2: Experimental setup for Robotarium reach-avoid tests.
sampled independently via \(\mathrm{U}[\mathcal{X}\times\mathcal{D}]\triangleq\mu\), let \(\zeta_{N}^{*}\) be the minimum cost evaluation, and let \(\epsilon\in[0,1]\).
**Corollary 2**.: _Let Assumption 3 hold. Then if \(\zeta_{N}^{*}=1\), (FTOCP) is successively feasible with minimum probability \(1-\epsilon\) and with minimum confidence \(1-(1-\epsilon)^{N}\)._
**Proof:** Equation (5) in Theorem 1 tells us that
\[\mathbb{P}_{\mu}^{N}\left[\mathbb{P}_{\mu}\left[C(x,d)\geq\zeta_{N}^{*}\right] \geq 1-\epsilon\right]\geq 1-(1-\epsilon)^{N}.\]
By definition of \(C\) in (8), if \(\zeta_{N}^{*}=1\), then with minimum probability \(1-\epsilon\), \(\left|\mathbb{U}(x,d)\right|>0\implies\left|\mathbb{U}(x^{+}[x,d],d)\right|>0\). In other words, with minimum probability \(1-\epsilon\), if (FTOCP) were feasible at the prior time step, then it will also be feasible at the next time step, _i.e._ successively feasible.
In other words, Corollary 2 tells us that to probabilistically determine whether a given finite-time optimal control problem is successively feasible, it is sufficient to identify at least one input in the constraint space for successive optimization problems starting at \(N\) randomly sampled state pairs \((x,d)\). Determining at least one such input could be achieved by querying the corresponding controller \(U\) or some other method desired by the practitioner. Notably, this does not guarantee recursive feasibility as that would correspond to the optimal value of (9) being positive. However, with arbitrarily high probability, we can provide guarantees that even hardware controllers will be successively feasible for sampled state pairs \((x,d)\in\mathcal{X}\times\mathcal{D}\), which is the underlying requirement for recursive feasibility as per Definition 1.
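A minimal sketch of this check follows; `sample_state`, `feasible`, and `step` are placeholders for uniform sampling from \(\mathcal{X}\times\mathcal{D}\), a non-emptiness check on \(\mathbb{U}(x,d)\) (e.g. one successful query of the controller \(U\)), and the closed-loop update \(x^{+}[x,d]\), respectively.

```python
import numpy as np

def successive_feasibility_check(sample_state, feasible, step, N, seed=None):
    """Monte-Carlo certificate per Corollary 2: evaluates C(x, d) of Eq. (8)
    at N uniform draws. A return value of 1 certifies successive feasibility
    with probability >= 1 - eps at confidence 1 - (1 - eps)^N."""
    rng = np.random.default_rng(seed)
    for _ in range(N):
        x, d = sample_state(rng)
        if not (feasible(x, d) and feasible(step(x, d), d)):
            return 0  # counterexample found: zeta_N^* = 0
    return 1          # zeta_N^* = 1
```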
### _Determining Hardware-Specific Controller Runtimes_
Lastly, Problem 3 references the development of an algorithm to efficiently identify maximum controller runtimes on existing system hardware. To address this from a probabilistic perspective, we will first define some notation. To start, we will use the same controller \(U\) as per equation (7). We also denote via \(T\) a timing function that outputs the evaluation time for querying the controller \(U\) at a given state pair \((x,d)\), _i.e._\(T:\mathcal{X}\times\mathcal{D}\rightarrow\mathbb{R}_{++}\). Then we can nominally express maximum controller runtime determination as an optimization problem:
\[\max_{x\in\mathcal{X},\ d\in\mathcal{D}}\ T(x,d). \tag{10}\]
Under the fair assumption that the controller has a bounded runtime, however, a probabilistic maximum runtime can be identified via the sampling procedures outlined by Theorem 1. In a similar fashion as prior, we will state a clarifying assumption, and the formal corollary statement will follow.
**Assumption 4**.: Let \(T\) be as per (10), let \(\mathcal{X},\mathcal{D}\) be as per (1) and be of bounded volume, let \(\{T(x_{i},d_{i})\}_{i=1}^{N}\) be a set of \(N\) controller runtimes for state pairs \((x_{i},d_{i})\) sampled independently via \(\mathrm{U}[\mathcal{X}\times\mathcal{D}]\triangleq\mu\), let \(\zeta_{N}^{*}\) be the maximum runtime, and let \(\epsilon\in[0,1]\).
**Corollary 3**.: _Let Assumption 4 hold. Then, the probability of sampling a state pair whose controller runtime is at most \(\zeta_{N}^{*}\) is at-least \(1-\epsilon\) with confidence \(1-(1-\epsilon)^{N}\), i.e._
\[\mathbb{P}_{\mu}^{N}\left[\mathbb{P}_{\mu}\left[T(x,d)\leq\zeta_{N}^{*} \right]\geq 1-\epsilon\right]\geq 1-(1-\epsilon)^{N}.\]
**Proof:** Consider (10) expressed as a minimization. Under the same assumptions, equation (5) in Theorem 1 states that
\[\mathbb{P}_{\mu}^{N}\left[\mathbb{P}_{\mu}\left[-T(x,d)\geq-\zeta_{N}^{*} \right]\geq 1-\epsilon\right]\geq 1-(1-\epsilon)^{N},\]
and flipping the innermost inequality provides the result.
In short then, Corollary 3 tells us that probabilistic determination of maximum controller runtimes stems easily by recording controller runtimes for \(N\) randomly sampled scenarios identified through \(N\) randomly sampled system and environment state pairs \((x,d)\) from \(\mathcal{X}\times\mathcal{D}\).
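A sketch of this timing procedure follows, with `controller` and `sample_state` again placeholders; wall-clock timing via `time.perf_counter` is one reasonable realization of the timing function \(T\).

```python
import time
import numpy as np

def probabilistic_runtime_bound(controller, sample_state, N, seed=None):
    """Per Corollary 3: the maximum of N controller runtimes at uniformly
    sampled (x, d) pairs upper-bounds the runtime with probability >= 1 - eps
    at confidence 1 - (1 - eps)^N."""
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(N):
        x, d = sample_state(rng)
        start = time.perf_counter()
        controller(x, d)
        worst = max(worst, time.perf_counter() - start)
    return worst
```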
## IV Experimental Demonstrations
To demonstrate the contributions of our work, we applied the aforementioned methods to two reach-avoid navigation examples: 1) an A1 Unitree Quadruped [25] in a field of static obstacles, and 2) a Robotarium [23] scenario with the controlled agent subject to both static obstacles and an additional uncontrolled, dynamic agent.
### _Quadrupedal Walking_
**Reach Avoid Navigation Task:** In the quadruped example, the agent is tasked to reach a specific goal location (green) while avoiding static obstacles (yellow) within a 5 m by 4 m space; the agent and obstacles move and can be placed continuously within this space. The set of all environments \(\mathcal{D}\) corresponds to the set of all setups, including goals, robot starting locations, and obstacles, that satisfy the aforementioned conditions while allowing for at least one feasible path to the goal. Figure 3 depicts an example setup, with Figure 8 showing multiple examples of viable environments in \(\mathcal{D}\).
**FT-OCP formulation:** We formulated quadrupedal navigation as an optimal control problem of the form in (FTOCP). We consider as states the position of the robot within a bounded rectangle \(\mathcal{X}=[0,5]\times[0,4]\). Individual inputs are discrete changes in position with bounded magnitude, with the corresponding \(H\)-length input sequence \(\mathbf{u}\) a finite horizon of positional waypoints. Mathematically, the state-dependent subset of permissible sequences \(\mathcal{U}_{p}^{H}(x)\) is as follows, with \(j\in[0,1,\ldots,H-2]\):
\[\mathcal{U}_{p}^{H}(x)=\left\{\mathbf{u}\in\mathcal{U}^{H}\ \left|\begin{array}{l}\|u^{0}-x\|\leq 0.03,\ \mathrm{and}\,\\ \|u^{j+1}-u^{j}\|\leq 0.03.\end{array}\right\}\right.\]
\(\mathbb{U}(x_{k},d)\) then further constrains \(\mathbf{u}\) to remain within a feasible set of states via a discrete barrier-like condition. To define that feasible state set, for \(D\) obstacle positions let \(d=[d_{1}^{T},d_{2}^{T},\ldots,d_{D}^{T}]^{T}\in\mathbb{R}^{2\times D}\). Then with a collision radius \(r\), the feasible state set is:
\[\mathcal{F}(d)=\{x\in\mathcal{X}\ |\ ||x-d_{j}||\geq r\ \forall j=1,...,D\}.\]
Then we can define the overall constrained input space \(\mathbb{U}(x,d)\) as follows, with \(x^{0}=x\), \(x^{j+1}=f(x^{j},u^{j},d)\), and \(\forall\ \ell\in 0,1,\ldots,H\):
\[\mathbb{U}(x,d)=\left\{\mathbf{u}\in\mathcal{U}_{p}^{H}(x)\ |\ x^{\ell}\in\mathcal{F}(d)\right\}. \tag{11}\]
Here, the discrete-time dynamics are simply \(f(x,u,d)=x+u\). Finally, with goal state \(x_{d}\), we have our cost function \(J\) as follows, again with \(x^{0}=x\) and \(x^{j+1}=f(x^{j},u^{j},d)\):
\[J(\mathbf{u},x,d)=10||x^{H}-x_{d}||+\sum_{i=0}^{H-1}||x^{i+1}-x^{i}||. \tag{12}\]
This cost simultaneously rewards final waypoints closer to the goal and shorter overall path lengths. As a result, the overall finite-time optimal control problem is:
\[\begin{aligned}\mathbf{u}^{*}=\underset{\mathbf{u}\in\mathcal{U}^{H}}{\operatorname{argmin}}\ \ &J(\mathbf{u},x_{k},d)\ \text{as per (12)},\\ \operatorname{subject\ to}\ \ &\mathbf{u}\in\mathbb{U}(x_{k},d)\ \text{as per (11)}.\end{aligned}\tag{13}\]
**Solving the FT-OCP:** To solve (13), we employ the procedure described in Section III-A. We directly sample the input space \(\mathcal{U}^{H}\) and employ rejection sampling to generate samples \(\mathbf{u}\in\mathbb{U}(x_{k},d)\), until we collect \(1000\) such samples. From this collection of samples, we choose the minimum cost sample by evaluating \(J(\mathbf{u},x_{k},d)\). This sample meets our guarantees as described in Corollary 1. However, we recognize that our cost function is differentiable in \(\mathbf{u}\), and we can employ constrained gradient descent [26] to further improve the solution. This process is illustrated in Figure 4.
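A minimal sketch of this sampling step is given below, assuming user-supplied callables `feasible` (membership in \(\mathbb{U}(x_{k},d)\)) and `cost` (the function \(J\)); the constrained gradient-descent refinement is omitted. Names and structure are ours, not the exact implementation used in the experiments.

```python
import numpy as np

def percentile_ftocp(x, d, feasible, cost, horizon=5, n_samples=1000, rng=None):
    # Best-of-N percentile solution of the FT-OCP via rejection sampling.
    rng = rng or np.random.default_rng()
    best_u, best_cost = None, np.inf
    accepted = 0
    while accepted < n_samples:
        # Waypoint increments sampled uniformly in the disk of radius 0.03,
        # so that ||u^0 - x|| <= 0.03 and ||u^{j+1} - u^j|| <= 0.03 as in U_p^H(x).
        ang = rng.uniform(0.0, 2.0 * np.pi, size=horizon)
        rad = 0.03 * np.sqrt(rng.uniform(0.0, 1.0, size=horizon))
        du = np.stack([rad * np.cos(ang), rad * np.sin(ang)], axis=1)
        u_seq = x + np.cumsum(du, axis=0)
        if not feasible(u_seq, x, d):
            continue  # rejection sampling against U(x, d)
        accepted += 1
        c = cost(u_seq, x, d)
        if c < best_cost:
            best_u, best_cost = u_seq, c
    return best_u, best_cost
```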
**Experiments and Results:** Tests were performed for both random and curated obstacle locations, with care taken to reject samples without a feasible path to the goal. The quadruped was given a random start position and orientation, and a fixed goal, \(x_{d}\). (13) was solved using a Python implementation of the above procedure at \(\sim\)1.5 Hz, taking \(x_{k}\) to be the position of the quadruped as measured by an Optitrack motion capture system. An IDQP-based walking controller [27] tracked the computed plan, with tangent angles along the plan used as desired quadruped heading.
By Corollary 1, choosing the best out of \(1000\) uniformly chosen waypoint sequences implies that the best sequence \(\mathbf{u}_{N}^{*}\) should be in the \(99\%\)-ile with \(99.995\%\) confidence. This is indeed the case, as can be seen in the data portrayed at the top of Figure 6, corroborating Corollary 1. Both Corollaries 2 and 3 were also corroborated by recording successive feasibility and controller runtimes for \(1000\) randomized instances of the percentile method applied to (13). In all cases, the controller was successively feasible, and the maximum controller runtime was \(0.92\) seconds. Comparing against another \(5000\) random samples affirms that the reported maximum runtime exceeded the \(99\%\)-ile cutoff, while the controller was successively feasible in all instances as well. The data for runtimes is shown at the bottom of Figure 6. Qualitatively, the proposed procedure produces a valid, collision-free plan in all tested scenarios. This plan ultimately leads the quadruped to the desired goal in many scenarios; however, some obstacle placements lead to local minima that cannot be escaped, as this is a finite-time method. Increasing the horizon \(H\) allows for success in these conditions, but requires a trade-off in execution time. These results are elucidated in the supplemental video.
### _Multi-Agent Verification_
Figure 2 depicts the reach-avoid scenario for the Robotarium [23] agents which can be modeled as unicycle systems, _i.e._ with \(x_{k}\in\mathcal{X},\ u_{k}\in\mathcal{U}\):
\[x_{k+1}=\underbrace{x_{k}+(\Delta t=0.033)\begin{bmatrix}\cos \left(x_{k}[3]\right)&0\\ \sin\left(x_{k}[3]\right)&0\\ 0&1\end{bmatrix}u_{k}}_{f(x_{k},u_{k},d)}. \tag{14}\]
Here, \(\mathcal{X}=[-1.6,1.6]\times[-1.2,1.2]\times[0,2\pi]\) and \(\mathcal{U}=[-0.2,0.2]\times[\frac{-\pi}{2},\frac{\pi}{2}]\). Additionally, each agent comes equipped with a Lyapunov controller \(U\) that steers the agent to a provided waypoint \(w\in\mathcal{W}\):
\[U:\mathcal{X}\times\mathcal{D}\times\mathcal{W}\rightarrow\mathcal{U},\qquad\mathcal{W}\triangleq[-1.6,1.6]\times[-1.2,1.2].\]
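For concreteness, a single discrete step of the unicycle model (14), with the state and input bounds above, can be sketched as follows (the function name and array layout are ours):

```python
import numpy as np

DT = 0.033  # time step from (14)

def unicycle_step(x, u):
    # x = [px, py, theta]; u = [v, omega] with |v| <= 0.2 and |omega| <= pi/2.
    px, py, theta = x
    v, omega = u
    return np.array([px + DT * v * np.cos(theta),
                     py + DT * v * np.sin(theta),
                     theta + DT * omega])
```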
The environment space \(\mathcal{D}\) consists of the grid locations of \(8\) static obstacles on an \(8\times 5\) grid overlaid on the state space \(\mathcal{X}\), the cells of \(3\) goals on the same grid, the starting position in \(\mathcal{X}\) of another, un-controlled moving agent that is at least \(0.3\) meters away from the ego agent of interest, and the un-controlled agent's goal cell on the same grid. No static obstacles are allowed to overlap with any of the goals, though the un-controlled agent's goal may coincide with one of the goals of the ego agent, and the setup of
Fig. 4: Solving the FT-OCP for the quadruped reach-avoid experiment. (a) generates uniformly random feasible input sequence samples. (b) selects the best sample according to cost function \(J(\mathbf{u},x_{k},d_{k})\). Finally, (c) leverages the differentiability of \(J\) to further improve the choice of \(\mathbf{u}\) via constrained gradient descent.
static obstacles must always allow for there to exist at least one path to one of the ego agent's goals. Figure 7 shows multiple examples of environment setups within \(\mathcal{D}\).
**NMPC Formulation:** Based on the setup of static obstacles and goal locations on the grid, we define a function \(S:\mathcal{W}\rightarrow\mathbb{R}_{+}\) that outputs the length of the shortest feasible path to a goal from a provided planar waypoint. Should no feasible path exist from a waypoint \(w\in\mathcal{W}\), \(S(w)=100\) to indicate infeasibility. Inspired by discrete control barrier function theory [28], we define a control barrier function \(h\) which accounts for both the ego agent state \(x_{a}\) and the un-controlled agent state \(x_{o}\) (with \(P=[I_{2\times 2}\ \mathbf{0}_{2\times 1}]\)):
\[h(x_{a},x_{o})=\begin{cases}-5&\text{in static obstacle cell},\\ \|P(x_{a}-x_{o})\|-0.18&\text{else}.\end{cases}\]
Then, provided \(h(x_{a},x_{o})\geq 0\), the ego agent has not crashed into a static obstacle and is maintaining a distance of at least \(0.18\) m from the un-controlled agent.
This permits us to define an NMPC problem as follows, with the dynamics \(f\) as per (14) and \(\forall\,j\in\{1,2,3,4,5\}\):
\[\begin{aligned}w_{k}^{*}=\operatorname*{argmin}_{w\in\mathcal{W}}\ \ &S(w),&&\text{(NMPC-A)}\\ \operatorname{subject\ to}\ \ &x_{k}^{j}=f(x_{k}^{j-1},u^{j-1},d),&&\text{(a)}\\ &x_{k}^{0}=x_{k},&&\text{(b)}\\ &h(x_{k,a}^{j},x_{o})\geq 0,&&\text{(c)}\\ &u^{j-1}=U\left(x_{k}^{j-1},d,w\right),&&\text{(d)}\\ &0.05\leq\|w-x_{k}\|\leq 0.2.\end{aligned}\]
To ease sampling, then, we consider an augmented cost \(J\) that equals \(S(w)\) whenever a waypoint \(w\) satisfies constraints (a)-(d) in (NMPC-A) and outputs \(100\) otherwise. Then we define the NMPC problem to-be-solved as follows:
\[\begin{aligned}w_{k}^{*}=\operatorname*{argmin}_{w\in\mathcal{W}}\ \ &J(w),&&\text{(NMPC-B)}\\ \operatorname{subject\ to}\ \ &0.05\leq\|w-x_{k}\|\leq 0.2.\end{aligned}\]
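Since the only remaining constraint is the annulus \(0.05\leq\|w-x_{k}\|\leq 0.2\), the percentile controller reduces to sampling waypoints uniformly from this annulus and keeping the cheapest one. A sketch follows, with `J_aug` a user-supplied implementation of the augmented cost (names are ours):

```python
import numpy as np

def percentile_waypoint(x_k, J_aug, n=100, r_min=0.05, r_max=0.2, rng=None):
    # Best-of-n waypoint for (NMPC-B), sampled uniformly (in area) from the annulus.
    rng = rng or np.random.default_rng()
    ang = rng.uniform(0.0, 2.0 * np.pi, size=n)
    rad = np.sqrt(rng.uniform(r_min**2, r_max**2, size=n))
    cand = x_k[:2] + np.stack([rad * np.cos(ang), rad * np.sin(ang)], axis=1)
    costs = np.array([J_aug(w) for w in cand])
    return cand[np.argmin(costs)]
```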
**Results:** By Corollary 1, if we wish to take a percentile approach to determine a waypoint \(w_{N}^{*}\) in the \(95\%\)-ile with \(99.4\%\) confidence, we need to evaluate \(N=100\) uniformly chosen waypoints from the constraint space for (NMPC-B). Figure 5 shows the cost of the outputted waypoint compared against \(5000\) randomly sampled values, and as can be seen, the outputted waypoint \(w_{N}^{*}\) is indeed in the \(95\%\)-ile, confirming Corollary 1. Calculating this controller's runtime in \(460\) randomly sampled initial state and environment scenarios yielded a probabilistic maximum \(\zeta_{N}^{*}=0.043\) seconds. According to Corollary 3, this maximum runtime should be an upper bound on the true \(99\%\) cutoff on controller runtimes with confidence \(99\%\); as can be seen in Figure 5, \(\zeta_{N}^{*}\) indeed exceeds the true value. Finally, to corroborate Corollary 2, we evaluated the recursive feasibility cost function \(C\) as per (8) in each of the same \(460\) randomly sampled scenarios from prior. In each scenario, the percentile controller was successively feasible, indicating that with \(99\%\) probability the controller will be successively feasible. Evaluating the same cost for \(5000\) more uniformly chosen samples resulted in the controller being successively feasible
Fig. 5: Robotarium Hardware data when (top) taking a percentile method to solving (NMPC-B), and (bottom) calculating a probabilistic cutoff on maximum controller runtime. In both cases, the red lines corresponding to (top) the identified waypoint and (bottom) the reported maximum controller runtime are to the left and right, respectively, of their corresponding, true probabilistic cutoffs. In other words, the identified values satisfy their corresponding probabilistic statements, affirming Corollaries 1 and 3. Numeric distributions were calculated by evaluating \(5000\) random samples.
Fig. 6: Quadruped Hardware data when (top) taking a percentile method to solve (13), and (bottom) calculating a probabilistic cutoff on maximum controller runtime. In both cases, the red lines corresponding to (top) the identified path and (bottom) the reported maximum controller runtime are to the left and right, respectively, of their corresponding, true probabilistic cutoffs. This affirms Corollaries 1 and 3 insofar as the identified values satisfy their corresponding probabilistic statements. Numeric distributions were calculated by evaluating \(5000\) random samples.
each time, corroborating Corollary 2.
## V Conclusion
Based on existing work in black-box risk-aware verification, we provided probabilistic guarantees for percentile approaches to solving finite-time optimal control problems, recursive feasibility of such approaches, and bounds on maximum controller runtimes. In future work, the authors plan to explore how the generated probabilistic guarantees can be applied in other scenarios, _e.g._ probabilistic planning procedures. Secondly, we aim to bound the optimality gap between our percentile solutions and the global optimum.
|
2310.06360 | On the existence of minimal expansive solutions to the $N$-body problem | We deal, for the classical $N$-body problem, with the existence of action
minimizing half entire expansive solutions with prescribed asymptotic direction
and initial configuration of the bodies. We tackle the cases of hyperbolic,
hyperbolic-parabolic and parabolic arcs in a unitary manner. Our approach is
based on the minimization of a renormalized Lagrangian action, on a suitable
functional space. With this new strategy, we are able to confirm the
already-known results of the existence of both hyperbolic and parabolic
solutions, and we prove for the first time the existence of
hyperbolic-parabolic solutions for any prescribed asymptotic expansion in a
suitable class. Associated with each element of this class we find a viscosity
solution of the Hamilton-Jacobi equation as a linear correction of the value
function. Besides, we also manage to give a better description of the growth of
parabolic and hyperbolic-parabolic solutions. | Davide Polimeni, Susanna Terracini | 2023-10-10T06:57:42Z | http://arxiv.org/abs/2310.06360v1 | # On the existence of minimal expansive solutions to the \(N\)-body problem
###### Abstract
We deal, for the classical \(N\)-body problem, with the existence of action minimizing half entire expansive solutions with prescribed asymptotic direction and initial configuration of the bodies. We tackle the cases of hyperbolic, hyperbolic-parabolic and parabolic arcs in a unitary manner. Our approach is based on the minimization of a renormalized Lagrangian action, on a suitable functional space. With this new strategy, we are able to confirm the already-known results of the existence of both hyperbolic and parabolic solutions, and we prove for the first time the existence of hyperbolic-parabolic solutions for any prescribed asymptotic expansion in a suitable class. Associated with each element of this class we find a viscosity solution of the Hamilton-Jacobi equation as a linear correction of the value function. Besides, we also manage to give a better description of the growth of parabolic and hyperbolic-parabolic solutions.
## 1 Introduction and main results
In this paper, we deal with half entire solutions to the \(N\)-body problem of Celestial Mechanics in the Euclidean space \(\mathbb{R}^{d}\) of hyperbolic, parabolic or mixed hyperbolic-parabolic type. We first investigate the existence of trajectories to the gravitational \(N\)-body problem having prescribed growth at infinity. This classical line of research has recently been re-energized by the injection of new methods of analysis, of perturbative, variational, geometric and/or analytic functional nature. Indeed, in addition to the classical literature on the subject [1, 9, 23, 28, 29], we quote the recent results about existence of hyperbolic solutions [11, 15, 17, 19], parabolic ones [3, 4, 5, 18, 20, 30] and hyperbolic-parabolic ones [6], without neglecting those ending in an oscillatory manner [13, 14, 25] and references therein.
To start with, let us consider \(N\) point masses \(m_{1},...,m_{N}>0\) moving under the action of the mutual attraction, with the inverse-square law of universal gravitation. We denote by \(x=(r_{1},...,r_{N})\in\mathbb{R}^{dN}\) the configuration vector of the positions of the bodies and by \(|r_{i}-r_{j}|\) the Euclidean distance between the bodies \(i\) and \(j\). Newton's equation of motion for the \(i\)-th body of the \(N\)-body problem reads as
\[m_{i}\ddot{r}_{i}=-\sum_{\substack{j=1\\ j\neq i}}^{N}m_{i}m_{j}\frac{r_{i}-r_{j}}{|r_{i}-r_{j}|^{3}}.\]
Since these equations are invariant by translation, we can fix the origin of our inertial frame at the center of mass of the system. We can thus define the configuration space of the system as
\[\mathcal{X}=\left\{x=(r_{1},...,r_{N})\in\mathbb{R}^{dN},\ \sum_{i=1}^{N}m_{i}r_{i}=0\right\}\]
and denote by \(\Omega=\{x\in\mathcal{X}\ |\ r_{i}\neq r_{j}\ \forall\ i\neq j\}\subset\mathcal{X}\) the set of configurations without collisions, which is open and dense in \(\mathcal{X}\), and with \(\Delta\) its complement, that is the collision set. Now we can write the equations of motion as
\[\mathcal{M}\ddot{x}=\nabla U(x), \tag{1.1}\]
where \(\mathcal{M}=\mathrm{diag}(m_{1}I_{d},...,m_{N}I_{d})\) is the matrix of the masses and the function \(U:\Omega\rightarrow\mathbb{R}\cup\{+\infty\}\) is the Newtonian potential
\[U(x)=\sum_{i<j}\frac{m_{i}m_{j}}{|r_{i}-r_{j}|}. \tag{1.2}\]
Newton's equations define an analytic local flow on \(\Omega\times\mathbb{R}^{dN}\) with a first integral given by the mechanical energy:
\[h=\frac{1}{2}\|\dot{x}\|_{\mathcal{M}}^{2}-U(x).\]
We will use \(\|\cdot\|_{\mathcal{M}}\) to denote the norm induced by the mass scalar product
\[\langle x,y\rangle_{\mathcal{M}}=\sum_{i=1}^{N}m_{i}\langle r_{i},s_{i}\rangle, \qquad\text{for any }x=(r_{1},...,r_{N}),\ y=(s_{1},...,s_{N})\in\mathcal{X},\]
where, with a little abuse, \(\langle\cdot,\cdot\rangle\) denotes the standard scalar product in \(\mathbb{R}^{d}\) and also in \(\mathcal{X}\).
In this paper we will be concerned with the class of expansive motions, which is defined in the following way.
**Definition 1.1**.: A motion \(x:[0,+\infty)\to\Omega\) is said to be expansive when all the mutual distances diverge, that is, when \(|r_{i}(t)-r_{j}(t)|\to+\infty\) as \(t\to+\infty\) for all \(i<j\). Equivalently, the motion is expansive if \(U(x(t))\to 0\) as \(t\to+\infty\).
From the conservation of the energy, we observe that, since \(U(x(t))\to 0\) implies \(\|\dot{x}(t)\|_{\mathcal{M}}^{2}\to 2h\) as \(t\to+\infty\), expansive motions can only occur at nonnegative energies.
For a given motion, we introduce the minimum and the maximum separation between the bodies at time \(t\) as the two functions
\[r(t)=\min_{i<j}|r_{i}(t)-r_{j}(t)|\quad\text{and}\quad R(t)=\max_{i<j}|r_{i}(t )-r_{j}(t)|,\]
where we write \(|\cdot|\) to denote the standard Euclidean norm in \(\mathbb{R}^{d}\). The next fundamental theorems give us a more accurate description of the system's expansion.
**Theorem 1.2** (Pollard, 1967 [27]).: _Let \(x\) be a motion defined for all \(t>t_{0}\). If \(r\) is bounded away from zero, then we have that \(R=O(t)\) as \(t\to+\infty\). In addition, \(R(t)/t\to+\infty\) if and only if \(r(t)\to 0\)._
**Theorem 1.3** (Marchal-Saari, 1976 [23]).: _Let \(x\) be a motion defined for all \(t>t_{0}\). Then either \(R(t)/t\to+\infty\) and \(r(t)\to 0\), or there is a configuration \(a\in\mathcal{X}\) such that \(x(t)=at+O(t^{2/3})\). In particular, for superhyperbolic motions (i.e. motions such that \(\limsup_{t\to+\infty}R(t)/t=+\infty\)) the quotient \(R(t)/t\) diverges._
**Theorem 1.4** (Marchal-Saari, 1976 [23]).: _Suppose that \(x(t)=at+O(t^{2/3})\) for some \(a\in\mathcal{X}\) and that the motion is expansive. Then, for each pair \(i<j\) such that \(a_{i}=a_{j}\), we have \(|r_{i}(t)-r_{j}(t)|\approx t^{2/3}\) (see footnote 1)._
Footnote 1: Given positive functions \(f\) and \(g\), we write \(f\approx g\) when there exist two positive constants \(\alpha\) and \(\beta\) such that \(\alpha\leq\frac{f}{g}\leq\beta\).
Next, let us recall the well known Chazy classification of the expansive motions for the three-body problem (cfr. [9]), based on the asymptotic order of growth of the distances between the bodies. By Theorem 1.3, an expansive motion cannot be superhyperbolic, so we can assume that it is of the form \(x(t)=at+O(t^{2/3})\) for some limit \(a\in\mathcal{X}\). Assuming that the center of mass of the system is at rest, Chazy classified these motions as follows:
* _Hyperbolic_: \(a\in\Omega\) and \(|r_{i}(t)-r_{j}(t)|\approx t\) for all \(i<j\);
* _Hyperbolic-parabolic_: \(a\in\Delta\) but \(a\neq 0\);
* _Completely parabolic_: \(a=0\) and \(|r_{i}(t)-r_{j}(t)|\approx t^{2/3}\) for all \(i<j\).
The following definition is in order.
**Definition 1.5**.: A motion \(x(t)\) is said to have limit shape when there is a time dependent similarity \(S(t)\) of the space \(\mathbb{R}^{d}\) such that \(S(t)x(t)\) converges to some configuration \(a\neq 0\).
In our case, there is a diagonal action of \(S(t)\), which means that \(S(t)x=(S(t)r_{1},...,S(t)r_{N})\) for \(x=(r_{1},...,r_{N})\in\mathcal{X}\). In particular, for the case of (half) hyperbolic motions, we can say that the limit shape of such a motion is its asymptotic velocity \(a=\lim_{t\to+\infty}\frac{x(t)}{t}\). Similarly, (half) parabolic motions also possess a limit shape, which is now bound to be a central configuration, that is, a critical point of the
potential \(U\) constrained to the inertia ellipsoid \(\mathcal{E}=\{x\in\mathcal{X}\,:\,\|x\|_{\mathcal{M}}^{2}=1\}\).
In this paper, we are going to tackle the existence of half entire expansive solutions for the Newtonian \(N\)-body problem from a unitary perspective by a global variational approach, using a suitable renormalized action functional, as the Lagrangian is not expected to be integrable on the half line. In particular, referring to Chazy's classification, we will prove the existence of motions in each of the three classes above. As a first step, we shall revisit recent works by E. Maderna and A. Venturelli about the existence of half hyperbolic and parabolic trajectories from this new angle.
**Theorem 1.6** (Maderna and Venturelli 2020, [20]).: _Given \(d\in\mathbb{N}\), \(d\geq 2\), for the Newtonian \(N\)-body problem in \(\mathbb{R}^{d}\) there is a hyperbolic motion \(x:[1,+\infty)\to\mathcal{X}\) of the form_
\[x(t)=at-\log(t)\nabla U(a)+o(1)\quad\text{as }t\to+\infty,\]
_for any initial configuration \(x^{0}=x(1)\in\mathcal{X}\) and for any collisionless configuration \(a\in\Omega\)._
As far as the parabolic case is concerned, in addition to providing an alternative proof, we will be able to extend the result of Maderna and Venturelli [19] by improving the estimate of the remainder as follows.
**Theorem 1.7**.: _Given \(d\in\mathbb{N}\), \(d\geq 2\), for the Newtonian \(N\)-body problem in \(\mathbb{R}^{d}\) there is a parabolic solution \(x:[1,+\infty)\to\mathcal{X}\) of the form_
\[x(t)=\beta b_{m}t^{2/3}+o(t^{1/3^{+}})\quad\text{as }t\to+\infty, \tag{1.3}\]
_for any initial configuration \(x^{0}=x(1)\in\mathcal{X}\), for any minimal normalized central configuration \(b_{m}\) and for \(\beta=\sqrt[3]{\frac{9}{2}U(b_{m})}\)._
Here, a minimal central configuration is a minimizer of the potential \(U\) constrained to the inertia ellipsoid \(\mathcal{E}=\{x\in\mathcal{X}\,:\,\|x\|_{\mathcal{M}}^{2}=1\}\). As said, the existence of hyperbolic and parabolic solutions for the Newtonian \(N\)-body problem has already been proved by Maderna and Venturelli in 2020 and 2009, respectively. In [20], the authors proved the existence of hyperbolic motions for any prescribed limit shape, any initial configuration of the bodies and any positive value of the energy. These solutions, whose actions are infinite, were found as the limits of locally converging subsequences in families of minimizing motions; the approximate solutions are minimal geodesics of the Maupertuis-Jacobi metric. More specifically, these solutions were obtained as the limits of solutions of sequences of approximating two-point boundary value problems. To exclude collisions, both proofs in [20] and [19] invoke Marchal's Principle, ensuring the absence of collisions for action-minimizing paths (Theorem 2.1). These trajectories are characteristic curves of global viscosity solutions of the Hamilton-Jacobi equation \(H(x,\nabla u)=h\). As such, these solutions are fixed points of the associated Lax-Oleinik semigroup. In [19], for any starting configuration, they proved the existence of parabolic arcs asymptotic to any prescribed normalized minimal central configuration.
Compared to Maderna and Venturelli's articles, in this paper we show alternative and simpler proofs for the existence of hyperbolic and parabolic solutions in a unitary framework, based on a straightforward application of the Direct Method of the Calculus of Variations to minimize the renormalized Lagrangian actions associated to the problem. This approach has the advantage of allowing us to complement the existence of parabolic arcs with their (almost exact) expansion (1.3).
Finally, after proving Theorems 1.6 and 1.7, we will extend our approach to similarly prove the existence of hyperbolic-parabolic solutions for the \(N\)-body problem. In order to state our main result we need to introduce the \(a\)_-cluster partition_ associated with a collision asymptotic velocity \(a\in\Delta\setminus\{0\}\), where clusters are the equivalence classes of the relation \(i\sim j\Longleftrightarrow a_{i}-a_{j}=0\). Given a cluster \(K\), we consider the associated partial potential \(U_{K}\), where the sum in (1.2) is restricted to the cluster \(K\). The \(a\)-clustered potential \(U_{a}\) is the sum of all the cluster potentials of the partition. Now we can state our main theorem:
**Theorem 1.8**.: _Given \(d\in\mathbb{N}\), \(d\geq 2\), for the Newtonian \(N\)-body problem in \(\mathbb{R}^{d}\) there is a hyperbolic-parabolic motion \(x:[1,+\infty)\to\mathcal{X}\) of the form_
\[x(t)=at+\beta b_{m}t^{2/3}+o(t^{1/3^{+}})\quad\text{as }t\to+\infty,\]
_for any initial configuration \(x^{0}=x(1)\in\mathcal{X}\), for any collision configuration \(a\in\Delta\), for any normalized
minimal central configuration2\(b_{m}\in\mathcal{X}\) of the \(a\)-clustered potential and for any choice of the energy constant \(h>0\)._
Footnote 2: See Section 5 for the exact definition of \(\beta\) and \(b_{m}\).
Intuitively, hyperbolic-parabolic motions are those expansive motions of the form \(x(t)=at+o(t)\), as \(t\to+\infty\), whose limit shapes have collisions, that is, \(a\in\Delta\setminus\{0\}\). This means that hyperbolic-parabolic motions can be viewed as clusters of bodies moving asymptotically with a linear growth, while the distances of the bodies inside each cluster grow with a rate of order \(t^{2/3}\) and, referred to its center of mass, each cluster has a limit shape which is a prescribed minimal configuration of the cluster potential \(U_{K}\). For the Newtonian \(N\)-body problem, the existence of hyperbolic-parabolic solutions for any prescribed positive energy and any given initial configuration of the bodies has been tackled by Burgos in [6], whose proof follows from an application of Maderna and Venturelli's theorem on the existence of hyperbolic motions and a limiting procedure as the limit shape approaches the collision set. With respect to Burgos' result, we can provide a much wider class of such hyperbolic-parabolic trajectories. Moreover, our approach provides much more detailed information about the asymptotic behaviour of the solution and a better description of the motion of the bodies. Indeed, to prove Theorem 1.8, we partition the set of bodies following the natural cluster partition that was presented by Burgos and Maderna in [7] and is defined as follows: if \(x(t)=(r_{1}(t),...,r_{N}(t))\) and \(a=(a_{1},...,a_{N})\), then \(a_{i}=a_{j}\) if and only if \(|r_{i}(t)-r_{j}(t)|=O(t^{2/3})\), and the partition of the set of bodies is defined by this equivalence relation. Using this particular partition, we are able to decompose the Lagrangian action into two terms: the first is related to the hyperbolic motion of the clusters and the second to the parabolic motion of the bodies inside the clusters. Through proofs similar to the ones of Theorems 1.6 and 1.7, we can thus apply the Direct Method of the Calculus of Variations and Marchal's Theorem also to the case of hyperbolic-parabolic motions.
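For illustration only, the cluster partition of [7] amounts to grouping body indices by equal asymptotic velocities; a short sketch (with names of our choosing) is:

```python
import numpy as np

def cluster_partition(a, tol=1e-12):
    # Partition indices by the relation i ~ j iff a_i = a_j (up to tol).
    # a: (N, d) array of asymptotic velocities; returns a list of index lists.
    clusters = []
    for i, a_i in enumerate(a):
        for cl in clusters:
            if np.linalg.norm(a[cl[0]] - a_i) <= tol:
                cl.append(i)
                break
        else:
            clusters.append([i])
    return clusters

a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
print(cluster_partition(a))  # [[0, 2], [1]]: bodies 0 and 2 form one cluster
```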
**Corollary 1.9**.: _The motions \(x(t)\) given by Theorems 1.6, 1.7 and 1.8 are continuous at \(t=1\) and collisionless for \(t>1\). Moreover they are free time action minimizers at their energy level._
As already pointed out by Maderna and Venturelli, a family of hyperbolic trajectories that are minimal in free time is associated, via the Busemann function, with a solution of the time-independent Hamilton-Jacobi equation. A further advantage of the approach through the direct minimization of a renormalized action functional is that a value function, dependent on the initial point, is directly defined. As we shall outline in Section 7, a linear correction to the value function is, as expected from theory, a solution of the Hamilton-Jacobi equation.
Our general strategy in the proofs of Theorems 1.6, 1.7 and 1.8 is to seek solutions to (1.1) which are lower order perturbations of a given path:
\[x(t)=r_{0}(t)+\varphi(t)+\tilde{x}_{0},\qquad\tilde{x}_{0}=x^{0}-r_{0}(1).\]
Here \(\varphi(t)\) is the lower order term, while the reference path \(r_{0}\) is linear in the hyperbolic case, a parabolic self-similar solution in the parabolic one, and mixes the two types in the hyperbolic-parabolic case. In particular, we will consider functions \(\varphi\) belonging to the functional space of continuous functions on \([1,+\infty)\) which vanish at \(t=1\) and can be written as primitives of functions in \(L^{2}(1,+\infty)\). With this choice of space, which is denoted by \(\mathcal{D}_{0}^{1,2}(1,+\infty)\), we will be able to give the problem a global variational structure, so that we can prove the existence of solutions of the \(N\)-body problem through the minimization of a Lagrangian action on the space \(\mathcal{D}_{0}^{1,2}(1,+\infty)\). The crucial idea will be to minimize the action after a necessary proper renormalization (cfr. Definition 2.5), since the Lagrangian is never integrable at infinity.
## 2 The variational setting
For the \(N\)-body problem, the Hamiltonian \(H\) is defined over \(\Omega\times\mathbb{R}^{dN}\) as
\[H(x,p)=\frac{1}{2}\|p\|_{\mathcal{M}^{-1}}^{2}-U(x), \tag{2.1}\]
while the Lagrangian is defined over \(\Omega\times\mathbb{R}^{dN}\) as
\[L(x,v)=\frac{1}{2}\|v\|_{\mathcal{M}}^{2}+U(x).\]
This means, in particular, that \(L\) and \(H\) become infinite when \(x\) has collisions. Given two configurations \(x,y\in\mathcal{X}\) and \(T>0\), we denote by \(\mathcal{C}(x,y,T)\) the set of absolutely continuous curves \(\gamma:[a,b]\to\mathcal{X}\) going from \(x\) to \(y\) in time \(T=b-a\) and we write \(\mathcal{C}(x,y)=\bigcup_{T>0}\mathcal{C}(x,y,T)\). We define the Lagrangian action of a curve \(\gamma\in\mathcal{C}(x,y,T)\) as the functional
\[\mathcal{A}_{L}(\gamma)=\int_{a}^{b}L(\gamma,\dot{\gamma})\ \mathrm{d}t=\int_{a}^{b} \frac{1}{2}\|\dot{\gamma}\|_{\mathcal{M}}^{2}+U(\gamma)\ \mathrm{d}t.\]
Hamilton's principle of least action implies that if a curve \(\gamma\) is a minimizer of the Lagrangian action in \(\mathcal{C}(x,y,T)\), then \(\gamma\) satisfies Newton's equations at every time \(t\in[a,b]\) in which \(\gamma(t)\) has no collisions. However, as Poincaré already noticed in [26], there are curves with isolated collisions and finite action, which means that minimizing orbits may not always be true motions. The following theorem represents a big step forward in this theory, since it enabled the application of variational techniques to the study of the Newtonian \(N\)-body problem. The main idea to prove the theorem was given by Marchal in [22], while more complete proofs are due to Chenciner in [10] and Ferrario and Terracini in [12].
**Theorem 2.1** (Marchal [22], Chenciner [10], Ferrario and Terracini [12]).: _Given \(x,y\in\mathcal{X}\), if \(\gamma\in\mathcal{C}(x,y)\) is defined on some interval \([a,b]\) and satisfies_
\[\mathcal{A}_{L}(\gamma)=\min\{\mathcal{A}(\sigma)\ |\ \sigma\in\mathcal{C}(x,y,b-a)\},\]
_then \(\gamma(t)\in\Omega\) for all \(t\in(a,b)\)._
Marchal's Theorem will be fundamental in our proofs, since it will guarantee that the minimizers of the action (whose existence is the object of our proofs) are in fact true motions of the \(N\)-body problem free of collisions. The Principle of Least Action, jointly with Theorem 2.1, has been widely applied in the search for collisionless periodic solutions to the N-body problem (cfr. e.g. [24, 12]). However, we must now build a suitable variational framework for the search of expansive solutions.
Our minimization will take place on the functional space
\[\mathcal{D}_{0}^{1,2}([1,+\infty),\mathcal{X})=\{\varphi\in H^{1}_{loc}([1,+ \infty),\mathcal{X})\ :\ \varphi(1)=0\ \text{and}\ \int_{1}^{+\infty}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}\ \mathrm{d}t<+\infty\},\]
which is endowed with the norm
\[\|\varphi\|_{\mathcal{D}}=\bigg{(}\int_{1}^{+\infty}\|\dot{\varphi}(t)\|_{ \mathcal{M}}^{2}\ \mathrm{d}t\bigg{)}^{1/2}.\]
**Remark 2.2**.: Given a configuration \(\varphi=(\varphi_{1},...,\varphi_{n})\in\mathcal{D}_{0}^{1,2}([1,+\infty), \mathcal{X})\), we will say that its components belong to the space \(\mathcal{D}_{0}^{1,2}([1,+\infty),\mathbb{R}^{d})\) and the \(\mathcal{D}_{0}^{1,2}\)-norm of each component is
\[\|\varphi_{i}\|_{\mathcal{D}}=\bigg{(}\int_{1}^{+\infty}|\dot{\varphi}_{i}(t)| ^{2}\ \mathrm{d}t\bigg{)}^{1/2},\]
for \(i=1,...,N\). We will write \(\mathcal{D}_{0}^{1,2}(1,+\infty)\) to denote both the spaces \(\mathcal{D}_{0}^{1,2}([1,+\infty),\mathcal{X})\) and \(\mathcal{D}_{0}^{1,2}([1,+\infty),\mathbb{R}^{d})\), since it will be trivial to distinguish them.
**Proposition 2.3** (Cfr. Boscaggin-Dambrosio-Feltrin-Terracini, 2021 [5]).: _The space \(\mathcal{D}_{0}^{1,2}(1,+\infty)\) is a Hilbert space containing the set \(C_{c}^{\infty}(1,+\infty)\) as a dense subspace._
We recall here the following paramount Hardy-type inequality, which will be used several times in the paper. It states that the space \(\mathcal{D}_{0}^{1,2}(1,+\infty)\) is continuously embedded in a weighted \(L^{2}\)-space with measure \(\mathrm{d}t/t^{2}\).
**Proposition 2.4** (_Hardy inequality_, Cfr. Boscaggin-Dambrosio-Feltrin-Terracini, 2021 [5]).: _For every \(\varphi\in\mathcal{D}^{1,2}_{0}(1,+\infty)\), it holds that_
\[\int_{1}^{+\infty}\frac{\|\varphi(t)\|_{\mathcal{M}}^{2}}{t^{2}}\ \mathrm{d}t\leq 4 \int_{1}^{+\infty}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}\ \mathrm{d}t, \tag{2.2}\]
_and moreover_
\[\sup_{t\in[1,+\infty)}\frac{\|\varphi(t)\|_{\mathcal{M}}^{2}}{t-1}\leq\int_{1 }^{+\infty}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}\ \mathrm{d}t. \tag{2.3}\]
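For instance, estimate (2.3) follows at once from the Cauchy-Schwarz inequality: since \(\varphi(1)=0\),

\[\|\varphi(t)\|_{\mathcal{M}}=\bigg{\|}\int_{1}^{t}\dot{\varphi}(s)\ \mathrm{d}s\bigg{\|}_{\mathcal{M}}\leq\int_{1}^{t}\|\dot{\varphi}(s)\|_{\mathcal{M}}\ \mathrm{d}s\leq\sqrt{t-1}\ \bigg{(}\int_{1}^{+\infty}\|\dot{\varphi}(s)\|_{\mathcal{M}}^{2}\ \mathrm{d}s\bigg{)}^{1/2},\]

and squaring yields (2.3).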
In order to prove the existence of minima for the functional \(\mathcal{A}\) on \(\mathcal{D}^{1,2}_{0}(1,+\infty)\), we will properly renormalize the Lagrangian action and, after proving its coercivity and weak lower semicontinuity, we will apply the Direct Method of the Calculus of Variations. In particular, we will use the following renormalization.
**Definition 2.5** (Renormalized Lagrangian action).: Given a motion \(x(t)\in\Omega\) of the form \(x(t)=\varphi(t)+r_{0}(t)+\tilde{x}^{0}\), where \(\varphi\in\mathcal{D}^{1,2}_{0}(1,+\infty)\), \(r_{0}(t)=at+\beta bt^{2/3}\) for suitable \(a,b\in\mathcal{X}\) and \(\beta\in\mathbb{R}\), and \(\tilde{x}^{0}\in\mathcal{X}\), we can define the renormalized Lagrangian action
\[\mathcal{A}^{ren}(\varphi)=\int_{1}^{+\infty}\frac{1}{2}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}+U(\varphi(t)+r_{0}(t)+\tilde{x}^{0})-U(r_{0}(t))-\langle\mathcal{M}\ddot{r}_{0}(t),\varphi(t)\rangle\ \mathrm{d}t. \tag{2.4}\]
In order to shorten the notation, throughout the paper we will usually write \(\mathcal{A}\) instead of \(\mathcal{A}^{ren}\).
To describe the asymptotic expansion of our motions, we will use the following theorem and lemma. The former applies to the cases of hyperbolic and hyperbolic-parabolic motions, while the latter, typically known as _Chazy's Lemma_, states that the set of initial conditions in the phase space generating hyperbolic motions is open, and that the map defined on this set giving the asymptotic velocity in the future is continuous.
**Theorem 2.6** (Chazy, 1922 [9]).: _Let \(x(t)\) be a motion with energy constant \(h>0\) and defined for all \(t>t_{0}\)._
1. _The limit_ \[\lim_{t\to+\infty}R(t)r(t)^{-1}=L\in[1,+\infty]\] _always exists._
2. _If_ \(L<+\infty\)_, then there are a configuration_ \(a\in\Omega\) _and some function_ \(P\)_, which is analytic in a neighborhood of_ \((0,0)\)_, such that for every_ \(t\) _large enough, we have_ \[x(t)=at-\log(t)\nabla U(a)+P(u,v),\] _where_ \(u=1/t\) _and_ \(v=\log(t)/t\)_._
**Lemma 2.7** (Maderna-Venturelli, 2020 [20]).: _Working on a Euclidean space \(E\), endowed with a Euclidean norm \(\|\cdot\|\), let \(U:E^{N}\to\mathbb{R}\cup\{+\infty\}\) be a homogeneous potential of degree \(-1\) of class \(C^{2}\) on the open set \(\Omega=\{x\in E^{N}\ |\ U(x)<+\infty\}\). Let \(x:[0,+\infty)\to\Omega\) be a given solution of \(\ddot{x}=\nabla U(x)\) satisfying \(x(t)=at+o(t)\) as \(t\to+\infty\) with \(a\in\Omega\). Then we have the following:_
1. _The solution_ \(x\) _has asymptotic velocity_ \(a\)_, meaning that_ \[\lim_{t\to+\infty}\dot{x}(t)=a.\]
2. _(Chazy's continuity of the limit shape). Given_ \(\varepsilon>0\)_, there are constants_ \(t_{1}>0\) _and_ \(\delta>0\) _such that, for any maximal solution_ \(y:[0,T)\to\Omega\) _satisfying_ \(\|y(0)-x(0)\|<\delta\) _and_ \(\|\dot{y}(0)-\dot{x}(0)\|<\delta\)_, we have:_
* \(T=+\infty\) _and_ \(\|y(t)-at\|<t\) _for all_ \(t>t_{1}\)_;_
* _there is_ \(b\in\Omega\) _with_ \(\|b-a\|<\varepsilon\) _for which_ \(y(t)=bt+o(t)\)_._
## 3 Existence of minimal half hyperbolic motions
This section is devoted to the proof of Theorem 1.6. The class of hyperbolic motions has the following equivalent definition, also due to Chazy (see [9]).
**Definition 3.1**.: Hyperbolic motions are those motions such that each body has a different limit velocity vector, that is, \(\dot{r}_{i}(t)\to a_{i}\in\mathbb{R}^{d}\), as \(t\to+\infty\), and \(a_{i}\neq a_{j}\) whenever \(i\neq j\).
We consider the differential system
\[\begin{cases}\mathcal{M}\ddot{x}=\nabla U(x)\\ x(1)=x^{0}\\ \lim_{t\to+\infty}\dot{x}(t)=a\end{cases}, \tag{3.1}\]
where \(x^{0}\in\mathcal{X}\) and \(a\in\Omega\).
To prove the existence of hyperbolic motions to Newton's equations (3.1), we will look for solutions having the form \(x(t)=\varphi(t)+at+x^{0}-a\), where \(\varphi:[1,+\infty)\to\mathcal{X}\) belongs to the space \(\mathcal{D}_{0}^{1,2}(1,+\infty)\). We can thus equivalently study the system
\[\begin{cases}\mathcal{M}\ddot{\varphi}=\nabla U(\varphi+x^{0}-a+at)\\ \varphi(1)=0\\ \lim_{t\to+\infty}\dot{\varphi}(t)=0\end{cases}. \tag{3.2}\]
Taking advantage of the problem's variational structure, we would be tempted to prove the existence of hyperbolic motions through the minimization of the Lagrangian action associated to the system (3.2), that is, the functional
\[\int_{1}^{+\infty}\frac{1}{2}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}+U(\varphi (t)+x^{0}-a+at)\ \mathrm{d}t, \tag{3.3}\]
where
\[U(\varphi(t)+x^{0}-a+at)=\sum_{i<j}\frac{m_{i}m_{j}}{|(\varphi_{i}(t)+x_{i}^{0 }-a_{i}+a_{i}t)-(\varphi_{j}(t)+x_{j}^{0}-a_{j}+a_{j}t)|}.\]
In attempting to work with the action functional as above, the major problem we encounter is that \(U(\varphi(t)+x^{0}-a+at)\) need not be integrable at infinity. Indeed, when \(\varphi\in C_{0}^{\infty}([1,+\infty))\), \(U(\varphi(t)+x^{0}-a+at)\) decays as \(\frac{1}{t}\) for \(t\to+\infty\). To overcome this problem, since we can add arbitrary functions of \(t\) to the Lagrangian without changing the associated Euler-Lagrange equations, we renormalize the action functional in order to have a finite integral in the following way:
\[\mathcal{A}(\varphi)=\mathcal{A}^{ren}(\varphi)=\int_{1}^{+\infty}\frac{1}{2} \|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}+U(\varphi(t)+x^{0}-a+at)-U(at)\ \mathrm{d}t.\]
### Coercivity
In order to apply the Direct Method of the Calculus of Variations, we start by proving the coercivity of the functional, that is to say, that \(\mathcal{A}(\varphi)\to+\infty\) as \(\|\varphi\|_{\mathcal{D}}\to+\infty\). From now on, we will use the notations \(\varphi_{ij}=\varphi_{i}-\varphi_{j}\), \(x_{ij}^{0}=x_{i}^{0}-x_{j}^{0}\) and \(a_{ij}=a_{i}-a_{j}\). We observe that the action can be equivalently written as
\[\mathcal{A}(\varphi)=\int_{1}^{+\infty}\frac{1}{2}\sum_{i=1}^{N}m_{i}|\dot{ \varphi}_{i}(t)|^{2}+U(\varphi(t)+x^{0}-a+at)-U(at)\ \mathrm{d}t,\]
where
\[U(\varphi(t)+x^{0}-a+at)-U(at) =\sum_{i<j}\bigg{(}\frac{m_{i}m_{j}}{|(\varphi_{i}(t)+x_{i}^{0}- a_{i}+a_{i}t)-(\varphi_{j}(t)+x_{j}^{0}-a_{j}+a_{j}t)|}-\frac{m_{i}m_{j}}{|a_{i}- a_{j}|t}\bigg{)}\] \[=\sum_{i<j}\bigg{(}\frac{m_{i}m_{j}}{|\varphi_{ij}(t)+x_{ij}^{0} -a_{ij}+a_{ij}t|}-\frac{m_{i}m_{j}}{|a_{ij}|t}\bigg{)}.\]
Since we are working in the space of configurations whose center of mass is null at every time, it can easily be proved that
\[\sum_{i=1}^{N}m_{i}|\dot{\varphi}_{i}(t)|^{2}=\frac{1}{M}\sum_{i<j}m_{i}m_{j}| \dot{\varphi}_{i}(t)-\dot{\varphi}_{j}(t)|^{2}, \tag{3.4}\]
where \(M=\sum_{i=1}^{N}m_{i}\). Indeed, we have
\[\sum_{i<j}m_{i}m_{j}|\dot{\varphi}_{i}(t)-\dot{\varphi}_{j}(t)|^{2} =\frac{1}{2}\sum_{i,j}m_{i}m_{j}(|\dot{\varphi}_{i}(t)|^{2}+|\dot{ \varphi}_{j}(t)|^{2}-2\langle\dot{\varphi}_{i}(t),\dot{\varphi}_{j}(t)\rangle)\] \[=\frac{1}{2}\bigg{(}M\sum_{i=1}^{N}m_{i}|\dot{\varphi}_{i}(t)|^{2 }+M\sum_{j=1}^{N}m_{j}|\dot{\varphi}_{j}(t)|^{2}-2\langle\sum_{i=1}^{N}m_{i} \dot{\varphi}_{i}(t),\sum_{j=1}^{N}m_{j}\dot{\varphi}_{j}(t)\rangle\bigg{)}\] \[=\frac{1}{2}\bigg{(}M\sum_{i=1}^{N}m_{i}|\dot{\varphi}_{i}(t)|^{2 }+M\sum_{j=1}^{N}m_{j}|\dot{\varphi}_{j}(t)|^{2}\bigg{)}\] \[=M\sum_{i=1}^{N}m_{i}|\dot{\varphi}_{i}(t)|^{2}.\]
Using (3.4), we can then write the Lagrangian action as
\[\mathcal{A}(\varphi)=\int_{1}^{+\infty}\sum_{i<j}m_{i}m_{j}\bigg{(}\frac{| \dot{\varphi}_{ij}(t)|^{2}}{2M}+\frac{1}{|\varphi_{ij}(t)+x_{ij}^{0}-a_{ij}+a_ {ij}t|}-\frac{1}{|a_{ij}|t}\bigg{)}\;\mathrm{d}t.\]
Since \(\|\dot{\varphi}\|_{L^{2}}\to+\infty\) if and only if there is \(i<j\) such that \(\|\dot{\varphi}_{i}-\dot{\varphi}_{j}\|_{L^{2}}\to+\infty\), we can prove the coercivity of the action by proving the coercivity of each term \(\mathcal{A}_{ij}\), where
\[\mathcal{A}(\varphi)=\sum_{i<j}\mathcal{A}_{ij}(\varphi)\]
and
\[\mathcal{A}_{ij}(\varphi)=\int_{1}^{+\infty}m_{i}m_{j}\bigg{(}\frac{|\dot{ \varphi}_{ij}(t)|^{2}}{2M}+\frac{1}{|\varphi_{ij}(t)+x_{ij}^{0}-a_{ij}+a_{ij}t |}-\frac{1}{|a_{ij}|t}\bigg{)}\;\mathrm{d}t.\]
Using the inequality
\[|\varphi_{i}(t)|\leq\|\varphi_{i}\|_{\mathcal{D}}\sqrt{t},\qquad\text{for every $i=1,...,N,\ t\geq 1$ and $\varphi_{i}\in\mathcal{D}_{0}^{1,2}$}, \tag{3.5}\]
which follows from (2.3), we have
\[U(\varphi(t)+x^{0}-a+at)-U(at)\geq\sum_{i<j}\bigg{(}\frac{m_{i}m_{j}}{\| \varphi_{ij}\|_{\mathcal{D}}\sqrt{t}+|x_{ij}^{0}-a_{ij}|+|a_{ij}|t}-\frac{m_{ i}m_{j}}{|a_{ij}|t}\bigg{)};\]
We can then look for an upper bound for the integral
\[\int_{1}^{+\infty}\bigg{(}\frac{1}{|a_{ij}|t}-\frac{1}{|a_{ij}|t+\|\varphi_{ij}\|_{\mathcal{D}}\sqrt{t}+|x_{ij}^{0}-a_{ij}|}\bigg{)}\ \mathrm{d}t.\]
Using the change of variables \(t=s^{2}\), we obtain
\[\frac{2}{|a_{ij}|}\int_{1}^{+\infty}\bigg{(}\frac{1}{s^{2}}-\frac{1}{s^{2}+\frac{\|\varphi_{ij}\|_{\mathcal{D}}}{|a_{ij}|}s+\frac{|x_{ij}^{0}-a_{ij}|}{|a_{ij}|}}\bigg{)}s\ \mathrm{d}s. \tag{3.6}\]
Since
\[s^{2}+\frac{\|\varphi_{ij}\|_{\mathcal{D}}}{|a_{ij}|}s+\frac{|x_{ ij}^{0}-a_{ij}|}{|a_{ij}|} =\left(s+\frac{\|\varphi_{ij}\|_{\mathcal{D}}}{2|a_{ij}|}\right)^{2}-\frac{\| \varphi_{ij}\|_{\mathcal{D}}^{2}}{4|a_{ij}|^{2}}+\frac{|x_{ij}^{0}-a_{ij}|}{|a _{ij}|}\] \[=\frac{\|\varphi_{ij}\|_{\mathcal{D}}^{2}}{4|a_{ij}|^{2}}\bigg{[} \bigg{(}\frac{2|a_{ij}|s}{\|\varphi_{ij}\|_{\mathcal{D}}}+1\bigg{)}^{2}-1+ \frac{4|x_{ij}^{0}-a_{ij}||a_{ij}|}{\|\varphi_{ij}\|_{\mathcal{D}}^{2}}\bigg{]},\]
(3.6) is equal to
\[\frac{2}{|a_{ij}|}\frac{4|a_{ij}|^{2}}{\|\varphi_{ij}\|_{\mathcal{D}}^{2}}\int_ {1}^{+\infty}\bigg{[}\frac{1}{\bigg{(}\frac{2|a_{ij}|s}{\|\varphi_{ij}\|_{ \mathcal{D}}}\bigg{)}^{2}}-\frac{1}{\bigg{(}\frac{2|a_{ij}|s}{\|\varphi_{ij} \|_{\mathcal{D}}}+1\bigg{)}^{2}-1+\frac{4|x_{ij}^{0}-a_{ij}||a_{ij}|}{\| \varphi_{ij}\|_{\mathcal{D}}^{2}}}\bigg{]}s\ \mathrm{d}s. \tag{3.7}\]
Changing variables again with \(\tau=\frac{2|a_{ij}|s}{\|\varphi_{ij}\|_{\mathcal{D}}}\), we obtain that (3.7) is equal to
\[\frac{2}{|a_{ij}|}\int_{\frac{2|a_{ij}|}{\|\varphi_{ij}\|_{\mathcal{D}}}}^{+ \infty}\bigg{[}\frac{1}{\tau^{2}}-\frac{1}{(\tau+1)^{2}-1+\frac{4|x_{ij}^{0}-a _{ij}||a_{ij}|}{\|\varphi_{ij}\|_{\mathcal{D}}^{2}}}\bigg{]}\tau\ \mathrm{d}\tau.\]
Since we are interested in large values of \(\|\varphi_{ij}\|_{\mathcal{D}}\), we can suppose that there is some \(\lambda<1\) such that \(\frac{4|x_{ij}^{0}-a_{ij}||a_{ij}|}{\|\varphi_{ij}\|_{\mathcal{D}}^{2}}\leq\lambda\). We then have
\[\frac{2}{|a_{ij}|}\int_{\frac{2|a_{ij}|}{\|\varphi_{ij}\|_{\mathcal{D}}}}^{+ \infty}\bigg{[}\frac{1}{\tau^{2}}-\frac{1}{(\tau+1)^{2}-1+\frac{4|x_{ij}^{0}- a_{ij}||a_{ij}|}{\|\varphi_{ij}\|_{\mathcal{D}}^{2}}}\bigg{]}\tau\ \mathrm{d}\tau\leq\frac{2}{|a_{ij}|}\int_{\frac{2|a_{ij}|}{\|\varphi_{ij}\|_{ \mathcal{D}}}}^{+\infty}\bigg{[}\frac{1}{\tau^{2}}-\frac{1}{(\tau+1)^{2}-1+ \lambda}\bigg{]}\tau\ \mathrm{d}\tau. \tag{3.8}\]
The integrand of the last integral is a positive function. We observe that it is asymptotic to \(\frac{1}{\tau}\) as \(\tau\to 0\) and to \(\frac{1}{\tau^{2}}\) as \(\tau\to+\infty\). In particular, the integral exists at infinity, uniformly in \(\lambda\). Taking \(\|\varphi_{ij}\|_{\mathcal{D}}\) large enough, we can equivalently study the integral
\[\int_{\varepsilon}^{+\infty}\bigg{[}\frac{1}{\tau^{2}}-\frac{1}{(\tau+1)^{2}- 1+\lambda}\bigg{]}\tau\ \mathrm{d}\tau,\]
where \(\varepsilon=\frac{2|a_{ij}|}{\|\varphi_{ij}\|_{\mathcal{D}}}<1\). Since the integrand is asymptotic to \(\frac{1}{\tau}\) as \(\tau\to 0\), it is equivalent to consider the sum of integrals
\[\int_{\varepsilon}^{1}\frac{1}{\tau}\ \mathrm{d}\tau+\int_{1}^{+\infty}\bigg{[} \frac{1}{\tau^{2}}-\frac{1}{(\tau+1)^{2}-1+\lambda}\bigg{]}\tau\ \mathrm{d}\tau,\]
where the second integral is constant (we will call it \(C_{1}\)) and does not depend on \(\varepsilon\). We have
\[\int_{\varepsilon}^{1}\frac{1}{\tau}\ \mathrm{d}\tau+\int_{1}^{+\infty}\bigg{[} \frac{1}{\tau^{2}}-\frac{1}{(\tau+1)^{2}-1+\lambda}\bigg{]}\tau\ \mathrm{d}\tau=\log\tau\bigg{|}_{ \varepsilon}^{1}+C_{1}=-\log\varepsilon+C_{1}.\]
Then, as \(\|\varphi_{ij}\|_{\mathcal{D}}\to+\infty\), we know that the integral on the right-hand side of (3.8) behaves like
\[\frac{2}{|a_{ij}|}\bigg{(}-\log\frac{2|a_{ij}|}{\|\varphi_{ij}\|_{\mathcal{D}}} +C_{1}\bigg{)}=\frac{2}{|a_{ij}|}\bigg{(}\log\|\varphi_{ij}\|_{\mathcal{D}}+C_ {1}-\log 2|a_{ij}|\bigg{)}=\frac{2}{|a_{ij}|}(\log\|\varphi_{ij}\|_{\mathcal{D}} +C_{2}),\]
where \(C_{2}=C_{1}-\log 2|a_{ij}|\).
We have thus proved that
\[\int_{1}^{+\infty}\bigg{(}\frac{1}{|a_{ij}|t}-\frac{1}{|a_{ij}|t+\|\varphi_{ ij}\|_{\mathcal{D}}\sqrt{t}+|x_{ij}^{0}-a_{ij}|}\bigg{)}\ \mathrm{d}t\leq\frac{2}{|a_{ij}|}(\log\|\varphi_{ij}\|_{\mathcal{D}}+C_{2}).\]
This means that given \(R>0\), when \(\|\varphi_{ij}\|_{\mathcal{D}}\geq R\) for \(R\) large enough, we have
\[\mathcal{A}_{ij}(\varphi)\geq m_{i}m_{j}\bigg{[}\frac{\|\varphi_{ij}\|_{ \mathcal{D}}^{2}}{2M}-\frac{2}{|a_{ij}|}(\log\|\varphi_{ij}\|_{\mathcal{D}}+C_{ 2})\bigg{]}\]
and we can conclude that \(\mathcal{A}_{ij}(\varphi)\to+\infty\) as \(\|\varphi_{ij}\|_{\mathcal{D}}\to+\infty\).
### Weak lower semicontinuity
Now, we prove that the functional \(\mathcal{A}\) is weakly lower semicontinuous. Since the kinetic term \(\frac{1}{2}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}\) is convex, it is straightforward that the term \(\int_{1}^{+\infty}\frac{1}{2}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}\ \mathrm{d}t\) is weakly lower semicontinuous. However, it is worth noticing that Fatou's Lemma cannot be applied to the term \(\int_{1}^{+\infty}U(\varphi(t)+x^{0}-a+at)-U(at)\ \mathrm{d}t\), since the integrand is not a positive function, and we must proceed in a different way. We first observe that every bounded sequence in \(\mathcal{D}_{0}^{1,2}(1,+\infty)\) admits a subsequence converging uniformly on the compact subsets of \([1,+\infty)\). To show this, consider a bounded sequence \((\varphi^{n})_{n}\) in \(\mathcal{D}_{0}^{1,2}(1,+\infty)\). We also know, by the definition of this space, that \(\|\dot{\varphi}^{n}\|_{L^{2}([1,+\infty))}<+\infty\) and that \(\varphi^{n}(1)=0\), for every \(n\). From the inequality
\[\|\varphi(t)\|_{\mathcal{M}}\leq\|\dot{\varphi}\|_{L^{2}}\sqrt{t-1}\leq\|\dot{ \varphi}\|_{L^{2}}\sqrt{t}\qquad\text{ for every }t\geq 1, \tag{3.9}\]
we have \(\|\varphi^{n}(t)\|_{\mathcal{M}}\leq\|\dot{\varphi}^{n}\|_{L^{2}}\sqrt{t}\) for every \(t\geq 1\) and for every \(n\), which means that the \(L^{\infty}\)-norm in \([1,T]\) of \(\varphi^{n}\) is bounded, for every fixed \(T\geq 1\) and for every \(n\). On the other hand, we have
\[\|\varphi^{n}(t_{1})-\varphi^{n}(t_{2})\|_{\mathcal{M}}\leq\|\dot{\varphi}^{n}\|_{L^{2}}\sqrt{|t_{1}-t_{2}|},\]
for every \(t_{1},t_{2}\in[1,+\infty)\) and for every \(n\), which implies that the sequence \((\varphi^{n})_{n}\) is equicontinuous on each interval \([1,T]\), for \(T\) fixed. Then, by the Ascoli-Arzelà Theorem, we can say that for every fixed \(T\geq 1\) there is a subsequence \((\varphi^{n_{k}})_{k}\) that converges uniformly on \([1,T]\) (and, consequently, it converges pointwise on each compact). Besides, it can also be proved, through a diagonal procedure, that there is a subsequence converging pointwise in \([1,+\infty)\).
Consider now a sequence \((\varphi^{n})_{n}\) in \(\mathcal{D}_{0}^{1,2}(1,+\infty)\) converging weakly to some limit \(\varphi\in\mathcal{D}_{0}^{1,2}(1,+\infty)\). By the properties of weak convergence we know that the sequence is bounded on \(\mathcal{D}_{0}^{1,2}(1,+\infty)\) and, from the previous considerations, there is a subsequence \((\varphi^{n_{k}})_{k}\) converging uniformly on compact subsets of \([1,+\infty)\) (and hence pointwise in \([1,+\infty)\)). We write
\[\frac{1}{|x_{ij}^{0}-a_{ij}+a_{ij}t+\varphi_{ij}^{n}(t)|}-\frac{1}{|a_{ij}t|} =\int_{0}^{1}\frac{\mathrm{d}}{\mathrm{d}s}\bigg{[}\frac{1}{|a_{ij}t+s(x_{ij} ^{0}-a_{ij}+\varphi_{ij}^{n}(t))|}\bigg{]}\ \mathrm{d}s. \tag{3.10}\]
However, this identity holds only when the denominator of the integrand does not vanish, which is guaranteed only for \(t\) sufficiently large. In particular, for all \(s\in(0,1)\) we have
\[|a_{ij}t+s(x_{ij}^{0}-a_{ij}+\varphi_{ij}^{n}(t))| \geq|a_{ij}|t-s(|x_{ij}^{0}-a_{ij}|+\|\varphi_{ij}^{n}\|_{ \mathcal{D}}\sqrt{t})\] \[>|a_{ij}|t-(|x_{ij}^{0}-a_{ij}|+\|\varphi_{ij}^{n}\|_{\mathcal{D} }\sqrt{t}),\]
and, since \(|\varphi_{ij}^{n}(t)|\leq k\sqrt{t}\) for \(k\in\mathbb{R}^{+}\) large enough, we have
\[|a_{ij}t+s(x_{ij}^{0}-a_{ij}+\varphi_{ij}^{n}(t))|>|a_{ij}|t-(|x_{ij}^{0}-a_ {ij}|+k\sqrt{t}),\]
where the last term is larger than zero if \(t\) is larger than some \(\bar{T}=\bar{T}(k)\); it is easy to compute \(\bar{T}\) by studying the function \(g(t)=|a_{ij}|t-(|x_{ij}^{0}-a_{ij}|+k\sqrt{t})\). For these reasons, it is better to study the potential term separately on the two intervals \([1,\bar{T}]\) and \([\bar{T},+\infty)\).
We observe that \(U(x^{0}-a+at+\varphi)\in L^{1}([1,\bar{T}])\), since
\[\frac{1}{|x_{ij}^{0}-a_{ij}+a_{ij}t+\varphi_{ij}^{n}(t)|}\leq\frac{1}{|x_{ij}^ {0}-a_{ij}|-|a_{ij}|t-\|\varphi_{ij}^{n}\|_{\mathcal{D}}\sqrt{t}}.\]
Besides, since \(U\) is a positive function, we can use the pointwise convergence of the sequence and Fatou's
Lemma to state that
\[\int_{1}^{\bar{T}}\frac{1}{|x_{ij}^{0}-a_{ij}+a_{ij}t+\varphi_{ij}(t)|}\ \mathrm{d}t \leq\liminf_{n\to+\infty}\int_{1}^{\bar{T}}\frac{1}{|x_{ij}^{0}-a_{ij}+a_{ij}t +\varphi_{ij}^{n}(t)|}\ \mathrm{d}t.\]
Now, knowing that the sequence \((\varphi^{n})_{n}\) is bounded, we wish to prove that the term \(U(\varphi^{n}(t)+x^{0}-a+at)-U(at)\) converges in \(L^{1}([\bar{T},+\infty))\). By using (3.10), we can write
\[\int_{\bar{T}}^{+\infty}\frac{1}{|x_{ij}^{0}-a_{ij}+a_{ij}t+\varphi_{ij}^{n}(t)|}-\frac{1}{|a_{ij}t|}\ \mathrm{d}t=\int_{\bar{T}}^{+\infty}\bigg{(}\int_{0}^{1}-\frac{[a_{ij}t+s(x_{ij}^{0}-a_{ij}+\varphi_{ij}^{n}(t))]\cdot(x_{ij}^{0}-a_{ij}+\varphi_{ij}^{n}(t))}{|a_{ij}t+s(x_{ij}^{0}-a_{ij}+\varphi_{ij}^{n}(t))|^{3}}\ \mathrm{d}s\bigg{)}\ \mathrm{d}t.\]
Our goal is to find an upper bound for the term
\[\int_{\bar{T}}^{+\infty}\bigg{|}\frac{1}{|x_{ij}^{0}-a_{ij}+a_{ij}t+\varphi_{ij}^{n}(t)|}-\frac{1}{|a_{ij}t|}\bigg{|}\ \mathrm{d}t.\]
To find the upper bound, we will need the inequality
\[\frac{|b+c|^{2}}{|b|^{2}-|c|^{2}}\geq\frac{1}{3},\qquad\text{for each $b,c\in\mathbb{R}^{d}$ such that $|b|\geq 2|c|$}, \tag{3.11}\]
which can easily be proved by elementary calculus. By (3.11) and using the fact that \(|x_{ij}^{0}-a_{ij}|+\|\varphi_{ij}^{n}\|_{\mathcal{D}}\sqrt{t}\leq k^{\prime} \sqrt{t}\) for \(k^{\prime}\in\mathbb{R}^{+}\) large enough, we thus have
\[\int_{\bar{T}}^{+\infty}\bigg{|}\int_{0}^{1}-\frac{[a_{ij}t+s(x_{ij}^{0}-a_{ij}+\varphi_{ij}^{n}(t))]\cdot(x_{ij}^{0}-a_{ij}+\varphi_{ij}^{n}(t))}{|a_{ij}t+s(x_{ij}^{0}-a_{ij}+\varphi_{ij}^{n}(t))|^{3}}\ \mathrm{d}s\bigg{|}\ \mathrm{d}t\leq\int_{\bar{T}}^{+\infty}\bigg{(}\int_{0}^{1}\frac{|x_{ij}^{0}-a_{ij}+\varphi_{ij}^{n}(t)|}{|a_{ij}t+s(x_{ij}^{0}-a_{ij}+\varphi_{ij}^{n}(t))|^{2}}\ \mathrm{d}s\bigg{)}\ \mathrm{d}t\leq\int_{\bar{T}}^{+\infty}\bigg{(}\int_{0}^{1}\frac{3k^{\prime}\sqrt{t}}{|a_{ij}|^{2}t^{2}-sk^{\prime}t}\ \mathrm{d}s\bigg{)}\ \mathrm{d}t.\]
By choosing \(\bar{T}(k)\gg k^{\prime}/|a_{ij}|^{2}\), so that \(|a_{ij}|^{2}t>sk^{\prime}\) for all \(s\in(0,1)\) and for all \(t\in[\bar{T},+\infty)\) (take \(k\) large enough), we have that the last integral is finite. We have thus proved that there is a \(\hat{T}\) such that, for all \(\bar{T}\geq\hat{T}\),
\[\int_{\bar{T}}^{+\infty}\bigg{|}\frac{1}{|x_{ij}^{0}-a_{ij}+a_{ij}t+\varphi_{ij}^{n}(t)|}-\frac{1}{|a_{ij}t|}\bigg{|}\ \mathrm{d}t<+\infty.\]
From this result, the \(L^{1}\) convergence of the term \(U(\varphi^{n}(t)+x^{0}-a+at)-U(at)\) follows: by the dominated convergence Theorem we have, in particular,
\[\lim_{n\to+\infty}\int_{\bar{T}}^{+\infty}U(\varphi^{n}(t)+x^{0}-a+at)-U(at) \ \mathrm{d}t=\int_{\bar{T}}^{+\infty}U(\varphi(t)+x^{0}-a+at)-U(at)\ \mathrm{d}t.\]
Thus, if we consider any sequence \((\varphi^{n})_{n}\) in \(\mathcal{D}_{0}^{1,2}(1,+\infty)\) converging weakly to some \(\varphi\in\mathcal{D}_{0}^{1,2}(1,+\infty)\), we have
\[\mathcal{A}(\varphi)\leq\liminf_{n\to+\infty}\int_{1}^{+\infty}\frac{1}{2}\| \dot{\varphi}^{n}(t)\|_{\mathcal{M}}^{2}+U(\varphi^{n}(t)+x^{0}-a+at)-U(at)\ \mathrm{d}t,\]
which proves the weak lower semicontinuity of the renormalized Lagrangian action in the space \(\mathcal{D}_{0}^{1,2}(1,+\infty)\).
**Remark 3.2**.: The same reasoning leads to the continuity of the renormalized action with respect to the strong topology, at every \(\varphi\) that does not give rise to collisions.
### Absence of collisions and hyperbolicity of the motion
Now we can apply the Direct Method of the Calculus of Variations, obtaining a minimizer \(\varphi\) on \(\mathcal{D}^{1,2}_{0}(1,+\infty)\) of the renormalized action \(\mathcal{A}\) and, by Marchal's Principle applied to \(x(t)=\varphi(t)+x^{0}-a+at\), we have that \(x(t)\in\Omega\) for all \(t\in(1,+\infty)\). Indeed, in each finite time interval, the full path \(x\) minimizes the Lagrangian action among all paths joining the two ends. Being free of collisions, it solves the associated Euler-Lagrange equations.
It remains to prove that \(\lim_{t\to+\infty}\dot{\varphi}(t)=0\). We already know that \(\dot{\varphi}\in L^{2}\) and that there is some \(k\in\mathbb{R}^{+}\) such that \(\|\varphi(t)\|_{\mathcal{M}}\leq k\sqrt{t}\). By this last inequality, we have that
\[\sum_{i<j}m_{i}m_{j}\frac{1}{|a_{ij}t+x_{ij}^{0}-a_{ij}+\varphi_{ij}(t)|}\leq \sum_{i<j}m_{i}m_{j}\frac{1}{|a_{ij}|t-|x_{ij}^{0}-a_{ij}|-k\sqrt{t}}\]
and since \(|a_{ij}|t-|x_{ij}^{0}-a_{ij}|-k\sqrt{t}\to+\infty\) as \(t\to+\infty\) for all \(i,j=1,...,N\), we obtain that \(\lim_{t\to+\infty}U(x(t))=0\). Besides, since \(\int_{1}^{+\infty}|\dot{\varphi}_{ij}(t)|^{2}\ \mathrm{d}t<+\infty\), we have that
\[\liminf_{t\to+\infty}|\dot{\varphi}_{ij}(t)|=0. \tag{3.12}\]
**Remark 3.3**.: A solution \(x(t)=\varphi(t)+at+x^{0}-a\) of the equation \(\mathcal{M}\ddot{x}=\nabla U(x)\) has positive energy. Indeed,
\[\frac{1}{2}\|\dot{x}(t)\|_{\mathcal{M}}^{2}-U(x(t))=\frac{1}{2}\sum_{i=1}^{N} m_{i}|\dot{\varphi}_{i}(t)+a_{i}|^{2}-U(x(t))=h,\]
and since by (3.12) there is a sequence \(t_{k}\to+\infty\) such that \(\lim_{k\to+\infty}\dot{\varphi}_{i}(t_{k})=0\), while \(U(x(t_{k}))\to 0\), we have \(h=\frac{1}{2}\|a\|_{\mathcal{M}}^{2}\).
By Remark 3.3, we can apply Chazy's Lemma (Lemma 2.7), which implies that the limit of \(\dot{x}(t)\) exists for \(t\to+\infty\). Since, by (3.12), there is at least a sequence \((t_{k})_{k}\) such that \(\dot{x}(t_{k})\to a\) as \(t_{k}\to+\infty\), we can conclude that
\[\lim_{t\to+\infty}\dot{x}(t)=a.\]
Besides, we can apply Chazy's Theorem (Theorem 2.6) to state that the minimizing motion \(x\) has the asymptotic expansion
\[x(t)=at-\log(t)\nabla U(a)+o(1)\quad\text{as }t\to+\infty.\]
We have thus proved that \(x\) is a solution of the system
\[\begin{cases}\mathcal{M}\ddot{x}=\nabla U(x)\\ x(1)=x^{0}\\ \lim_{t\to+\infty}\dot{x}(t)=a\end{cases},\]
which means that there is a hyperbolic motion for the \(N\)-body problem, starting at any initial configuration \(x^{0}\) and having prescribed asymptotic velocity \(a\) without collisions.
## 4 Existence of minimal half completely parabolic motions
We now focus on the class of completely parabolic motions, that is, those motions that have the form \(x(t)=at+O(t^{2/3})\) for \(t\to+\infty\), with \(a=0\) and \(|r_{i}(t)-r_{j}(t)|\approx t^{2/3}\) for \(i<j\). Equivalently, we have the following definition.
**Definition 4.1**.: An expansive solution \(x\) of the \(N\)-body problem is said to be parabolic if every body approaches infinity with zero velocity.
In this section we will prove Theorem 1.7. More specifically, we will prove, for the \(N\)-body problem, the
existence of orbits having the form
\[x(t)=\beta bt^{2/3}+o(t^{1/3^{+}}),\quad\text{as }t\to+\infty,\]
where \(\beta\in\mathbb{R}\) is a suitable constant and \(b\) is a minimal central configuration. The remainder is \(o(t^{1/3^{+}})\) in the sense that it grows more slowly than \(t^{\gamma}\) for every \(\gamma>1/3\).
**Definition 4.2**.: We say that \(b\in\mathcal{X}\) is a central configuration if it is a critical point of \(U\) when restricted to the inertial ellipsoid
\[\mathcal{E}=\{x\in\mathcal{X}\ :\ \langle\mathcal{M}x,x\rangle=1\}.\]
A central configuration \(b_{m}\in\mathcal{E}\) is said to be minimal if
\[U(b_{m})=\min_{b\in\mathcal{E}}U(b).\]
More precisely, we will work with normalized central configurations, that is, central configurations \(b\) such that \(\langle\mathcal{M}b,b\rangle=1\).
**Remark 4.3**.: Obviously, as \(U\) is infinite on collisions, minimal central configurations \(b_{m}\) are collision-free, i.e. \(b_{m}\in\Omega\).
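As an illustration of Definition 4.2, the following small numerical sketch (Python; three unit masses in the plane, an arbitrary choice made only for this example) minimizes \(U\) over the inertial ellipsoid \(\langle\mathcal{M}x,x\rangle=1\). For equal masses the minimal central configuration is Lagrange's equilateral triangle, so all mutual distances come out equal.

```python
# Sketch: find a minimal central configuration of 3 unit masses in the plane
# by minimizing U on the inertial ellipsoid <Mx, x> = 1 (illustration only).
import numpy as np
from scipy.optimize import minimize

m = np.ones(3)

def U(x):
    r = x.reshape(3, 2)
    return sum(m[i] * m[j] / np.linalg.norm(r[i] - r[j])
               for i in range(3) for j in range(i + 1, 3))

cons = {"type": "eq",
        "fun": lambda x: np.dot(m.repeat(2) * x, x) - 1.0}   # <Mx, x> = 1
rng = np.random.default_rng(0)
best = min((minimize(U, rng.standard_normal(6), constraints=[cons])
            for _ in range(20)), key=lambda res: res.fun)
r = best.x.reshape(3, 2)
d = [np.linalg.norm(r[i] - r[j]) for i in range(3) for j in range(i + 1, 3)]
print("mutual distances:", np.round(d, 6))   # all equal: equilateral triangle
```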
Given the Keplerian potential \(U\), we observe that from the definition of central configurations it follows
\[\nabla U(b)=\lambda\mathcal{M}b,\]
where \(\lambda\) is a Lagrange multiplier. Besides, we have the equality
\[\lambda=\lambda\langle\mathcal{M}b,b\rangle=\langle\nabla U(b),b\rangle=-U(b). \tag{4.1}\]
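The last equality is just Euler's theorem for homogeneous functions; for the reader's convenience, here is the one-line verification, obtained by differentiating the homogeneity relation \(U(\lambda b)=\lambda^{-1}U(b)\) at \(\lambda=1\):

\[\langle\nabla U(b),b\rangle=\frac{\mathrm{d}}{\mathrm{d}\lambda}\bigg{|}_{\lambda=1}U(\lambda b)=\frac{\mathrm{d}}{\mathrm{d}\lambda}\bigg{|}_{\lambda=1}\lambda^{-1}U(b)=-U(b).\]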
We first recall that there are self-similar solutions of Newton's equations \(\mathcal{M}\ddot{x}=\nabla U(x)\) having the form
\[x(t)=\beta bt^{2/3},\]
for a proper constant \(\beta\) and a central configuration \(b\). Indeed
\[\mathcal{M}\ddot{x}=-\frac{2}{9}\mathcal{M}\beta bt^{-4/3}=\nabla U(x)=\nabla U(\beta bt^{2/3})=\frac{1}{\beta^{2}}t^{-4/3}\nabla U(b)=\frac{1}{\beta^{2}}t^{-4/3}\lambda\mathcal{M}b\]
and, by (4.1), we also have
\[\beta^{3}=\frac{9}{2}U(b).\]
This means that for \(\beta=\sqrt[3]{\frac{9}{2}U(b)}\), the curve \(x(t)=\beta bt^{2/3}\) is a homothetic solution of Newton's equations.
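As a sanity check on the value of \(\beta\), the following sketch (two unit masses on a line, an illustrative special case) verifies numerically that \(x(t)=\beta bt^{2/3}\) with \(\beta^{3}=\frac{9}{2}U(b)\) satisfies \(\mathcal{M}\ddot{x}=\nabla U(x)\): the residual between \(\ddot{x}(t)\) and \(\nabla U(x(t))\) vanishes to machine precision.

```python
# Sketch: x(t) = beta*b*t^(2/3) solves M x'' = grad U(x) for the two-body
# problem with unit masses on a line (illustrative assumptions).
import numpy as np

b = np.array([-1.0, 1.0]) / np.sqrt(2.0)     # normalized: <Mb, b> = 1
U_b = 1.0 / abs(b[1] - b[0])                 # U(b) = m1*m2 / |b1 - b2|
beta = (4.5 * U_b) ** (1.0 / 3.0)            # beta^3 = (9/2) U(b)

def grad_U(x):
    d = x[0] - x[1]
    return np.array([-d, d]) / abs(d) ** 3   # gradient of U(x) = 1/|x1 - x2|

for t in [1.0, 10.0, 100.0]:
    acc = -(2.0 / 9.0) * beta * b * t ** (-4.0 / 3.0)         # exact x''(t)
    res = acc - grad_U(beta * b * t ** (2.0 / 3.0))
    print(t, np.max(np.abs(res)))            # ~ 1e-16 for every t
```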
Now, let us define
\[r_{0}(t)=\beta b_{m}t^{2/3},\]
where \(b_{m}\in\Omega\) is a normalized minimal central configuration. We wish to prove the existence of solutions of the system
\[\begin{cases}\mathcal{M}\ddot{x}=\nabla U(x)\\ x(1)=x^{0}\\ \lim_{t\to+\infty}\dot{x}(t)=0\end{cases},\]
given \(x^{0}\in\mathcal{X}\). We seek solutions having the form
\[x(t)=r_{0}(t)+\varphi(t)+x^{0}-r_{0}(1)=r_{0}(t)+\varphi(t)+\tilde{x}^{0}, \tag{4.2}\]
where \(\varphi\in\mathcal{D}^{1,2}_{0}(1,+\infty)\). In this case, we have
\[\nabla U(x(t))=\mathcal{M}\ddot{x}(t)=\mathcal{M}\ddot{r}_{0}(t)+\mathcal{M}\ddot{\varphi}(t)=\nabla U(r_{0}(t))+\mathcal{M}\ddot{\varphi}(t),\]
which means that
\[\mathcal{M}\ddot{\varphi}(t)=\nabla U(r_{0}(t)+\varphi(t)+\tilde{x}^{0})-\nabla U(r_{0}(t)).\]
We can thus define the renormalized Lagrangian action as
\[\mathcal{A}(\varphi)=\int_{1}^{+\infty}\frac{1}{2}\langle\mathcal{M}\dot{ \varphi}(t),\dot{\varphi}(t)\rangle+U(r_{0}(t)+\varphi(t)+\tilde{x}^{0})-U(r_{ 0}(t))-\langle\nabla U(r_{0}(t)),\varphi(t)\rangle\ \mathrm{d}t. \tag{4.3}\]
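To see concretely why the two subtracted terms in (4.3) are needed, the following sketch (two unit masses on a line again, with arbitrary illustrative choices of \(\tilde{x}^{0}\) and \(\varphi\)) compares the raw potential \(U(x(t))\sim t^{-2/3}\), which is not integrable on \([1,+\infty)\), with the renormalized integrand, which decays like \(t^{-4/3}\) and is integrable.

```python
# Sketch: the renormalized integrand of (4.3) is integrable, U(x(t)) is not.
# Two unit masses on a line; x0~ and phi below are arbitrary illustrations.
import numpy as np

b = np.array([-1.0, 1.0]) / np.sqrt(2.0)              # normalized b_m
beta = (4.5 / np.sqrt(2.0)) ** (1.0 / 3.0)            # beta^3 = (9/2) U(b_m)

def U(x):
    return 1.0 / abs(x[0] - x[1])

def grad_U(x):
    d = x[0] - x[1]
    return np.array([-d, d]) / abs(d) ** 3

x0t = np.array([0.3, -0.1])                           # a sample x0~
phi = lambda t: np.array([1.0, -2.0]) * min(t - 1.0, 1.0)   # phi(1) = 0

for t in [1e2, 1e4, 1e6]:
    r0 = beta * b * t ** (2.0 / 3.0)
    ren = U(r0 + phi(t) + x0t) - U(r0) - grad_U(r0) @ phi(t)
    print(f"t = {t:8.0e}   U(x) = {U(r0 + phi(t) + x0t):.3e}   "
          f"renormalized = {ren:.3e}")                # t^(-2/3) vs t^(-4/3)
```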
Besides the coercivity and weak lower semicontinuity of the Lagrangian action, we have to verify that:
* \(\forall\ \varphi\in\mathcal{D}^{1,2}_{0}(1,+\infty)\) such that \(r_{0}(t)+\varphi(t)+\tilde{x}^{0}\neq 0\) for all \(t\geq 1\), \(\mathcal{A}(\varphi)<+\infty\);
* the action is continuous and \(C^{1}\) on \(\mathcal{D}^{1,2}_{0}\setminus\{\varphi\in\mathcal{D}^{1,2}_{0}\ :\ \exists\ t\text{ such that }r_{0}(t)+\varphi(t)+\tilde{x}^{0}=0\}\).
### Coercivity
To minimize the action on the set \(\mathcal{D}^{1,2}_{0}(1,+\infty)\), we start by proving its coercivity. We do this by reducing the problem to a Kepler problem, where we denote \(U_{min}=\min_{b\in\mathcal{E}}U(b)\). We notice that, for any configuration \(x\),
\[U(x)\geq\frac{U_{min}}{\|x\|},\]
where \(\|\cdot\|\) represents the Euclidean norm on \(\mathbb{R}^{dN}\). Indeed, because of the homogeneity of the potential,
\[U(x)=U\bigg{(}\|x\|\frac{x}{\|x\|}\bigg{)}=\frac{1}{\|x\|}U\bigg{(}\frac{x}{\| x\|}\bigg{)}\geq\frac{1}{\|x\|}U_{min}. \tag{4.4}\]
Besides,
\[\nabla U(r_{0})=\nabla U(\beta b_{m}t^{2/3})=\frac{1}{\beta^{2}t^{4/3}}\nabla U (b_{m})=\frac{1}{\beta^{2}t^{4/3}}\lambda\mathcal{M}b_{m}=-\frac{U_{min}}{ \beta^{2}t^{4/3}}\mathcal{M}b_{m}. \tag{4.5}\]
Using (4.4) and (4.5), we can then write
\[\mathcal{A}(\varphi) \geq\int_{1}^{+\infty}\frac{1}{2}\langle\mathcal{M}\dot{\varphi }(t),\dot{\varphi}(t)\rangle+\frac{U_{min}}{\|r_{0}(t)+\varphi(t)+\tilde{x}^{ 0}\|}-\frac{U_{min}}{\|r_{0}(t)\|}+\frac{1}{\beta^{2}t^{4/3}}\langle U_{min} \mathcal{M}b,\varphi(t)\rangle\ \mathrm{d}t\] \[=\int_{1}^{+\infty}\frac{1}{2}\langle\mathcal{M}\dot{\varphi}(t), \dot{\varphi}(t)\rangle+\frac{U_{min}}{\|r_{0}(t)+\varphi(t)+\tilde{x}^{0}\| }-\frac{U_{min}}{\|r_{0}(t)\|}+\frac{\langle U_{min}\mathcal{M}r_{0}(t), \varphi(t)\rangle}{\|r_{0}(t)\|^{3}}\ \mathrm{d}t.\]
We have
\[\|r_{0}(t)+\varphi(t)+\tilde{x}^{0}\|^{2}=\|r_{0}(t)\|^{2}+2\langle\mathcal{M }r_{0}(t),\varphi(t)\rangle+2\langle\mathcal{M}\varphi(t),x^{0}\rangle+2 \langle\mathcal{M}r_{0}(t),x^{0}\rangle+\|\varphi(t)\|^{2}+\|x^{0}\|^{2}=u+v,\]
where we define
\[u :=\|r_{0}(t)\|^{2}\] \[v :=2\langle\mathcal{M}r_{0}(t),\varphi(t)\rangle+2\langle \mathcal{M}\varphi(t),x^{0}\rangle+2\langle\mathcal{M}r_{0}(t),x^{0}\rangle+\| \varphi(t)\|^{2}+\|x^{0}\|^{2}.\]
**Remark 4.4**.: The following equalities hold true:
\[U(b+s)-U(b)=\int_{0}^{1}\frac{\mathrm{d}}{\mathrm{d}t}U(b+st)\ \mathrm{d}t=\int_{0}^{1}\langle\nabla U(b+st),s\rangle\ \mathrm{d}t,\] \[U(b+s)-U(b)-\langle\nabla U(b),s\rangle=\int_{0}^{1}\int_{0}^{1}\langle\nabla^{2}U(b+st_{1}t_{2})s,s\rangle t_{2}\ \mathrm{d}t_{1}\ \mathrm{d}t_{2}.\]
Using Remark 4.4, we then have
\[\|r_{0}(t)+\varphi(t)+\tilde{x}^{0}\|^{-1}=(u+v)^{-1/2}=u^{-1/2}-\frac{1}{2}u^ {-3/2}v+\frac{3}{4}\int_{0}^{1}\int_{0}^{1}\langle(u+stv)^{-5/2}v,v\rangle s \ \mathrm{d}s\ \mathrm{d}t.\]
Since the integral in the last expression is positive, it follows
\[\|r_{0}(t)+\varphi(t)+\tilde{x}^{0}\|^{-1} =(u+v)^{-1/2}\] \[\geq u^{-1/2}-\frac{1}{2}u^{-3/2}v\] \[=\|r_{0}(t)\|^{-1}-\frac{1}{2\|r_{0}(t)\|^{3}}[2\langle\mathcal{M}r_{0}(t),\varphi(t)\rangle+2\langle\mathcal{M}\varphi(t),x^{0}\rangle+2\langle\mathcal{M}r_{0}(t),x^{0}\rangle+\|\varphi(t)\|^{2}+\|x^{0}\|^{2}]\] \[=\|r_{0}(t)\|^{-1}-\frac{\langle\mathcal{M}r_{0}(t),\varphi(t)\rangle}{\|r_{0}(t)\|^{3}}-\frac{\langle\mathcal{M}\varphi(t),x^{0}\rangle}{\|r_{0}(t)\|^{3}}-\frac{\langle\mathcal{M}r_{0}(t),x^{0}\rangle}{\|r_{0}(t)\|^{3}}-\frac{1}{2}\frac{\|\varphi(t)\|^{2}}{\|r_{0}(t)\|^{3}}-\frac{1}{2}\frac{\|x^{0}\|^{2}}{\|r_{0}(t)\|^{3}}. \tag{4.6}\]
At this point we can use (4.6) to obtain
\[\mathcal{A}(\varphi) \geq\int_{1}^{+\infty}\frac{1}{2}\langle\mathcal{M}\dot{\varphi }(t),\dot{\varphi}(t)\rangle+\frac{U_{min}}{\|r_{0}(t)+\varphi+\tilde{x}^{0} \|}-\frac{U_{min}}{\|r_{0}(t)\|}+\frac{\langle U_{min}\mathcal{M}r_{0}(t), \varphi(t)\rangle}{\|r_{0}(t)\|^{3}}\ \mathrm{d}t\] \[\geq\int_{1}^{+\infty}\frac{1}{2}\langle\mathcal{M}\dot{\varphi} (t),\dot{\varphi}(t)\rangle-\frac{U_{min}}{2}\frac{\|\varphi(t)\|^{2}}{\|r_{0 }(t)\|^{3}}-\frac{\langle U_{min}\mathcal{M}\varphi(t),x^{0}\rangle}{\|r_{0}( t)\|^{3}}\ \mathrm{d}t+C_{3},\]
where \(C_{3}\) is a constant. By Hardy inequality (2.4) and the fact that, for \(\beta=\sqrt[3]{\frac{9}{2}U(b)}\),
\[\frac{U_{min}}{\|r_{0}(t)\|^{3}}=\frac{U_{min}}{\|\beta b_{m}t^{2 /3}\|^{3}}=\frac{U_{min}}{\beta^{3}t^{2}\|b_{m}\|^{3}}=\frac{2}{9}\frac{1}{t^ {2}}, \tag{4.7}\]
we have
\[\mathcal{A}(\varphi) \geq\int_{1}^{+\infty}\frac{1}{2}\bigg{[}\langle\mathcal{M}\dot {\varphi}(t),\dot{\varphi}(t)\rangle-\frac{8}{9}\langle\mathcal{M}\dot{ \varphi}(t),\dot{\varphi}(t)\rangle\bigg{]}-\frac{U_{min}\langle\mathcal{M} \varphi(t),x^{0}\rangle}{\|r_{0}(t)\|^{3}}\ \mathrm{d}t\] \[=\int_{1}^{+\infty}\frac{1}{18}\langle\mathcal{M}\dot{\varphi}(t ),\dot{\varphi}(t)\rangle-\frac{U_{min}\langle\mathcal{M}\varphi(t),x^{0} \rangle}{\|r_{0}(t)\|^{3}}\ \mathrm{d}t.\]
Using again (4.7), we observe that
\[\frac{U_{min}\langle\mathcal{M}\varphi(t),x^{0}\rangle}{\|r_{0}(t)\|^{3}}= \frac{2}{9}\frac{\langle\mathcal{M}\varphi(t),x^{0}\rangle}{t^{2}}.\]
By the Cauchy-Schwartz and Hardy inequalities, it follows
\[\int_{1}^{+\infty}-\frac{U_{min}\langle\mathcal{M}\varphi(t),x^{0}\rangle}{\|r_{0}(t)\|^{3}}\ \mathrm{d}t \geq-\int_{1}^{+\infty}\frac{2}{9}\frac{|\langle\mathcal{M}\varphi(t),x^{0}\rangle|}{t^{2}}\ \mathrm{d}t\geq-\int_{1}^{+\infty}\frac{2}{9}\frac{\|\varphi(t)\|_{\mathcal{M}}}{t}\frac{\|x^{0}\|_{\mathcal{M}}}{t}\ \mathrm{d}t\] \[\geq-\frac{2}{9}\bigg{(}\int_{1}^{+\infty}\frac{\|\varphi(t)\|_{\mathcal{M}}^{2}}{t^{2}}\ \mathrm{d}t\bigg{)}^{1/2}\bigg{(}\int_{1}^{+\infty}\frac{\|x^{0}\|_{\mathcal{M}}^{2}}{t^{2}}\ \mathrm{d}t\bigg{)}^{1/2}\] \[\geq-\frac{4}{9}C_{4}\|\varphi\|_{\mathcal{D}},\]
where \(C_{4}\) is a constant. This means that
\[\mathcal{A}(\varphi)\geq\frac{1}{18}\|\varphi\|_{\mathcal{D}}^{2}-\frac{4}{9}C _{4}\|\varphi\|_{\mathcal{D}},\]
which proves the coercivity of the action.
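Since the Hardy inequality \(\int_{1}^{+\infty}\frac{\|\varphi(t)\|_{\mathcal{M}}^{2}}{t^{2}}\ \mathrm{d}t\leq 4\int_{1}^{+\infty}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}\ \mathrm{d}t\) is used repeatedly in this and the following sections, here is a quick numerical illustration (a sketch on arbitrarily chosen scalar test functions with \(\varphi(1)=0\)):

```python
# Sketch: numerical check of Hardy's inequality on [1, +inf) for phi(1) = 0:
#   int phi^2 / t^2 dt  <=  4 * int (phi')^2 dt.
import numpy as np
from scipy.integrate import quad

for p in [0.75, 1.0, 1.5]:                   # sample functions (t-1) t^(-p)
    phi = lambda t, p=p: (t - 1.0) * t ** (-p)
    dphi = lambda t, p=p: t ** (-p) - p * (t - 1.0) * t ** (-p - 1.0)
    lhs = quad(lambda t: phi(t) ** 2 / t ** 2, 1.0, np.inf)[0]
    rhs = 4.0 * quad(lambda t: dphi(t) ** 2, 1.0, np.inf)[0]
    print(f"p = {p}:  {lhs:.4f} <= {rhs:.4f}")
```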
### Weak-lower semicontinuity
Now, we can focus on the proof of the weak lower semicontinuity of the action. Consider a sequence of functions \((\varphi^{n})_{n}\subset\mathcal{D}_{0}^{1,2}(1,+\infty)\) converging weakly in \(\mathcal{D}_{0}^{1,2}(1,+\infty)\) to some \(\varphi\), for \(n\to+\infty\). It follows that \(\|\varphi\|_{\mathcal{D}}<+\infty\) and that \(\sup_{n}\|\varphi^{n}\|_{\mathcal{D}}<+\infty\). Let us divide the action in two parts:
\[\mathcal{A}(\varphi)=\mathcal{A}_{[1,\overline{T})}(\varphi)+\mathcal{A}_{[ \overline{T},+\infty)}(\varphi),\]
where
\[\mathcal{A}_{[1,\overline{T})}(\varphi) =\int_{1}^{\overline{T}}\frac{1}{2}\|\dot{\varphi}(t)\|_{\mathcal{ M}}^{2}+U(r_{0}(t)+\varphi(t)+\check{x}^{0})-U(r_{0}(t))-\langle\nabla U(r_{0}(t) ),\varphi(t)\rangle\,\mathrm{d}t,\] \[\mathcal{A}_{[\overline{T},+\infty)}(\varphi) =\int_{\overline{T}}^{+\infty}\frac{1}{2}\|\dot{\varphi}(t)\|_{ \mathcal{M}}^{2}+U(r_{0}(t)+\varphi(t)+\check{x}^{0})-U(r_{0}(t))-\langle \nabla U(r_{0}(t)),\varphi(t)\rangle\,\,\mathrm{d}t\]
for some \(\overline{T}\in(1,+\infty)\). Using Ascoli-Arzelà's Theorem, we can say that \(\varphi^{n}\to\varphi\) uniformly on compact sets, which implies that \(\langle\nabla U(r_{0}),\varphi^{n}\rangle\to\langle\nabla U(r_{0}),\varphi\rangle\) uniformly in \([1,\overline{T}]\), as \(n\to+\infty\), for every \(\overline{T}<+\infty\). Then, using Fatou's Lemma, it easily follows that the term \(\mathcal{A}_{[1,\overline{T})}(\varphi)\) is weakly lower semicontinuous.
Concerning the term \(\mathcal{A}_{[\overline{T},+\infty)}(\varphi)\), we can write:
\[\mathcal{A}_{[\overline{T},+\infty)}(\varphi)=\int_{\overline{T}}^ {+\infty} \frac{1}{2}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}+\frac{1}{2} \langle\nabla^{2}U(r_{0}(t))\varphi(t),\varphi(t)\rangle\] \[+U(r_{0}(t)+\varphi(t)+\check{x}^{0})-U(r_{0}(t))-\langle\nabla U (r_{0}(t)),\varphi(t)\rangle-\frac{1}{2}\langle\nabla^{2}U(r_{0}(t))\varphi( t),\varphi(t)\rangle\,\,\mathrm{d}t.\]
**Claim**: The map \(\varphi\mapsto\bigg{(}\int_{1}^{+\infty}\frac{1}{2}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}+\frac{1}{2}\langle\nabla^{2}U(r_{0}(t))\varphi(t),\varphi(t)\rangle\ \mathrm{d}t\bigg{)}^{1/2}\) defines a norm equivalent to \(\|\cdot\|_{\mathcal{D}}\). Indeed:
* Since \(U(x)\geq\frac{U_{min}}{\|x\|}\) for each \(x\neq 0\), it follows that \(\nabla^{2}U(x)\geq-U_{min}\frac{Id}{\|x\|^{3}}\), which implies \(\langle\nabla^{2}U(r_{0}(t))\varphi(t),\varphi(t)\rangle\geq-\frac{2}{9}\frac{\|\varphi(t)\|_{\mathcal{M}}^{2}}{t^{2}}\) for each \(t\in[1,+\infty)\). Then, by Hardy inequality, we have \[\int_{1}^{+\infty}\frac{1}{2}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}+\frac{1}{2}\langle\nabla^{2}U(r_{0}(t))\varphi(t),\varphi(t)\rangle\ \mathrm{d}t\geq\bigg{(}\frac{1}{2}-\frac{4}{9}\bigg{)}\|\varphi\|_{\mathcal{D}}^{2}=\frac{1}{18}\|\varphi\|_{\mathcal{D}}^{2}.\]
* Using the fact that, for some constant \(C_{5}>0\), \[\langle\nabla^{2}U(r_{0}(t))\varphi(t),\varphi(t)\rangle\leq C_{5}\frac{\|\varphi(t)\|_{\mathcal{M}}^{2}}{t^{2}}\] and Hardy inequality, we have \[\int_{1}^{+\infty}\frac{1}{2}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}+\frac{1}{2}\langle\nabla^{2}U(r_{0}(t))\varphi(t),\varphi(t)\rangle\ \mathrm{d}t\leq C_{6}\|\varphi\|_{\mathcal{D}}^{2},\] for some constant \(C_{6}>0\).
From the equivalence between the two norms, we have that the term \(\int_{\overline{T}}^{+\infty}\frac{1}{2}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}+\frac{1}{2}\langle\nabla^{2}U(r_{0}(t))\varphi(t),\varphi(t)\rangle\,\mathrm{d}t\) is weakly lower semicontinuous.
Using Taylor's series expansion, we can write
\[\int_{\overline{T}}^{+\infty} U(r_{0}(t)+\varphi^{n}(t)+\check{x}^{0})-U(r_{0}(t))-\langle\nabla U(r_{0}(t)),\varphi^{n}(t)\rangle-\frac{1}{2}\langle\nabla^{2}U(r_{0}(t))\varphi^{n}(t),\varphi^{n}(t)\rangle\ \mathrm{d}t\] \[=\int_{\overline{T}}^{+\infty}\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\nabla^{3}U(r_{0}(t)+\tau_{1}\tau_{2}\tau_{3}(\varphi^{n}(t)+\check{x}^{0}))[\varphi^{n}(t)+\check{x}^{0},\varphi^{n}(t)+\check{x}^{0},\varphi^{n}(t)+\check{x}^{0}]\,\tau_{1}\tau_{2}^{2}\ \mathrm{d}\tau_{1}\ \mathrm{d}\tau_{2}\ \mathrm{d}\tau_{3}\ \mathrm{d}t.\]
Obviously there is a \(\tilde{t}>1\) such that
\[\|r_{0}(t)+\tau_{1}\tau_{2}\tau_{3}(\varphi^{n}(t)+\check{x}^{0})\|_{\mathcal{M }}>0\]
for every \(t\geq\tilde{t}\). We can then choose \(\overline{T}\geq\tilde{t}\) and we have
\[\nabla^{3}U(r_{0}(t)+\tau_{1}\tau_{2}\tau_{3}(\varphi^{n}(t)+\check{x}^{0}))[\varphi^{n}(t)+\check{x}^{0},\varphi^{n}(t)+\check{x}^{0},\varphi^{n}(t)+\check{x}^{0}]\leq C_{7}\frac{\|\varphi^{n}(t)+\check{x}^{0}\|_{\mathcal{M}}^{3}}{t^{8/3}}\leq C_{8}\frac{\|\varphi^{n}\|_{\mathcal{D}}^{3}t^{3/2}}{t^{8/3}}\leq\frac{C_{9}}{t^{7/6}},\]
for every \(t\geq\overline{T}\) and for suitable constants \(C_{7},C_{8},C_{9}>0\). This means that the term \(\nabla^{3}U(r_{0}(t)+\tau_{1}\tau_{2}\tau_{3}(\varphi^{n}(t)+\check{x}^{0}))[\varphi^{n}(t)+\check{x}^{0},\varphi^{n}(t)+\check{x}^{0},\varphi^{n}(t)+\check{x}^{0}]\,\tau_{1}\tau_{2}^{2}\) is \(L^{1}\)-dominated and the weak lower semicontinuity of \(\mathcal{A}_{[\overline{T},+\infty)}\) follows from the Dominated Convergence Theorem.
### The renormalized action is of class \(C^{1}\) over non-collision sets
Now, we prove that the action \(\mathcal{A}\) is \(C^{1}\) over the set \(\mathcal{D}^{1,2}_{0}(1,+\infty)\setminus\{\varphi\in\mathcal{D}^{1,2}_{0}\ :\ \exists\ t\text{ such that }r_{0}(t)+\varphi(t)+\tilde{x}^{0}=0\}\). The term \(\int_{1}^{+\infty}\frac{1}{2}\langle\mathcal{M}\dot{\varphi}(t),\dot{\varphi}(t)\rangle\;\mathrm{d}t=\frac{1}{2}\|\varphi\|_{\mathcal{D}}^{2}\) is of course a smooth functional, so we focus on the term
\[\mathcal{A}^{2}(\varphi):=\int_{1}^{+\infty}K(t,\varphi(t))\;\mathrm{d}t,\]
where
\[K(t,\varphi(t)):=U(r_{0}(t)+\varphi(t)+\tilde{x}^{0})-U(r_{0}(t))-\langle \nabla U(r_{0}(t)),\varphi(t)\rangle.\]
We have
\[\mathrm{d}\mathcal{A}^{2}(\varphi)[\psi]=\int_{1}^{+\infty}\langle\nabla K(t,\varphi(t)),\psi(t)\rangle\;\mathrm{d}t=\int_{1}^{+\infty}\langle\nabla U(r_ {0}(t)+\varphi(t)+\tilde{x}^{0})-\nabla U(r_{0}(t)),\psi(t)\rangle\;\mathrm{d}t\]
for every \(\psi\in\mathcal{D}^{1,2}_{0}(1,+\infty)\). Given a sequence \((\varphi^{n})_{n}\subset\mathcal{D}^{1,2}_{0}(1,+\infty)\) we have to prove that if \(\varphi^{n}\to\varphi\) in \(\mathcal{D}^{1,2}_{0}(1,+\infty)\), then
\[\sup_{\|\psi\|_{\mathcal{D}}\leq 1}\bigg{|}\int_{1}^{+\infty}\langle\nabla K(t,\varphi^{n}(t))-\nabla K(t,\varphi(t)),\psi(t)\rangle\;\mathrm{d}t\bigg{|} \to 0.\]
Since
\[\nabla K(t,\varphi(t))=\nabla U(r_{0}(t)+\varphi(t)+\tilde{x}^{0})-\nabla U(r _{0}(t))=\int_{0}^{1}\nabla^{2}K(t,s\varphi(t))\varphi(t)\;\mathrm{d}s,\]
we can estimate
\[\|\nabla K(t,\varphi(t))\|_{\mathcal{M}}\leq\int_{0}^{1}\|\nabla^{2}K(t,s \varphi(t))\|_{\mathcal{M}}\|\varphi(t)\|_{\mathcal{M}}\;\mathrm{d}s\leq C_{ 10}\frac{\|\varphi(t)\|_{\mathcal{M}}}{t^{2}}, \tag{4.8}\]
where \(C_{10}>0\) is a proper constant. Using the Cauchy-Schwartz inequality we can then compute
\[\sup_{\|\psi\|_{D}\leq 1}\bigg{|}\int_{1}^{+\infty}\langle\nabla K(t,\varphi^{n}(t))-\nabla K(t,\varphi(t)),\psi(t)\rangle\;\mathrm{d}t\bigg{|}\] \[\leq\sup_{\|\psi\|_{D}\leq 1}\int_{1}^{+\infty}t\|\nabla K(t, \varphi^{n}(t))-\nabla K(t,\varphi(t))\|_{\mathcal{M}}\frac{\|\psi(t)\|_{ \mathcal{M}}}{t}\;\mathrm{d}t\] \[\leq 2\bigg{(}\int_{1}^{+\infty}t^{2}\|\nabla K(t,\varphi^{n}(t))- \nabla K(t,\varphi(t))\|_{\mathcal{M}}^{2}\;\mathrm{d}t\bigg{)}^{1/2}.\]
Now, using (4.8)
\[\|\nabla K(t,\varphi^{n}(t))-\nabla K(t,\varphi(t))\|_{\mathcal{M}}^{2} =\bigg{\|}\int_{0}^{1}\nabla^{2}K(t,\varphi(t)+\sigma(\varphi^{n}(t)-\varphi(t)))(\varphi^{n}(t)-\varphi(t))\ \mathrm{d}\sigma\bigg{\|}_{\mathcal{M}}^{2}\] \[\leq\bigg{(}\int_{0}^{1}\|\nabla^{2}K(t,\varphi(t)+\sigma(\varphi^{n}(t)-\varphi(t)))(\varphi^{n}(t)-\varphi(t))\|_{\mathcal{M}}\ \mathrm{d}\sigma\bigg{)}^{2}\] \[\leq\bigg{(}\int_{0}^{1}C_{10}\frac{\|\varphi^{n}(t)-\varphi(t)\|_{\mathcal{M}}}{t^{2}}\ \mathrm{d}\sigma\bigg{)}^{2}\] \[=C_{10}^{2}\frac{\|\varphi^{n}(t)-\varphi(t)\|_{\mathcal{M}}^{2}}{t^{4}}.\]
From this last computation, it follows that
\[\bigg{(}\int_{1}^{+\infty}t^{2}\|\nabla K(t,\varphi^{n}(t))-\nabla K(t,\varphi(t))\|_{\mathcal{M}}^{2}\ \mathrm{d}t\bigg{)}^{1/2} \leq C_{10}\bigg{(}\int_{1}^{+\infty}\frac{\|\varphi^{n}(t)-\varphi(t)\|_{\mathcal{M}}^{2}}{t^{2}}\ \mathrm{d}t\bigg{)}^{1/2}\] \[\leq 2C_{10}\bigg{(}\int_{1}^{+\infty}\|\dot{\varphi}^{n}(t)-\dot{\varphi}(t)\|_{\mathcal{M}}^{2}\ \mathrm{d}t\bigg{)}^{1/2}\] \[=2C_{10}\|\varphi^{n}-\varphi\|_{\mathcal{D}}\]
and since \(\|\varphi^{n}-\varphi\|_{\mathcal{D}}\to 0\) as \(n\to+\infty\), this proves the claim.
### Absence of collisions and parabolicity of the motion
Given a minimizer \(\varphi\in\mathcal{D}_{0}^{1,2}(1,+\infty)\) of the Lagrangian action, we apply Marchal's Theorem to conclude that the corresponding motion \(x\) is free of collisions.
To conclude, we observe that given
\[x(t)=\varphi(t)+\beta b_{m}t^{2/3}+\tilde{x}^{0},\]
we have
\[\dot{x}(t)=\dot{\varphi}(t)+\frac{2}{3}\beta b_{m}t^{-1/3}.\]
To prove that the motion \(x\) is indeed parabolic, we still have to prove that
\[\lim_{t\to+\infty}\dot{x}(t)=\lim_{t\to+\infty}\dot{\varphi}(t)=0.\]
Since \(\int_{1}^{+\infty}|\dot{\varphi}_{ij}(t)|^{2}\ \mathrm{d}t<+\infty\), we have
\[\liminf_{t\to+\infty}|\dot{\varphi}_{ij}(t)|=0.\]
Because of the conservation of the energy along the motion, we have
\[\frac{1}{2}\|\dot{x}(t)\|_{\mathcal{M}}^{2}-U(x(t))=\frac{1}{2}\sum_{i=1}^{N} m_{i}\bigg{|}\dot{\varphi}_{i}(t)+\frac{2}{3}\beta b_{m_{i}}t^{-1/3}\bigg{|}^{2}-U(x(t ))=h.\]
Since there is at least one sequence \((t_{k})_{k}\), with \(t_{k}\to+\infty\), such that \(\lim_{t_{k}\to+\infty}\dot{\varphi}_{i}(t_{k})=0\), and since \(U(x(t))\to 0\) as \(t\to+\infty\), it follows that \(h=0\) and, consequently,
\[\frac{1}{2}\|\dot{x}(t)\|_{\mathcal{M}}^{2}-U(x(t))=0.\]
From this, since \(U(x(t))\to 0\) as \(t\to+\infty\), we conclude that \(\lim_{t\to+\infty}\dot{x}(t)=0\).
### Asymptotic estimates for half parabolic motions
In order to give a better description of the asymptotic expansion of parabolic motions, we can improve inequality (3.5). In particular, we can show that for a solution \(\varphi\in\mathcal{D}_{0}^{1,2}(1,+\infty)\) of the equations of motion, it holds
\[\|\varphi(t)\|_{\mathcal{M}}\leq ct^{\frac{1}{3}+\varepsilon},\quad\forall\varepsilon>0, \tag{4.9}\]
for a suitable constant \(c\in\mathbb{R}\), which depends on \(\varepsilon\). This section is devoted to the proof of this estimate.
Let us consider a half parabolic motion \(x(t)\) having the form (4.2), where \(\varphi\in\mathcal{D}_{0}^{1,2}(1,+\infty)\) is a solution of the equations of motion \(\mathcal{M}\ddot{\varphi}(t)=\nabla U(r_{0}(t)+\varphi(t)+\tilde{x}^{0})-\nabla U(r_{0}(t))\). We can write:
\[\mathcal{M}\ddot{\varphi}(t) =\frac{1}{\beta^{2}t^{4/3}}\bigg{[}\nabla U\bigg{(}\frac{x(t)}{\beta t^{2/3}}\bigg{)}-\nabla U\bigg{(}\frac{r_{0}(t)}{\beta t^{2/3}}\bigg{)}\bigg{]}\] \[=\frac{1}{\beta^{2}t^{4/3}}\bigg{[}\nabla U\bigg{(}b_{m}+\frac{\varphi(t)}{\beta t^{2/3}}+\frac{\tilde{x}^{0}}{\beta t^{2/3}}\bigg{)}-\nabla U(b_{m})\bigg{]}\] \[=\frac{1}{\beta^{3}t^{2}}\int_{0}^{1}\nabla^{2}U\bigg{(}b_{m}+\theta\frac{(\varphi(t)+\tilde{x}^{0})}{\beta t^{2/3}}\bigg{)}(\varphi(t)+\tilde{x}^{0})\ \mathrm{d}\theta\] \[=\frac{1}{\beta^{3}t^{2}}\bigg{[}\int_{0}^{1}\nabla^{2}U\bigg{(}b_{m}+\theta\frac{(\varphi(t)+\tilde{x}^{0})}{\beta t^{2/3}}\bigg{)}\ \mathrm{d}\theta\bigg{]}(\varphi(t)+\tilde{x}^{0}),\]
where we can view the integral term as a matrix.
Fixing a real constant \(\delta\in(1,2)\) and a sufficiently large constant \(k\in\mathbb{R}\), we define a test function \(\psi_{k}:[1,+\infty)\to\mathcal{X}\) as
\[\psi_{k}(t)=\eta^{2}\min\{k,\|\varphi(t)\|_{\mathcal{M}}^{\delta-1}\}\varphi(t)\]
where \(\eta:\mathbb{R}\to\mathbb{R}\) is a \(C^{\infty}\)-class cut-off function having the form
\[\eta(t)=\begin{cases}0,&t\in[1,R]\\ 1,&t\in[2R,+\infty)\end{cases},\]
for \(R\) big enough, with \(0<\eta(t)<1,\ \forall t\in(R,2R)\). We point out that \(k\) can be chosen such that \(\eta\equiv 1\) when \(\|\varphi(t)\|_{\mathcal{M}}^{\delta-1}>k\), so that we have
\[\dot{\psi}_{k}(t)=\begin{cases}2\eta\dot{\eta}\|\varphi(t)\|_{\mathcal{M}}^{\delta-1}\varphi(t)+\eta^{2}(\delta-1)\|\varphi(t)\|_{\mathcal{M}}^{\delta-3}\langle\varphi(t),\dot{\varphi}(t)\rangle_{\mathcal{M}}\varphi(t)+\eta^{2}\|\varphi(t)\|_{\mathcal{M}}^{\delta-1}\dot{\varphi}(t),&t\in I_{k}\\ k\dot{\varphi}(t),&t\in\hat{I}_{k}\end{cases},\]
where \(I_{k}=\{t\in[1,+\infty):\|\varphi(t)\|_{\mathcal{M}}^{\delta-1}\leq k\}\) and \(\hat{I}_{k}=[1,+\infty)\setminus I_{k}=\{t\in[1,+\infty):\|\varphi(t)\|_{ \mathcal{M}}^{\delta-1}>k\}\).
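For concreteness, a cut-off with the required properties can be built in the standard way from the function \(e^{-1/s}\); the following sketch (with \(R=10\), an arbitrary illustrative value) is one possible construction:

```python
# Sketch: a C^infinity cut-off eta, equal to 0 on [1, R], to 1 on [2R, +inf),
# and strictly between 0 and 1 on (R, 2R).  R = 10 is an arbitrary choice.
import numpy as np

def bump(s):
    # smooth transition: 0 for s <= 0, 1 for s >= 1
    f = lambda u: np.where(u > 0, np.exp(-1.0 / np.maximum(u, 1e-300)), 0.0)
    return f(s) / (f(s) + f(1.0 - s))

def eta(t, R=10.0):
    return bump((np.asarray(t, dtype=float) - R) / R)

print(eta([1.0, 10.0, 12.0, 15.0, 20.0, 100.0]))   # 0, 0, ..., 0.5, ..., 1, 1
```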
Multiplying the equations of motion by \(\psi_{k}(t)\) and integrating, we obtain
\[\int_{R}^{+\infty}-\langle\ddot{\varphi}(t),\psi_{k}(t)\rangle_{ \mathcal{M}}+\left\langle\frac{1}{\beta^{3}t^{2}}\bigg{[}\int_{0}^{1}\nabla^ {2}U\bigg{(}b_{m}+\theta\frac{(\varphi(t)+\tilde{x}^{0})}{\beta t^{2/3}}\bigg{)} \ \mathrm{d}\theta\bigg{]}(\varphi(t)+\tilde{x}^{0}),\psi_{k}\right\rangle\ \mathrm{d}t\] \[=\int_{R}^{+\infty}\langle\dot{\varphi}(t),\dot{\psi}_{k}(t) \rangle_{\mathcal{M}}+\left\langle\frac{1}{\beta^{3}t^{2}}\bigg{[}\int_{0}^{1} \nabla^{2}U\bigg{(}b_{m}+\theta\frac{(\varphi(t)+\tilde{x}^{0})}{\beta t^{2/3 }}\bigg{)}\ \mathrm{d}\theta\bigg{]}(\varphi(t)+\tilde{x}^{0}),\psi_{k}\right\rangle\ \mathrm{d}t.\]
Recalling that \(\|\nabla^{2}U(r_{0}+\theta(\varphi(t)+\tilde{x}^{0}))\|_{\mathcal{M}}\leq\frac{C_{11}}{t^{2}}\) for a suitable constant \(C_{11}\), for every \(t>1\) and for every \(\theta\in[0,1]\), we can use Hölder's and Hardy's inequalities to estimate
\[\int_{R}^{+\infty}\langle\dot{\varphi}(t),\dot{\psi}_{k}(t)\rangle_ {\mathcal{M}}+\left\langle\bigg{[}\int_{0}^{1}\nabla^{2}U(r_{0}(t)+\theta( \varphi(t)+\tilde{x}^{0}))\ \mathrm{d}\theta\bigg{]}\varphi(t),\psi_{k}(t)\right\rangle\, \mathrm{d}t\] \[=-\int_{R}^{+\infty}\left\langle\bigg{[}\int_{0}^{1}\nabla^{2}U( r_{0}(t)+\theta(\varphi(t)+\tilde{x}^{0}))\ \mathrm{d}\theta\bigg{]}\tilde{x}^{0},\psi_{k}(t)\right\rangle\, \mathrm{d}t\] \[\leq C_{11}\int_{R}^{+\infty}\frac{\|\psi_{k}(t)\|_{\mathcal{M}} }{t^{2}}\ \mathrm{d}t\] \[\leq C_{11}\int_{R}^{+\infty}\frac{\|\varphi(t)\|_{\mathcal{M}} ^{\delta}}{t^{2}}\ \mathrm{d}t\] \[=C_{11}\int_{R}^{+\infty}\frac{1}{t^{2-\delta}}\frac{\|\varphi(t )\|_{\mathcal{M}}^{\delta}}{t^{\delta}}\ \mathrm{d}t\] \[\leq C_{11}\bigg{(}\int_{R}^{+\infty}\frac{1}{t^{2}}\ \mathrm{d}t \bigg{)}^{(2-\delta)/2}\bigg{(}\int_{R}^{+\infty}\frac{\|\varphi(t)\|_{ \mathcal{M}}^{2}}{t^{2}}\ \mathrm{d}t\bigg{)}^{\delta/2}\] \[\leq C_{12}\|\varphi\|_{\mathcal{D}}^{\delta},\]
where \(C_{12}\) is a proper constant.
**Remark 4.5**.: We recall that the Keplerian potential \(U\) is homogeneous of degree -1. For any configuration \(x\in\mathcal{X}\), denoting \(s=\frac{x}{\|x\|_{\mathcal{M}}}\),
\[U(x)=U\bigg{(}\|x\|_{\mathcal{M}}\frac{x}{\|x\|_{\mathcal{M}}}\bigg{)}=\frac{U (s)}{\|x\|_{\mathcal{M}}}.\]
The Hessian matrix of \(U\) with respect to \(x\) is
\[\nabla^{2}U(x)=-\frac{U(s)\mathcal{M}}{\|x\|_{\mathcal{M}}^{3}}+3\frac{U(s)}{ \|x\|_{\mathcal{M}}^{5}}\mathcal{M}x\otimes\mathcal{M}x-2\frac{\nabla_{s}U(s )\otimes\mathcal{M}x}{\|x\|_{\mathcal{M}}^{4}}+\frac{\nabla_{s}^{2}U(s)}{\|x \|_{\mathcal{M}}^{3}},\]
where \(x\otimes x\) denotes the symmetric square matrix with components \((x\otimes x)_{ij}=x_{i}x_{j}\), and \(\nabla_{s}U(s)\) and \(\nabla_{s}^{2}U(s)\) represent the gradient and the Hessian matrix of \(U\) with respect to \(s\), respectively. Choosing \(s=b_{m}\), since \(b_{m}\) is a minimum point of the restricted potential, we have \(\frac{\nabla_{s}U(s)\otimes\mathcal{M}x}{\|x\|_{\mathcal{M}}^{4}}=0\). Besides, since \(\mathcal{M}x\otimes\mathcal{M}x\) and \(\nabla_{s}^{2}U(s)\) are positive semidefinite quadratic forms, it holds
\[\nabla^{2}U(x)\geq-\frac{U(b_{m})\mathcal{M}}{\|x\|_{\mathcal{M}}^{3}}. \tag{4.10}\]
Using a continuity argument and (4.10), we can also say that for every \(\mu>0\) there is a \(\bar{T}>0\) such that, for every \(t>\bar{T}\),
\[\frac{1}{\beta^{3}t^{2}}\nabla^{2}U\bigg{(}b_{m}+\theta\frac{(\varphi(t)+ \tilde{x}^{0})}{\beta t^{2/3}}\bigg{)}\geq-\frac{2}{9}(1+\mu)\frac{\mathcal{M }}{t^{2}}\]
in the sense of quadratic forms. It follows
\[\int_{R}^{+\infty}\langle\dot{\varphi}(t),\dot{\psi}_{k}(t) \rangle_{\mathcal{M}}+\left\langle\frac{1}{\beta^{3}t^{2}}\bigg{[}\int_{0}^{1} \nabla^{2}U\bigg{(}b_{m}+\theta\frac{(\varphi(t)+\tilde{x}^{0})}{\beta t^{2/3 }}\bigg{)}\ \mathrm{d}\theta\bigg{]}\varphi(t),\psi_{k}(t)\right\rangle\, \mathrm{d}t\] \[\geq\int_{R}^{+\infty}\langle\dot{\varphi}(t),\dot{\psi}_{k}(t) \rangle_{\mathcal{M}}-\frac{2}{9}(1+\mu)\bigg{\langle}\frac{\varphi(t)}{t^{2} },\psi_{k}(t)\bigg{\rangle}_{\mathcal{M}}\ \mathrm{d}t.\]
To estimate the right-hand side of the last inequality, we study the integral separately on the two complementary sets \(I_{k}\) and \(\hat{I}_{k}\). In \(I_{k}\), we have
\[\int_{I_{k}}\langle\dot{\varphi}(t),\dot{\psi}_{k}(t)\rangle_{\mathcal{M}}-\frac{2}{9}(1+\mu)\bigg{\langle}\frac{\varphi(t)}{t^{2}},\psi_{k}(t)\bigg{\rangle}_{\mathcal{M}}\ \mathrm{d}t\] \[\geq\int_{I_{k}}2\eta\dot{\eta}\|\varphi(t)\|_{\mathcal{M}}^{\delta-1}\langle\dot{\varphi}(t),\varphi(t)\rangle_{\mathcal{M}}+\eta^{2}\delta\|\varphi(t)\|_{\mathcal{M}}^{\delta-3}\langle\varphi(t),\dot{\varphi}(t)\rangle_{\mathcal{M}}^{2}-\frac{2}{9}(1+\mu)\eta^{2}\frac{\|\varphi(t)\|_{\mathcal{M}}^{\delta+1}}{t^{2}}\ \mathrm{d}t,\]
which implies
\[\int_{I_{k}}\eta^{2}\delta\|\varphi(t)\|_{\mathcal{M}}^{\delta-3}\langle\varphi(t),\dot{\varphi}(t)\rangle_{\mathcal{M}}^{2}-\frac{2}{9}(1+\mu)\eta^{2}\frac{\|\varphi(t)\|_{\mathcal{M}}^{\delta+1}}{t^{2}}\ \mathrm{d}t\leq\int_{I_{k}}2\eta|\dot{\eta}|\|\varphi(t)\|_{\mathcal{M}}^{\delta}\|\dot{\varphi}(t)\|_{\mathcal{M}}\ \mathrm{d}t+C_{12}\|\varphi\|_{\mathcal{D}}^{\delta},\]
where the cut-off function makes sure that the last integral is finite. Besides, we also have
\[\int_{I_{k}}\eta^{2}\delta\|\varphi(t)\|_{\mathcal{M}}^{\delta-3}\langle\varphi(t),\dot{\varphi}(t)\rangle_{\mathcal{M}}^{2}-\frac{2}{9}(1+\mu)\eta^{2}\frac{\|\varphi(t)\|_{\mathcal{M}}^{\delta+1}}{t^{2}}\ \mathrm{d}t\] \[=\int_{I_{k}}\frac{4\delta}{(\delta+1)^{2}}\bigg{(}\eta\frac{\mathrm{d}}{\mathrm{d}t}\|\varphi(t)\|_{\mathcal{M}}^{\frac{\delta+1}{2}}\bigg{)}^{2}-\frac{2}{9}(1+\mu)\eta^{2}\frac{\|\varphi(t)\|_{\mathcal{M}}^{\delta+1}}{t^{2}}\ \mathrm{d}t.\]
On the other hand, working on the interval \(\hat{I}_{k}\) we obtain
\[\int_{\hat{I}_{k}}\langle\dot{\varphi}(t),\dot{\psi}_{k}(t) \rangle_{\mathcal{M}}-\frac{2}{9}(1+\mu)\bigg{\langle}\frac{\varphi(t)}{t^{2} },\psi_{k}(t)\bigg{\rangle}_{\mathcal{M}}\ \mathrm{d}t\] \[=\int_{\hat{I}_{k}}k\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}-\frac{ 2}{9}(1+\mu)k\frac{\|\varphi(t)\|_{\mathcal{M}}^{2}}{t^{2}}\ \mathrm{d}t\] \[\geq\int_{\hat{I}_{k}}\frac{4\delta}{(\delta+1)^{2}}k\|\dot{ \varphi}(t)\|_{\mathcal{M}}^{2}-\frac{2}{9}(1+\mu)k\frac{\|\varphi(t)\|_{ \mathcal{M}}^{2}}{t^{2}}\ \mathrm{d}t,\]
where we used the fact that \(\frac{4\delta}{(\delta+1)^{2}}<1\) for every \(\delta\in(1,2)\).
Now, we define a function \(u_{k}:\mathbb{R}\to\mathbb{R}\) as
\[u_{k}(t)=\min\{\eta\|\varphi(t)\|_{\mathcal{M}}^{\frac{\delta-1}{2}},k^{1/2}\} \|\varphi(t)\|_{\mathcal{M}}.\]
Putting everything together, we can use Hardy's inequality to say that
\[\int_{I_{k}}\frac{4\delta}{(\delta+1)^{2}}\bigg{(}\eta\frac{ \mathrm{d}}{\mathrm{d}t}\|\varphi(t)\|_{\mathcal{M}}^{\frac{\delta+1}{2}} \bigg{)}^{2}-\frac{2}{9}(1+\mu)\eta^{2}\frac{\|\varphi(t)\|_{\mathcal{M}}^{ \delta+1}}{t^{2}}\ \mathrm{d}t+\int_{\hat{I}_{k}}\frac{4\delta}{(\delta+1)^{2}}k\|\dot{ \varphi}(t)\|_{\mathcal{M}}^{2}-\frac{2}{9}(1+\mu)k\frac{\|\varphi(t)\|_{ \mathcal{M}}^{2}}{t^{2}}\ \mathrm{d}t\] \[=\int_{1}^{+\infty}\frac{4\delta}{(\delta+1)^{2}}\|\dot{u}_{k}(t )\|_{\mathcal{M}}^{2}-\frac{2}{9}(1+\mu)\frac{\|u_{k}(t)\|_{\mathcal{M}}^{2}} {t^{2}}\ \mathrm{d}t\] \[\geq\int_{1}^{+\infty}\bigg{(}\frac{4\delta}{(\delta+1)^{2}}- \frac{8}{9}(1+\mu)\bigg{)}\|\dot{u}_{k}(t)\|_{\mathcal{M}}^{2}\ \mathrm{d}t.\]
In particular, we can choose \(\mu\) such that \(\frac{4\delta}{(\delta+1)^{2}}-\frac{8}{9}(1+\mu)>0\), which proves that \(u_{k}\in\mathcal{D}_{0}^{1,2}(1,+\infty)\).
Since the estimates we obtained do not depend on \(k\), we can take \(k\to+\infty\), so that (3.5) leads us to the conclusion of our proof. We have thus shown that for any such \(\varphi\) and for any \(\delta\in(1,2)\) there is a constant \(c\), which depends on \(\delta\) and \(\|\varphi\|_{\mathcal{D}}\), such that
\[\|\varphi(t)\|_{\mathcal{M}}\leq ct^{\frac{1}{\delta+1}},\qquad\forall t\geq 1.\]
Since \(\frac{1}{\delta+1}\downarrow\frac{1}{3}\) as \(\delta\to 2^{-}\), this proves (4.9).
## 5 Existence of minimal half hyperbolic-parabolic motions
This last section is devoted to the proof of Theorem 1.8. To prove the existence of hyperbolic-parabolic solutions in the \(N\)-body problem, we will use the cluster decomposition that we briefly introduced in Section 1 to decompose the Lagrangian action, so that we will finally be able to minimize the renormalized action over the set \(\mathcal{D}_{0}^{1,2}(1,+\infty)\).
**Definition 5.1**.: Given a configuration \(a\in\mathcal{X}\) and a motion \(x(t)=at+O(t^{2/3})\) as \(t\to+\infty\), its corresponding natural partition (\(a\)-partition) of the index set \(\mathcal{N}=\{1,...,N\}\) is the one for which \(i,j\in\mathcal{N}\) belong to the same class if and only if the mutual distance \(|r_{i}(t)-r_{j}(t)|\) grows as \(O(t^{2/3})\). Equivalently, if \(a=(a_{1},...,a_{N})\), then the natural partition is defined by the relation \(i\sim j\) if and only if \(a_{i}=a_{j}\). The partition classes will be called clusters.
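Computationally, the natural partition is immediate to extract from \(a\); the following sketch (with hypothetical data chosen only for illustration) groups the indices of Definition 5.1 by equality of the asymptotic velocities:

```python
# Sketch: natural a-partition of Definition 5.1 -- indices i, j lie in the
# same cluster exactly when a_i = a_j (data below is hypothetical).
import numpy as np

def a_partition(a, tol=1e-12):
    clusters = []
    for i, ai in enumerate(a):
        for cl in clusters:
            if np.linalg.norm(a[cl[0]] - ai) <= tol:
                cl.append(i)
                break
        else:
            clusters.append([i])
    return clusters

a = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.0, 0.0], [0.0, 2.0]])
print(a_partition(a))   # [[0, 1, 3], [2, 4]]
```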
We give now some definitions and basic notations related to a given partition \(\mathcal{P}\) of the set \(\mathcal{N}=\{1,...,N\}\).
**Definition 5.2**.: Let \(\mathcal{P}\) be a given partition of \(\mathcal{N}\) and consider a configuration \(x=(r_{1},...,r_{N})\in\mathcal{X}\). For each cluster \(K\in\mathcal{P}\) we define the mass of the cluster as
\[M_{K}=\sum_{i\in K}m_{i}.\]
Besides, for any couple of clusters \(K_{1},K_{2}\in\mathcal{P}\), \(K_{1}\neq K_{2}\), we define the mass of the two clusters as
\[M_{K_{1,2}}=\sum_{i\in K_{1}\cup K_{2}}m_{i}.\]
**Definition 5.3**.: Let \(\mathcal{P}\) be any given partition of \(\mathcal{N}\). Then, for every given curve \(x(t)=(r_{1}(t),...,r_{N}(t))\) in \(\mathcal{X}\) and for each cluster \(K\in\mathcal{P}\) we define the function
\[U_{K}(t)=\sum_{i,j\in K,\ i<j}\frac{m_{i}m_{j}}{|r_{i}(t)-r_{j}(t)|},\]
which represents the restriction of the potential \(U\) to the cluster \(K\).
The system we are studying here is
\[\begin{cases}\mathcal{M}\ddot{x}=\nabla U(x)\\ x(1)=x^{0}\\ \lim_{t\rightarrow+\infty}\dot{x}(t)=a\end{cases},\]
where \(x^{0}\in\mathcal{X}\) and \(a\) is a configuration with collisions. We will look for solutions of the form \(x(t)=\varphi(t)+\gamma_{0}(t)\), with \(\varphi\in\mathcal{D}_{0}^{1,2}(1,+\infty)\) and \(\gamma_{0}\) a suitable function, so that our problem equivalently reads
\[\begin{cases}\mathcal{M}\ddot{\varphi}=\nabla U(\varphi+\gamma_{0})-\mathcal{M}\ddot{\gamma}_{0}\\ \varphi(1)=0\\ \lim_{t\rightarrow+\infty}\dot{\varphi}(t)=0\end{cases}.\]
We can thus prove the existence of solutions to the last system by minimizing the associated renormalized Lagrangian action.
We partition the indexes according to the natural cluster partition, so that we obtain a partition of \(\mathcal{N}\) of the form
\[K_{1}:=\{1,...,k_{1}\},\ K_{2}:=\{k_{1}+1,...,k_{2}\},\ K_{3}:=\{k_{2}+1,...,k_ {3}\}...\]
For every \(K_{i}\), we can choose a central configuration \(b^{K_{i}}\) which is minimal for that particular cluster and we can define the configuration
\[b=(b^{K_{1}},b^{K_{2}},...)\in\mathcal{X}.\]
Using this particular definition of \(b\), we can then look for solutions of the form
\[x(t)=\varphi(t)+at+\beta bt^{2/3}-a-\beta b+x^{0}=\varphi(t)+at+\beta bt^{2/3}+\tilde{x}^{0}. \tag{5.1}\]
Here, \(\beta\) is a real vector with as many components as the number of clusters. Precisely, we have
\[\beta=(\beta_{K_{1}},\beta_{K_{2}},...),\]
with
\[\beta_{K_{1}}=\sqrt[3]{\frac{9}{2}U_{min}^{K_{1}}}\]
and \(U_{min}^{K_{1}}\) denotes the minimum of the potential \(U\) restricted to the first cluster;
\[\beta_{K_{2}}=\sqrt[3]{\frac{9}{2}U_{min}^{K_{2}}}\]
and \(U^{K_{2}}_{min}\) denotes the minimum of the potential \(U\) restricted to the second cluster, and so on. With an abuse of notation, in this section we write \(\beta b\) to denote the configuration \((\beta_{K_{1}}b^{K_{1}},\beta_{K_{2}}b^{K_{2}},...)\in\mathcal{X}\).
Using the aforementioned partition of the bodies, it is possible to decompose the Lagrangian action of a curve: for every \(\varphi\in\mathcal{D}^{1,2}_{0}(1,+\infty)\), we define
\[\begin{split}\mathcal{A}(\varphi)&:=\sum_{K\in \mathcal{P}}\mathcal{A}_{K}(\varphi)+\sum_{K_{1},K_{2}\in\mathcal{P},\ K_{1} \neq K_{2}}\mathcal{A}_{K_{1},K_{2}}(\varphi)\\ &=\sum_{K\in\mathcal{P}}\bigg{(}\sum_{i,j\in K,\ i<j}\mathcal{A} ^{ij}_{K}(\varphi)\bigg{)}+\frac{1}{2}\sum_{K_{1},K_{2}\in\mathcal{P},\ K_{1} \neq K_{2}}\bigg{(}\sum_{i\in K_{1},\ j\in K_{2}}\mathcal{A}^{ij}_{K_{1},K_{2 }}(\varphi)\bigg{)},\end{split} \tag{5.2}\]
where
\[\begin{split}\mathcal{A}^{ij}_{K}(\varphi):=&\int_{ 1}^{+\infty}\frac{1}{2M_{K}}m_{i}m_{j}|\dot{\varphi}_{i}(t)-\dot{\varphi}_{j}( t)|^{2}+\frac{m_{i}m_{j}}{|\varphi_{ij}(t)+a_{ij}t+\beta_{K}b^{K}_{ij}t^{2/3}+ \tilde{x}^{0}_{ij}|}-\frac{m_{i}m_{j}}{|\beta_{K}b^{K}_{ij}t^{2/3}|}\\ &+\frac{2}{9}\frac{\beta_{K}}{M_{K}}m_{i}m_{j}\frac{\langle b^{K }_{ij},\varphi_{ij}(t)\rangle}{t^{4/3}}\ \mathrm{d}t,\end{split} \tag{5.3}\]
\[\mathcal{A}^{ij}_{K_{1},K_{2}}(\varphi):= \int_{1}^{+\infty}\frac{1}{2M_{K_{1,2}}}m_{i}m_{j}|\dot{\varphi}_{i}(t)-\dot{\varphi}_{j}(t)|^{2}+\frac{m_{i}m_{j}}{|\varphi_{ij}(t)+a_{ij}t+\beta_{K_{1,2}}b^{K_{1,2}}_{ij}t^{2/3}+\tilde{x}^{0}_{ij}|}-\frac{m_{i}m_{j}}{|a_{ij}t|}\ \mathrm{d}t. \tag{5.4}\]
Here, we used the notations:
\[b^{K_{1,2}}=(b^{K_{1}},b^{K_{2}}),\qquad\beta_{K_{1,2}}b^{K_{1,2}}=(\beta_{K_{1}}b^{K_{1}},\beta_{K_{2}}b^{K_{2}}).\]
We point out that the term (5.3) is the part of the Lagrangian action that refers to the (parabolic) motion of the bodies inside each cluster, while the term (5.4) refers to the (linear) relative motion of the clusters. In the following sections, we will study the two terms separately, in order to apply the Direct Method.
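The kinetic parts of (5.3) and (5.4) rest on the classical Lagrange identity
\[\frac{1}{M_{K}}\sum_{i,j\in K,\ i<j}m_{i}m_{j}|v_{i}-v_{j}|^{2}=\sum_{i\in K}m_{i}|v_{i}|^{2}-\frac{1}{M_{K}}\bigg{|}\sum_{i\in K}m_{i}v_{i}\bigg{|}^{2},\]
that is, twice the kinetic energy of the cluster relative to its center of mass. A quick numerical check (random masses and velocities, chosen only for illustration):

```python
# Sketch: Lagrange identity behind the kinetic terms of (5.3) and (5.4).
import numpy as np

rng = np.random.default_rng(1)
m = rng.uniform(1.0, 3.0, size=4)            # hypothetical cluster masses
v = rng.standard_normal((4, 2))              # hypothetical velocities
MK = m.sum()

lhs = sum(m[i] * m[j] * np.sum((v[i] - v[j]) ** 2)
          for i in range(4) for j in range(i + 1, 4)) / MK
rhs = np.sum(m * np.sum(v ** 2, axis=1)) - np.sum((m[:, None] * v).sum(0) ** 2) / MK
print(np.isclose(lhs, rhs))                  # True
```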
### Coercivity of \(\mathcal{A}(\varphi)\)
We start with the proof of the coercivity of the Lagrangian action when restricted to a general cluster, where we denote by \(K\) the set of indexes related to this cluster. Because of the natural cluster partition of the bodies, we have \(a_{i}=a_{j}\) for any \(i,j\in K\). This means that for any \(\varphi\in\mathcal{D}^{1,2}_{0}(1,+\infty)\),
\[\mathcal{A}_{K}(\varphi)=\sum_{i,j\in K,\ i<j} \int_{1}^{+\infty}\frac{1}{2M_{K}}m_{i}m_{j}|\dot{\varphi}_{i}(t) -\dot{\varphi}_{j}(t)|^{2}+\frac{m_{i}m_{j}}{|\varphi_{ij}(t)+\beta_{K}b^{K}_ {ij}t^{2/3}+\tilde{x}^{0}_{ij}|}-\frac{m_{i}m_{j}}{|\beta_{K}b^{K}_{ij}t^{2/3}|}\] \[+\frac{2}{9}\frac{\beta_{K}}{M_{K}}m_{i}m_{j}\frac{\langle b^{K}_ {ij},\varphi_{ij}(t)\rangle}{t^{4/3}}\ \mathrm{d}t.\]
Using the homogeneity of the potential and denoting by \(U_{K}\) the potential \(U\) when restricted to the cluster \(K\), we apply the inequality
\[U_{K}(x)\geq\frac{U_{K}(b^{K})}{\|x\|_{\mathcal{M}}}=\frac{U_{min}}{\|x\|_{ \mathcal{M}}}\]
to every configuration \(x\) restricted to the cluster \(K\). It follows
\[\mathcal{A}_{K}(\varphi)\geq \int_{1}^{+\infty}\sum_{i,j\in K,\ i<j}\bigg{(}\frac{1}{2M_{K}}m_{i}m_{j}|\dot{\varphi}_{i}(t)-\dot{\varphi}_{j}(t)|^{2}\bigg{)}+\frac{U_{min}}{\|\varphi(t)+\beta_{K}b^{K}t^{2/3}+\tilde{x}^{0}\|_{\mathcal{M}}}-\frac{U_{min}}{\|\beta_{K}b^{K}t^{2/3}\|_{\mathcal{M}}}\] \[+\frac{2}{9}\frac{\beta_{K}}{M_{K}}\frac{\langle\mathcal{M}_{K}b^{K},\varphi(t)\rangle}{t^{4/3}}\ \mathrm{d}t,\]
where \(\mathcal{M}_{K}\) denotes the matrix of the masses of the cluster \(K\). Using the inequality
\[\frac{1}{\|\varphi(t)+\beta_{K}b^{K}t^{2/3}+\tilde{x}^{0}\|_{\mathcal{M}}}\geq \frac{1}{\|\beta_{K}b^{K}t^{2/3}\|_{\mathcal{M}}}-\frac{1}{2\|\beta_{K}b^{K}\|_{\mathcal{M}}^{3}t^{2}}(2t^{2/3}\beta_{K}\langle\mathcal{M}_{K}b^{K},\varphi(t)\rangle\] \[+2\langle\mathcal{M}_{K}\varphi(t),\tilde{x}^{0}\rangle+2t^{2/3}\beta_{K}\langle\mathcal{M}_{K}b^{K},\tilde{x}^{0}\rangle+\|\varphi(t)\|_{\mathcal{M}}^{2}+\|\tilde{x}^{0}\|_{\mathcal{M}}^{2}),\]
which holds because of the convexity of the norm, we obtain
\[\mathcal{A}_{K}(\varphi) \geq\int_{1}^{+\infty}\sum_{i,j\in K,\ i<j}\frac{1}{2M_{K}}m_{i}m_{j}|\dot{\varphi}_{ij}(t)|^{2}+\frac{2}{9}\frac{\beta_{K}}{M_{K}}\frac{\langle\mathcal{M}_{K}b^{K},\varphi(t)\rangle}{t^{4/3}}\] \[-\frac{U_{min}}{2\beta_{K}^{3}\|b^{K}\|_{\mathcal{M}}^{3}t^{2}}(2t^{2/3}\beta_{K}\langle\mathcal{M}_{K}b^{K},\varphi(t)\rangle+2\langle\mathcal{M}_{K}\varphi(t),\tilde{x}^{0}\rangle+2t^{2/3}\beta_{K}\langle\mathcal{M}_{K}b^{K},\tilde{x}^{0}\rangle+\|\varphi(t)\|_{\mathcal{M}}^{2}+\|\tilde{x}^{0}\|_{\mathcal{M}}^{2})\ \mathrm{d}t\] \[=\int_{1}^{+\infty}\frac{1}{2}\|\dot{\varphi}(t)\|_{\mathcal{M}}^{2}\] \[-\frac{U_{min}}{2\beta_{K}^{3}\|b^{K}\|_{\mathcal{M}}^{3}t^{2}}(2\langle\mathcal{M}_{K}\varphi(t),\tilde{x}^{0}\rangle+2t^{2/3}\beta_{K}\langle\mathcal{M}_{K}b^{K},\tilde{x}^{0}\rangle+\|\varphi(t)\|_{\mathcal{M}}^{2}+\|\tilde{x}^{0}\|_{\mathcal{M}}^{2})\ \mathrm{d}t\]
We notice that the term
\[C_{13}:=\int_{1}^{+\infty}-\frac{U_{min}}{2\beta_{K}^{3}\|b^{K}\|_{\mathcal{M}}^{3}t^{2}}(2t^{2/3}\beta_{K}\langle\mathcal{M}_{K}b^{K},\tilde{x}^{0}\rangle+\|\tilde{x}^{0}\|_{\mathcal{M}}^{2})\ \mathrm{d}t\]
is constant and finite. Using Hardy and Cauchy-Schwartz inequalities we also have
\[-\int_{1}^{+\infty}\frac{U_{min}}{\beta_{K}^{3}\|b^{K}\|_{ \mathcal{M}}^{3}t^{2}}\langle\mathcal{M}_{K}\varphi(t),\tilde{x}^{0}\rangle \ \mathrm{d}t =-\frac{2}{9}\int_{1}^{+\infty}\frac{1}{t^{2}}\langle\mathcal{M}_{K }\varphi(t),\tilde{x}^{0}\rangle\ \mathrm{d}t\] \[\geq-\frac{2}{9}\bigg{(}\int_{1}^{+\infty}\frac{\|\varphi(t)\|_{ \mathcal{M}}^{2}}{t^{2}}\ \mathrm{d}t\bigg{)}^{1/2}\bigg{(}\int_{1}^{+\infty}\frac{\|\tilde{x}^{0}\|_{ \mathcal{M}}^{2}}{t^{2}}\ \mathrm{d}t\bigg{)}^{1/2}\] \[\geq-C_{14}\|\varphi\|_{\mathcal{D}},\]
where \(C_{14}:=\frac{8}{9}\big{(}\int_{1}^{+\infty}\frac{\|\tilde{x}^{0}\|_{\mathcal{M}}^{2}}{t^{2}}\ \mathrm{d}t\big{)}^{1/2}<+\infty\). Again by Hardy inequality, we obtain
\[\mathcal{A}_{K}(\varphi)\geq\frac{1}{18}\|\varphi\|_{\mathcal{D}}^{2}-C_{14} \|\varphi\|_{\mathcal{D}}+C_{13},\]
which implies that the functional \(\mathcal{A}_{K}\) is coercive.
We now focus on studying the terms
\[\mathcal{A}_{K_{1},K_{2}}(\varphi):=\sum_{i\in K_{1},\ j\in K_{2}}\int_{1}^{+ \infty}\frac{1}{2M_{K_{1,2}}}m_{i}m_{j}|\dot{\varphi}_{i}(t)-\dot{\varphi}_{j} (t)|^{2}+\frac{m_{i}m_{j}}{|\varphi_{ij}(t)+a_{ij}t+\beta_{K_{1,2}}b_{ij}^{K_{ 1,2}}t^{2/3}+\tilde{x}_{ij}^{0}|}-\frac{m_{i}m_{j}}{|a_{ij}t|}\ \mathrm{d}t.\]
**Remark 5.4**.: We notice that if two bodies of the configuration \(b^{K_{1,2}}\) belong to different clusters and have collisions, that is, if there are \(i\in K_{1}\) and \(j\in K_{2}\) such that \(b_{i}^{K_{1,2}}=b_{j}^{K_{1,2}}\), then the functional reads
\[\mathcal{A}_{K_{1},K_{2}}(\varphi)=\sum_{i\in K_{1},\ j\in K_{2}}\int_{1}^{+ \infty}\frac{1}{2M_{K_{1,2}}}m_{i}m_{j}|\dot{\varphi}_{i}(t)-\dot{\varphi}_{j} (t)|^{2}+\frac{m_{i}m_{j}}{|\varphi_{ij}(t)+a_{ij}t+\tilde{x}_{ij}^{0}|}-\frac{ m_{i}m_{j}}{|a_{ij}t|}\ \mathrm{d}t.\]
Since \(a_{i}\neq a_{j}\) when \(i\in K_{1}\), \(j\in K_{2}\) and \(K_{1}\neq K_{2}\), we have already proved that in this case the action functional \(\mathcal{A}\) is coercive.
Assuming that \(b^{K_{1,2}}\) is free of collisions, we proceed in the following way. By the triangle inequality, we have
\[\int_{1}^{+\infty}\frac{1}{|\varphi_{ij}(t)+a_{ij}t+\beta_{K_{1,2}}b _{ij}^{K_{1,2}}t^{2/3}+\bar{x}_{ij}^{0}|}-\frac{1}{|a_{ij}t|}\;\mathrm{d}t\] \[\geq\int_{1}^{+\infty}\frac{1}{\|\varphi_{ij}\|_{\mathcal{D}}t^{ 1/2}+|a_{ij}|t+\beta_{K_{1,2}}|b_{ij}^{K_{1,2}}|t^{2/3}+|\bar{x}_{ij}^{0}|}- \frac{1}{|a_{ij}|t}\;\mathrm{d}t.\]
Using the changes of variables \(t=s^{2}\) and then \(s=\|\varphi\|_{\mathcal{D}}u\), we obtain
\[\int_{1}^{+\infty}\frac{1}{\|\varphi_{ij}\|_{\mathcal{D}}t^{1/2}+|a_{ij}|t+\beta_{K_{1,2}}|b_{ij}^{K_{1,2}}|t^{2/3}+|\bar{x}_{ij}^{0}|}-\frac{1}{|a_{ij}|t}\;\mathrm{d}t\] \[=2\int_{1}^{+\infty}\bigg{(}\frac{1}{\|\varphi_{ij}\|_{\mathcal{D}}s+|a_{ij}|s^{2}+\beta_{K_{1,2}}|b_{ij}^{K_{1,2}}|s^{4/3}+|\bar{x}_{ij}^{0}|}-\frac{1}{|a_{ij}|s^{2}}\bigg{)}s\;\mathrm{d}s\] \[=\frac{2}{|a_{ij}|}\int_{1/\|\varphi\|_{\mathcal{D}}}^{+\infty}\bigg{(}\frac{1}{u^{2}+\frac{\beta_{K_{1,2}}|b_{ij}^{K_{1,2}}|}{|a_{ij}|}\frac{u^{4/3}}{\|\varphi\|_{\mathcal{D}}^{2/3}}+\frac{u}{|a_{ij}|}+\frac{|\bar{x}_{ij}^{0}|}{|a_{ij}|\|\varphi\|_{\mathcal{D}}^{2}}}-\frac{1}{u^{2}}\bigg{)}u\;\mathrm{d}u.\]
We can observe that, for \(\|\varphi\|_{\mathcal{D}}\) large enough, we have \(\frac{\beta_{K_{1,2}}|b_{ij}^{K_{1,2}}|}{|a_{ij}|\,\|\varphi\|_{\mathcal{D}}^{2/3}}\leq 1\) and \(\frac{|\bar{x}_{ij}^{0}|}{|a_{ij}|\,\|\varphi\|_{\mathcal{D}}^{2}}\leq 1\). So, for \(\|\varphi\|_{\mathcal{D}}\to+\infty\), it follows
\[\frac{2}{|a_{ij}|}\int_{1/\|\varphi\|_{\mathcal{D}}}^{+\infty}\bigg{(}\frac{1}{u^{2}+\frac{\beta_{K_{1,2}}|b_{ij}^{K_{1,2}}|}{|a_{ij}|}\frac{u^{4/3}}{\|\varphi\|_{\mathcal{D}}^{2/3}}+\frac{u}{|a_{ij}|}+\frac{|\bar{x}_{ij}^{0}|}{|a_{ij}|\|\varphi\|_{\mathcal{D}}^{2}}}-\frac{1}{u^{2}}\bigg{)}u\;\mathrm{d}u\] \[\geq\frac{2}{|a_{ij}|}\int_{1/\|\varphi\|_{\mathcal{D}}}^{+\infty}\bigg{(}\frac{1}{u^{2}+u^{4/3}+\frac{u}{|a_{ij}|}+1}-\frac{1}{u^{2}}\bigg{)}u\;\mathrm{d}u\] \[=\frac{2}{|a_{ij}|}\int_{1/\|\varphi\|_{\mathcal{D}}}^{+\infty}\frac{1}{u}\bigg{(}\frac{1}{1+u^{-2/3}+\frac{u^{-1}}{|a_{ij}|}+u^{-2}}-1\bigg{)}\;\mathrm{d}u.\]
Since \(1/\|\varphi\|_{\mathcal{D}}\leq 1\) for \(\|\varphi\|_{\mathcal{D}}\) large, we can study the integral separately on the intervals \([1/\|\varphi\|_{\mathcal{D}},1]\) and \([1,+\infty)\). On the second interval, the integral converges to a finite constant, say \(C_{15}\). On the other interval, we have
\[\frac{2}{|a_{ij}|}\int_{1/\|\varphi\|_{\mathcal{D}}}^{1}\frac{1}{u}\bigg{(} \frac{1}{1+u^{-2/3}+\frac{u^{-1}}{|a_{ij}|}+u^{-1}}-1\bigg{)}\;\mathrm{d}u\geq \frac{2}{|a_{ij}|}\int_{1/\|\varphi\|_{\mathcal{D}}}^{1}-\frac{\mathrm{d}u}{u}.\]
We have thus demonstrated that
\[\int_{1}^{+\infty}\frac{1}{|\varphi_{ij}(t)+a_{ij}t+\beta_{K_{1,2}}b_{ij}^{K_{1, 2}}t^{2/3}+\bar{x}_{ij}^{0}|}-\frac{1}{|a_{ij}t|}\;\mathrm{d}t\geq\frac{2}{|a _{ij}|}\log\frac{1}{\|\varphi\|_{\mathcal{D}}}+C_{15}=-\frac{2}{|a_{ij}|}\log \|\varphi\|_{\mathcal{D}}+C_{15},\]
which concludes the proof of the coercivity of the Lagrangian action.
### Weak lower semicontinuity of \(\mathcal{A}(\varphi)\)
In order to prove the weak lower semicontinuity of the Lagrangian action, we can use the decomposition (5.2) and study the weak lower semicontinuity of the terms \(\mathcal{A}_{K}\) and \(\mathcal{A}_{K_{1},K_{2}}\) separately, given arbitrary clusters \(K,K_{1},K_{2}\in\mathcal{P}\).
Concerning the term \(\mathcal{A}_{K}\), we can refer to Section 4, since our choice of \(\beta_{K}b^{K}\) leads us to the same computations.
For the proof of the weak lower semicontinuity of the terms \(\mathcal{A}_{K_{1},K_{2}}\), let us consider a sequence \((\varphi^{n})_{n}\subset\mathcal{D}^{1,2}_{0}(1,+\infty)\) converging weakly in \(\mathcal{D}^{1,2}_{0}(1,+\infty)\) to some \(\varphi\), as \(n\to+\infty\). It follows that there is a constant \(k\in\mathbb{R}\) such that \(\|\varphi^{n}\|_{\mathcal{D}}\leq k\) and \(\|\varphi\|_{\mathcal{D}}\leq k\) for every \(n\in\mathbb{N}\). We would like to use the inequality
\[\frac{1}{|\varphi^{n}_{ij}(t)+a_{ij}t+\beta_{K_{1,2}}b^{K_{1,2}}_{ij}t^{2/3}+ \tilde{x}^{0}_{ij}|}-\frac{1}{|a_{ij}t|}=\int_{0}^{1}\frac{\mathrm{d}}{\mathrm{ d}s}\bigg{[}\frac{1}{|a_{ij}t+s(\varphi^{n}_{ij}(t)+\beta_{K_{1,2}}b^{K_{1,2}}_{ij}t^{ 2/3}+\tilde{x}^{0}_{ij})|}\bigg{]}\;\mathrm{d}s, \tag{5.5}\]
which holds true when the denominator of the integrand is not zero. For all \(s\in(0,1)\) we have
\[|a_{ij}t+s(\varphi^{n}_{ij}(t)+\beta_{K_{1,2}}b^{K_{1,2}}_{ij}t^{ 2/3}+\tilde{x}^{0}_{ij})| \geq|a_{ij}|t-s(\|\varphi^{n}_{ij}\|_{\mathcal{D}}t^{1/2}+|\beta_{ K_{1,2}}b^{K_{1,2}}_{ij}|t^{2/3}+|\tilde{x}^{0}_{ij}|)\] \[>|a_{ij}|t-(\|\varphi^{n}_{ij}\|_{\mathcal{D}}t^{1/2}+|\beta_{K_{ 1,2}}b^{K_{1,2}}_{ij}|t^{2/3}+|\tilde{x}^{0}_{ij}|),\]
and since \(\|\varphi^{n}_{ij}\|_{\mathcal{D}}\leq k\), we have
\[|a_{ij}t+s(\varphi^{n}_{ij}(t)+\beta_{K_{1,2}}b^{K_{1,2}}_{ij}t^{2/3}+\tilde{ x}^{0}_{ij})|>|a_{ij}|t-(kt^{1/2}+|\beta_{K_{1,2}}b^{K_{1,2}}_{ij}|t^{2/3}+|\tilde{x}^{0 }_{ij}|),\]
where the last term is larger than zero if \(t\geq\bar{T}=\bar{T}(k)\), for a suitable \(\bar{T}\). We can thus study the weak lower semicontinuity of the potential term separately on the two intervals \([1,\bar{T}]\) and \([\bar{T},+\infty)\).
On \([1,\bar{T}]\), the weak lower semicontinuity easily follows from Fatou's Lemma. On \([\bar{T},+\infty)\), we can use (5.5):
\[\int_{\bar{T}}^{+\infty}\frac{1}{|\varphi^{n}_{ij}(t)+a_{ij}t+\beta_{K_{1,2}}b^{K_{1,2}}_{ij}t^{2/3}+\tilde{x}^{0}_{ij}|}-\frac{1}{|a_{ij}t|}\;\mathrm{d}t\] \[=\int_{\bar{T}}^{+\infty}\bigg{(}\int_{0}^{1}-\frac{\langle a_{ij}t+s(\varphi^{n}_{ij}(t)+\beta_{K_{1,2}}b^{K_{1,2}}_{ij}t^{2/3}+\tilde{x}^{0}_{ij}),\varphi^{n}_{ij}(t)+\beta_{K_{1,2}}b^{K_{1,2}}_{ij}t^{2/3}+\tilde{x}^{0}_{ij}\rangle}{|a_{ij}t+s(\varphi^{n}_{ij}(t)+\beta_{K_{1,2}}b^{K_{1,2}}_{ij}t^{2/3}+\tilde{x}^{0}_{ij})|^{3}}\;\mathrm{d}s\bigg{)}\;\mathrm{d}t.\]
Using (3.11), we then have
\[\int_{\bar{T}}^{+\infty}\bigg{|}\frac{1}{|\varphi^{n}_{ij}(t)+a_{ij}t+\beta_{K_{1,2}}b^{K_{1,2}}_{ij}t^{2/3}+\tilde{x}^{0}_{ij}|}-\frac{1}{|a_{ij}t|}\bigg{|}\;\mathrm{d}t\] \[\leq\int_{\bar{T}}^{+\infty}\bigg{(}\int_{0}^{1}\frac{|\varphi^{n}_{ij}(t)+\beta_{K_{1,2}}b^{K_{1,2}}_{ij}t^{2/3}+\tilde{x}^{0}_{ij}|}{|a_{ij}t+s(\varphi^{n}_{ij}(t)+\beta_{K_{1,2}}b^{K_{1,2}}_{ij}t^{2/3}+\tilde{x}^{0}_{ij})|^{2}}\;\mathrm{d}s\bigg{)}\;\mathrm{d}t\] \[\leq\int_{\bar{T}}^{+\infty}\bigg{(}\int_{0}^{1}\frac{3(kt^{1/2}+|\beta_{K_{1,2}}b_{ij}^{K_{1,2}}|t^{2/3}+|\tilde{x}_{ij}^{0}|)}{|a_{ij}|^{2}t^{2}-s(kt^{1/2}+|\beta_{K_{1,2}}b_{ij}^{K_{1,2}}|t^{2/3}+|\tilde{x}_{ij}^{0}|)^{2}}\;\mathrm{d}s\bigg{)}\;\mathrm{d}t\] \[\leq\int_{\bar{T}}^{+\infty}\bigg{(}\int_{0}^{1}\frac{3k^{\prime}t^{2/3}}{|a_{ij}|^{2}t^{2}-sk^{\prime}t^{4/3}}\;\mathrm{d}s\bigg{)}\;\mathrm{d}t,\]
where \(k^{\prime}\in\mathbb{R}\) is large enough so that \(kt^{1/2}+|\beta_{K_{1,2}}b^{K_{1,2}}_{ij}|t^{2/3}+|\tilde{x}^{0}_{ij}|\leq\sqrt{k^{\prime}}t^{2/3}\). The denominator of the last integral is positive when
\[t>\bigg{(}\frac{k^{\prime}}{|a_{ij}|^{2}}\bigg{)}^{3/2}=:\hat{T}.\]
If we choose \(\bar{T}(k)\gg\hat{T}\), the last integral is finite, which means that
\[\int_{\bar{T}}^{+\infty}\bigg{|}\frac{1}{|\varphi^{n}_{ij}(t)+a_{ij}t+\beta_{K _{1,2}}b^{K_{1,2}}_{ij}t^{2/3}+\tilde{x}^{0}_{ij}|}-\frac{1}{|a_{ij}t|}\bigg{|} \;\mathrm{d}t<+\infty.\]
This gives an \(L^{1}\)-domination of the potential term, and its weak lower semicontinuity follows from the Dominated Convergence Theorem.
### The action is of class \(C^{1}\) over non-collision sets
The last thing we have to prove is that the action is of class \(C^{1}\) over sets of motions that do not undergo collisions. We have already proved this result for the terms \(\mathcal{A}_{K}\), so we can focus on the terms \(\mathcal{A}_{K_{1},K_{2}}\). In particular, denoting by \(\mathcal{A}_{K_{1},K_{2}}^{2}\) the potential term, we wish to prove that the differential
\[\mathrm{d}\mathcal{A}_{K_{1},K_{2}}^{2}(\varphi)[\psi]=\int_{1}^{+\infty}\langle\nabla U(\varphi(t)+at+\beta_{K_{1,2}}b^{K_{1,2}}t^{2/3}+\check{x}^{0}),\psi(t)\rangle\ \mathrm{d}t\]
is continuous, for every \(\varphi,\psi\in\mathcal{D}_{0}^{1,2}(1,+\infty)\), over the set of non-collisional configurations when the potential \(U\) is restricted to the clusters \(K_{1}\) and \(K_{2}\).
First of all, we have
\[\|\nabla U(\varphi(t)+at+\beta_{K_{1,2}}b^{K_{1,2}}t^{2/3}+\check{x}^{0})\|_{\mathcal{M}}\leq C_{16}\sum_{i\in K_{1},\ j\in K_{2}}\frac{1}{|\varphi_{ij}(t)+a_{ij}t+\beta_{K_{1,2}}b_{ij}^{K_{1,2}}t^{2/3}+\check{x}_{ij}^{0}|^{2}}\]
for a proper constant \(C_{16}\), where the right-hand side term behaves like \(1/t^{2}\) when \(t\to+\infty\). This, together with the Cauchy-Schwartz inequality, proves that the differential is well-defined.
Now, given \((\varphi^{n})_{n}\subset\mathcal{D}_{0}^{1,2}(1,+\infty)\) such that \(\varphi^{n}\to\varphi\) in \(\mathcal{D}_{0}^{1,2}(1,+\infty)\) for some \(\varphi\), we wish to prove that
\[\sup_{\|\psi\|_{\mathcal{D}}\leq 1}\bigg{|}\int_{1}^{+\infty}\langle\nabla U(t, \varphi^{n}(t))-\nabla U(t,\varphi(t)),\psi(t)\rangle\ \mathrm{d}t\bigg{|}\to 0,\qquad \text{as }n\to+\infty,\]
where we write \(U(t,\varphi(t)):=U(\varphi(t)+at+\beta_{K_{1,2}}b^{K_{1,2}}t^{2/3}+\check{x}^{0})\) to lighten the notation. Using Cauchy-Schwartz and Hardy inequalities, we have
\[\sup_{\|\psi\|_{\mathcal{D}}\leq 1}\bigg{|}\int_{1}^{+\infty} \langle\nabla U(t,\varphi^{n}(t))-\nabla U(t,\varphi(t)),\psi(t)\rangle\ \mathrm{d}t\bigg{|}\] \[\leq\sup_{\|\psi\|_{\mathcal{D}}\leq 1}\int_{1}^{+\infty}t\| \nabla U(t,\varphi^{n}(t))-\nabla U(t,\varphi(t))\|_{\mathcal{M}}\frac{\|\psi (t)\|_{\mathcal{M}}}{t}\ \mathrm{d}t\] \[\leq 2\bigg{(}\int_{1}^{+\infty}t^{2}\|\nabla U(t,\varphi^{n}(t))- \nabla U(t,\varphi(t))\|_{\mathcal{M}}^{2}\ \mathrm{d}t\bigg{)}^{1/2}.\]
Now, we can write
\[\int_{1}^{+\infty}t^{2}\|\nabla U(t,\varphi^{n}(t))-\nabla U(t,\varphi(t))\|_{\mathcal{M}}^{2}\ \mathrm{d}t\] \[=\int_{1}^{+\infty}t^{2}\bigg{\|}\int_{0}^{1}\nabla^{2}U(\varphi(t)+at+\beta_{K_{1,2}}b^{K_{1,2}}t^{2/3}+\check{x}^{0}+s(\varphi^{n}(t)-\varphi(t)))(\varphi^{n}(t)-\varphi(t))\ \mathrm{d}s\bigg{\|}_{\mathcal{M}}^{2}\ \mathrm{d}t\] \[\leq\int_{1}^{+\infty}t^{2}\bigg{(}\int_{0}^{1}C_{16}\sum_{i\in K_{1},\ j\in K_{2}}\frac{\|\varphi^{n}(t)-\varphi(t)\|_{\mathcal{M}}}{|\varphi_{ij}(t)+a_{ij}t+\beta_{K_{1,2}}b_{ij}^{K_{1,2}}t^{2/3}+\check{x}_{ij}^{0}+s(\varphi_{ij}^{n}(t)-\varphi_{ij}(t))|^{3}}\ \mathrm{d}s\bigg{)}^{2}\ \mathrm{d}t\] \[\leq\int_{1}^{+\infty}\bigg{(}\int_{0}^{1}C_{16}\sum_{i\in K_{1},\ j\in K_{2}}\frac{\|\varphi^{n}-\varphi\|_{\mathcal{D}}\,t^{3/2}}{|\varphi_{ij}(t)+a_{ij}t+\beta_{K_{1,2}}b_{ij}^{K_{1,2}}t^{2/3}+\check{x}_{ij}^{0}+s(\varphi_{ij}^{n}(t)-\varphi_{ij}(t))|^{3}}\ \mathrm{d}s\bigg{)}^{2}\ \mathrm{d}t\] \[\leq C_{17}\|\varphi^{n}-\varphi\|_{\mathcal{D}}^{2}\]
for a proper constant \(C_{17}\in\mathbb{R}\), where the last term goes to zero as \(n\to+\infty\). This concludes the proof.
### Absence of collisions and partial hyperbolicity of the motion
Again, Marchal's Theorem implies that the motion we are considering has no collisions. Given
\[x(t)=\varphi(t)+at+\beta bt^{2/3}+\tilde{x}^{0},\]
we have
\[\dot{x}(t)=\dot{\varphi}(t)+a+\frac{2}{3}\beta bt^{-1/3}.\]
In this case, arguing as in Remark 3.3, the conservation of energy implies that the energy of the motion is \(h=\frac{1}{2}\|a\|_{\mathcal{M}}^{2}>0\).
**Remark 5.5**.: We observe that Chazy's Theorem can be applied to the cases of hyperbolic and hyperbolic-parabolic motions, because for completely parabolic motions the energy constant of the internal motion is null. In such cases, the limit shape of \(x(t)\) is the shape of the configuration \(a\) and, moreover, \(L=\lim_{t\to+\infty}\frac{\max_{i<j}|x_{ij}(t)|}{\min_{i<j}|x_{ij}(t)|}<+\infty\) if and only if \(x\) is hyperbolic. If the energy \(h>0\) and \(L=+\infty\), then either the motion is hyperbolic-parabolic or it is not expansive.
In our case, it is trivial to prove that \(L=+\infty\), since mutual distances inside a cluster grow like \(t^{2/3}\) while distances between bodies of different clusters grow linearly; this implies that the motion is hyperbolic-parabolic.
**Remark 5.6**.: We can observe that if the indexes \(i,j\) belong to the same cluster, we have \(\dot{x}_{ij}(t)\to 0\) when \(t\to+\infty\), while if \(i,j\) belong to different clusters, we have \(\dot{x}_{ij}(t)\to a_{ij}\) when \(t\to+\infty\).
### Hyperbolic-parabolic motions' asymptotic expansion
We have seen that a hyperbolic-parabolic motion \(x\) can be written in the form \(x(t)=at+\beta bt^{2/3}+\varphi(t)+\tilde{x}^{0}\), as shown in (5.1), and that the bodies can be divided into subgroups following the natural cluster partition introduced in Definition 5.1. In this section, we will prove that the centers of mass of the clusters follow hyperbolic orbits. Besides, we will show that inside each cluster, the bodies move with respect to the center of mass of the cluster following a parabolic path.
We start by proving that the center of mass of each cluster has a hyperbolic expansion. For a cluster \(K\), denoting the center of mass of \(K\) as
\[c^{K}(t)=\frac{1}{M_{K}}\sum_{i\in K}m_{i}x_{i}(t),\]
we can compute the equations of motion of the center of mass as
\[M_{K}\ddot{c}^{K}(t) =\sum_{i\in K}m_{i}\ddot{x}_{i}(t)\] \[=-\sum_{i\in K}\sum_{j\neq i}m_{i}m_{j}\frac{x_{i}(t)-x_{j}(t)}{|x_{i}(t)-x_{j}(t)|^{3}}\] \[=-\sum_{i\in K}\sum_{j\notin K}m_{i}m_{j}\frac{x_{i}(t)-x_{j}(t)}{|x_{i}(t)-x_{j}(t)|^{3}}.\]
It is easy to see that the right-hand side of the equation is a \(O\big{(}\frac{1}{t^{2}}\big{)}\)-term for \(t\to+\infty\). We also notice that
\[-\sum_{i\in K}\sum_{j\notin K}m_{i}m_{j}\frac{x_{i}(t)-x_{j}(t)}{|x_{i}(t)-x_ {j}(t)|^{3}}\simeq-\frac{1}{t^{2}}\sum_{i\in K}\sum_{j\notin K}m_{i}m_{j}\frac {a_{i}-a_{j}}{|a_{i}-a_{j}|^{3}}+O\bigg{(}\frac{1}{t^{3}}\bigg{)},\]
for \(t\to+\infty\). We can define
\[\tilde{\nabla}U(a^{K})=-\sum_{i\in K}\sum_{j\notin K}m_{i}m_{j}\frac{a_{i}-a_ {j}}{|a_{i}-a_{j}|^{3}},\]
which can be seen as a restriction of \(\nabla U(a)\) to the cluster \(K\). Denoting by \(a^{K}\) the restriction of the configuration \(a\) to the cluster \(K\), we can thus compute, by de l'Hôpital's rule,
\[\lim_{t\to+\infty}\frac{M_{K}(c^{K}(t)-a^{K}t)}{\log t}=\lim_{t\to+\infty}\frac{M_{K}(\dot{c}^{K}(t)-a^{K})}{\frac{1}{t}}=-\lim_{t\to+\infty}\frac{M_{K}\ddot{c}^{K}(t)}{\frac{1}{t^{2}}}=-\tilde{\nabla}U(a^{K}).\]
This implies that the center of mass of the cluster \(K\) has the hyperbolic asymptotic expansion
\[c^{K}(t)=a^{K}t-M_{K}^{-1}\tilde{\nabla}U(a^{K})\log t+o(\log t),\]
for \(t\to+\infty\).
Now, considering an index \(i\in K\), we denote the motion of a body \(x_{i}\) with respect to the center of mass of its cluster as
\[y_{i}(t)=x_{i}(t)-c^{K}(t).\]
We are going to show that its asymptotic expansion is a parabolic one.
If the cluster only has one element, we obviously have \(y_{i}\equiv 0\), so we consider the case where \(K\) has two or more elements. The equation of motion reads
\[m_{i}\ddot{y}_{i}(t) =m_{i}\ddot{x}_{i}(t)-m_{i}\ddot{c}^{K}(t)\] \[=-\sum_{j\in K,\ j\neq i}m_{i}m_{j}\frac{x_{i}(t)-x_{j}(t)}{|x_{i}(t)-x_{j}(t)|^{3}}-\sum_{j\notin K}m_{i}m_{j}\frac{x_{i}(t)-x_{j}(t)}{|x_{i}(t)-x_{j}(t)|^{3}}-m_{i}\ddot{c}^{K}(t).\]
Since we already know that \(-\sum_{j\notin K}m_{i}m_{j}\frac{x_{i}(t)-x_{j}(t)}{|x_{i}(t)-x_{j}(t)|^{3}}-m_{i}\ddot{c}^{K}(t)=O\big{(}\frac{1}{t^{2}}\big{)}\) for \(t\to+\infty\), we can then say that
\[m_{i}\ddot{y}_{i}(t)=-\sum_{j\in K,\ j\neq i}m_{i}m_{j}\frac{y_{i}(t)-y_{j}(t)}{|y_{i}(t)-y_{j}(t)|^{3}}+O\big{(}\frac{1}{t^{2}}\big{)}.\]
Using the definition of \(x(t)\) and the asymptotic expansion of \(c^{K}(t)\) we found above, we can easily see that
\[y_{i}(t)=\beta_{K}b_{i}^{K}t^{2/3}+\varphi_{i}(t)-\log t\sum_{j\notin K}m_{j}\frac{a_{i}-a_{j}}{|a_{i}-a_{j}|^{3}}+o(\log t),\]
for \(t\to+\infty\), where \(\beta_{K}=\sqrt[3]{\frac{9}{2}U_{min}^{K}}\). Defining \(\psi_{i}(t):=\varphi_{i}(t)-\log t\sum_{j\notin K}m_{j}\frac{a_{i}-a_{j}}{|a_{i}-a_{j}|^{3}}\), it is easy to prove that \(\psi_{i}\in\mathcal{D}^{1,2}(1,+\infty)\). We can then apply the estimate (4.9) to say that
\[y_{i}(t)=\beta_{K}b_{i}^{K}t^{2/3}+o(t^{\frac{1}{3}+}),\]
for \(t\to+\infty\).
## 6 Free-time minimization property
"Jacobi's principle brings out vividly the intimate relationship which exists between the motions of conservative holonomic systems and the geometry of curved space" (C. Lanczos, [16, page 138]). Accordingly, trajectories of the \(N\)-body problem at energy \(h\) are geodesics of the Jacobi-Maupertuis' metric of level \(h\) in the configuration space, i.e.,
\[\mathrm{d}\sigma^{2}=(U+h)\mathrm{d}s_{\mathcal{M}}^{2},\]
where \(\mathrm{d}s_{\mathcal{M}}^{2}\) is the mass Euclidean metric on the configuration space.
**Definition 6.1**.: A curve \(x:[1,+\infty)\to E^{N}\) is said to be a geodesic ray from \(p\in E^{N}\) if \(x(1)=p\) and each restriction to a compact interval is a minimizing geodesic.
In [20], Maderna and Venturelli also proved the following theorem.
**Theorem 6.2** (Maderna-Venturelli, 2020 [20]).: _Let \(E\) be a Euclidean space. For any \(h>0\), \(p\in E^{N}\) and \(a\in\Omega\), there is a geodesic ray of the Jacobi-Maupertuis' metric of level \(h\) with asymptotic direction \(a\) starting at \(p\)._
In order to relate geodesics of the Jacobi-Maupertuis' metric to the action minimizing trajectories of our Lagrangian systems we need the following definition.
**Definition 6.3**.: A curve \(\gamma:I\to\mathcal{X}\) is a free-time minimizer for the Lagrangian action at energy \(h\) if \(\forall\ [a,b],[a^{\prime},b^{\prime}]\subset I\) and \(\forall\ \sigma:[a^{\prime},b^{\prime}]\to\mathcal{X}\) such that \(\gamma(a)=\sigma(a^{\prime})\) and \(\gamma(b)=\sigma(b^{\prime})\), it holds
\[\int_{a}^{b}L(\gamma,\dot{\gamma})\ \mathrm{d}t+h(b-a)\leq\int_{a^{\prime}}^{b^{ \prime}}L(\sigma,\dot{\sigma})\ \mathrm{d}t+h(b^{\prime}-a^{\prime}).\]
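The equivalence between this variational property and the Jacobi-Maupertuis length rests on an elementary pointwise inequality; the following is a standard sketch of the key step, included for convenience. For every absolutely continuous \(\gamma:[a,b]\to\mathcal{X}\),
\[\int_{a}^{b}L(\gamma,\dot{\gamma})\ \mathrm{d}t+h(b-a)=\int_{a}^{b}\frac{1}{2}\|\dot{\gamma}\|^{2}_{\mathcal{M}}+U(\gamma)+h\ \mathrm{d}t\geq\int_{a}^{b}\sqrt{2\,(U(\gamma)+h)}\,\|\dot{\gamma}\|_{\mathcal{M}}\ \mathrm{d}t,\]
where the right-hand side is precisely the Jacobi-Maupertuis length of \(\gamma\) at level \(h\). Indeed, \(\frac{1}{2}p^{2}+q-\sqrt{2q}\,p=\frac{1}{2}\big{(}p-\sqrt{2q}\big{)}^{2}\geq 0\), with equality exactly along parametrizations of energy \(h\). Hence free-time minimizers at energy \(h\) coincide, up to reparametrization, with minimizing geodesics of \(\mathrm{d}\sigma^{2}\).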
In light of the equivalence between the variational property of being an unbounded free-time minimizer of the Lagrangian action at energy \(h\) and the geometrical property of being a geodesic ray for the Jacobi-Maupertuis metric at the same energy level (cf. [16, 2]), we show here that our existence results for expansive motions, obtained through minimization of the renormalized action, do indeed agree with Theorem 6.2. More precisely, we prove the following corollary.
**Corollary 6.4**.: _Consider an expansive motion \(x:[1,+\infty)\to\mathcal{X}\) of the Newtonian \(N\)-body problem of the form_
\[x(t)=r_{0}(t)+\varphi(t)+\tilde{x}^{0},\]
_where \(\varphi\in\mathcal{D}^{1,2}_{0}(1,+\infty)\) minimizes the renormalized action in (2.4) in any of the settings of Theorems 1.6, 1.7 and 1.8. Then \(x\) is actually a free-time minimizer at its energy level. Therefore it is a geodesic ray for the Jacobi-Maupertuis' metric._
Proof.: We consider a curve \(\gamma:[1,+\infty)\to\mathcal{X}\) of the form \(\gamma(t)=r_{0}(t)+\varphi(t)+\tilde{x}^{0}\) such that \(\varphi\) minimizes the renormalized Lagrangian action on \(\mathcal{D}^{1,2}_{0}(1,+\infty)\).
By contradiction, we suppose that there exist \(T,\bar{T}\geq 1\), \(\varepsilon>0\) and a curve \(\bar{\sigma}:[1,\bar{T}]\to\mathcal{X}\) with \(\bar{\sigma}(1)=\gamma(1)\) and \(\bar{\sigma}(\bar{T})=\gamma(T)\) such that
\[\int_{1}^{T}L(\gamma,\dot{\gamma})\ \mathrm{d}t+hT>\int_{1}^{\bar{T}}L(\bar{ \sigma},\dot{\bar{\sigma}})\ \mathrm{d}t+h\bar{T}+\varepsilon. \tag{6.1}\]
By a density and continuity argument, we can then define a compactly supported function \(\tilde{\varphi}\) such that \(\tilde{\varphi}(t)=\varphi(t)\) on \([1,\hat{T}]\), where \(\hat{T}\gg\max\{T,\bar{T}\}\), and \(\tilde{\varphi}\) is close enough to \(\varphi\) in the \(\mathcal{D}^{1,2}_{0}\)-norm to have
\[\mathcal{A}(\tilde{\varphi})\leq\mathcal{A}(\varphi)+\varepsilon,\]
where \(\mathcal{A}\) is the renormalized Lagrangian action. By the minimizing property of \(\varphi\) we infer
\[\mathcal{A}(\tilde{\varphi})\leq\mathcal{A}(\psi)+\varepsilon,\quad\forall \psi\in\mathcal{D}^{1,2}_{0}([1,+\infty)). \tag{6.2}\]
Now, denoting \(\tilde{\gamma}(t)=r_{0}(t)+\tilde{\varphi}(t)+\tilde{x}^{0}\), we build a curve \(\tilde{\sigma}:[1,+\infty)\to\mathcal{X}\) such that
\[\tilde{\sigma}(t)=\begin{cases}\bar{\sigma}(t),\quad t\in[1,\bar{T}]\\ \tilde{\gamma}(t-\bar{T}+T),\quad t\in[\bar{T},+\infty)\end{cases}.\]
Since we supposed that \(\gamma(T)=\bar{\sigma}(\bar{T})\), we know for sure that \(\tilde{\sigma}\) is continuous. Moreover, we define \(\bar{\varphi}(t)=\tilde{\sigma}(t)-r_{0}(t)-\tilde{x}^{0}\), so that \(\bar{\varphi}\in\mathcal{D}^{1,2}_{0}(1,+\infty)\) and, by its definition, we have
\[\bar{\varphi}(t)=r_{0}(t-\bar{T}+T)-r_{0}(t)=a(T-\bar{T})+o(1),\quad \forall t\gg\max\{T,\bar{T}\}, \tag{6.3}\]
as \(\tilde{\varphi}\) is compactly supported. We notice that we can write
\[\mathcal{A}(\tilde{\varphi})=\int_{1}^{+\infty}L(\tilde{\gamma},\dot{\tilde{\gamma}})-L_{0}(t)\ \mathrm{d}t,\]
which easily follows from the fact that \(L(\tilde{\gamma},\dot{\tilde{\gamma}})-L_{0}(t)\in L^{1}(1,+\infty)\) as well, and furthermore
\[\int_{1}^{+\infty}-\langle\mathcal{M}\ddot{r}_{0},\tilde{\varphi}\rangle\ \mathrm{d}t=-\langle\mathcal{M}\dot{r}_{0},\tilde{\varphi}\rangle\bigg{|}_{1}^{+\infty}+\int_{1}^{+\infty}\langle\mathcal{M}\dot{r}_{0},\dot{\tilde{\varphi}}\rangle\ \mathrm{d}t=\int_{1}^{+\infty}\langle\mathcal{M}\dot{r}_{0},\dot{\tilde{\varphi}}\rangle\ \mathrm{d}t.\]
On the other hand, from (6.3), using \(\dot{r}_{0}(t)=a+O(t^{-1/3})\), it follows that
\[\int_{1}^{+\infty}-\langle\mathcal{M}\ddot{r}_{0},\bar{\varphi} \rangle\ \mathrm{d}t=-\langle\mathcal{M}\dot{r}_{0},\bar{\varphi}\rangle\bigg{|}_{1}^{+\infty}+\int_{1}^{+\infty}\langle\mathcal{M}\dot{r}_{0},\dot{\bar{\varphi}} \rangle\ \mathrm{d}t=\langle\mathcal{M}a,a\rangle(\bar{T}-T)+\int_{1}^{+\infty} \langle\mathcal{M}\dot{r}_{0},\dot{\bar{\varphi}}\rangle\ \mathrm{d}t\] \[=2h(\bar{T}-T)+\int_{1}^{+\infty}\langle\mathcal{M}\dot{r}_{0}, \dot{\bar{\varphi}}\rangle\ \mathrm{d}t,\]
where \(h=H(r_{0},\dot{r}_{0})\) is the energy of \(r_{0}\), which equals \(\|a\|_{\mathcal{M}}^{2}/2>0\) in the hyperbolic and hyperbolic-parabolic cases and vanishes in the completely parabolic case. Consequently, we have
\[\mathcal{A}(\bar{\varphi})=2h(\bar{T}-T)+\int_{1}^{+\infty}L(\tilde{\sigma},\dot{\tilde{\sigma}})-L_{0}(t)\ \mathrm{d}t.\]
Let us denote \(L^{h}=L-h\) and \(L^{h}_{0}(t):=L_{0}(t)-h=L(r_{0}(t),\dot{r}_{0}(t))-h\). By (6.1), we can say that
\[\begin{split}&\int_{1}^{T}L^{h}(\gamma,\dot{\gamma})\ \mathrm{d}t+\int_{\bar{T}}^{+\infty}L^{h}(\tilde{\sigma},\dot{\tilde{\sigma}})-L^{h}_{0}(t-\bar{T}+T)\ \mathrm{d}t\\ &>\int_{1}^{\bar{T}}L^{h}(\bar{\sigma},\dot{\bar{\sigma}})\ \mathrm{d}t+\int_{\bar{T}}^{+\infty}L^{h}(\tilde{\sigma},\dot{\tilde{\sigma}})-L^{h}_{0}(t-\bar{T}+T)\ \mathrm{d}t+\varepsilon+2h(\bar{T}-T).\end{split} \tag{6.4}\]
Working on the left-hand side of (6.4), we obtain
\[\begin{split}&\int_{1}^{T}L^{h}(\gamma,\dot{\gamma})\ \mathrm{d}t+\int_{\bar{T}}^{+\infty}L^{h}(\tilde{\sigma},\dot{\tilde{\sigma}})-L^{h}_{0}(t-\bar{T}+T)\ \mathrm{d}t\\ &=\int_{1}^{T}L^{h}(\gamma,\dot{\gamma})\ \mathrm{d}t+\int_{\bar{T}}^{+\infty}L^{h}(\tilde{\gamma}(t-\bar{T}+T),\dot{\tilde{\gamma}}(t-\bar{T}+T))-L^{h}_{0}(t-\bar{T}+T)\ \mathrm{d}t\\ &=\int_{1}^{T}L^{h}(\gamma,\dot{\gamma})-L^{h}_{0}(t)\ \mathrm{d}t+\int_{T}^{+\infty}L^{h}(\tilde{\gamma},\dot{\tilde{\gamma}})-L^{h}_{0}(t)\ \mathrm{d}t+\int_{1}^{T}L^{h}_{0}(t)\ \mathrm{d}t\\ &=\int_{1}^{+\infty}L^{h}(\tilde{\gamma},\dot{\tilde{\gamma}})-L^{h}_{0}(t)\ \mathrm{d}t+\int_{1}^{T}L^{h}_{0}(t)\ \mathrm{d}t,\end{split}\]
where we used the change of variables \(t\mapsto t-\bar{T}+T\) and the fact that \(\tilde{\gamma}=\gamma\) on \([1,T]\).
On the other hand, working on the right-hand side of (6.4), we have
\[\begin{split}&\int_{1}^{\bar{T}}L^{h}(\bar{\sigma},\dot{\bar{\sigma}})\ \mathrm{d}t+\int_{\bar{T}}^{+\infty}L^{h}(\tilde{\sigma},\dot{\tilde{\sigma}})-L^{h}_{0}(t-\bar{T}+T)\ \mathrm{d}t+2h(\bar{T}-T)+\varepsilon\\ &=\int_{1}^{\bar{T}}L^{h}(\bar{\sigma},\dot{\bar{\sigma}})-L^{h}_{0}(t)\ \mathrm{d}t+\int_{\bar{T}}^{+\infty}L^{h}(\tilde{\sigma},\dot{\tilde{\sigma}})-L^{h}_{0}(t)\ \mathrm{d}t\\ &\qquad+\int_{1}^{\bar{T}}L^{h}_{0}(t)\ \mathrm{d}t+\int_{\bar{T}}^{+\infty}L^{h}_{0}(t)-L^{h}_{0}(t-\bar{T}+T)\ \mathrm{d}t+2h(\bar{T}-T)+\varepsilon\\ &=\int_{1}^{+\infty}L^{h}(\tilde{\sigma},\dot{\tilde{\sigma}})-L^{h}_{0}(t)\ \mathrm{d}t+\int_{1}^{\bar{T}}L^{h}_{0}(t)\ \mathrm{d}t+\int_{\bar{T}}^{+\infty}L^{h}_{0}(t)-L^{h}_{0}(t-\bar{T}+T)\ \mathrm{d}t+2h(\bar{T}-T)+\varepsilon,\end{split}\]
since \(\tilde{\sigma}=\bar{\sigma}\) on \([1,\bar{T}]\).
It thus follows that
\[\begin{split}&\int_{1}^{+\infty}L^{h}(\tilde{\gamma},\dot{\tilde{\gamma}})-L^{h}_{0}(t)\ \mathrm{d}t\\ &>\int_{1}^{+\infty}L^{h}(\tilde{\sigma},\dot{\tilde{\sigma}})-L^{h}_{0}(t)\ \mathrm{d}t+\int_{T}^{\bar{T}}L^{h}_{0}(t)\ \mathrm{d}t+\int_{\bar{T}}^{+\infty}L^{h}_{0}(t)-L^{h}_{0}(t-\bar{T}+T)\ \mathrm{d}t+2h(\bar{T}-T)+\varepsilon.\end{split}\]
We recall the following property, whose proof is a simple exercise.
**Proposition 6.5**.: _Given a function \(f\in L^{1}_{loc}(\mathbb{R})\) such that \(f(t)\to 0\) as \(t\to\pm\infty\) and such that \(f(t)-f(t-\tau)\in L^{1}(\mathbb{R})\) for some \(\tau\in\mathbb{R}\), then_
\[\int_{-\infty}^{+\infty}f(t)-f(t-\tau)\ \mathrm{d}t=0.\]
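For the reader's convenience, we note that the exercise amounts to a change of variables: since the integrand is in \(L^{1}\), the integral is the limit of its symmetric truncations, and for \(\tau>0\) (the case \(\tau<0\) being analogous)
\[\int_{-R}^{R}f(t)-f(t-\tau)\ \mathrm{d}t=\int_{R-\tau}^{R}f(t)\ \mathrm{d}t-\int_{-R-\tau}^{-R}f(t)\ \mathrm{d}t\longrightarrow 0\quad\text{as }R\to+\infty,\]
because both remaining integrals run over intervals of fixed length \(\tau\) on which \(f\) is uniformly small, by the assumption \(f(t)\to 0\) as \(t\to\pm\infty\).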
Since
\[\int_{T}^{\bar{T}}L^{h}_{0}(t)\ \mathrm{d}t+\int_{\bar{T}}^{+\infty}L^{h}_{0}(t)-L^{h}_{0}(t-\bar{T}+T)\ \mathrm{d}t=\int_{-\infty}^{+\infty}L^{h}_{0}(t)\chi_{\{t>T\}}-L^{h}_{0}(t-\bar{T}+T)\chi_{\{t>\bar{T}\}}\ \mathrm{d}t,\]
we can apply Proposition 6.5 to the function \(f(t)=L^{h}_{0}(t)\chi_{\{t>T\}}\) with \(\tau=\bar{T}-T\). This eventually yields
\[\int_{1}^{+\infty}L^{h}(\tilde{\gamma},\dot{\tilde{\gamma}})-L^{h}_{0}(t)\ \mathrm{d}t>\int_{1}^{+\infty}L^{h}(\tilde{\sigma},\dot{\tilde{\sigma}})-L^{h} _{0}(t)\ \mathrm{d}t+2h(\bar{T}-T)+\varepsilon,\]
and finally
\[\mathcal{A}(\tilde{\varphi})>\mathcal{A}(\bar{\varphi})+\varepsilon,\]
in clear contradiction with (6.2).
## 7 Hamilton-Jacobi equations
We now emphasize the dependence on the initial point \(x^{0}\) and define the function
\[\begin{split} v(x^{0})&=\min_{\varphi\in\mathcal{D}^{1,2}_{0}(1,+\infty)}\left\{\int_{1}^{+\infty}\frac{1}{2}\|\dot{\varphi}(t)\|^{2}_{\mathcal{M}}\ \mathrm{d}t\ +\right.\\ &+\left.\int_{1}^{+\infty}\ U(\varphi(t)+r_{0}(t)+x^{0}-r_{0}(1))-U(r_{0}(t))-\langle\ddot{r}_{0}(t),\varphi(t)\rangle_{\mathcal{M}}\ \mathrm{d}t\right\}-\langle a,x^{0}\rangle_{\mathcal{M}}.\end{split} \tag{7.1}\]
We claim that \(v\) solves the Hamilton-Jacobi equation
\[H(x,\nabla v(x))=h \tag{7.2}\]
in the viscosity sense. This can be easily seen by taking a point \(x^{0}\) of differentiability and formally differentiating (7.1) with respect to \(x^{0}\), finding
\[\nabla v(x^{0})=-\mathcal{M}\dot{x}(1)\]
where \(x(t)=r_{0}(t)+\varphi(t)+x^{0}-r_{0}(1)\) and \(\varphi\) is the minimizer of the renormalized action associated with \(x^{0}\). Therefore \(\mathcal{M}^{-1/2}\nabla v(x^{0})=-\mathcal{M}^{1/2}\dot{x}(1)\), and we easily obtain (7.2) from the expression of the Hamiltonian (2.1). Making this argument fully rigorous goes beyond the scope of this paper. The interested reader can retrace step by step the method explained in [20], also taking into account that the singular set is known to be contained in a locally countable union of smooth hypersurfaces of codimension at least one (cf. [8]).
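For a formal check at a differentiability point, assuming, as the computation above suggests, that (2.1) is the standard mechanical Hamiltonian \(H(x,p)=\frac{1}{2}\|p\|^{2}_{\mathcal{M}^{-1}}-U(x)\), one has
\[H\big{(}x^{0},\nabla v(x^{0})\big{)}=\frac{1}{2}\|\mathcal{M}\dot{x}(1)\|^{2}_{\mathcal{M}^{-1}}-U(x^{0})=\frac{1}{2}\|\dot{x}(1)\|^{2}_{\mathcal{M}}-U(x(1))=h,\]
where the last equality follows from conservation of energy along the motion \(x\), whose energy equals \(h\) by the asymptotic expansion (\(\dot{x}(t)\to a\) and \(U(x(t))\to 0\) as \(t\to+\infty\)).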
Fixing \(x^{0}\) and \(T>0\), we now consider the boundary value problem
\[\begin{cases}\mathcal{M}\ddot{x}=\nabla U(x)\\ x(1)=x^{0}\\ \dot{x}(T)=\dot{r}_{0}(T)\end{cases}\]
and introduce the associated value function
\[u(T,x^{0})=\min_{\gamma\in H^{1}([1,T]),\ \gamma(1)=x^{0}}\int_{1}^{T}\frac{1}{2 }\|\dot{\gamma}(t)\|^{2}_{\mathcal{M}}+U(\gamma(t))\ \mathrm{d}t-\langle\dot{r}_{0}(T),\gamma(T)\rangle_{ \mathcal{M}}.\]
It is a standard result in the theory of Hamilton-Jacobi equations (cf. [8]) that \(u\) is a viscosity solution of
\[-\frac{\partial u}{\partial T}=\frac{1}{2}\|\nabla u\|^{2}_{\mathcal{M}^{-1}}-U(x),\]
where the gradient is taken with respect to the second variable.
**Remark 7.1**.: Notice that, compared with [8], we have reversed time orientation.
Now, we define
\[v(T,x)=u(T,x)+\int_{1}^{T}\frac{1}{2}\|\dot{r}_{0}(t)\|^{2}_{\mathcal{M}}-U(r_{ 0}(t))\ \mathrm{d}t=u(T,x)+\int_{1}^{T}H(r_{0}(t),\dot{r}_{0}(t))\ \mathrm{d}t\]
and observe that
\[-\frac{\partial v}{\partial T}=\frac{1}{2}\|\nabla v\|^{2}_{\mathcal{M}^{-1}}-U(x )-H(r_{0},\dot{r}_{0}).\]
Assume that \(v(T,x)\) converges uniformly to some \(v(x)\) as \(T\to+\infty\). Then \(v\) is a viscosity solution of the stationary Hamilton-Jacobi equation
\[\frac{1}{2}\|\nabla v\|^{2}_{\mathcal{M}^{-1}}-U(x)=\lim_{T\to+\infty}H(r_{0}, \dot{r}_{0})=\frac{1}{2}\|a\|^{2}_{\mathcal{M}}.\]
To relate the modified value function \(v\) with the minimum of our renormalized action, let us write
\[\gamma(t)=r_{0}(t)+\varphi(t)+\tilde{x}^{0},\]
with \(\tilde{x}^{0}=x^{0}-r_{0}(1)\), and compute
\[\int_{1}^{T}\frac{1}{2}\|\dot{r}_{0}(t)+\dot{\varphi}(t)\|^{2}_{ \mathcal{M}}+U(r_{0}(t)+\varphi(t)+\tilde{x}^{0})+\frac{1}{2}\|\dot{r}_{0}(t) \|^{2}_{\mathcal{M}}-U(r_{0}(t))\ \mathrm{d}t-\langle\dot{r}_{0}(T),r_{0}(T)+\varphi(T)+x^{0}-r_{0}(1) \rangle_{\mathcal{M}}\] \[=\int_{1}^{T}\frac{1}{2}\|\dot{\varphi}(t)\|^{2}_{\mathcal{M}}+U (r_{0}(t)+\varphi(t)+\tilde{x}^{0})-U(r_{0}(t))-\langle\ddot{r}_{0}(t), \varphi(t)\rangle_{\mathcal{M}}\ \mathrm{d}t-\langle\dot{r}_{0}(T),x^{0}\rangle_{ \mathcal{M}},\]
which follows after a few integrations by parts. Therefore, we have
\[v(T,x^{0})=\min_{\varphi\in H^{1}([1,T]),\ \varphi(1)=0}\mathcal{A}^{ren}_{[1,T]}( \varphi)-\langle\dot{r}_{0}(T),x^{0}\rangle_{\mathcal{M}},\]
where we denoted
\[\mathcal{A}^{ren}_{[1,T]}(\varphi)=\int_{1}^{T}\frac{1}{2}\|\dot{\varphi}(t)\| ^{2}_{\mathcal{M}}+U(r_{0}(t)+\varphi(t)+\tilde{x}^{0})-U(r_{0}(t))-\langle \ddot{r}_{0}(t),\varphi(t)\rangle_{\mathcal{M}}\ \mathrm{d}t.\]
Then, it becomes natural to let \(T\to+\infty\) and define
\[v(x^{0})=\min_{\varphi\in\mathcal{D}^{1,2}_{0}(1,+\infty)}\int_{1}^{+\infty} \frac{1}{2}\|\dot{\varphi}(t)\|^{2}_{\mathcal{M}}+U(r_{0}(t)+\varphi(t)+x^{0}- r_{0}(1))-U(r_{0}(t))-\langle\ddot{r}_{0}(t),\varphi(t)\rangle_{\mathcal{M}}\ \mathrm{d}t-\langle a,x^{0}\rangle_{\mathcal{M}}.\]
We will prove in a forthcoming paper that
\[v(x)=\lim_{T\to+\infty}v(T,x)\]
uniformly on compact sets of \(\mathbb{R}^{dN}\) (in fact, in Hölder norms), so that \(v\) solves
\[\frac{1}{2}\|\nabla v\|^{2}_{\mathcal{M}^{-1}}-U(x)=\frac{1}{2}\|a\|^{2}_{\mathcal{ M}}\]
in the viscosity sense. This justifies once again our choice for the renormalized action functional.
It is worthwhile noticing that the uniqueness result in [21] ensures that, in the hyperbolic case, our value function \(v\) is indeed the Busemann function. Moreover, it may be interesting to note that the linear correction in (7.1) is itself the Busemann function of the free particle.
|
2308.03052 | Reducing model uncertainties using proton-oxygen collisions with
proton/neutron tagging at the LHC | A short run of proton-oxygen and oxygen-oxygen collisions is planned to take
place at the Large Hadron Collider during LHC Run 3. The primary goal of this
run is to improve the modeling of Cosmic-Ray interactions and to reduce the
uncertainties associated with proton-Air cross-sections. While the inelastic
cross-section will be measured directly, an array of very forward proton and
neutron detectors introduced by the ATLAS and CMS experiments can allow going
beyond the current physics research proposal, providing a unique opportunity to
study elastic and diffractive interactions in pO collisions at the center of
mass energies above TeV. This article presents the possible impact of proton
and neutron tagging on the measurement of the elastic and diffractive
components, as well as discusses the prospects of measuring the decay products
of oxygen ions. | Michael Pitt | 2023-08-06T08:32:08Z | http://arxiv.org/abs/2308.03052v1 | # Reducing model uncertainties using proton-oxygen collisions with proton/neutron tagging at the LHC
###### Abstract:
A short run of proton-oxygen and oxygen-oxygen collisions is planned to take place at the Large Hadron Collider during LHC Run 3. The primary goal of this run is to improve the modeling of Cosmic-Ray interactions and to reduce the uncertainties associated with proton-Air cross-sections. While the inelastic cross-section will be measured directly, an array of very forward proton and neutron detectors introduced by the ATLAS and CMS experiments can allow going beyond the current physics research proposal, providing a unique opportunity to study elastic and diffractive interactions in pO collisions at the center of mass energies above TeV. This article presents the possible impact of proton and neutron tagging on the measurement of the elastic and diffractive components, as well as discusses the prospects of measuring the decay products of oxygen ions.
## 1 Introduction
Cosmic rays (CRs) span a wide range of energies, extending up to \(10^{21}\) eV. The nature and origin of ultra-high-energy CRs (with energies above \(10^{18}\) eV) is a subject of extensive study. The energy and identity of such CRs can be studied through the extended air showers produced when CRs collide with the upper atmosphere of the Earth. Determining their mass and energy would help clarify the origin of the most energetic particles in the Universe. The estimation of these parameters hinges upon measuring and simulating the depth of the air-shower maximum (\(X_{\rm MAX}\)). The modelling of the air-shower profiles is done using hadronic Monte Carlo (MC) simulations [1, 2]. While some MC event generators are tuned to the measured inelastic cross-sections in proton-proton, proton-lead, or lead-lead interactions at the Large Hadron Collider (LHC), the diffractive component remains weakly constrained. Notably, there are substantial discrepancies between the experimental data and the predictions of MC simulations in cases like proton-lead collisions [3]. To improve the modeling of hadronic interactions and reduce model uncertainties associated with the proton-Air cross-section, a short run of proton-oxygen (\(pO\)) collisions during LHC Run 3 has been proposed [4]. While the inelastic cross-section will be measured directly [5], the elastic and diffractive interactions in \(pO\) collisions will remain unexplored. Tagging forward protons in \(pO\to pX\), or forward neutrons in \(pO\to nY\) interactions, will provide a unique opportunity to study those components of the total proton-oxygen cross-section, and they are the main subject of this article. Schematic diagrams of the processes of interest are illustrated in figure 1.
## 2 Proton and neutron tagging at the LHC
Forward neutron and proton detectors have significantly expanded the scope of the Heavy Ion and proton-proton physics programs of the ATLAS and CMS experiments. The arrangement of the detector devices along the LHC beamline, on both sides of the Interaction Point (IP), with insertion magnets and absorbers is schematically illustrated in figure 2.
The first inner triplet of quadrupole magnets (Q1-3) is shielded from the high-energy charged and neutral particles produced at the IP by the target absorber (TAS). The neutral beam absorber (TAN), positioned between the separation and recombination dipole magnets (D1, D2), protects machine components against neutral particles emerging from the IP and is used to host the Zero Degree Calorimeter discussed below.
Figure 1: Schematic diagrams of \(pO\) collisions with a pomeron exchange (left) or a pion exchange (right), resulting in an emerging forward proton or neutron, respectively.
Following the large aperture Q4 quadrupole magnet, the proton detectors are located in the region between two quadrupole magnets, Q5 and Q6.
### Forward Proton Spectrometer (FPS)
The Forward Proton Spectrometers (FPS), introduced during LHC Run 2 by the ATLAS and CMS collaborations, consist of near-beam detectors located at about 200 meters from the IP. Operated during the standard high-luminosity LHC runs, these spectrometers are primarily used to study central exclusive production processes in proton-proton collisions. Both experimental setups, the ATLAS Forward Proton detector (AFP) [6] and the CMS-TOTEM Precision Proton Spectrometer (CT-PPS) [7], have been seamlessly integrated into the standard LHC runs and have delivered a broad range of physics results.
Besides probing central exclusive production processes, proton tagging offers a distinctive avenue for investigating the elastic and diffractive components when operated at relatively low proton-proton collision rates. The range of kinematic acceptance for protons is contingent upon the magnetic field of the LHC. Protons interacting diffractively lose a fraction of their momentum (denoted by \(\xi=\Delta p/p\)) and are deflected away from the beam center. The LHC magnetic field determines the proton displacement, and the kinematics are computed by inverting the single-pass proton transport matrix defined by the optical functions that describe the proton transport in the vicinity of the so-called central orbit [8]. With the LHC optics used in standard runs, the FPS acceptance in \(\xi\) typically ranges from 1.5% to 15%. This range of proton acceptance is anticipated to hold for the forthcoming \(pO\) collisions.
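As an illustration of the reconstruction step, the following minimal sketch inverts a linearized single-pass transport model for the horizontal coordinate. The numerical values of the optical functions are placeholders chosen for illustration only, not the actual LHC optics.

```python
# A minimal sketch of proton-kinematics reconstruction, assuming a
# linearized single-pass transport near the central orbit:
#   x_det = v_x * x_star + L_x * theta_x_star + D_x * xi
# Here (x_star, theta_x_star) are the transverse vertex position and angle
# at the IP, xi = Delta p / p is the fractional momentum loss, and
# (v_x, L_x, D_x) are optical functions at the detector location.
V_X = -2.0    # magnification (dimensionless), placeholder value
L_X = 30.0    # effective length [mm per mrad], placeholder value
D_X = 80.0    # dispersion [mm], placeholder value

def xi_from_hit(x_det_mm, x_star_mm=0.0, theta_x_mrad=0.0):
    """Invert the linearized transport for xi, neglecting higher orders."""
    x_dispersive = x_det_mm - V_X * x_star_mm - L_X * theta_x_mrad
    return x_dispersive / D_X

# A proton displaced by 4 mm from the beam centre at the detector:
xi = xi_from_hit(4.0)
print(f"xi = {xi:.3f}, within the 1.5-15% FPS acceptance: {0.015 < xi < 0.15}")
```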
To incorporate the FPS into the proton-oxygen run, a detector alignment process must be performed. The alignment can be achieved by moving the FPS toward the beam, up to a few beam sigmas, until splashes are detected in the radiation monitors, determining the distance to the beam center. Further refinement of the alignment requires operating the LHC for several hours at very low beam intensity, as outlined in [9, 10].
Following the complete alignment sequence, one can determine the charged-particle displacement from the beam center and use the optical functions to derive the hadron's transverse momentum and momentum loss (\(\xi\)). Two distinct measurements can be performed using proton tagging during the \(pO\) runs and will be discussed subsequently:
* The measurement of the diffractive component of the total \(pO\) cross-section by tagging the intact protons in \(pO\) collisions.
Figure 2: Schematic layout of the insertion region between the interaction point and the quadrupole magnet Q6.
* Determination of light ion production rates (from Z=1 to Z=8) in \(pO\) and \(OO\) collisions by tagging outgoing light ions in the FPS.
### Zero Degree Calorimeter (ZDC)
The Zero Degree Calorimeter (ZDC) is a specialized detector placed at a zero-degree angle with respect to the beamline, used to detect forward neutral particles produced in \(AA\) and \(pA\) collisions, primarily spectators arising from ion collisions. In both the ATLAS and CMS interaction regions of the LHC, the ZDC is installed in a dedicated slot inside the neutral beam absorbers (TAN), located at a distance of 140 meters from the IP.
The ZDC plays a crucial role in detecting forward neutrons and photons with \(|\eta|>8.5\) during \(pp\), \(pA\), and \(AA\) collisions. In \(pp\) collisions, the ZDC detectors can operate at instantaneous luminosities well below \(10^{33}\)cm\({}^{-2}\)s\({}^{-1}\). The ZDC comprises an electromagnetic module, approximately 30 radiation lengths long, and three hadronic modules, each about 1.15 interaction lengths long. Notably, vetoing events involving spectators is a core strategy for tagging ultra-peripheral collisions. The design of the ZDC enables the determination of kinematics and production cross-sections for forward-going neutral pions, kaons, and eta mesons. The ZDC provides crucial data on light meson production from protons at LHC energies that cannot be obtained through other means.
## 3 New constraints on MC hadronic models
Hadronic MC simulations are tuned using available experimental data and are applied across various research domains. One notable example involves the study of ultra-high-energy CRs through the extended air showers produced when CRs collide with the upper atmosphere of the Earth. Determining their mass and energy stands to elucidate the origins of these particles, contingent upon a thorough analysis of the measured and simulated depths of the air-shower maximum (\(X_{\rm MAX}\)). Large uncertainties stemming from different modeling of hadronic interactions weaken the constraints on cosmic-ray mass composition [11]. For example, EPOS-LHC, an MC event generator for minimum-bias hadronic interactions, is tuned to the measured inelastic cross-sections in proton-proton, proton-lead, or lead-lead interactions at the LHC. As a consequence, the measured diffractive signatures in proton-lead collisions differ from the predictions of the MC event generators [3].
During the proton-oxygen run, proton kinematics in diffractive and elastic interactions can be determined by operating the FPS downstream of the proton beam. Proton tagging enables the exploration of colorless interactions in \(pO\) collisions (including elastic, diffractive, and pion exchange processes), which represent about 20% of the total cross-section. As the FPS proton momentum loss acceptance ranges from 1.5% to 15%, these detectors can tag a subset of 2-4% of all \(pO\) events. The FPS detectors can precisely measure proton kinematics, allowing one to refine the predictions made by various hadronic models. To illustrate, a comparison highlighting the disparity in predicted proton kinematics between two Monte Carlo event generators is depicted in figure 3.
Elastic and diffractive contributions are manifested by large gaps in the rapidity distribution of final-state particles, denoted by \(\Delta\eta_{F}\). While the probability of finding a continuous rapidity region \(\Delta\eta_{F}\) free of particles is suppressed exponentially in non-diffractive inelastic events, discriminating between different topologies of colorless interaction (pomeron or pion exchange) is only achievable through proton and neutron tagging. Figure 4 illustrates the contribution from diffractive processes, where a proton is measured by the FPS or a neutron is measured by the ZDC, as a function of \(\Delta\eta_{\rm F}\).
## 4 Light ion production rates
Interestingly, the transport of charged particles from the ion beam through the LHC magnetic lattice is similar to that of the proton beam, implying the potential for detecting oxygen fragments within the FPS in a similar momentum-loss range of 1.5% to 15%.
During hard scattering, oxygen ions will disintegrate, yielding light ions alongside protons and neutrons resulting from the nuclear break-up. No measurements of the abundance of light ions from heavy ion collisions at TeV-scale energies exist. The majority of light ions originating from ion disintegration are expected to possess a momentum around \(0.5\times E_{p}\times A\), where A is the mass number of the light ion and \(E_{p}\) represents the energy required to maintain a proton in a stable orbit. Light ions with different \(A/Z\) ratios will behave as the nominal beam particle with a momentum loss of \(\xi=1-0.5\times A/Z\). Spectator protons will carry half of the beam energy (\(\xi=0.5\)) and escape detection. Only isotopes with \(1.7<A/Z<1.97\) could be measured by the FPS detectors. In such a scenario, the FPS will serve as a mass spectrometer. A hit pattern from simulated protons and ions propagated using the LHC transport matrix to the location of the FPS detectors is shown in figure 5.

Figure 3: Differential cross section \(d\sigma/d\xi\) (left) and \(d\sigma/dp_{T}\) (right) for \(pO\) collisions assuming an integrated luminosity of \(L_{\rm int}=1\,\mathrm{nb}^{-1}\) at a center-of-mass energy per nucleon pair of \(\sqrt{S_{\rm NN}}=9.9\) TeV, obtained using the EPOS-LHC and Sibyll 2.3 MC event generators. The dashed area represents the model uncertainty derived from the comparison between the two models.

Figure 4: Contribution from \(pO\to pX\) and \(pO\to nX\) interactions with a proton within the FPS acceptance or a neutron within the ZDC acceptance. The peak in the \(\Delta\eta_{\rm F}\) distribution corresponds to zero-bias events (no particles with energy above 1 GeV detected within the pseudorapidity range of \(|\eta|<4.5\)).
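To make the acceptance window concrete, the short sketch below evaluates \(\xi=1-0.5\times A/Z\) for a few representative fragment species; the isotope list is illustrative and not a statement about actual production rates.

```python
# Effective momentum loss of a nuclear fragment with mass number A and
# charge Z circulating in a machine tuned for the oxygen beam (A/Z = 2):
#   xi = 1 - 0.5 * A / Z
fragments = {
    "p (spectator)": (1, 1),   # xi = 0.5, escapes detection
    "4He":           (4, 2),   # xi = 0.0, follows the beam
    "7Be":           (7, 4),
    "11C":           (11, 6),
    "13N":           (13, 7),
    "15O":           (15, 8),
}

for name, (A, Z) in fragments.items():
    ratio = A / Z
    xi = 1.0 - 0.5 * ratio
    tagged = 1.7 < ratio < 1.97   # equivalent to 0.015 < xi < 0.15
    print(f"{name:>14}: A/Z = {ratio:.2f}, xi = {xi:+.3f}, FPS-taggable: {tagged}")
```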
## 5 Conclusions
Including forward proton and neutron detectors during the LHC oxygen run presents an exceptional opportunity for the physics research program. This program aims to significantly constrain diffractive and elastic interactions in proton-ion collisions with high precision. Additionally, it seeks to measure the elastic component of proton-ion interactions for the first time. Furthermore, by conducting further investigations into ion disintegration and production rates, it may be possible to achieve ground-breaking measurements in this domain. Successful implementation of this research program could serve as a stepping stone for future measurements utilizing both FPS and ZDC in heavy ion runs at the LHC.
|
2310.19240 | M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context
Evaluation Benchmark for Large Language Models | Managing long sequences has become an important and necessary feature for
large language models (LLMs). However, it is still an open question of how to
comprehensively and systematically evaluate the long-sequence capability of
LLMs. One of the reasons is that conventional and widely-used benchmarks mainly
consist of short sequences. In this paper, we propose M4LE, a Multi-ability,
Multi-range, Multi-task, Multi-domain benchmark for Long-context Evaluation.
M4LE is based on a diverse NLP task pool comprising 36 NLP datasets, 11 task
types and 12 domains. To alleviate the scarcity of tasks with naturally long
sequences and incorporate multiple-ability assessment, we propose an automatic
approach (but with negligible human annotations) to convert short-sequence
tasks into a unified long-sequence scenario where LLMs have to identify single
or multiple relevant spans in long contexts based on explicit or semantic
hints. Specifically, the scenario includes five different types of abilities:
(1) explicit single-span; (2) semantic single-span; (3) explicit multiple-span;
(4) semantic multiple-span; and (5) global context understanding. The resulting
samples in M4LE are evenly distributed from 1k to 8k input length. We conducted
a systematic evaluation on 11 well-established LLMs, especially those optimized
for long-sequence inputs. Our results reveal that: 1) Current LLMs struggle to
understand long context, particularly when tasks require multiple-span
attention. 2) Semantic retrieval task is more difficult for competent LLMs. 3)
Models fine-tuned on longer text with position interpolation have comparable
performance to those using Neural Tangent Kernel (NTK) aware scaling methods
without fine-tuning. We make our benchmark publicly available to encourage
future research in this challenging area. | Wai-Chung Kwan, Xingshan Zeng, Yufei Wang, Yusen Sun, Liangyou Li, Lifeng Shang, Qun Liu, Kam-Fai Wong | 2023-10-30T03:11:30Z | http://arxiv.org/abs/2310.19240v2 | (\mathrm{M}^{4}\mathrm{LE}\): A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models
###### Abstract
Managing long sequences has become an important and necessary feature for large language models (LLMs). However, it is still an open question of how to comprehensively and systematically evaluate the long-sequence capability of LLMs. One of the reasons is that conventional and widely-used benchmarks mainly consist of short sequences. In this paper, we propose \(\mathrm{M}^{4}\mathrm{LE}\), a Multi-ability, Multi-range, Multi-task, Multi-domain benchmark for Long-context Evaluation. \(\mathrm{M}^{4}\mathrm{LE}\) is based on a diverse NLP task pool comprising 36 NLP datasets, 11 task types and 12 domains. To alleviate the scarcity of tasks with naturally long sequences and incorporate multiple-ability assessment, we propose an automatic approach (but with negligible human annotations) to convert short-sequence tasks into a unified long-sequence scenario where LLMs have to identify single or multiple relevant spans in long contexts based on explicit or semantic hints. Specifically, the scenario includes five different types of abilities: (1) explicit single-span; (2) semantic single-span; (3) explicit multiple-span; (4) semantic multiple-span; and (5) global context understanding. The resulting samples in \(\mathrm{M}^{4}\mathrm{LE}\) are evenly distributed from 1k to 8k input length.1 We conducted a systematic evaluation on 11 well-established LLMs, especially those optimized for long-sequence inputs. Our results reveal that: 1) Current LLMs struggle to understand long context, particularly when tasks require multiple-span attention. 2) Semantic retrieval task is more difficult for competent LLMs. 3) Models fine-tuned on longer text with position interpolation have comparable performance to those using Neural Tangent Kernel (NTK) aware scaling methods without fine-tuning. We make our benchmark publicly available to encourage future research in this challenging area 2.
Footnote 1: The released benchmark would contain samples up to 32K words. Even longer samples and other types of tasks can be constructed using our method.
Footnote 2: Code and data are available at [https://github.com/KwanWaiChung/M4LE](https://github.com/KwanWaiChung/M4LE).
## 1 Introduction
Large language models (LLMs) are gaining traction in addressing diverse NLP challenges. LLMs, mostly transformer-based models (Vaswani et al., 2017), are trained on a large amount of data with numerous parameters (Ouyang et al., 2022; Touvron et al., 2023b). These models have demonstrated impressive capabilities across a wide range of tasks (Brown et al., 2020; Schick et al., 2023; Shen et al., 2023; Bang et al., 2023). As LLMs continue to evolve, their ability to handle long-sequence tasks, such as extracting specific information from or summarizing lengthy documents, has become an important and competitive feature (Du et al., 2022; Chiang et al., 2023; Li et al., 2023). Therefore, a comprehensive, fair, and objective benchmark to evaluate the long-sequence capabilities of models is necessary for the progress of LLMs.
Despite numerous efforts to develop benchmarks for assessing the knowledge or reasoning ability of LLMs (Hendrycks et al., 2021; Huang et al., 2023; Suzgun et al., 2022), comprehensive evaluation of their long-context understanding ability has received limited attention. Recent concurrent works, such as L-Eval (An et al., 2023) and LongBench (Bai et al., 2023), primarily rely on existing long-sequence NLP datasets which usually limit the task diversity and flexibility in conducting length-control experiments. They lack an objective and comprehensive understanding of the model's capability across different dimensions of long sequences.
In this study, we aim to maximize the diversity of constructed tasks and analyze the long-context capabilities of LLMs from a user's practical perspective. We discovered that when processing instructions based on long sequences, the essential components for task completion can be classified as single-span, multiple-span, or global, based on relevance. Building on this, and considering how this information is located, we categorize long-context understanding into five distinct abilities and introduce an automated method to transform short-sequence tasks into a comprehensive long-sequence scenario encompassing all these capabilities. As a result, \(\texttt{M}^{\texttt{4}}\texttt{LE}\) is proposed: a multi-ability, multi-range, multi-task and multi-domain long-context evaluation benchmark for evaluating LLMs' ability to handle long inputs (Figure 1).
* Multi-ability: \(\texttt{M}^{\texttt{4}}\texttt{LE}\) includes tasks with five different types of understanding abilities, determined by whether single or multiple parts of the ongoing context are relevant to the current tasks and whether explicit or semantic hints are used in the question.
* Multi-range: Each task in \(\texttt{M}^{\texttt{4}}\texttt{LE}\) consists of samples with variable lengths, from 1K to 8K words, divided evenly into five buckets to measure the effect of length on model performance.
* Multi-task: \(\texttt{M}^{\texttt{4}}\texttt{LE}\) encompasses 36 datasets covering 11 task types, including original tasks such as classification and summarization, and their combinations for more complex scenarios.
* Multi-domain: \(\texttt{M}^{\texttt{4}}\texttt{LE}\) spans a wide variety of domains, including Wikipedia, academic, news, E-Commerce, etc., promoting diversity and comprehensiveness.
Table 1 compares our benchmark with existing similar benchmarks. \(\texttt{M}^{\texttt{4}}\texttt{LE}\) targets comprehensively evaluating LLMs' long-context understanding capabilities across different abilities and length ranges, rather than simply assessing performance on naturally long input tasks. Therefore, the tasks in \(\texttt{M}^{\texttt{4}}\texttt{LE}\) are constructed from both existing long
\begin{table}
\begin{tabular}{l c c c c|c} \hline Benchmarks & SCROLLS & ZeroSCROLLS & L-Eval & LongBench & \(\texttt{M}^{\texttt{4}}\texttt{LE}\) \\ \hline \#Tasks & 3 & 4 & 4 & 6 & 11 \\ \#Datasets & 7 & 10 & 18 & 21 & 36 \\ \#Domains & 7 & 9 & 10 & 10 & 12 \\ Languages & en & en & en & en, zh & en, zh \\ Ranges & x & x & x & x & ✓ \\ Abilities & x & x & x & x & ✓ \\ \hline \end{tabular}
\end{table}
Table 1: Comparison with other long context benchmarks.
Figure 1: The illustration of \(\texttt{M}^{\texttt{4}}\texttt{LE}\). \(\texttt{M}^{\texttt{4}}\texttt{LE}\) covers multiple task types, domains and length ranges, and introduces five long-context understanding abilities, each of which is exemplified with a summarization instance, to facilitate the long-context evaluation.
context datasets and short-context datasets widely used in the NLP community, where short instances can be aggregated into long-context ones with a designed procedure covering different abilities through varied instructions. Our approach is able to extend existing datasets to arbitrary context lengths.
We conducted a systematic evaluation of 11 well-known LLMs, especially those claimed to support long inputs, with \(\mathtt{M}^{4}\)LE. This involves evaluating their long-context understanding ability across different length ranges and their performance on our proposed five abilities. We also delve into the factors influencing long-context understanding capability, including LLMs' performance in different languages and the positioning of relevant information (Liu et al., 2023). We find that current LLMs still struggle to understand long-context inputs, especially when multiple-span attention is required. While semantic retrieval is considered more complex than explicit retrieval, a consistent performance drop in this scope is only observed for competent models. A more effective fine-tuning approach deserves exploration, as current methods show no significant improvement over simple Neural Tangent Kernel (NTK) aware scaling methods. We also observe that language differences and the positioning of relevant information impact long-context understanding capabilities.
## 2 Related Work
### Long-Context Modelling for LLMs
To address length extrapolation challenges in LLMs beyond the training context window, several methodologies have emerged. Position embeddings such as ALiBi (Press et al., 2022) and XPos (Sun et al., 2023) have been developed. ALiBi employs an exponential decay on the attention matrix to mitigate the influence of out-of-distribution positions, while XPos introduces a block-wise causal attention mask. While these techniques require integration during training, alternative approaches enhance existing RoPE-based LLMs (Su et al., 2021), notably LLaMA (Touvron et al., 2023a), LLaMA 2 (Touvron et al., 2023b), and PaLM (Chowdhery et al., 2022). Concurrently, kaiokendev (2023) and Chen et al. (2023) propose extending the context length by modifying RoPE through Position Interpolation and subsequent fine-tuning on a limited amount of data. Another line of research introduces fine-tuning-free approaches (bloc97, 2023; emozilla, 2023; Peng et al., 2023), including NTK-aware and dynamic NTK interpolations.
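For concreteness, a minimal sketch of the dynamic NTK-aware interpolation follows: the RoPE base is rescaled as a function of the observed sequence length, so that longer inputs stretch the low-frequency components while barely perturbing the high-frequency ones. The formula mirrors common open-source implementations; exact constants and variants may differ between codebases.

```python
def dynamic_ntk_rope_base(seq_len, base=10000.0, rotary_dim=128,
                          max_trained_len=4096, scaling_factor=1.0):
    """Return the adjusted RoPE base for a given sequence length.

    When seq_len exceeds the trained context window, the base grows with
    the ratio seq_len / max_trained_len, which is the dynamic NTK-aware
    scaling rule; rotary_dim is the per-head rotary dimension.
    """
    if seq_len <= max_trained_len:
        return base
    alpha = scaling_factor * seq_len / max_trained_len - (scaling_factor - 1)
    return base * alpha ** (rotary_dim / (rotary_dim - 2))

def rope_inverse_frequencies(base, rotary_dim=128):
    # inv_freq_i = base^(-2i / rotary_dim), one per rotation pair
    return [base ** (-2.0 * i / rotary_dim) for i in range(rotary_dim // 2)]

# Example: an 8k-token input on a model trained with a 4k context window.
new_base = dynamic_ntk_rope_base(seq_len=8192)
print(f"adjusted RoPE base: {new_base:.1f}")  # larger than 10000
```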
### Existing Evaluation Benchmarks for LLMs
As LLMs have demonstrated superior performance in a wide range of NLP tasks, comprehensively and effectively evaluating their abilities becomes increasingly critical. Many research efforts focus on developing benchmarks for specific knowledge types (Hendrycks et al., 2021; Zhong et al., 2023) and specific task families (Chen et al., 2021; Cobbe et al., 2021). For more details, we refer readers to the recent LLM evaluation surveys (Chang et al., 2023; Wang et al., 2023). Several preliminary studies have begun to assess model capability on long-context input. Long Range Arena (Tay et al., 2020) verifies the capability of transformer-based models to handle various long-sequence inputs, such as languages, vision tokens and symbols. SCROLLS (Shaham et al., 2022) simply collects a set of naturally long NLP benchmarks covering multiple tasks and domains. Recently, ZeroSCROLLS (Shaham et al., 2023), L-Eval (An et al., 2023) and LongBench (Bai et al., 2023) were proposed to evaluate the long-text modelling capability of LLMs. However, these benchmarks are mainly compiled from a set of existing long NLP benchmarks, thereby suffering from limited data diversity (i.e., limited evaluation patterns) and data leakage (i.e., LLMs potentially already using these benchmarks for pre-training or alignment). In contrast, \(\mathtt{M}^{4}\)LE not only constructs evaluation instances from various tasks, domains and length ranges but also covers three types of attention spans, offering a comprehensive evaluation of LLMs' long-text capability.
## 3 \(\mathrm{M}^{4}\mathrm{LE}\)
This section introduces the rationale and design principles of the benchmark, as well as the data sources and task construction methodologies. M\({}^{4}\)LE has been carefully curated to cover a wide range of long-context natural language understanding abilities, task types, domains, and context length ranges, ensuring a thorough reflection of LLM's long-context competencies.
### Design Principle
Each sample in \(\mathbb{M}^{4}\)LE is a tuple of \(\langle\)Task description, Context, Instruction, Response\(\rangle\). In order to accomplish the instructions, LLMs need to retrieve and identify relevant parts from the long context:
* Those relevant parts could be _single-span_, _multiple-span_, or _global_. A span is a continuous text segment within the long context.
* The retrieval could be based on _explicit_ or _semantic_ hints in the instruction according to those parts could be explicitly or semantically located.
Accordingly, we break down the understanding ability into five distinctive categories: 1) _explicit single-span_ understanding, 2) _semantic single-span_ understanding, 3) _explicit multiple-span_ understanding, 4) _semantic multiple-span_ understanding and 5) _global_ context understanding (Figure 1).
We try to maximize the diversity of the constructed tasks in the following aspects:
* Data Source: We select widely-used Chinese and English datasets in NLP which cover a variety of representative task types (e.g., QA, Summarization) and domains (e.g., News, Wiki, Web). In addition, we introduce tasks that integrate multiple task types, like Classification + Retrieval. These newly integrated tasks help measure LLMs' ability to solve more complex tasks.
* Length Level: It is important to reveal how LLMs perform on various context lengths. In our benchmark, we evenly divide samples into buckets according to their context lengths. In addition, in order to alleviate the effect of the location of relevant parts in the context (Liu et al., 2023), we intentionally construct instances with the relevant paragraphs uniformly distributed in the input context.
### Data Collection
We collect established datasets, both in English and Chinese, to cover a broad range of tasks and domains. We not only select datasets featuring long inputs, but also include datasets with shorter inputs for our customized construction, at the same time enriching the domain variety. The short-context datasets can be adapted to longer contexts using our designed process, which will be introduced in the next subsection. Below, we briefly describe the datasets selected in the benchmark.
_Question-Answering (QA)_: We include TriviaQA (Joshi et al., 2017), a single-document QA dataset based on web snippets and Wikipedia, with documents extended to 12k words. Additionally, NQOpen (Lee et al., 2019), HotpotQA (Yang et al., 2018), and DRCD (Shao et al., 2019) are included, all of which are based on Wikipedia articles. We further collect NewsQA (Trischler et al., 2017) and DuoRC (Saha et al., 2018), both in English and constructed from news articles and movie plots. We also add C3 (Sun et al., 2021), a Chinese dataset comprising textbook questions.
_Classification_: We incorporate BIGPATENT (Sharma et al., 2019) which includes long patent documents, and MNDS News (Petukhova and Fachada, 2023) in English and THUCNews (Hu et al., 2019) in Chinese which would be further processed for different abilities. We also utilize a sentiment classification dataset collected from e-commerce platforms (SophonPlus, 2013).
_Summarization_: For English, we include Arxiv, Pubmed (Cohan et al., 2018), BIGPATENT (Sharma et al., 2019), and Booksum (Kryscinski et al., 2022), whose domains span academic, medical, and patent documents as well as books. We also introduce shorter summarization datasets enabling extension, such as CNNNews (See et al., 2017) and MNDS News, featuring news articles, and WikiHow (Koupaee and Wang, 2018). For Chinese, we incorporate CNewsum (Wang et al., 2021), CLTS+ (Liu et al., 2022), and News2016 (Xu, 2019), all constructed from long news articles. The LCSTS (Hu et al., 2015) dataset contains shorter news articles, while CEPSUM (Li et al., 2020) comprises product descriptions from e-commerce platforms. We also use NCLS (Zhu et al., 2019) to establish a bilingual task that generates a Chinese summary for a given English news article.
_Natural Language Inference (NLI)_: We construct two tasks using English and Chinese Wikipedia articles from WikiText-103 (Merity et al., 2016) and Wiki2019zh (Xu, 2019), respectively.
_Translation_: Three translation datasets are included, relying on sentence-level translation alignments to form long contexts: Tedtalks (Qi et al., 2018), OpenSubtitles (Lison and Tiedemann, 2016), and News Commentary (Tiedemann, 2012).
_Retrieval_: Lastly, we construct two retrieval tasks from the same datasets used for the NLI task for both languages. Since M\({}^{\texttt{4}}\)LE comprises numerous tasks combined with retrieval capability, we do not construct additional standalone retrieval datasets.
### Task Construction
Table 2 provides an overview of the constructed datasets in \(\mathtt{M}^{\texttt{4}}\)LE. The detailed statistics of the datasets used can be found in Appendix A.1. In this subsection, we introduce how we construct the datasets under the five ability categories.
Each instance in every ability category originates from the data pool collected above. For each dataset, we construct instances with input context lengths in diverse length ranges. To construct an instance of a specific task (described by "Task description") in length range \(K\), we sample \(N\) original instances from a single source dataset and combine their context paragraphs into a long sequence serving as "Context", where each paragraph is marked with an explicit identifier at the beginning for indexing. Here, \(N\) is determined by the target length range \(K\) and the actual lengths of the sampled paragraphs. Then an "Instruction" is generated to tell models what objective to complete, resulting in different abilities to be evaluated. This approach allows us to extend existing datasets to arbitrary context lengths; a simplified sketch of the procedure is given at the end of this subsection. Below are the specific instructions for the five abilities.
**Explicit Single-Span Understanding.** Instructions for tasks within this scope should direct models to complete the task based on a specific paragraph, with explicit hints to be located. For instance, in a question-answering task, the model might be asked to answer a question based on paragraph II. This approach has been used to construct ten unique datasets covering a wide range of task types and domains for the ability. Consequently, the task types are a fusion of retrieval and their original task, such as classification, which is labeled as "CLS + RET".
**Semantic Single-Span Understanding.** Analogous to explicit single-span understanding, the instructions for the tasks belonging to this ability direct models to complete tasks based on a designated paragraph. Rather than using explicit identifiers, we provide hints about the paragraph, and models are tasked with retrieving it based on semantic information. For example, in a translation task, the model might be prompted to translate a paragraph associated with sports. Tasks within this ability are designed to introduce increased complexity and challenge, since semantic-level retrieval requires the model to understand all paragraphs in order to pinpoint the right one. We have constructed nine distinct datasets aligned with this ability.
**Explicit Multiple-Span Understanding.** We add further complexities to the tasks within this ability. Specifically, models are tasked with handling assignments related to multiple, disjoint paragraphs within the lengthy input context. This could necessitate addressing several original instances, for example, summarizing the first and the third paragraphs. Despite these complexities, the instructions for this ability continue to utilize explicit hints. We have constructed four distinct datasets to align with this ability.
**Semantic Multiple-Span Understanding.** We replace the explicit hints in explicit multiple-span understanding with semantic ones, resulting in the instructions for tasks in this scope. We've developed three distinct datasets of high complexity in line with this. Within this ability, we've incorporated counting tasks (labelled as "CNT"), which demand the counting of relevant paragraphs. Such tasks pose a challenge since counting is not an innate function of language models.
**Global Context Understanding.** Finally, we present tasks in global context understanding, which is a special case within our construction process. When the original instances have sufficiently extensive context, such that the target length range \(K\) can be attained with \(N=1\), we directly employ them for the associated tasks, indicating that the entire context is relevant to the task completion and global understanding is required. Within this category, we have included ten different datasets.
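To make the construction concrete, the following simplified sketch outlines the generic aggregation procedure described above; the function and field names are illustrative and do not correspond to the released benchmark code.

```python
import random

def build_long_context_instance(pool, target_len, task_description,
                                make_instruction):
    """Combine short instances from `pool` into one long-context sample.

    `pool` is a list of dicts with 'context' and 'answer' fields; paragraphs
    are concatenated with explicit identifiers until the combined length
    approaches `target_len` words. Illustrative sketch only.
    """
    random.shuffle(pool)
    paragraphs, picked, length = [], [], 0
    for inst in pool:
        n_words = len(inst["context"].split())
        if length + n_words > target_len and picked:
            break
        picked.append(inst)
        paragraphs.append(f"Paragraph {len(picked)}: {inst['context']}")
        length += n_words
    # The instruction determines which ability is probed, e.g. asking about
    # paragraph 2 (explicit hint) or "the paragraph about sports" (semantic).
    target_idx = random.randrange(len(picked))
    return {
        "task_description": task_description,
        "context": "\n\n".join(paragraphs),
        "instruction": make_instruction(target_idx, picked[target_idx]),
        "response": picked[target_idx]["answer"],
    }
```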
## 4 Experiments
### Models
We introduce the five families of LLMs evaluated in this study, comprising a total of 11 models.
**LLaMA 2:** It is a family of LLMs that supports a maximum input length of 4k tokens (Touvron et al., 2023b). These models use rotary positional embeddings (RoPE) (Su et al., 2021). LLaMA 2 has 7B, 13B and 70B variants. We focus on its 7B and 13B models in this section. We also include their aligned versions: LLaMA2-7B-Chat and LLaMA2-13B-Chat.
**Vicuna:** We employ Vicuna-7B-v1.5-16K and Vicuna-13B-v1.5-16K (Chiang et al., 2023), fine-tuned based on the LLaMA2 models with 125k conversational data, collected from ShareGPT with context length up to 16K tokens using linear positional interpolation (Chen et al., 2023).
**LongChat:** We leverage LongChat-7B-v1.5-32K and LongChat-13B-16K (Li et al., 2023), fine-tuned on 80K and 18K conversations with context lengths up to 32K and 16K tokens, respectively. They utilize linear positional interpolation.
**ChatGLM2:** ChatGLM2-6B and ChatGLM2-6B-32K are based on the GLM (Du et al., 2022) models. Similar to LLaMA2, ChatGLM2 leverages RoPE. Both models are further refined on 8K and 32K input data, respectively, using linear positional interpolation.
\begin{table}
\begin{tabular}{c l l c c c c} \hline \hline Ability & Dataset & Task Type & Language & Domain & Metric & Ave. Len. \\ \hline \multirow{10}{*}{Explicit Single} & MNDS News & CLS + RET & En & News & Acc & 3805 \\ & THUCNews & CLS + RET & Zh & News & Acc & 3650 \\ & NewsQA & QA + RET & En & News & Acc & 3679 \\ & C3 & QA + RET & Zh & Textbook & Acc & 3797 \\ & WoW & RET & En & Wiki & Acc & 3434 \\ & DRCD & RET & Zh & Wiki & Acc & 3617 \\ & CNNNews & SUM + RET & En & News & Rouge-L & 3754 \\ & CEPSUM & SUM + RET & Zh & E-Commerce & Rouge-L & 4003 \\ & LCSTS & SUM + RET & Zh & News & Rouge-L & 4102 \\ & NCLS & SUM + RET & En, Zh & News & Rouge-L & 3470 \\ \hline \multirow{4}{*}{Explicit Multiple} & MNDS News & CLS + RET & En & News & F1 & 3772 \\ & THUCNews & CLS + RET & Zh & News & F1 & 3721 \\ & MARC & CLS + RET & En, Zh & E-Commerce & F1 & 3543 \\ & Online Shopping & CLS + RET & Zh & E-Commerce & F1 & 3714 \\ \hline \multirow{9}{*}{Semantic Single} & WikiText-103 & NLI + RET & En & Wiki & Acc & 3278 \\ & Wiki2019zh & NLI + RET & Zh & Wiki & Acc & 3723 \\ & DuoRC & QA & En & Movie & Acc & 3572 \\ & NQOpen & QA & En & Wiki & Acc & 3128 \\ & DuReader & QA & Zh & Web & Acc & 3261 \\ & DRCD & QA & Zh & Wiki & Acc & 3300 \\ & WikiHow & SUM + RET & En & WikiHow & Rouge-L & 3514 \\ & News2016 & SUM + RET & Zh & News & Rouge-L & 3785 \\ & TedTalks & TRAN + RET & En,Zh & TedTalks & BLEU & 2956 \\ \hline \multirow{3}{*}{Semantic Multiple} & MNDS News & CLS + CNT & En & News & Acc & 3791 \\ & THUCNews & CLS + CNT & Zh & News & Acc & 3699 \\ & HotpotQA & QA & En & Wiki & Acc & 1060 \\ \hline \multirow{10}{*}{Global} & BIGPATENT & CLS & En & Patent & Acc & 3407 \\ & TriviaQA & QA & En & Web & Acc & 3329 \\ & Arxiv & SUM & En & Academic & Rouge-L & 3748 \\ & BIGPATENT & SUM & En & Patent & Rouge-L & 3293 \\ & Pubmed & SUM & En & Medical & Rouge-L & 3678 \\ & Booksum & SUM & En & Book & Rouge-L & 2643 \\ & CNewsum & SUM & Zh & News & Rouge-L & 1883 \\ & CLTS+ & SUM & Zh & News & Rouge-L & 3158 \\ & OpenSubtitles & TRAN & En,Zh & Movie & BLEU & 2048 \\ & News Commentary & TRAN & En,Zh & News & BLEU & 3585 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The overview of the evaluated tasks in \(\mathrm{M}^{4}\mathrm{LE}\), categorized by ability. CLS, QA, RET, SUM, TRAN, and CNT denote classification, question-answering, retrieval, summarization, translation, and counting, respectively. Acc in the metric column stands for accuracy.
**GPT-3.5-Turbo:** It is a closed-source language model developed based on InstructGPT (Ouyang et al., 2022). Analogous to LLaMA 2, it is fine-tuned with instruction data and refined by RLHF. We use the GPT-3.5-Turbo-16K variant 3, which supports a 16K context length.
Footnote 3: We use the GPT-3.5-Turbo-16K-0613 api from [https://cuhk-api-dev1-apim1.developer.azure-api.net](https://cuhk-api-dev1-apim1.developer.azure-api.net).
### Inference Details
Apart from the tuples introduced in Section 3.1, we also employ a concise in-context example, from the same dataset, to demonstrate the desired output format. Several full examples used in this work can be found in Appendix A.4. The main goal of \(\mathtt{M}^{4}\)LE is to evaluate the performance variations across different context length buckets and abilities. We did not perform extensive prompt engineering for each task to obtain optimal performance. Instead, we focus on analysing the performance changes of particular LLMs with longer input context.
Since the LLaMA 2 models were trained on data within 4k tokens, we used dynamic NTK-aware RoPE scaling (emozilla, 2023; Peng et al., 2023) for contexts longer than 4k. We used 16-bit floating-point precision during inference. To facilitate fair comparisons across various tasks with different metrics, we normalized the raw performance score \(r(M,l)\) (i.e., the performance of LLM \(M\) at context length \(l\)) as follows:
\[\hat{r}(M,l)=\frac{r(M,l)}{r(\text{GPT-3.5-Turbo-16K},1000)+r(M,l)}\]
\(\hat{r}(M,l)\) provides a measure of how other models perform relative to GPT-3.5-Turbo-16K in the length range bucket of 0-1000 tokens, and how their performance deteriorates with longer input.
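In code, the normalization is a direct transcription of the formula above:

```python
def normalized_score(raw_score, reference_score_1k):
    """Normalize r(M, l) against GPT-3.5-Turbo-16K's score in the
    0-1000 token bucket; a value of 0.5 means parity with the reference."""
    return raw_score / (reference_score_1k + raw_score)

# Example: a model scoring 30.0 Rouge-L where the reference scored 40.0:
print(round(normalized_score(30.0, 40.0), 3))  # 0.429
```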
### Results
Figure 2 illustrates the changes in normalized average scores for various evaluated models as context lengths extend, and Figure 3 depicts their ability in the context length range of 0-1000, 1000-4000, and 4000-8000 (the full results for each task can be found in Appendix A.5). Based on the figures, several key observations emerge:
Figure 2: The normalized scores of various models in different context lengths (left), accompanied by the slopes of the corresponding best-fit lines (right). The performance of all models deteriorates with increasing context length.
The performance of all models significantly deteriorates with increasing context lengths. This trend is expected, given that a longer context might necessitate more sophisticated modelling capabilities. It suggests that these LLMs struggle with understanding extensive context. The performance gap between ChatGPT and most open-source models widens as context length increases. This is largely because open-source models tend to exhibit a steeper decline, particularly when the context length exceeds 4k. For example, Vicuna-13B-v1.5-16K achieves competitive performance, compared to GPT-3.5-Turbo-16K, in the 0-4K length range, but its performance drops significantly after that. A notable exception is ChatGLM2-6B-32K, which achieves similar performance when tested on 6K and 8K instances and is only surpassed by GPT-3.5-Turbo-16K on 8K instances.
Fine-tuning with additional long-context data does not offer a significant advantage over NTK scaling alone for understanding long contexts. Both Vicuna and LongChat models are claimed to support long context as they are directly fine-tuned with longer-context data. However, their performance still drops quickly when the context length exceeds 4k, with no additional advantage compared to LLaMA 2 models, which are trained only on 4k data and merely equipped with the NTK scaling method when the context length exceeds 4k. This suggests that existing long-context fine-tuning methods contribute minimally to improving long-context understanding, and that a more efficient and effective way to enhance this ability is needed.
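For reference, the dynamic NTK-aware rescaling mentioned above can be sketched as follows, mirroring the variant popularized in open-source implementations (e.g., Hugging Face's Llama rotary embedding); the head dimension, base, and window size are illustrative defaults, not values taken from this paper.

```python
import numpy as np

def dynamic_ntk_inv_freq(seq_len, dim=128, base=10000.0,
                         max_pos=4096, scaling_factor=1.0):
    """Inverse RoPE frequencies with dynamic NTK-aware scaling: inside
    the trained window the standard base is used; beyond it, the base is
    enlarged so positional angles extrapolate smoothly."""
    if seq_len > max_pos:  # only rescale beyond the training window
        base = base * ((scaling_factor * seq_len / max_pos)
                       - (scaling_factor - 1)) ** (dim / (dim - 2))
    return 1.0 / (base ** (np.arange(0, dim, 2) / dim))

# Frequencies used when scoring, e.g., an 8k-token instance:
freqs_8k = dynamic_ntk_inv_freq(8000)
```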
Multiple-span understanding is more difficult, and semantic retrieval is even harder for competent models. There is a significant drop in performance on tasks requiring multiple-span attention as context lengthens. This is expected, since attending to multiple positions is naturally harder than attending to a single one, and, compared to global understanding, it may require an additional ability to distinguish and locate the relevant spans. Surprisingly, semantic retrieval is only more challenging for GPT-3.5-Turbo-16K, the most competent model in the experiment. We hypothesize that this is because explicit retrieval, looking for relevant information by an identifier, is an unnatural task for less competent and less generalized LLMs. On the contrary, semantic retrieval is more similar to tasks like QA that these models experienced during instruction fine-tuning.
#### 4.3.1 Ablation Study
We perform further analysis to understand how models behave in different languages and locations of the supporting document.
Impact of language differences on long-context understanding. Tasks in different languages may have distinct ability requirements due to the nature of languages and the effects of tokenization. While most models presented in this study are primarily trained on English data, we aim to assess the influence of language differences on the results. In Figure 4, we compare the performance of the top-performing models (namely ChatGPT, ChatGLM2, Vicuna, and LongChat) in both Chinese and English tasks to determine if their long-context understanding abilities differ across languages.
Figure 3: The comparison of abilities of various models in three context length ranges, respectively. It shows that multi-span understanding is more difficult in general. While semantic retrieval appears to be intuitively more challenging, our findings indicate that it is only more demanding for competent models such as GPT-3.5-Turbo-16K at longer lengths.
We observe a comparable decline in performance for both GPT-3.5-Turbo-16K and ChatGLM2-6B-32K across the two languages. However, the Vicuna and LongChat models exhibit a more pronounced performance drop in Chinese. This suggests that the degradation of understanding ability when the context length increases is not unique to English. Furthermore, the diversity of data employed during fine-tuning, as highlighted by ChatGLM2's emphasis on its bilingual (Chinese and English) proficiency during its tuning process, appears to be a successful strategy in handling bilingual long context input.
## 5 Conclusion
In this paper, we propose M\({}^{\text{4}}\)LE, a benchmark for assessing the long-context understanding capability of LLMs. To establish a benchmark with diverse NLP tasks, rather than just those that are inherently lengthy, we propose a systematic method to convert short NLP task instances into long-context inputs, encompassing five distinct abilities. We collect and construct a total of 36 tasks from different sources and domains, covering multiple length ranges to maximize the diversity of the tasks in the benchmark, with customized construction methods that can flexibly extend to arbitrary context lengths. We evaluate 11 well-known LLMs with our benchmark and find that current models struggle to understand long-context inputs, and that their performance is related to the ability type, the data used during fine-tuning, and the position of the relevant information.
|
2306.02838 | Impact of the COVID-19 outbreak on the Italian Twitter vaccination
debate: a network-based analysis | Vaccine hesitancy, or the reluctance to be vaccinated, is a phenomenon that
has recently become particularly significant, in conjunction with the
vaccination campaign against COVID-19. During the lockdown period, necessary to
control the spread of the virus, social networks have played an important role
in the Italian debate on vaccination, generally representing the easiest and
safest way to exchange opinions and maintain some form of sociability. Among
social network platforms, Twitter has assumed a strategic role in driving the
public opinion, creating compact groups of users sharing similar views towards
the utility, uselessness or even dangerousness of vaccines. In this paper, we
present a new, publicly available, dataset of Italian tweets, TwitterVax,
collected in the period January 2019--May 2022. Considering monthly data,
gathered into forty-one retweet networks -- where nodes identify users and
edges are present between users who have retweeted each other -- we performed
community detection within the networks, analyzing their evolution and
polarization with respect to NoVax and ProVax users through time. This allowed
us to clearly discover debate trends as well as identify potential key moments
and actors in opinion flows, characterizing the main features and tweeting
behavior of the two communities. | Veronica Lachi, Giovanna Maria Dimitri, Alessandro Di Stefano, Pietro Liò, Monica Bianchini, Chiara Mocenni | 2023-06-05T12:43:39Z | http://arxiv.org/abs/2306.02838v1 | # Impact of the COVID-19 outbreak on the Italian Twitter vaccination debate: A network-based analysis
###### Abstract
Vaccine hesitancy, or the reluctance to be vaccinated, is a phenomenon that has recently become particularly significant, in conjunction with the vaccination campaign against COVID-19. During the lockdown period, necessary to control the spread of the virus, social networks have played an important role in the Italian debate on vaccination, generally representing the easiest and safest way to exchange opinions and maintain some form of sociability. Among social network platforms, Twitter has assumed a strategic role in driving the public opinion, creating compact groups of users sharing similar views towards the utility, uselessness or even dangerousness of vaccines. In this paper, we present a new, publicly available, dataset of Italian tweets, TwitterVax, collected in the period January 2019-May 2022. Considering monthly data, gathered into forty-one retweet networks -- where nodes identify users and edges are present between users who have retweeted each other --, we performed community detection within the networks, analyzing their evolution and polarization with respect to NoVax and ProVax users through time. This allowed us to clearly discover debate trends as well as identify potential key moments and actors in opinion flows, characterizing the main features and tweeting behavior of the two communities.
Twitter, COVID-19, vaccination, Pro/NoVax community detection, social networks
## 1 Introduction
Vaccines are one of the most powerful weapons to fight infectious diseases. The use of vaccines has, in fact, helped to drastically reduce epidemic mortality rates in the 20th century [1]. Nonetheless, we can observe the presence of an ongoing phenomenon, called _vaccine hesitancy_, which can be identified as a delay in acceptance or a clear refusal of vaccination, despite the availability of vaccination services [2]. Indeed, vaccination skepticism is a phenomenon that has existed since the first vaccine became available. However, vaccine hesitancy is currently a growing global attitude -- supported and amplified by the ease of finding controversial information on the Internet -- that can pose a problem in avoiding the outbreak of communicable diseases. Therefore, recognizing the importance of the phenomenon and the high risk to which it exposes the world population, especially in disadvantaged areas, is essential [3]. For this purpose, the Strategic Advisory Group of Experts (SAGE) on Immunization of the World Health Organization (WHO) has established, since 2012, a specific working group on the subject, led by a joint WHO/Unicef Secretariat group.
The reasons that lead to hesitancy or refusal have been widely studied in the last few decades. For example, in [4], it is shown that increasing levels of perceived economic hardship are associated with vaccine hesitancy, just as less parental education is significantly associated with vaccine refusal for children. However, not only socio-economic conditions are linked to vaccine hesitancy, which is a multifaceted phenomenon where cognitive, psychological and cultural factors all play a critical role [5, 6, 7, 8].
Recently, due to the outbreak of the COVID-19 pandemic, the issue of vaccine hesitancy has become particularly important. Despite overwhelming evidence showcasing the effectiveness of vaccines [9], surpassing even alternative measures like contact tracing and lockdowns [10, 11], a large swath of the population has continued to refuse inoculation, endangering public health and economic and social life [12]. In this specific situation, in fact, social opinions on COVID-19 vaccines were negatively affected also by concerns related to the unprecedented speed with which they were developed [13], with sometimes confusing media communications and unregulated social media information sources fomenting such anti-vaccination sentiment -- not to mention conspiracy theories, which include population control through 5G technology or the extermination of humanity through vaccines [14, 15, 16]. This phenomenon has been exacerbated by the presence of so-called _echo chambers_, virtual environments where like-minded people reinforce their opinions through repeated interactions that amplify their political leanings, beliefs and attitudes [17, 18, 19, 20, 21]. Such a mechanism can lead to increasingly polarized debates and phenomena of extremism [22]. To get an idea of the diffusion of certain forms of thinking, the Annual Report 2021 by Censis (Centro Studi Investimenti Sociali, [https://www.censis.it/rapporto-annuale-censis](https://www.censis.it/rapporto-annuale-censis)) reveals that: _"For 5.9% of Italians (about 3 million) Covid does not exist, for 10,9% the vaccine is useless, for 31.4% it is an experimental drug and people who get vaccinated act as guinea pigs, for 12.7% science produces more harm than good [...], for 19.9% 5G is a sophisticated tool to monitor people"_.
Similarly to many other political and social issues, social networks represent a natural source of aggregation and a powerful medium for the largely uncontrolled dissemination of information. Undoubtedly, social media has had a considerable impact on the health sector, as user sentiment can be used to understand collective panic or to track the dissemination of reliable and unreliable medical claims [23]. Additionally, social media has a recognized role in healthcare campaigns, both those run by businesses and those run by government agencies and non-profits to combat rumors, encourage behavioral change, and share information, enabling the audience to engage and share feedback. Specifically, Twitter is one of the most used social media [24] and can also be identified as a popular source of health information. For these reasons, it can provide realistic insights into society's perception of vaccination. Twitter administration itself explicitly tried to prevent the dissemination of misleading information regarding COVID-19, publishing a series of rules for users ([https://help.twitter.com/it/rules-and-policies/medical-misinformation-policy](https://help.twitter.com/it/rules-and-policies/medical-misinformation-policy)), while leaving compliance with the code of ethics on the matter to users' common sense.
The aim of this work is to exploit Twitter data to study the evolution over the past three years of the Italian vaccine debate since, recently, vaccines have become a very political issue in Italy [25]. The contribution of this paper is manifold. First, we show that, with the outbreak of COVID-19, the vaccine debate has changed: it has significantly intensified and, furthermore, the discussion has gone from being widespread among all interested users (mostly parents of preschoolers) to being concentrated in the hands of a few influential hubs. Second, we have detected and monitored the NoVax and ProVax communities. The relative proportion of the two communities has not changed significantly over time and ProVax users are the most numerous but also the least active. Moreover, we have identified core NoVax users as well as core ProVax users, demonstrating that the former outnumber the latter -- who nonetheless have more followers, mostly among verified users. Finally, we provide a new dataset of Italian tweets, TwitterVax, collected between January 2019 and May 2022, which is publicly available at [https://github.com/veronicalachi/TwitterVax.1](https://github.com/veronicalachi/TwitterVax.1)
Footnote 1: Despite being in contrast with the Twitter ToS, we made the decision to publicly share the tweet texts. We did so because we firmly believe that this act brings significant benefits to the research community.
The paper is organized as follows. In Section 2, we present an overview of related works in the context of vaccine debates and social network analysis. In Section 3, we describe our approach to data collection and analysis, while in Section 4 we present and discuss the obtained results. Finally, Section 5 collects some conclusions and outlines future research perspectives.
## 2 Related works
Several works in the literature -- many of which make use of network-based approaches -- investigate the structure and characteristics of the vaccination debate.
In [26], an analysis of Twitter data and official vaccination coverage rates showed that vaccine opinions from Twitter users could affect the vaccination decision-making process; moreover, the application of a community detection algorithm led to the identification of two user communities: one in support of vaccination, including important and influential users, and one against vaccination, characterized by a lower level of interaction. Using a structural network approach, in [27], it was shown that vaccination skeptics, as well as advocates, reside in their own distinct echo chambers and that the structure of these two communities is different, with skeptics organized into highly connected clusters and supporters characterized by the presence of influential hub users. A social network approach was also used in [28] to demonstrate that NoVax users frequently retweeted each other while ProVax users formed a fragmented network, with fewer connections; moreover, while the ProVaxes were mostly healthcare workers, the NoVaxes were mainly parents and activists. In [29], seven different communities were identified, including health workers, writers and journalists, anti-establishment people and international vaccination advocates; contents shared by the healthcare
community hardly reached other communities, while messages tweeted by anti-establishment users managed to filter to other communities.
While several works have been proposed to study the structure of the vaccine debate and the interactions between vaccination advocates and skeptics, still little research has been conducted on how the outbreak of the COVID-19 pandemic has influenced this controversial discussion. In [30], a comparison of tweets from the four months prior to the onset of COVID-19 and tweets from the four months following the outbreak of the pandemic reveals that vaccine opponents on Twitter increased by 80%. Finally, in [31], the COVID-19 debate is analyzed by focusing on people who interacted and shared opinions on Twitter. Based on a sentiment dataset, called COVIDSenti, composed of 90,000 COVID-19-related tweets collected in the early stages of the pandemic (from February to March 2020), it was shown that, at the onset of the pandemic, people favored lockdown while, as expected, sentiment shifted by mid-March.
As far as the Italian scenario is concerned, only a few works have been published so far. In [32], a new collected dataset was used to describe the polarization and volumes of tweets, only focusing on a few months just after the vaccination campaigns (between October 2020 and January 2021). The dataset was crawled using only the four words "vaccino/i" and "vaccinazione/i" and manually annotated, showing a higher percentage of NoVax users. Furthermore, in [33], sentiment analysis was proposed for COVID-19 vaccine hesitancy posts, collected from several different social networks, with a focus on the COVID-19 vaccine and, specifically, the booster shot. In [34], more than 316 million Twitter messages were gathered between October 2019 and March 2021 to quantify disinformation flows among users from different countries. Indeed, the tweets were written in more than eighteen languages (including Italian), with a focus on the global debate rather than considering national realities. Finally, in [35], a comprehensive study was conducted by collecting a dataset of 16,223,749 Italian tweets spanning from September 2019 to December 2021. The main objective was to investigate the impact of unprecedented experiences and measures related to the COVID-19 pandemic on polarization in vaccine discussions. The results obtained clarified that, despite the outbreak of COVID-19, the echo chamber phenomenon within vaccine discussions persisted.
The aim of the present work is to extend the research on the Italian vaccination discussion, particularly characterizing the structural evolution of the debate and investigating the changes in the NoVax and ProVax communities in terms of size, productivity, and core users.
## 3 Materials & methods
Twitter, a widely recognized social networking platform, had its data publicly accessible via its application programming interface (API) until February 2023. In this paper, we use a network-based approach to study the evolution of Italian public opinion on vaccines over time. Transforming the unstructured information available from Twitter into graph data has allowed us not only to employ common metrics used on networks to estimate their topological features but also to apply some powerful algorithms for community detection.
### Data Collection
To perform data collection, the Twitter API v2 has been employed. In particular, the Twitter tweets/search/all endpoint2 has been used, which allows us to search the full archive of tweets and to filter them using a set of keywords. By crawling the Twitter archive based on twenty Italian vaccine-related terms as keywords, we were able to collect 9,068,389 tweets -- contributing to the vaccination debate -- posted by 300,653 users from 1 January 2019 to 31 May 2022. In particular, the twenty keywords selected to build the TwitterVax dataset were: _vaccino, vaccini, vaccinazione, vaccinazioni, vaccinare, vaccinarei, vaccinarei, vaccinarei, vaccinarei, vaccinarei, vaccinato, va@@ino, va..ino, vaxino, #iomiovaccino_. Some of the words, _vaxino_ for example, were specific jargon related to the debate and were frequently used in tweets by both the ProVax and NoVax communities. In contrast to the approach taken in [35], we have made the decision to exclusively employ Italian keywords in our analysis. In Figure 1, the most common words in our dataset are shown, where the font size is a quantitative indicator of the number of occurrences of a specific word in the TwitterVax collection. Moreover, we further filtered the tweets to return Italian texts only. Finally, we decided to study the dynamics of users' opinions about vaccines by dividing the entire period into forty-one sub-periods, of the duration of one month each. The one-month sampling heuristically proved to be the one that best preserves the information on the evolution of the community structure.
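A paginated crawl of this kind can be issued with a loop like the following sketch; the bearer token is a placeholder, the query shows only a subset of the twenty keywords, and the endpoint has since been discontinued, so this is illustrative only.

```python
import requests

URL = "https://api.twitter.com/2/tweets/search/all"
HEADERS = {"Authorization": "Bearer <ACADEMIC_ACCESS_TOKEN>"}  # placeholder

params = {
    "query": "(vaccino OR vaccini OR vaccinazione) lang:it",  # keyword subset
    "start_time": "2019-01-01T00:00:00Z",
    "end_time": "2022-05-31T23:59:59Z",
    "max_results": 500,
}

tweets = []
while True:
    page = requests.get(URL, headers=HEADERS, params=params).json()
    tweets.extend(page.get("data", []))
    next_token = page.get("meta", {}).get("next_token")
    if next_token is None:
        break
    params["next_token"] = next_token  # walk the archive page by page
```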
Interestingly, when using the Search API, some content is omitted from the dataset, especially content that has been deleted or posted by suspended users. The findings of [36] shed light on this issue, revealing that a considerable proportion (72%) of tweets captured through the Streaming Search API at the time of posting were not retrieved via the Historical Search API. This stark contrast in retrieval rates serves as a strong indication that a significant amount of Twitter content is swiftly removed from the platform, either due to user deletions or Twitter's enforcement actions
against violations of community norms. This phenomenon highlights the dynamic nature of the platform, wherein a considerable portion of tweets is ephemeral, quickly disappearing from public view.
### Network construction and indicators
Using the data from each month, we constructed an ordered set of forty-one networks. In each graph, nodes represent the active users of the platform in that period, while undirected edges connect users that have retweeted each other at least once. This approach addresses a limitation highlighted in [35], where an edge was considered only if there were at least two retweets, resulting in the exclusion of over half of the initial users. Since retweeting is the act of sharing another user's post without modifying it, people tend to retweet content they approve and to build a relationship with users with the same opinion about certain topics [37, 38, 39]. Therefore, relationships between users induce an "agreement network", in which groups of people with similar leaning are strongly connected to each other.
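A minimal sketch of this construction, assuming the month's retweets are available as (retweeter, retweeted-user) id pairs (the function name and data layout are ours):

```python
import networkx as nx

def monthly_retweet_graph(retweet_pairs):
    """Undirected agreement network for one month: one node per active
    user, one edge per pair of users linked by at least one retweet."""
    g = nx.Graph()
    g.add_edges_from(retweet_pairs)  # duplicate pairs collapse into one edge
    return g
```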
In order to reduce the dimension of the networks, while keeping the most significant information, only the largest connected components have been used. In this way, the change in the discussion volume can be observed and the evolution of the network structure -- showing how the debate evolves in time -- can be described, by measuring its connectivity features. In particular, the following metrics were considered in the analysis.
* _Density_ -- It represents the proportion of possible relationships in the network that are actually present [40] or, in other words, it is calculated as the ratio of the number of edges in a graph over the number of edges of the complete graph with the same number of nodes. Therefore, it provides a measure of how dense a graph is in terms of edge connectivity.
* _Average clustering coefficient_ -- The local clustering coefficient of a node in a graph quantifies how close its neighbours are to being a clique (i.e., a complete graph). In other words, when computed on a single node, the clustering coefficient is a measure of how complete the neighborhood of a node is. Therefore, the overall level of clustering in a network is measured by averaging the clustering coefficient [40, 41] over all the network nodes. The clustering coefficient of a single node is calculated as: \[c_{v}=\frac{2T(v)}{deg(v)(deg(v)-1)}\] where \(T(v)\) is the number of connected triangles including node \(v\) and \(deg(v)\) is the degree of \(v\).
* _s-metric_ -- The s-metric [42] is a structural metric that provides a measure of the extent to which a graph is scale-free. It is calculated as: \[s(\mathcal{G})=\sum_{(v,u)\in\mathcal{E}}deg(v)\times deg(u)\] where \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is a graph, \(\mathcal{V}\) is the set of its vertices or nodes, \(\mathcal{E}\) is the set of its edges, and \(deg(v)\) is the degree of the node \(v\). A scale-free network [43] is a network whose degree distribution follows a power law, at least asymptotically. The main characteristic of scale-free networks is the presence of nodes with a degree that significantly exceeds the average degree.
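Restricted to the largest connected component as described above, these indicators can be computed with networkx; in the sketch below (names are ours) the s-metric is evaluated directly from its definition to avoid version-dependent APIs.

```python
import networkx as nx

def structural_metrics(g):
    """Density, average clustering coefficient and s-metric of the giant
    component of a monthly retweet network."""
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return {
        "density": nx.density(giant),
        "avg_clustering": nx.average_clustering(giant),
        "s_metric": sum(giant.degree(u) * giant.degree(v)
                        for u, v in giant.edges()),
    }
```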
Figure 1: Keywords used in the Twitter API calls. The font size identifies the number of times the various words occur in our dataset.
### Community detection and user labeling
Since the vaccination discussion is highly contentious, it is reasonable to assume that, at different stages, each active user takes a specific position in the debate. In particular, given the high polarization of this topic [44], we hypothesize that just two cohorts of people populate the platform: the vaccination skeptics (NoVax) and the vaccination advocates (ProVax). Classifying each user into these two categories for each month is not an easy task. The simplest and most reliable way to label a user as NoVax or ProVax is to analyze the content of their tweets, which is impossible to do manually, due to the large amount of text. Furthermore, despite significant advancements in natural language processing (NLP) methods for analyzing tweets, including for COVID-19 content [45, 46], this area of research continues to present ongoing challenges. The brevity of tweets, constrained to a maximum of 280 characters, coupled with their complex and unusual semantics, poses significant obstacles. As a result, the effective utilization of NLP techniques in this domain remains a subject of active investigation. Thus, we decided to use the structure of the network to infer the opinion of the users.
Identifying communities of highly connected users within a network is one of the most popular and challenging problems in network science [47, 48]. To perform community detection we have used the multi-level graph partitioning algorithm METIS [49]. The choice of METIS was driven by several reasons. Firstly, the extensive usage of METIS in significant studies within the literature [38, 27] ensures the comparability of our results with existing research. Additionally, METIS has proven its effectiveness in handling retweet networks [38]. Furthermore, the partitions generated by METIS consistently exhibit superior quality, with improvements ranging from 0% to 50% compared to spectral partitioning algorithms [50]. Indeed, extensive experimentation across diverse graphs has demonstrated METIS's superiority in terms of speed, surpassing other widely utilized partitioning algorithms by one to two orders of magnitude [51]. Finally, we required a clustering algorithm in which the number of communities is defined in advance, as we aimed to identify exactly two communities (ProVaxes and NoVaxes).
The idea behind METIS is to create successively smaller graphs \(\mathcal{G}_{1}\), \(\mathcal{G}_{2}\),..., \(\mathcal{G}_{k}\) from \(\mathcal{G}_{0}=(\mathcal{V},\mathcal{E})\), to obtain a partition of \(\mathcal{G}_{k}\) in a short time, and project the partition back onto \(\mathcal{G}_{0}\), while refining it at each step. In particular, METIS consists of two stages: coarsening and refinement.
1. _Coarsening_ -- The original graph, \(\mathcal{G}_{0}\), is transformed into sequentially smaller graphs \(\mathcal{G}_{1}\), \(\mathcal{G}_{2}\),..., \(\mathcal{G}_{k}\), such that \(|\mathcal{V}_{0}|>|\mathcal{V}_{1}|>|\mathcal{V}_{2}|>\ldots>|\mathcal{V}_{k}|\). If \(\mathcal{G}_{k}\) is meant to be a good representation of \(\mathcal{G}_{0}\), a good partitioning of \(\mathcal{G}_{k}\) represents a fairly good partitioning of \(\mathcal{G}_{0}\). \(\mathcal{G}_{k}\) is small enough to make partitioning very quick.
2. _Refinement_ -- The partition \(\mathcal{P}_{k}\) is projected back onto \(\mathcal{P}_{k-1}\),..., \(\mathcal{P}_{0}\). After each projection \(\mathcal{P}_{i}\), the partitioning is refined using a greedy algorithm. Each partition \(\mathcal{P}_{i}\), for \(0\leq i\leq k-1\), is refined before projecting to \(\mathcal{P}_{i-1}\).
Following the procedure adopted in [27], we applied METIS one hundred times, choosing two as the number of communities. This leads to a vector of one hundred elements, with entries equal to 0 or 1, representing the partition assignment for each user. Averaging the assigned partition across the vector allows us to obtain a score between 0 and 1, which represents the probability that the corresponding user belongs to one of the two partitions. The hyper-parameter of METIS, that is the relative size of the partitions, was tuned by maximizing the number of users whose leaning score is in the 95% confidence interval between 0 and 1. Finally, the tweets of the 10% of users characterized by the most extreme scores have been read in order to classify them manually as NoVax or ProVax. Every remaining user is assigned to the same partition as the "extreme" user with the closest score. Using the resulting communities of users, our aim is to monitor the change in the relative size of the two partitions and to measure the polarization over time, where the polarization is the relative density of the in-group agreement with respect to the out-group agreement. Following [52], we calculated, for each timestamp \(t\), the polarization score \(S_{t}\) as:
\[S_{t}=\frac{(E_{n}+E_{p}-E_{o})}{(E_{n}+E_{p}+E_{o})} \tag{1}\]
where \(E_{n}\) is the edge density in the NoVax partition -- calculated as the number of observed edges within the community divided by the total number of possible edges in the NoVax partition, i.e., \(|\mathcal{V}_{n}|\times(|\mathcal{V}_{n}|-1)/2\) --, \(E_{p}\) is the edge density in the ProVax partition and, finally, \(E_{o}\) is the density of edges connecting the two partitions -- calculated as the number of edges linking NoVax and ProVax users normalized by the total number of possible edges between the two communities, i.e., \(|\mathcal{V}_{n}|\times|\mathcal{V}_{p}|\). The closer \(S_{t}\) is to \(1\), the denser the in-partition edges are compared with the out-partition edges and the more polarized the debate is. Conversely, a polarization score near \(-1\) indicates that the ties of the network are equally distributed within and between communities, meaning that the discussion is not significantly polarized.
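The repeated bisection and the polarization score can be sketched as follows; we assume a Python METIS wrapper whose part_graph accepts networkx graphs and forwards a seed option (as the PyPI metis package does), and all function names are ours.

```python
import numpy as np
import metis      # assumed wrapper around the METIS library
import networkx as nx

def leaning_scores(g, n_runs=100):
    """Average 0/1 membership over repeated METIS bisections; the mean
    is the leaning score in [0, 1] described above."""
    nodes = list(g.nodes())
    runs = np.zeros((n_runs, len(nodes)))
    for i in range(n_runs):
        _, parts = metis.part_graph(g, nparts=2, seed=i)  # seed varies runs
        runs[i] = parts
    return dict(zip(nodes, runs.mean(axis=0)))

def polarization_score(n_nov, n_pro, e_nov, e_pro, e_cross):
    """S_t of Eq. (1) from node and edge counts of the two partitions."""
    dens_n = e_nov / (n_nov * (n_nov - 1) / 2)
    dens_p = e_pro / (n_pro * (n_pro - 1) / 2)
    dens_o = e_cross / (n_nov * n_pro)
    return (dens_n + dens_p - dens_o) / (dens_n + dens_p + dens_o)
```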
### Vaccination debate analysis through the global multiplexity matrix
In order to provide a view of the evolution of the communities in time, we constructed the global multiplexity matrix \(M\), which has proven to be successful in dynamical network analysis [53, 54]. In our case, both rows and columns of
\(M\) correspond to active users, while each entry \(M(i,j)\) counts the number of times that users \(i\) and \(j\) belong to the same community in the forty-one periods considered in our analysis. In fact, the monthly networks would not really represent a multiplex. Multiplex networks [55, 56, 57] are, indeed, made up of the same set of nodes over time. Therefore, we decided to construct \(M\) based on all the users who participated in the discussion at any moment during the considered period of time -- which is not an issue since our analysis aimed to select only those nodes that always belong to the same community, during the whole period. This revealed core NoVax users, as well as core ProVax users over time.
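Since materializing a 300,653 x 300,653 matrix is impractical, an equivalent entry-wise computation can start from per-user label sequences; a sketch, with an assumed encoding in which -1 marks months of inactivity:

```python
import numpy as np

def co_membership(seq_i, seq_j):
    """M(i, j): number of the 41 months in which users i and j were both
    active and assigned to the same community."""
    both_active = (seq_i >= 0) & (seq_j >= 0)
    return int(np.sum(both_active & (seq_i == seq_j)))
```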
Finally, we have identified whether the core communities (for NoVaxes and ProVaxes) were composed of _verified_ users or not, also counting the number of connections of the corresponding nodes. Verified users -- which in Twitter are indicated with a blue tick -- are people (journalists, actors, presenters, singers, etc.) or companies whose identity has been checked by Twitter, guaranteeing the profile authenticity. Therefore, identifying whether the most active and influential users, from both sides of the debate, belong to this category represents an important issue. The number of verified users' followers is also a fundamental indicator to quantify how much they are able to spread information.
## 4 Results and discussion
### Temporal evolution of the network structure
The retweeting relationship between users allowed us to construct forty-one retweet networks, one for each month. The giant connected component present in all the monthly networks includes more than 90% of the respective total number of nodes. As can be deduced from Figure 3, both nodes and edges follow an increasing trend over time until the beginning of 2022, witnessing a huge growth in the number of people interacting about vaccines. Such an increase in the volume of the discussion, after the COVID-19 outbreak, was easily foreseeable. In particular, a real surge in users and tweet exchanges was registered around the 21st month considered in our analysis. This is not surprising, as October 2020 coincides with the onset of the second and more aggressive wave of COVID-19 and, in turn, the spread of rumors about the imminent release of anti-Covid vaccines. In fact, after a few months during which the debate was weaker (see also Figure 6, where the bottleneck in the clustering leaning score is visible), a strong peak can be observed in the number of nodes and edges in early autumn 2020. After that, the volume of the debate seems to remain high, until a final declining phase around April-May 2022, due to the pandemic coming under control, with a consequent decrease of interest in the topic. Interestingly, we detected a positive cross-correlation (with lag equal to \(-1\)) between the time series of the Twitter users engaged in vaccine-related discussions and the number of Google searches, based on the same keywords, obtained from Google Trend (Figure 2).
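The lagged correlation can be reproduced with numpy along these lines (a sketch; the series names are ours and the lag sign follows numpy's convention):

```python
import numpy as np

def cross_correlation(x, y):
    """Normalized cross-correlation between two equal-length monthly
    series, returned with the corresponding lags."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    corr = np.correlate(x, y, mode="full")
    lags = np.arange(-(len(y) - 1), len(x))
    return lags, corr

# lags[np.argmax(corr)] would give the lag of peak correlation between
# node counts and Google Trends search volumes.
```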
However, it is interesting to monitor how the structure of the debate has evolved over time. For this purpose, we computed, for each monthly network, some structural metrics: density, mean clustering coefficient, mean node-degree and s-metric. Their values over time are plotted in Figure 4.
Retweet networks show a decrease in the density metric through early 2022 (Figure 4, top left). Thus, even if there is a growth over time in both the number of nodes and edges, they do not grow at the same rate: as the number of users grows, the connections between them become sparser, making the network less similar to a complete graph. This tendency is confirmed also by the negative trend of the average clustering coefficient (Figure 4, top right), which implies a decrease in the probability of detecting triangle patterns within the network. Such findings, together with the rise of the average node-degree (Figure 4, bottom left),
Figure 2: Cross-Correlation between the time-series of the number of nodes and the time-series of the research interest on Google Trend.
lead us to conclude that the structure of the debate significantly changed over time. In particular, although all the forty-one networks show a scale-free pattern -- as revealed by the power-law degree distribution (Figure 5) and by the low clustering coefficient [58] --, at the beginning, the discussion looks more uniformly spread across all users, and later on it gets more and more concentrated in the hands of a small number of influential nodes. These _hubs_ play a key role in the diffusion of opinions about vaccines. More specifically, hubs are nodes characterized by a very high node-degree and most other users tend to retweet them massively, while building few connections with low-degree nodes. The increasing s-metric score (Figure 4, bottom right) confirms that the network becomes more scale-free as months go by.
Opposite conclusions can be drawn for the structural changes of the debate since the beginning of 2022: the end of the vaccination campaign and the spread of the less aggressive Omicron variant seem to turn off the debate (Figure 3) and to make it more similar -- in terms of structure -- to the pre-Covid era.
Monitoring the evolution of the network structure over time is crucial, as changes in the structure can significantly impact the spread of information and disinformation [59]. Indeed, such changes lead to a more efficient spread of information, since a sparser network with reduced node clusters allows for faster and wider dissemination of information. This is because the information can travel quickly across the network without getting stuck in highly clustered regions or bottlenecks. However, this also means that disinformation can spread more efficiently as well, especially if the malicious actors responsible for spam or fallacious content have managed to infiltrate the network's key nodes. Disinformation campaigns can take advantage of a sparse network structure to target a wider audience, and the reduced clustering can make it difficult for fact-checkers and debunkers to combat the spread of false information.
### Temporal evolution of the community composition and network polarization
As previously explained, we applied the METIS algorithm one hundred times to produce two partitions on the forty-one retweet networks. The result of the procedure is presented in the heat-map of the leaning scores shown in Figure 6. Each row of the heat-map corresponds to a month (between 1 and 41) while each column is a number between 0 and 1 (represented with at most two decimal places), defining a possible leaning score. Indeed, the leaning score (see Section 3.3) describes the probability that a user belongs to one of the two communities. The color of each entry represents the fraction of users that, in the corresponding sub-period, share that leaning score: the lighter the color the higher the percentage of users characterized by that score. Intuitively, a huge fraction of users assigned to one of the two extremes witnesses a highly partitioned network. Based on Figure 6, as the months go by, there is a higher percentage of users with extreme leaning scores. This suggests that the network structures become increasingly partitioned, enabling the algorithm to more accurately detect communities. However, a bottleneck is present approximately between August 2020 and February 2021. In this period, a new lockdown started, during which each region was assigned a color corresponding to the spread of the virus and consequently to the severity of the restrictions imposed on the population. This situation corresponds to a highly uncertain time frame and is consistent with the abrupt increase in the debate size, shown in Figures 3 and 4 (highlighted by the yellow box). Another plausible explanation for this bottleneck could be
Figure 3: Evolution of the number of nodes (left) and edges (right) of the retweet network in time (during the forty–one months considered). The plots clearly show how the beginning of the second wave of COVID-19, which caused the second lockdown and the highest number of deaths in Italy, has changed the dimension of the debate significantly.
the concurrent rollout of the vaccination campaign, which garnered heightened public attention and may have acted as a connecting factor between the two communities during that particular period. After the bottleneck, a continuous increase of the clustering leaning score is observed, indicating a stronger polarization of the debate.
In order to characterize the two communities found by the algorithm, the tweets of the users with the most extreme scores were qualitatively read and assessed. For each month, users with similar extreme leaning scores tweeted similar content, guaranteeing the fairness of the partitioning procedure. Every other node of the network is then classified based on the community membership of the "extreme" user with the closest score. Four examples of the retweet networks, with the respective communities, are reported in Figure 7. From now on the blue and orange colors will indicate ProVax and NoVax users, respectively. Once the two communities were identified, it was possible to study the relative proportion of vaccination skeptics and advocates for each sub-period (Figure 8). Before the 27th month (March 2021), the relative proportion of ProVax and NoVax users did not follow a particular trend and, mostly, the former were more numerous than the latter. Since April 2021, however, vaccine advocates have become progressively less active, being outnumbered by skeptics from January 2022.
Figure 4: Evolution of network metrics over time. In all the plots, the \(x\) axis shows the month numbers, while the \(y\) axis describes the values of the four network measures considered (density, average clustering coefficient, average degree and s–metric). The yellow box indicates the transition period, reported also in Figure 3, approximately ranging from July 2020 to February 2021.
An interesting insight we could gather from our analysis is also given by the number of tweets exchanged within the two communities. As shown in Figure 9, NoVax users were generally more productive than ProVax users. The latter tweeted more than the former only during the first five months of the vaccination campaign. Starting from the 28th month (April 2021) the productivity of the two groups begins to follow two different trends: NoVax users become increasingly productive, while the number of tweets from ProVax users decreased significantly. Furthermore, with the beginning of the COVID-19 vaccination campaign, the subsequent political issues around the green pass, and the
Figure 5: Degree distribution of the retweet networks at months 1, 21, 31, 41. All the networks are scale–free.
Figure 6: Heat–map of the leaning scores. The \(x\) axis represents the leaning score value (which varies in [0,1]), while the \(y\) axis shows the numbering of the months. The lighter the color the greater the percentage of users sharing the same leaning score.
Figure 7: Graph representation of the retweet networks at months 1, 21, 31, 41; the blue and orange nodes represent the ProVax and NoVax users, respectively.
introduction of mandatory vaccination law for some categories of workers, the vaccination skeptics became not only more present in the debate, but also more and more active. Going into detail, the four peaks of the ProVax curve correspond exactly to the administration of the vaccine to medical personnel (December 2020) and to the first, second and third dose for the entire population (indicatively, April, July and November 2021). In the case of NoVax users, the vaccination skeptics became more prominent in the debate, but also more in the debate.
Figure 8: Relative proportion of NoVax (orange) and ProVax (blue) users over the forty–one months, from January 2019 to May 2022.
Figure 9: Total number of tweets from NoVax (orange) and ProVax (blue) users for each month.
instead, the two most significant peaks relate to the imposition of the green pass obligation (August 6, 2021) and the order of the Ministry of Health (February 8, 2022) for the cessation of the obligation to wear outdoor masks, while the local minimum in April 2022 corresponds to the end of the state of emergency. The correspondence between the peaks of ProVax and NoVax activities and national-level events suggests a substantial difference of base motivation between the two groups, with the NoVax being mostly driven by self-interest considerations. Amidst the vast body of research on vaccine hesitancy, particularly in the context of the COVID-19 pandemic [60, 61, 12, 62], some studies have focused on exploring the impact of social trust levels on vaccination rates [63, 64, 65]. In particular, our findings align with those of a study conducted in Italy [66], which examined the relationship between vaccination status, social context, social trust, and adherence to core institutional structures, such as the rule of law and collective commitments. The study revealed that individuals with higher levels of social trust are less likely to remain unvaccinated against COVID-19. Conversely, unvaccinated individuals show less support for honoring other collective commitments unrelated to COVID-19, compared to vaccinated individuals, ceteris paribus. These findings highlight the importance of considering a social contract perspective, alongside the social context, in the study of vaccine hesitancy. As such, they have significant implications for guiding policymakers in developing effective strategies to promote vaccination. Specifically, they suggest that appeals emphasizing individual benefits may be more successful in encouraging vaccination compared to appeals centered around collective responsibility.
Finally, the polarization score described in Eq. (1) was computed for each sub-period (Figure 10). The polarization has followed an increasing trend over time with an oscillating phase corresponding to late 2020 and early 2021. This confirmed that in the month of the so-called "Vax-day" (December 2020) not only was the network less clearly divided, but the debate was also less polarized. These observations provide further evidence supporting the assertions made in [35], namely that the arrival of COVID-19 has failed to alleviate the echo chamber effect in vaccine discussions. On the contrary, our analysis indicates that the advent of COVID-19 has exacerbated the polarization of the debate.
### Possible strategies to slow down vaccine misinformation
By identifying communities over time, we can develop effective strategies to curb the spread of vaccine misinformation and limit the growth of the NoVax community. One promising approach is to pinpoint nodes with a central role in the NoVax community and disrupt their influence through node attacks, such as account blocking. To determine node importance, we employed the betweenness centrality metric for NoVax nodes. Analyzing the betweenness of NoVax nodes enabled us to examine the pathways of misinformation spreading. Specifically, we analyzed the subgraph induced by the NoVax nodes at each time step and calculated the distribution of betweenness among the nodes in the subgraph. Our analysis revealed that all the distributions followed a power-law trend, indicating that only a few nodes had high betweenness at each time step. We then determined the number of nodes with extremely high betweenness by calculating the percentage of nodes outside the 95th percentile of the distribution for each time step (Figure 11). Although the number of nodes with high betweenness increased over time, they constituted a small portion of the total nodes. Therefore, account blocking could be an effective strategy to hinder their ability to spread misinformation. Other node attacks, based on metrics such as degree and closeness, could also fragment the NoVax community.
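The per-month computation can be sketched with networkx (names are ours; note that, because the betweenness distribution has heavy ties at zero, the fraction above the 95th percentile is not trivially 5%):

```python
import numpy as np
import networkx as nx

def high_betweenness_fraction(g, novax_nodes, pct=95):
    """Fraction of NoVax users whose betweenness centrality, computed on
    the NoVax-induced subgraph, exceeds the pct-th percentile."""
    sub = g.subgraph(novax_nodes)
    bc = np.array(list(nx.betweenness_centrality(sub).values()))
    return float(np.mean(bc > np.percentile(bc, pct)))
```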
Figure 10: Polarization score over the forty–one months considered.
### Characterization of the core NoVax/ProVax users
The global multiplexity matrix \(M\), of dimension 300,653\(\times\)300,653, is found to be sparse. Each entry \(M(i,j)\) is an integer between 0 and 41 representing the number of months in which the two users \(i\) and \(j\) belong to the same community. First, we extracted all pairs of users who have been in the same community throughout the period, i.e. pairs of users whose matrix entry equals 41. Then, we used the extracted information to construct a new network connecting only such selected users. The resulting graph (Figure 12) has seven connected components, whose dimensions are shown in Table 1.
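The extraction of the stable pairs and the components of the induced graph can be sketched as follows, assuming \(M\) is stored as a scipy sparse matrix (the function name is ours):

```python
import networkx as nx
from scipy import sparse

def stable_components(M, min_months=41):
    """Connected components of the graph linking user pairs whose
    multiplexity entry M(i, j) reaches `min_months`."""
    rows, cols = (sparse.triu(M, k=1) >= min_months).nonzero()
    g = nx.Graph(list(zip(rows.tolist(), cols.tolist())))
    return sorted(nx.connected_components(g), key=len, reverse=True)
```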
It is worth noting that each component is fully connected because requiring that nodes belong to the same community forty-one times -- which is the maximum possible -- is a transitive membership relation. As we can see in Table 1, the largest connected component is totally composed of users that have always been NoVax, whereas convinced ProVax users populate the second largest component. The other five connected components are composed of a few users that have changed their mind about vaccines at least once during the forty-one months. However, it is also possible that some convinced and active users have not tweeted during some periods. Considering this, we decided to relax the constraint by requiring that nodes belong to the same community just forty times. In this case, the graph (Figure 12)
Figure 11: Fraction of NoVax users characterized by a betweenness centrality outside the 95th percentile.
Figure 12: Users that belong to the same community for forty–one months, forty months and thirty–nine months. The two colors (blue/orange) represent the two core communities (ProVax/NoVax) identified among these subgroups.
has four connected components -- no longer fully connected --, with the largest one composed of the core NoVax users (Table 2).
Decreasing the number of periods to thirty-nine, the two core NoVax and ProVax sets became larger and more stable: the graph at this point contains only two connected components (Figure 12), whose dimension and composition are shown in Table 3. Based on Table 3, we can also conclude that the NoVax supporters are more numerous and stable over time, whereas the core set of ProVax users is significantly smaller. Nevertheless, the most convinced and active ProVax users have 100,200 followers on average, while NoVaxes generally have fewer followers (44,067 on average), even if they are more constant and more present in the debate. Furthermore, none of the users who make up the NoVax backbone are verified users, while 15.85% of vaccine advocates are.
Another noteworthy observation pertains to the structure of the core NoVax and ProVax users. As depicted in Figures 12b and 12c, it is evident that the NoVax users exhibit a significantly more homogeneous structure compared to the ProVax users. This characteristic may be closely tied to their efficacy in disseminating false information. As highlighted in [67], the act of sharing false information, even if subsequently debunked, tends to correlate with individuals either postponing vaccination or outright rejecting it.
Figure 13 shows a selection of the most popular users on both sides. Interestingly, among ProVaxes, the most followed user is Dr. Roberto Burioni, a famous Italian virologist who played an important role in health information during the COVID-19 pandemic and even before. The most influential and active NoVax user is instead Byoblu, a counter-information blog founded by Claudio Messora, a well-known member of the NoVax community in Italy, as well as an advocate of conspiracy theories on the health dictatorship.
## 5 Conclusions

In this paper, we presented the new dataset TwitterVax, built using the Twitter API by crawling tweets related to several vaccine keywords. This first collection step allowed us to exploit Twitter data from a network perspective, in order to understand the changes in the structure, scale and polarization of the vaccine dispute in Italy, from January 2019 to May 2022. Based on the collected data, we showed that, with the outbreak of COVID-19, the debate has not only dramatically intensified, but has also become more concentrated in the hands of a few influential hubs, who have played a vital role in disseminating vaccine information. However, this trend seems to have reversed since the beginning of 2022, testifying that the change in the structure of the discussion was probably due to the COVID-19 emergency. We successfully detected the NoVax and ProVax communities, demonstrating that the relative proportion of the two user cohorts does not change significantly over time. In particular, the NoVax community is often the least populated but also the most active in terms of the number of tweets. Moreover, we computed a polarization score between the two user groups, demonstrating an increasing level of polarization. Finally, using a multiplexity approach, we identified the core of NoVax and ProVax users. In this way, we have verified that core NoVaxes outnumber core ProVaxes who, however, have more followers on average and a higher percentage of verified users.
Future perspectives include the possibility of collecting a more complete dataset, covering a longer time period, as well as processing the data at a higher temporal resolution. It could also be interesting to analyze higher-order motifs in temporal multiplex networks, which could give us further information on the structure of the interaction networks [68, 69]. Thereby, other strategic events that have had a profound impact on the debate can be identified and analyzed, also using different network approaches and evaluation metrics. Furthermore, our analysis did not consider the presence of artificial robots (bots) in the vaccine debate on Twitter. Identifying and filtering bot users can be a challenging task [70, 71], which is beyond the scope of this work. However, analyzing the percentage of bots and investigating their influence on diffusion and polarization processes would be valuable and will be the subject of future research.
|
2301.03559 | Color Me Intrigued: Quantifying Usage of Colors in Fiction | We present preliminary results in quantitative analyses of color usage in
selected authors' works from LitBank. Using Glasgow Norms, human ratings on
5000+ words, we measure attributes of nouns dependent on color terms. Early
results demonstrate a significant increase in noun concreteness over time. We
also propose future research directions for computational literary color
analytics. | Siyan Li | 2023-01-09T18:20:10Z | http://arxiv.org/abs/2301.03559v1 | # Color Me Intrigued: Quantifying Usage of Colors in Fiction
###### Abstract
We present preliminary results in quantitative analyses of color usage in selected authors' works from LitBank. Using Glasgow Norms, human ratings on 5000+ words, we measure attributes of nouns dependent on color terms. Early results demonstrate a significant increase in noun concreteness over time. We also propose future research directions for computational literary color analytics. 1
Footnote 1: All code and data used are available at [https://github.com/siyan-sylvia-li/ColorLit](https://github.com/siyan-sylvia-li/ColorLit).
## 1 Introduction
_All great writers are great colourists_, Virginia Woolf once stated [20]. Analyzing colors in literary works across time and authors has fascinated the fields of literature, philosophy, and psychology [2].
Most literary analyses of colors focus on only one author, one work, or one historical era. There have been very few large-scale analyses of color usage shifts. Recently, natural language processing (NLP) has progressed in fields potentially relevant to literary color analyses, such as dependency parsing [12] and named entity recognition [13]. Leveraging these tools expedites localization of spans of interest, increasing efficiency and ease of larger-scale literary analyses.
What makes literary color analyses interesting for natural language processing? Authors utilize colors in numerous ways, and NLP tools should capture this variety. While Goethe uses colors as a backdrop of his narratives, only using them to emphasize the plastic shapes of objects [1], Dante's coloring in his _Divine Comedy_ displays more symbolic undertones. The colors on the three faces of Dante's Lucifer can be related to the three horses of the Apocalypse [2]. The sudden absence of green in Heavenly Paradise may stem from green's association with hope, and Dante's Paradise eliminates the need for hope since it fulfills all wishes [14]. In contrast, James Joyce's green can be interpreted to symbolize absinthe [1] and the author's frustration with the Irish Catholic Church [15]. For more contemporary writers, Virginia Woolf's blue in _To the Lighthouse_ accompanies Mrs. Ramsay for her Madonna role and her mixture of radiance and somberness [2]. The same blue manifests cholera and illness in Edgar Allan Poe's _The Masque of the Red Death_ [2]. Despite some subjectivity in these interpretations, the existence of differences in color usage is absolute. We want to examine whether current NLP tools "understand" these differences.
We propose a novel line of research using word embeddings and pre-trained language models to quantify color usages in literature. Specifically, we measure the attributes of nouns dependent on color adjectives according to the Glasgow Norms [2]. Preliminary results demonstrate statistically significant trends over time for certain colors' Glasgow Norm attributes. We present future research directions and plausible experiments.
Our proposed framework can supplement literary color analyses research and provide additional insight for color usages comparisons. Looking at literature and creativity through the lens of colors is informative because of the prevalence of color terms in literature. Color terms can serve as anchoring points of comparison between authors, and potentially between humans and language models.
## 2 Related Work
The most similar work to ours would be Rabinovich and Carmeli (2022), a study of color term usage by both non-color-blind and color-blind individuals on Reddit. The authors discover significant differences in certain color terms. They then concentrate on the nouns that are modified by color words, using dependency parsing to obtain NOUN words in an AMOD dependency pair with an ADJ color term. The authors identify significant discrepancies between the two populations in imageability [2] values of color-modified nouns. Our preliminary work is methodologically similar, but we study literature instead. Additionally, our work more extensively leverages labels from the Glasgow Norms by using three dimensions instead of one.
Word embeddings play a crucial role in computational social science. Garg et al. (2018) leverage Word2vec [2] to reflect changes in the relationships between the embedding representing women and different adjectives, potentially as a result of the feminist movement. A similar work, Bailey, Williams, and Cimpian (2022), showcases that people = man by comparing distances between word embeddings of trait words for people, men, and women, respectively. We are interested in similar techniques in a literary color analysis context.
## 3 Dataset
We use LitBank [1], a Euro-centric collection of 100 English fictions from 75 authors. We conduct a scrape of Project Gutenberg using LitBank's Gutenberg ID's to obtain the full text of each work. The genres consist primarily of realistic novels, with few exceptions of science fiction (H.G. Wells, Mary Shelley), fantasy (Bram Stoker, Oscar Wilde), and horror (Edgar Allan Poe).
## 4 Methodology
### Extracting Modified Nouns
**Colors and Synonyms.** We select common colors and curate a list of their synonyms. The colors include "red", "green", "black", "white", "blue", "brown", "gray", "yellow", "pink", and "purple". All color terms and their synonyms are in Appendix A. Each set of sentences from a Project Gutenberg E-book is split into lemmatized words. We choose sentences containing either our specified color adjectives or their synonyms for dependency parsing.
**Dependency Parsing.** Although Rabinovich and Carmeli (2022) strictly study nouns modified by color terms through the AMOD dependency, this would be limiting in lyrical writing. For instance, "she has eyes of sapphire" describes blue eyes and should be included in our analysis, but dependency parsing would categorize "eyes" and "sapphire" as linked by NMOD instead of AMOD. Therefore, we expand upon our pool of nouns by including all nouns with a dependency link to our color terms. We employ Stanza's Dependency Parser [1]. Upon obtaining dependencies on a sentence, we perform a filtering process to retain the relevant head-dependent pairs. The specific filtering process is as follows. For each head-dependent pair: (1) Lemmatize both the head and the dependent. (2) Iterate through all color words and their synonyms; if none of them is present in either the head or the dependent, prune out this pair. (3) If the other word in the dependency pair is not a noun or a proper noun, prune out this pair.
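A minimal sketch of this filtering routine is given below. It is an illustrative reconstruction rather than our exact code: the `COLOR_TERMS` set is abbreviated (the full term/synonym list is in Appendix A), and `color_dependent_nouns` is a name chosen for illustration.

```python
import stanza

# Abbreviated color list for illustration; the full term/synonym list is in Appendix A.
COLOR_TERMS = {"red", "green", "black", "white", "blue", "brown",
               "gray", "yellow", "pink", "purple", "sapphire"}

nlp = stanza.Pipeline(lang="en", processors="tokenize,pos,lemma,depparse")

def color_dependent_nouns(text):
    """Return (color, noun) pairs for every dependency link (any relation,
    not only AMOD) joining a color term to a NOUN or PROPN."""
    pairs = []
    for sent in nlp(text).sentences:
        for word in sent.words:
            if word.head == 0:                    # skip the root token
                continue
            head = sent.words[word.head - 1]      # Stanza heads are 1-indexed
            w = (word.lemma or word.text).lower()  # step 1: lemmatize both sides
            h = (head.lemma or head.text).lower()
            if w in COLOR_TERMS:
                color, other = w, head
            elif h in COLOR_TERMS:
                color, other = h, word
            else:
                continue                          # step 2: no color term -> prune
            if other.upos in ("NOUN", "PROPN"):   # step 3: keep (proper) nouns only
                pairs.append((color, (other.lemma or other.text).lower()))
    return pairs

print(color_dependent_nouns("She has eyes of sapphire."))
# expected: [('sapphire', 'eye')]
```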
### Glasgow Norm Models
The Glasgow Norms are a list of 5,553 words with corresponding normative human ratings on different psycholinguistic dimensions. Our ongoing work concentrates on: (1) Imageability (IMAG), the ease of summoning a mental image from a word; (2) Concreteness (CNC), the extent to which words can be experienced by our senses; and (3) Valence (VAL), how positive or negative a word is perceived to be. We hypothesize that different authors differ on the imageability/concreteness/valence values of color-dependent nouns.
Although the Glasgow Norms vocabulary is extensive, we still hope to handle unseen words. FastText embeddings [1] reduced to 100 dimensions are used to train separate 1-layer Multi-Layer Perceptron (MLP) models to predict these values. We choose FastText for its adaptability to unseen words. Prior to training, all scores are normalized to the 0 to 1 range for better interpretability, consistent with Rabinovich and Carmeli (2022). Three neural networks with sigmoid activations are trained on these data, and evaluated on a held-out test set with an 8:1:1 split. We use Pearson's correlation between predictions and ground truths as our metric. Rabinovich and Carmeli (2022) report a Pearson's correlation of 0.76 for their IMAG model on a random held-out set, while ours achieves 0.79 on the test set. We understand that the held-out test sets may differ, but this indicates that our IMAG model should be as potent as the prior model. Our CNC model and VAL model achieve correlation scores of 0.83 and 0.76, respectively.
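The sketch below shows one possible realization of this training pipeline, assuming the norms are available as a CSV with "word" and "IMAG" columns (a hypothetical export of the Glasgow Norms) and using scikit-learn's `MLPRegressor` as a stand-in for the 1-layer MLP; the hidden width, random seed, and file name are illustrative assumptions, not reported settings.

```python
import fasttext
import fasttext.util
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.neural_network import MLPRegressor

# Load pre-trained FastText vectors and reduce them to 100 dimensions.
ft = fasttext.load_model("cc.en.300.bin")
fasttext.util.reduce_model(ft, 100)

# Hypothetical export of the Glasgow Norms; scores are min-max
# normalized to [0, 1] before training.
norms = pd.read_csv("glasgow_norms.csv")
v = norms["IMAG"].to_numpy(dtype=float)
y = (v - v.min()) / (v.max() - v.min())
X = np.array([ft.get_word_vector(str(w)) for w in norms["word"]])

# 8:1:1 train/dev/test split (the dev portion is omitted here for brevity).
rng = np.random.default_rng(0)
idx = rng.permutation(len(y))
train, test = idx[: int(0.8 * len(y))], idx[int(0.9 * len(y)):]

mlp = MLPRegressor(hidden_layer_sizes=(64,), activation="logistic",
                   max_iter=2000, random_state=0)
mlp.fit(X[train], y[train])

r, _ = pearsonr(mlp.predict(X[test]), y[test])
print(f"held-out Pearson's r: {r:.2f}")
```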
To prevent repeated occurrences of a word affecting the average Glasgow Norm values, the dependent nouns are deduplicated when computing the averages.
## 5 Preliminary Results
Although our analyses on LitBank yield statistically significant results when analyzed across time, these could stem from an imbalance in the distribution of publication time in LitBank. This paper aims to establish a preliminary framework for studying color usages in literature, and current results would need corroboration from additional texts from different eras and genres.
### Color-dependent Nouns
We conduct both quantitative and qualitative analyses of color-dependent nouns in LitBank, by computing average Glasgow Norm values and by inspecting the most frequently associated nouns. Additional analyses of color term frequencies are in Appendix B.
Out of all unique nouns, 1299 are within the Glasgow Norm vocabulary, and 1924 are OOV. We use our trained MLPs to infer Glasgow Norm scales of the out-of-vocabulary nouns. After recognizing an upward trend in IMAG and CNC, we compute Pearson's correlations between publication year and average IMAG and CNC values for all color terms in novels where the color is present (Table 2, Figure 1). The IMAG and CNC values increase significantly over time for black, white, yellow, and pink. This indicates that the nouns associated with these color terms
| **Color** | **# of Occurrences** | **Color** | **# of Occurrences** |
|---|---|---|---|
| red | 2888 | green | 1839 |
| black | 3325 | white | 4990 |
| blue | 1622 | brown | 1206 |
| gray | 1575 | yellow | 848 |
| pink | 648 | purple | 545 |

Table 1: The total numbers of occurrences of our selected color terms in the 100 LitBank novels. These are instances where the color terms act as either a dependent or a dependency head.
become increasingly concrete and easier to conjure mental images of. This is consistent with some conclusions from Skard (1946) that early color usages are obscure and abstract.
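The trend test itself reduces to a single correlation per color term; a minimal sketch follows, with toy inputs standing in for the per-novel averages computed above.

```python
from scipy.stats import pearsonr

def year_trend(books):
    """Correlate publication year with the per-novel mean Glasgow score of
    a color's dependent nouns. `books` is a list of (year, mean_score)."""
    years, scores = zip(*books)
    return pearsonr(years, scores)   # returns (r, p-value)

# Toy values for illustration only; real inputs come from the pipeline above.
r, p = year_trend([(1719, 0.58), (1813, 0.61), (1851, 0.64), (1925, 0.70)])
print(f"r = {r:.3f}, p = {p:.3f}")
```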
**Nouns in Individual Works.** We observe interesting data points in our plots (the full sets of figures are available in Appendix C). For instance, we notice a significantly lower valence for red in Edgar Allan Poe's _The Masque of the Red Death_, because the only nouns associated with red are "death", "stain", and "horror". Similarly, an abnormally low green valence arises in Henry Fielding's _History of Tom Jones, a Foundling_, because green is attached to "slut", "witch", and "monster".
**Nouns over Publication Time.** We divide our fictions into pre-1800, 1800 - 1900, and post-1900 based on patterns in our data. We then deduplicate dependent nouns in each work so that we can measure the most frequent nouns in each era without thematic motifs biasing our analyses. A list of selected color terms and their most frequent dependent nouns in each of the eras is in Table 3.
From the table we observe a significant shift in frequently used nouns across time. Pre-1800 dependent nouns are more abstract and complex compared to post-1800 nouns dependent on the same colors, while there is no significant difference between 1800 - 1900 and post-1900. This is a possible explanation for the increase in imageability and concreteness over time among LitBank works.
### Inter-Author Differences
We plot the Word2vec embeddings of nouns dependent on the same colors from different authors to decipher how the color terms are used. Out-of-vocabulary nouns are discarded. This serves as a crude visualization of different topics associated with these color terms. For instance, when comparing nouns modified by _yellow_ in works of Fitzgerald and Joyce in LitBank, the topic of facial hair (hair, beard, pompadour) only manifests in Fitzgerald's, while food items (soup, cheese) appear in Joyce's. Additional examples are in Appendix D.
## 6 Proposal of Future Work
### Further Analyses
**Fine-grained Timeline Analyses.** Similar to Garg et al. (2018), we can train separate Word2vec models on each decade of literature in our collection for fine-grained analyses. Preliminary results indicate that certain colors become
**IMAG Results**

| **Color** | **Pearson's r** | **Color** | **Pearson's r** |
|---|---|---|---|
| red | -0.095* | green | 0.059 |
| black | 0.257** | white | 0.303*** |
| pink | 0.534*** | yellow | 0.340** |

**CNC Results**

| **Color** | **Pearson's r** | **Color** | **Pearson's r** |
|---|---|---|---|
| black | 0.237** | white | 0.239** |
| pink | 0.517*** | yellow | 0.318*** |

**VAL Results**

| **Color** | **Pearson's r** | **Color** | **Pearson's r** |
|---|---|---|---|
| green | 0.273*** | purple | 0.210* |

Table 2: Pearson's correlations between published years and Glasgow Norm values of the 100 fictions for color terms with sig. results. *** \(p<0.001\), ** \(p<0.05\), and * \(p<0.1\). Full results are in Appendix C.
Figure 1: Imageability plots for the color terms with significant results. We omit the concreteness plots here because IMAG and CNC are highly correlated, but we provide the full set of plots in Appendix C.
| **Color** | **Era** | **Frequent Nouns** |
|---|---|---|
| pink | Pre-1800 | shame, guilt, folly, ribbon, indignation |
| | 1800 - 1900 | cheek, face, ribbon, rose, lip |
| | Post-1900 | cheek, face, rose, paper, bud |
| black | Pre-1800 | color, eye, grain, hair, wave |
| | 1800 - 1900 | hair, eye, shadow, dress, face |
| | Post-1900 | hair, eye, dress, figure, man |
| white | Pre-1800 | face, cheek, countenance |
| | 1800 - 1900 | cliff, cave |
| | Post-1900 | face, cheek, hand, hair, man |
| yellow | Pre-1800 | appearance, complexion |
| | 1800 - 1900 | hair, light, face, glove, skin |
| | Post-1900 | hair, light, eye, flower, house |

Table 3: Most frequently color-modified nouns from each era for selected color terms. Example sentences are in Appendix D.
increasingly associated with concrete descriptions (pink associated with cheeks and face). We can compute cosine similarities between Word2vec embeddings of certain colors and words such as "face" and "lips"; these should increase over time as we observe an increasing presence of colors in character descriptions. A similar metric can further quantify inter-author differences as well.
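One possible realization of this per-decade analysis is sketched below, assuming a `decade_corpora` mapping from a decade label to the tokenized sentences of novels published in that decade; the function name and Word2vec hyperparameters are illustrative, not prescribed.

```python
from gensim.models import Word2Vec

def color_anchor_trend(decade_corpora, color="pink", anchor="face"):
    """Train one Word2vec model per decade and track the cosine similarity
    between a color term and an anchor word across decades."""
    trend = {}
    for decade, sentences in sorted(decade_corpora.items()):
        model = Word2Vec(sentences, vector_size=100, window=5,
                         min_count=5, workers=4, seed=0)
        if color in model.wv and anchor in model.wv:
            trend[decade] = float(model.wv.similarity(color, anchor))
    return trend
```

If the association between a color and physical description grows over time, the returned similarities should rise across decades.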
**Additional Clustering.** We demonstrate the prowess of Word2vec for visualizing color-related topics in this proposal, but word embeddings fail to account for context. Further clustering analyses can include embeddings from context-dependent pre-trained language models such as SentenceBERT [13] and BERT [1].
### RQs: From a Literature Perspective
**How do colors differ across genres?** We chose LitBank for its ease of access and thorough documentation, but one emerging issue is that LitBank skews heavily towards realistic fiction. Due to this imbalance, we cannot compare color usages meaningfully across genres. We will include more books in future analyses. If we observe significant differences in frequencies of different color words or in the concepts and objects associated with the colors, we can conclude that there exist cross-genre differences in color usage.
We already observe such differences. Bram Stoker's _Dracula_ [12], a pioneering work in vampire literature, features numerous descriptions of pale maidens with their red lips, as well as of scarlet blood and crimson eyes. Therefore, we notice much more frequent usage of the color red in this work, compared to H. G. Wells's _The War of the Worlds_ [10], where red most commonly modifies woods. We want to inspect whether the same general pattern would persist in larger-scale analyses.
**How do colors differ across literary forms?** Literary color analyses often separate novelists and poets. Comparisons are often drawn only between two poets or two novelists, but rarely both. Our hypothesis is that color usage in poetry differs significantly from color usage in prose and novels. Such differences can manifest as a discrepancy in concreteness (e.g. in poetry, colors can more often associate with a concept instead of a concrete object). Expanding our research to include poets, preferably those contemporary to our novelists, should enable us to address this systematically.
### RQs: From a Social Science Perspective
**How do colors differ across cultures and classes?** Different cultures can have different associations with the same color; for instance, white is often a symbol of purity and a staple at Western weddings, whereas the same color is used more traditionally in Chinese funerals. These associations may reflect socio-economic classes as well (e.g. white-collar and blue-collar jobs) through colors frequently co-occurring with characters from different cultures and societal classes. To analyze this, we will utilize named entity recognition to link colors to characters, operating with more context. Pursuing this research direction would involve translated work, instead of using only Euro-centric collections of literature. The social class of a character can either be looked up online or inferred by a model to automate the process. We can then cluster character colors by cultures and classes.
**How do colors affect biases and stereotypes?** Current-day color associations, such as pink with girls and blue with boys, can fuel biases. For instance, boys who enjoy wearing pink may be regarded as "girly" and overly feminine. Certain colors also contain associations with LGBTQ+ communities. We are interested in identifying how these color-based biases (going beyond race) manifest in literature and online communities. We can study this by finding colors associated with characters of a demographic group of interest. Tracing through past literature may also shed light upon the evolution of color associations.
## 7 Discussion
Our work serves as a step towards more systematic analyses of color usages in literature using natural language processing tools. Following prior work, we propose using The Glasgow Norms and word embeddings as tools for quantifying color usage differences. We demonstrate significant increasing trends in imageability and concreteness in color-dependent nouns over time.
One limitation is that the range of language we are capable of handling is constrained by the language models we employ. Our current collection does not have many pre-1800 pieces. While it is possible to increase the representation of pre-1800s literature, the domain shift in English style and word conventions may require different word embeddings and pre-trained models to embed narratives (e.g. Chaucer often uses "red" in place of "read", standard in his time, but this causes ambiguity when processing texts on a large scale). Given such shifts in vocabulary and sentence structures, we may fail to provide meaningful insights into earlier literature, since the word embeddings may have disjoint vocabularies, and models such as SentenceBERT are trained on more modern texts.
Figure 2: Word2vec embeddings of nouns modified by _yellow_ in novels by F. Scott Fitzgerald (red points) and James Joyce (blue points). |
2310.02106 | Permanent Magnets Based on Hard Ferrite Ceramics | Permanent magnets are integral components in many of the modern technologies
that are critical for the transition to a sustainable society. However, most of
the high-performance (BHmax > 100 kJ/m3) permanent magnets that are currently
employed contain rare earth elements (REE), which have long been classified as
critical materials with a high supply risk and concerns regarding pollution in
their mining. Therefore, suitable REE-lean/free magnets must be developed in
order to ensure the sustainability of clean energy generation and electric
mobility. The REE-free hexagonal ferrites (or hexaferrites) are the most used
permanent magnets across all applications, with an 85 wt.% pie of the permanent
magnet market. They are the dominant lower-grade option (BHmax < 25 kJ/m3) due
to their relatively good hard magnetic properties, high Curie temperature (>700
K), low cost and good chemical stability. In recent years, the hexaferrites
have also emerged as candidates for substituting REE-based permanent magnets in
applications requiring intermediate magnetic performance (25-100 kJ/m3), due to
considerable performance improvements achieved through chemical tuning,
nanostructuring and compaction/sintering optimization. This chapter reviews the
state-of-the-art sintering strategies being investigated with the aim of
manufacturing hexaferrite magnets with optimized magnetic properties,
identifying key challenges and highlighting the natural future steps to be
followed. | Cecilia Granados-Miralles, Matilde Saura-Múzquiz, Henrik L. Andersen | 2023-10-03T14:50:49Z | http://arxiv.org/abs/2310.02106v1 | # Permanent magnets based on hard ferrite ceramics
###### Abstract
Permanent magnets are integral components in many of the modern technologies that are critical for the transition to a sustainable society. However, most of the high-performance (\(BH_{\mathrm{max}}>100\) kJ/m3) permanent magnets that are currently employed contain rare-earth elements (REE), which have long been classified as critical materials with a high supply risk and concerns regarding pollution in their mining. Therefore, suitable REE-lean/free magnets must be developed in order to ensure the sustainability of clean energy generation and electric mobility. The REE-free hexagonal ferrites (or hexaferrites) are the most used permanent magnets across all applications, with an 85 wt.% pie of the permanent magnet market. They are the dominant lower-grade option (\(BH_{\mathrm{max}}<25\) kJ/m3) due to their relatively good hard magnetic properties, high Curie temperature (\(>\)700 K), low cost and good chemical stability. In recent years, the hexaferrites have also emerged as candidates for substituting REE-based permanent magnets in applications requiring intermediate magnetic performance (25-100 kJ/m3), due to considerable performance improvements achieved through chemical tuning, nanostructuring and compaction/sintering optimization. This chapter reviews the state-of-the-art sintering strategies being investigated with the aim of manufacturing hexaferrite magnets with optimized magnetic properties, identifying key challenges and highlighting the natural future steps to be followed.
**Keywords:** permanent magnets, hard ferrites, hexaferrites, ceramic magnets, rare-earth-free magnets, SrFe\({}_{12}\)O\({}_{19}\), BaFe\({}_{12}\)O\({}_{19}\)
## 1 Introduction
### 1.1 Classification of magnetic materials
The magnetism of magnetic materials arises at the atomic scale and is influenced by characteristics spanning several orders of magnitude (see Figure 1a). In the atoms of most compounds, the electrons exist in pairs with opposite spins that cancel out each other's magnetic moment. However, some elements or ions have unpaired electrons, whose spin and orbital motion cause them to exhibit a magnetic field giving the atom a magnetic moment. The organization of these magnetic atoms in the atomic structure of the material determines its magnetic properties. Figure 1b shows a schematic illustration of the main types of magnetic ordering. In paramagnetic materials, the atomic magnetic moments are randomly oriented leading to no net magnetization and a relatively weak attraction to an external magnetic field. Antiferromagnetic materials are magnetically ordered, but also exhibit zero net magnetization due to an antiparallel organization of equal atomic magnetic moments. However, in ferro- or ferri-magnetic materials (below their Curie temperature, \(T_{\mathrm{c}}\), which is the critical temperature above which thermal fluctuations lead to the material being paramagnetic), the magnetic atoms are organized in a way that leads to a net magnetization along a certain direction (magnetic easy axis) in the structure, and it is these types of materials that are used for permanent magnets (PMs).
The ferro/ferri-magnetic materials are generally categorized as either 'soft' or 'hard' depending on their resistance to demagnetization. This is evaluated in terms of the coercive field (or coercivity, \(H_{\mathrm{c}}\)), which is the external magnetic field required to reset the magnetization of the material. Magnetically soft materials are easily (de)magnetized by an external magnetic field (typically defined as \(H_{\mathrm{c}}<10\) kA/m) and their magnetization is therefore often temporary, while hard (or permanent) magnetic materials have a high resistance to demagnetization (\(H_{\mathrm{c}}>400\) kA/m) and once magnetized they can therefore sustain a magnetic field
Figure 1: (a) Illustration of the multiscale origin of the magnetism in magnetic materials. (b) Schematic illustration of main magnetic ordering types and the resulting net zero field magnetization. (c) Size dependency of the coercivity. (d) Hysteresis curve of an ideal permanent magnet.
indefinitely.[1] The coercivity of a material is determined in part by the intrinsic magnetocrystalline anisotropy of the crystal structure as well as by microstructural (extrinsic) effects such as crystallite size or structural defects, which influence the formation (nucleation and growth) of magnetic domains in the material. For most magnetic materials, \(H_{\rm c}\) is found to increase as the crystallite size is reduced, reaching a maximum value at the critical single-domain size (see Figure 1c).
Another key property of a magnetic material is its remanence field (\(B_{\rm r}\) or \(M_{\rm r}\)), which is the spontaneous magnetic flux density or magnetization exhibited by the material in zero external field conditions. Figure 1d shows a schematic illustration of the external magnetic field (\(H\))-dependent flux density (\(B\)) and magnetization (\(M\)) curves, commonly called hysteresis curves, of an ideal permanent magnetic material. As illustrated, it is the combination of these two parameters, _i.e._, the coercivity (magnetic stability) and remanence (spontaneous magnetization), that ultimately determines the magnetic strength of the magnet. This magnetic strength is quantified by the so-called maximum energy product (\(BH_{\rm max}\)), defined by the area of the largest possible rectangle that fits under the \(BH\) curve in the second quadrant, which measures the potential energy stored in the stray field of the magnet.[2]
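As an idealized, textbook illustration of this definition (a sketch, not a property of any specific magnet discussed here): if the demagnetization branch is linear, \(B(H)=B_{\rm r}+\mu_{0}H\) for \(H<0\), then

\[BH_{\max}=\max_{-H_{\rm c}<H<0}\left|B(H)\,H\right|=\frac{B_{\rm r}^{2}}{4\mu_{0}},\]

reached at the working point \(H=-B_{\rm r}/(2\mu_{0})\), provided the coercivity is large enough for the linear branch to extend that far. This makes explicit why \(B_{\rm r}\) (and thus density and alignment, discussed below) enters the performance quadratically.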
Figure 1d shows the magnetic hysteresis of an ideal permanent magnet, in which all magnetic spins are perfectly aligned (and therefore, \(M_{\rm r}=M_{\rm s}\)), but in real magnets the remanence value is smaller than the saturation (_i.e._, \(M_{\rm r}<M_{\rm s}\)). It follows that, as the \(M_{\rm r}\) value approaches \(M_{\rm s}\), the loop becomes more square and, in turn, \(BH_{\rm max}\) is maximized. Hence, the squareness and magnetic alignment are often measured in terms of the \(M_{\rm r}/M_{\rm s}\) ratio,[3] which is another of the key parameters to be improved for permanent magnets.
### 1.2 Materials for permanent magnets: Current status
Magnetic materials have the unique ability to directly interconvert between electrical and mechanical energy. A moving magnet can induce an electric current to generate electrical energy, and oppositely, an electric current can be used to generate a magnetic field and exert a magnetic force. These electromagnetic properties underpin the operation of electric generators and motors, making magnetic materials critical for the transition towards an environmentally friendly and sustainable future.[2] As a result, the worldwide permanent magnet market is expected to reach $39.71 Billion by 2030, according to the 8.6% compound annual growth rate (CAGR) forecast in the last Grand View Research report.[4]
Figure 2a illustrates the relative performance in terms of \(BH_{\rm max}\) and \(H_{\rm c}\) for the most important families of commercial PM materials, including AlNiCo alloys, hard ferrite ceramics, Nd\({}_{2}\)Fe\({}_{14}\)B and SmCo\({}_{5}\). The high-performance (\(BH_{\rm max}>100\) kJ/m\({}^{3}\)) permanent magnet market is currently dominated by the rare earth element (REE)-containing materials Nd\({}_{2}\)Fe\({}_{14}\)B (strongest magnet) and SmCo\({}_{5}\) (best high temperature performance) due to their superior energy products,[15] which is a critical parameter for the performance in applications where miniaturization is a major driving force (_e.g._, electric vehicle motors, direct-drive generators, electro-acoustic devices, accessory electric motors, mobile phones, sensors, portable electronics, _etc._). Unfortunately, the use of REE-based materials entails various problems. The compounds rely on scarce REE such as neodymium, samarium or dysprosium, which are classified as critical raw materials, not only owing to their supply risk and price volatility, but also to the harmful environmental impact of their extraction.[6] China has been the undisputed leader in REE mining and production for the last 40 years,[7] and despite other countries attempting to gain ground, today China still accounts for more than 60% of the world REE production.[8] Consequently, over the last 20
years geopolitical circumstances have often led to erratic price fluctuations. Furthermore, the cobalt used in SmCo\({}_{5}\) magnets is another problematic element. The supply chains for the bulk part (>50%) of the cobalt used in advanced materials can be traced back to the cobalt mines in the Democratic Republic of the Congo, where artisanal miners (including thousands of children) work under extremely hazardous conditions [9]. As a consequence, the development of REE-poor or REE-free alternatives has long been an important research topic in the PM field.
Although the undisputed strength of REE-magnets is necessary for the highest-performance applications, there are many other applications that are less demanding in terms of magnetic strength, where a compromise (see Figure 2b) must be made between other factors such as price, stability, processability, _etc._[10] At this end of the spectrum, hard ferrite magnets have long been the material of choice for lower grade applications (<25 kJ/m\({}^{3}\)). However, as illustrated by the arrow in Figure 2a, a considerable performance gap exists in the intermediate performance range between the cheaper AlNiCo and hard ferrite PMs and REE PMs. Consequently, for many applications it is often necessary to use an expensive and excessively strong REE magnet, for lack of an intermediate alternative. Here, a modest performance improvement of lower grade magnets would be sufficient to replace REE PMs while remaining within a weight range suitable for the application.
Figure 2: (a) Diagram of \(BH_{\text{max}}\) vs coercive field for the main families of commercially available hard magnetic materials. (b) Radar plots of key extrinsic properties of sintered Nd\({}_{2}\)Fe\({}_{14}\)B, sintered SmCo\({}_{5}\), anisotropic AlNiCo and sintered hexaferrite magnets. Figures based on values from [14].
In this context, hexaferrites have long been considered good candidates for replacing REE magnets in the intermediate performance range, due to their reasonably good performance, high Curie temperature (>700 K) and excellent chemical stability, which all comes at a fraction of the cost of REE magnets.[12, 13] In fact, hard ferrites are the most produced magnetic material, despite their moderate performance compared to REE magnets.[14] In 2013 they were reported to account for 85 % of the total PM market by manufactured mass, although they only represented 50 % of the market by sales.[15]
While recent studies have demonstrated new approaches to improve the magnetic properties of hard hexaferrite powders (_e.g._, nanostructuring,[16, 17, 18, 19] chemical substitution,[20, 21, 22, 23] exchange spring composites[24, 19, 25]), manufacturing dense sintered pellets of sufficient structural integrity without degrading the optimized properties has proven a key challenge. In practice, this prevents the replacement of expensive and unsustainable REE PMs in a range of applications, and is the reason why hard ferrites still generate great scientific interest.[26] The present chapter aims at summarizing the most relevant recent achievements and progress in the field, as well as key challenges encountered during the fabrication and sintering of dense ferrite magnets.
## 2 Hard ferrites: M-type hexaferrites
### 2.1 Crystal and magnetic structure
The so-called hexaferrites, hexagonal ferrites or simply hard ferrites, are a family of ternary or quaternary iron oxides with a hexagonal crystal lattice of long unit cell _c_-axis (\(c\) = 23-84 Å).[26] Of the materials in the hexaferrite family, the M-type hexaferrites have been widely used for application as permanent magnets. With chemical formula \(M\)Fe\({}_{12}\)O\({}_{19}\) (\(M\) = Sr\({}^{2+}\) or Ba\({}^{2+}\)), the Sr and Ba M-type ferrites (SrM and BaM) are isostructural and exhibit very similar magnetic characteristics. The compounds have a large uniaxial magnetocrystalline anisotropy and a magnetic easy axis along the crystallographic _c_-direction. This strong intrinsic anisotropy results in a high \(H_{\rm c}\), making them very resistant towards demagnetization (_i.e._ magnetically hard) and therefore attractive as PM materials.
Figure 3: Crystal and magnetic structure of Sr (Ba) hexaferrite. Black and red spheres represent Sr\({}^{2+}\) (Ba\({}^{2+}\)) and O\({}^{2-}\) ions. Colored polyhedra illustrate the 5 different crystallographic sites of Fe\({}^{3+}\) and arrows symbolize the Fe\({}^{3+}\) magnetic spins.
Figure 3 illustrates the crystal and magnetic structures of M-type hexaferrites. They display a hexagonal magnetoplumbite structure (space group \(P6_{3}/mmc\)) with a very anisotropic unit cell (\(a\approx 5.9\) Å, \(c\approx 23\) Å). Fe\({}^{3+}\) ions occupy interstitial positions in a hexagonal close-packed structure of O\({}^{2-}\) and Sr\({}^{2+}\) (Ba\({}^{2+}\)) ions.[26, 27, 28] With 2 formula units per unit cell (64 atoms), SrM has a crystallographic density of 5.3 g/cm\({}^{3}\) (5.1 g/cm\({}^{3}\) for BaM).[29, 30] The crystal structure may also be described in terms of stacking of simpler structural blocks (cubic S and hexagonal R blocks) which are in turn stacked onto similar blocks rotated 180\({}^{\circ}\) about the \(c\)-axis (S* and R* blocks, respectively).[28]
### 2.2 Magnetic properties
Table 1 compares the intrinsic magnetic properties of SrM and BaM with those of other important magnetic compounds. The theoretical magnetic moments (at 0 K) of the hexaferrite crystal structures can be calculated from the ferrimagnetic ordering of the magnetic Fe\({}^{3+}\) ions in the structure (see arrows in Figure 3), yielding values of 20.6 \(\mu_{\mathrm{B}}\)/molecule for SrM and 20 \(\mu_{\mathrm{B}}\)/molecule for BaM.[26, 31] This results in fairly good saturation magnetization, \(M_{\mathrm{s}}\), and magnetic induction, \(B_{\mathrm{s}}\), values. The Curie temperature, \(T_{\mathrm{c}}\), of the M-type hexaferrites is more than 100 \({}^{\circ}\)C above that of the much used REE-based Nd\({}_{2}\)Fe\({}_{14}\)B hard phase.
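As a back-of-the-envelope sketch of this calculation for SrM (using the standard Gorter assignment of the twelve high-spin Fe\({}^{3+}\) ions, \(5\,\mu_{\rm B}\) each, over the five sublattices: 12k with 6 spins up, 2a and 2b with 1 up each, 4f\({}_{1}\) and 4f\({}_{2}\) with 2 down each):

\[m=(6+1+1-2-2)\times 5\,\mu_{\rm B}=20\,\mu_{\rm B}\ \text{per formula unit},\]

which, with a molar mass of \(\approx\)1062 g/mol for SrFe\({}_{12}\)O\({}_{19}\), corresponds to a 0 K saturation magnetization of

\[M_{\rm s}(0\,{\rm K})=\frac{20\,N_{\rm A}\,\mu_{\rm B}}{M_{\rm molar}}\approx\frac{20\times 5.585\ {\rm A\,m^{2}\,mol^{-1}}}{1061.8\ {\rm g\,mol^{-1}}}\approx 105\ {\rm A\,m^{2}\,kg^{-1}},\]

well above the room-temperature value of 72 Am\({}^{2}\)/kg in Table 1, as expected from thermal disorder.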
The large uniaxial anisotropy of the hexagonal lattice of SrM and BaM (\(c/a=3.9\)) causes a large magnetocrystalline anisotropy along the \(c\)-axis, which yields relatively high anisotropy constants, \(K_{1}\) (see Table 1),[33, 34, 35] and a large theoretical maximum \(H_{\mathrm{c}}\) of 594 kA/m.[26] For a hypothetical fully-dense and perfectly-oriented hexaferrite magnet, a theoretical maximum \(BH_{\mathrm{max}}\) of 45 kJ/m\({}^{3}\) has been estimated.[1]
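For orientation, the coercivity can be bounded by the anisotropy field; a rough estimate using the Table 1 values for SrM (\(K_{1}\approx 0.35\) MJ/m\({}^{3}\), \(M_{\rm s}=B_{\rm s}/\mu_{0}\approx 3.8\times 10^{5}\) A/m) gives

\[H_{\rm A}=\frac{2K_{1}}{\mu_{0}M_{\rm s}}\approx\frac{2\times 0.35\times 10^{6}}{(4\pi\times 10^{-7})(3.8\times 10^{5})}\ {\rm A/m}\approx 1.5\ {\rm MA/m}\quad(\mu_{0}H_{\rm A}\approx 1.8\ {\rm T}),\]

of which the quoted theoretical maximum \(H_{\rm c}\) of 594 kA/m is only a fraction, as expected since real magnetization reversal proceeds by incoherent modes and domain nucleation rather than by coherent rotation.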
## 3 Sintered hard ferrite permanent magnets
Towards the effective implementation of permanent magnets into a device, the material in powder form has to be compacted into dense, mechanically stable and magnetically-oriented pieces (_i.e._, magnets). This conforming/densification process (called sintering) generally involves applying elevated pressures and/or temperatures to the material in powder form.[35] As for most other materials, the mechanical properties of the sintered piece rely on a high density. However, the importance of achieving a highly dense magnet is enhanced for PMs, since the magnetic performance (\(BH_{\mathrm{max}}\)) is measured per volume unit, and hence, it is directly proportional to the density.
| | \(M_{\rm s}\) (Am\({}^{2}\)/kg) | \(B_{\rm s}\) (T) | \(T_{\rm c}\) (K) | \(K_{1}\) (MJ/m\({}^{3}\)) |
|---|---|---|---|---|
| **Fe\({}_{0.65}\)Co\({}_{0.35}\)** | 240 | 2.45 | 1210 | 0.018 |
| **Fe** | 217 | 2.15 | 1044 | 0.048 |
| **AlNiCo\({}_{5}\)** [1] | 159 | 1.40 | 1210 | 0.68* |
| **CoFe\({}_{2}\)O\({}_{4}\)** [1] | 75 | 0.5 | 793 | 0.27 |
| **BaFe\({}_{12}\)O\({}_{19}\)** | 72 | 0.48 | 740 | 0.33 |
| **SrFe\({}_{12}\)O\({}_{19}\)** | 72 | 0.48 | 746 | 0.35 |
| **Nd\({}_{2}\)Fe\({}_{14}\)B** | 165 | 1.61 | 588 | 4.9 |
| **SmCo\({}_{5}\)** | 100 | 1.07 | 1020 | 17.2 |
| **Sm\({}_{2}\)Co\({}_{17}\)** | 118 | 1.25 | 1190 | 4.2 |

*shape anisotropy

Table 1: Intrinsic magnetic parameters at room temperature (RT) for some representative soft and hard magnetic phases. Data extracted from [34] unless otherwise stated.
The high sintering temperatures often end up undesirably altering the functional properties of the starting material and therefore, great efforts are dedicated to both (i) adapting the sintering methods to the specific material of interest and (ii) developing novel sintering strategies that lower the working temperatures, aiming at minimizing the damage.[35] In the particular case of hexaferrites, a common problem is the formation of hematite (\(\alpha\)-Fe\({}_{2}\)O\({}_{3}\)) as a side phase. This iron oxide is very prone to appear, as a result of its high stability, and causes a decrease of saturation magnetization, due to the antiferromagnetic nature of the phase. Fortunately, it has been shown that \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\) can be avoided when the starting \(M\)Fe\({}_{12}\)O\({}_{19}\) powders have the right \(M\):Fe stoichiometry, yielding \(M_{\rm s}\) values approaching the expected \(\approx\)70 Am\({}^{2}\)/kg.[26] In contrast, limiting the grain growth to circumvent the detrimental impact on \(H_{\rm c}\) has proven more challenging.[36] Thus, M-type ferrites in powder form often present coercivities far below the theoretical value, and the situation worsens for sintered pieces (see Table 7 in ref. [26] for an extensive sample record). Owing to this, sintered hexaferrite magnets generally fall well below the theoretically achievable 45 kJ/m\({}^{3}\),[1] although specific studies have managed to come fairly close to this value.
Another important aspect in the sintering of PMs is the magnetic alignment of the constituent particles and domains in the material. The magnetic particles may (or may not) be magnetically aligned, resulting in anisotropic (isotropic if not aligned) magnets. The greater the magnetic alignment, the more the \(M_{\rm r}\) value approaches \(M_{\rm s}\), yielding a more square-shaped \(MH\) curve (as illustrated by the black curve in Figure 1d), thereby maximizing \(BH_{\rm max}\). Thus, the \(BH_{\rm max}\) of mass-produced isotropic M-ferrite magnets is around 10 kJ/m\({}^{3}\), while the anisotropic kind ranges from 33 to 42 kJ/m\({}^{3}\).*[37, 38, 39, 40] The magnetic alignment has been traditionally carried out by application of an external magnetic field,[26, 41] although recently patented methods have succeeded in suppressing the external field by taking advantage of the shape of the particles.[42, 43] Notably, the M-type ferrites are prone to form platelet-shaped particles, with magnetization direction parallel to the platelet normal vector (see Figure 4a). As illustrated in the figure, the platelet shape of the particles favors magnetic (and crystallographic) alignment upon compaction.
* Ferrite magnets with higher \(BH_{\rm max}\) values are available commercially (up to 44 kJ/m\({}^{3}\)), but in those cases the material is doped with _e.g._, La or Co.[39, 40]
Figure 4: \(M\)Fe\({}_{12}\)O\({}_{19}\) particles, displaying the typical hexagonal platelet shape with the easy axis of magnetization normal to the platelet plane (and parallel to the crystallographic \(c\)-axis). This shape favors magnetic (and crystallographic) alignment upon application of uniaxial pressure. Adapted with permission from [14].
During the last decades, different sintering strategies have been investigated aiming at maximizing both the magnetic alignment (boosting \(B_{\rm r}\) and \(M_{\rm r}/M_{\rm s}\)) and the \(H_{\rm c}\) on the sintered material. Lately, efforts have also been devoted to making the processes greener and increasing recycling rates. The following sections intend to offer an overview of the pros and cons of each of the alternatives.
### 3.1 Conventional sintering
Hexaferrites were first developed as a PM material by researchers at the Philips Research Laboratories in the 1950s. In 1952, Went _et al._ prepared a Ba-ferrite magnet with a good \(H_{\rm c}\) value (\(\approx\)240 kA/m), although a limited \(B_{\rm r}\) derived from its isotropic nature (0.21 T) yielded a modest \(BH_{\rm max}\) of 6.8 kJ/m\({}^{3}\).[44, 45] Two years later, Stuijts _et al._ developed a conventional sintering (CS) strategy to produce anisotropic BaM magnets with \(BH_{\rm max}\) up to 28 kJ/m\({}^{3}\),[41] which is essentially the method used nowadays to make sintered ferrite magnets industrially. In brief, a sludge of BaM powders and water is compacted while being held in an external magnetic field, producing a consolidated piece (still poor in density) which is subsequently sintered at temperatures above 1100 \({}^{\circ}\)C to promote densification. Stuijts _et al._ explored sintering temperatures between 1250 and 1340 \({}^{\circ}\)C and noted that increasing the temperature maximizes the density and the magnetic alignment (and therefore \(B_{\rm r}\)), but at the cost of decreasing \(H_{\rm c}\), as a consequence of the grain growth promoted by the elevated temperatures. This problem, encountered already in 1954, has been the subject of extensive research since.
As mentioned earlier, structural characteristics such as crystallite size, size distribution and crystallite morphology can largely affect the coercivity of ferrite magnets. In particular, highest \(H_{\rm c}\) values are attained for crystallite sizes close to the critical single-domain size defined earlier.[33-45] The difficulty not only lies in being able to produce particles of a specific size in a controlled manner, but it begins with determining what this critical size is for a specific material. For isotropic SrM crystallites, the critical single-domain size has been estimated to be around 620-740 nm.[16; 47] However, the experimentally reported crystallite/particle single-domain sizes of SrM span from 30 nm all the way up to 830 nm.[47] This is due to the high influence of particle morphology in the attained coercivity, as well as to the different characterization methods used to determine the reported size (_i.e._ particle _vs._ crystallites, number _vs._ volume weighted, _etc._). A study by Gjorup _et al._ showed that a much smaller critical single-domain size is obtained for highly anisotropic crystallites, and therefore not only the overall size, but also the aspect ratio of anisotropic SrM crystallites should be considered when trying to maximize \(H_{\rm c}\).[47]
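As a rough sketch of where this critical size lies, one can use the standard expression for the coherent single-domain diameter of a spherical particle, \(d_{\rm c}\approx 72\sqrt{AK_{1}}/(\mu_{0}M_{\rm s}^{2})\), taking an assumed, literature-typical exchange stiffness of \(A\approx 6\) pJ/m for SrM together with \(K_{1}\approx 0.35\) MJ/m\({}^{3}\) and \(M_{\rm s}\approx 3.8\times 10^{5}\) A/m:

\[d_{\rm c}\approx\frac{72\sqrt{(6\times 10^{-12})(0.35\times 10^{6})}}{(4\pi\times 10^{-7})(3.8\times 10^{5})^{2}}\approx 6\times 10^{-7}\ {\rm m}\approx 600\ {\rm nm},\]

consistent in order of magnitude with the 620-740 nm range quoted above; the strong dependence on \(M_{\rm s}\) and on particle shape helps explain the wide experimental spread.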
Notably, reducing the size of the starting powders does not necessarily yield better coercivities, as the grain growth upon sintering seems to be even greater when dealing with materials of smaller particle sizes.[48, 49, 50] El Shater _et al._ sintered nanometric BaM (100-200 nm) at 1000 and 1300 \({}^{\circ}\)C, producing average particle sizes of 0.537 and 16.35 \(\upmu\)m, respectively, with the consequent drop in coercivity (from 271 to 56 kA/m) and a gain in \(M_{\rm r}\).[51] Therefore, the choice of sintering temperature must be a compromise between minimizing grain growth (to maximize \(H_{\rm c}\)) and maximizing densification (and in turn, \(M_{\rm r}\)).
A common approach for limiting grain growth has been the use of sintering additives. Kools proposed a mechanism through which SiO\({}_{2}\) would prevent the growth of SrM grains during sintering and proved the effect for a range of SiO\({}_{2}\) concentrations (0.36-1.44 wt.%).[52, 53] Besenicar _et al._ reported that, besides limiting the growth, SiO\({}_{2}\) induces some ordering of the SrM particles, resulting in very anisotropic magnets with high relative density (97%) and satisfactory magnetic properties (\(B_{\rm r}\approx 0.39\) T, \(H_{\rm c}\approx 340\) kA/m).[54] Kobayashi _et al._ determined
the optimal SiO\({}_{2}\) concentration to be between 1 and 1.8 wt.%, showing a detrimental effect on \(H_{\mathrm{c}}\) for greater SiO\({}_{2}\) additions.[55] Guzman-Minguez _et al._ reported the appearance of \(\approx\)20 wt.% \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\) as a secondary phase for SiO\({}_{2}\) concentrations above 1 wt.%.[56]
CaO has been reported to favor densification, and therefore, it has also been explored as a sintering additive for hexaferrites, in this case with the aim of boosting \(M_{\rm r}\), although at the expense of aggravating the grain growth effect.[46, 55, 57] In this context, the combined use of both additives has also been investigated. Lee _et al._ reported a decent \(BH_{\rm max}\) of 29.4 kJ/m\({}^{3}\) when adding 0.6 wt.% SiO\({}_{2}\) and 0.7 wt.% CaO, but neither remanence nor coercivity was outstanding (\(B_{\rm r}=0.36\) T, \(H_{\rm c}=281\) kA/m).[58] Topfer _et al._ fabricated a very dense SrM magnet (98%) with a notable \(B_{\rm r}\) value of 0.42 T by incorporating 0.25 wt.% of SiO\({}_{2}\) and 0.25 wt.% CaO, although a moderate coercivity value of 282 kA/m only allowed for a \(BH_{\rm max}=32.6\) kJ/m\({}^{3}\).[59] Huang _et al._ tested the combined addition of CaCO\({}_{3}\), SiO\({}_{2}\) and Co\({}_{3}\)O\({}_{4}\) (1.1, 0.4 and 0.3 wt.%, respectively), managing a remarkable \(BH_{\rm max}\) of 38.7 kJ/m\({}^{3}\), owing to an exceptional remanence (0.44 T) and despite a modest coercivity (264 kA/m).[60]
Slightly superior magnetic parameters (\(B_{\mathrm{r}}=0.44\) T, \(H_{\mathrm{c}}=328\) kA/m, \(BH_{\mathrm{max}}=37.6\) kJ/m\({}^{3}\)) have been obtained from a two-step sintering (TSS) method adapted to SrM by Du _et al._[61] Here, the powders were cold-pressed as in usual CS, but the subsequent thermal cycle used for sintering was slightly more elaborate: after a first high temperature step, in which the maximum temperature (1200 \({}^{\circ}\)C) is maintained for only 10 min, a longer (2 h) heating step at 1000 \({}^{\circ}\)C provides for full densification of the SrM magnet.[61] The scanning electron micrograph (SEM) in Fig. 6(e) from ref. [61] illustrates the confined grain size, the high density and the high degree of alignment justifying the good magnetic performance. A more recent work by Guzman-Minguez _et al._[62] combined a TSS approach with the addition of 0.2% PVA and 0.6% SiO\({}_{2}\), realizing great control of the grain growth at 1250 \({}^{\circ}\)C (see Figure 5), although the obtained magnetic properties were not as good as the ones previously reported by Du _et al._
### 3.2 Spark plasma sintering
In the 1990s, a new commercial apparatus based on resistive sintering, called spark plasma sintering (SPS), was developed by Sumitomo Heavy Industries Ltd. (Japan).[63] The SPS method is based on the use of an electrical current and a uniaxial mechanical pressure under low atmospheric pressure, to simultaneously heat and compact a powder sample.[64] The starting powders are typically loaded in a graphite die, which is placed between two electrodes in a water-cooled vacuum chamber. A uniaxial pressure is applied to the die while passing a DC electrical current through it, which heats up the sample due to the Joule effect (see Fig. 1 in ref. [66] for a typical SPS setup). The inventors of the system claimed the generation
Figure 5: SEM images of SrM pellet sintered at 1250 \({}^{\circ}\)C by (a) conventional sintering and (b) two-step sintering. Reprinted from [64], Copyright 2021, with permission from Elsevier.
of plasma to take place, thus leading to the technique's name. However, although it is generally accepted that plasma may be generated between particles due to electrical discharges, there is no conclusive experimental evidence of such occurrence.[64] Therefore, SPS is sometimes referred to by alternative names, such as field-assisted sintering technique (FAST). The simultaneous application of temperature and pressure can also be obtained by conventional hot pressing (HP). However, in SPS and HP, heat is produced and transmitted to the material in different ways. In conventional heating the powders are sintered by heating the entire container using external heating elements in a furnace. This leads to slow heating rates, long sintering times and waste of energy in heating up all the components. The SPS method, however, has allowed increasing the heating rates, lowering the working temperatures and reducing the dwell times.[66, 67] These benefits make SPS a good alternative when the goal is to limit the grain growth during sintering,[67] and potentially improve the obtained \(H_{\mathrm{c}}\) (and \(BH_{\mathrm{max}}\)) values of sintered hexaferrite magnets.
Numerous investigations focusing on sintering hexagonal ferrites by SPS have been published in the last two decades. Obara _et al._ prepared fully-dense SrM magnets by SPS at 1100 \({}^{\circ}\)C and 50 MPa for only 5 min.[65] A fairly competitive \(H_{\mathrm{c}}\) of 325 kA/m was obtained by doping with La\({}_{2}\)O\({}_{3}\) (1 wt.%) and Co\({}_{3}\)O\({}_{4}\) (0.1 wt.%). Although the measured hysteresis loops were rather squared, the remanence value (0.32 T) was not sufficient to guarantee a noteworthy energy product (\(BH_{\mathrm{max}}=18.3\) kJ/m\({}^{3}\)). Mazaleyrat _et al._ sintered BaM nanopowders with sizes below 100 nm and managed to restrain grain growth and produce a \(H_{\mathrm{c}}\) of 390 kA/m,[68] which even surpasses the value reported for the La- and Co-doped material described above. Unfortunately, a deficient density (88%) degraded the \(BH_{\mathrm{max}}\) down to 8.8 kJ/m\({}^{3}\). Ovtar _et al._ sintered the same batch of 90 nm BaM nanoparticles by both CS and SPS, producing much smaller sizes through the second method.[69] Additionally, they realized that secondary phases (Fe\({}_{3}\)O\({}_{4}\), \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\)) tend to form on the surface of the BaM SPS pellets, and tested different materials for the protective discs separating the sample from the graphite die (BN, Au, \(\alpha\)-Al\({}_{2}\)O\({}_{3}\)), concluding that \(\alpha\)-Al\({}_{2}\)O\({}_{3}\) was the one performing best. The resulting density was rather low (82%) but the coercivity was adequate (350 kA/m). Stingatiu _et al._ attempted downsizing a \(\upmu\)m-sized SrM material by a ball-milling step prior to consolidation through SPS.[70] The resulting density was satisfactory (90%) but unfortunately, ball-milling was seen to degrade the surface of the SrM, which triggered the formation of secondary phases during SPS, this having a detrimental effect on the magnetic properties (\(BH_{\mathrm{max}}<10\) kJ/m\({}^{3}\)).
Saura-Muzquiz _et al._ prepared nm-sized SrM powders by hydrothermal synthesis (HT) with hexagonal plate-like particles (such as those in Figure 4) with very small sizes; in some cases, the platelets were as thin as a single unit cell (i.e. \(<\)3 nm).[17] These HT-synthesized SrM powders were consolidated by SPS, yielding appropriate \(H_{\mathrm{c}}\) values of 301 kA/m. More importantly, the highly anisotropic shape of the particles provided for a pronounced magnetic alignment of the sintered SrM magnets, inherently occurring as a result of the simultaneous application of elevated temperature and uniaxial pressure, just as illustrated in Figure 4. Here, an \(M_{\mathrm{r}}\)/\(M_{\mathrm{s}}\) ratio of 0.89 was reached without applying an external magnetic field either before or during sintering, yielding a \(BH_{\mathrm{max}}\) value of 26 kJ/m\({}^{3}\). Figure 6a shows the magnetic hysteresis of the HT powders and the corresponding SPS pellet, evidencing the squareness of the latter. Achieving magnetic alignment without a magnetic field is very convenient from an industrial point of view, because it allows a full step to be removed from the manufacturing process (i.e., the field alignment), which simplifies the procedure, reduces costs and increases energy efficiency.[42] Figure 6b displays the powder X-ray diffraction (PXRD) data measured on both SrM powders and SPS pellet. Despite the very dissimilar appearance, Rietveld analysis demonstrates that both PXRD patterns are consistent with pure-phase SrFe\({}_{12}\)O\({}_{19}\), although with notable differences in
crystallite size and orientation. The highly anisotropic shape of the powders is visible from the sharpness of the _hkl_-reflections describing the crystallite on the platelet plane, such as (110) or (220), compared to the large broadening of those associated with the platelet thickness, _e.g._, (008), all this in agreement with much smaller sizes along the \(c\)-axis than in the \(ab\)-plane (_i.e._, thin platelets). Regardless of the difference in peak broadening, Bragg reflections of all orientations are present in the PXRD pattern measured for the SrM powders, demonstrating a random orientation of the crystallites. However, the very intense \(hh0\) reflections are absent from the PXRD pattern recorded for the SPS pellet, while \(00l\) reflections (as well as others with a high contribution from the \(c\)-crystallographic direction) are systematically intensified, thus indicating a marked preferred orientation of the platelets. As explained before, for M-type platelet-shaped particles, crystallite/particle alignment goes together with magnetic alignment. The crystallographic alignment was further studied based on pole figure measurements (Figure 6c), a slightly more complex diffraction measurement enabling quantification of the degree of orientation (Figure 6d).
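A simpler, commonly used metric for this kind of texture quantification from a standard PXRD pattern (given here as a supplementary illustration; the study itself relied on full pole-figure analysis) is the Lotgering orientation factor

\[f=\frac{P-P_{0}}{1-P_{0}},\qquad P=\frac{\sum I(00l)}{\sum I(hkl)},\]

where \(P\) is the fraction of diffracted intensity contained in the \(00l\) reflections of the textured sample and \(P_{0}\) the same fraction for a randomly oriented reference, so that \(f=0\) corresponds to random orientation and \(f=1\) to complete \(c\)-axis alignment.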
Optimization of both the HT synthesis route[71] and the SPS protocol[72, 18] as well as correlation of structural and magnetic properties allowed reaching \(M_{\mathrm{r}}/M_{\mathrm{s}}\) ratios as high as 0.95, although at the cost of reducing \(H_{\mathrm{c}}\) down to 133 kA/m, with which the \(BH_{\mathrm{max}}\) improvement was only moderate (29 kJ/m\({}^{3}\)).[73] However, performing a thermal treatment at 850 \({}^{\circ}\)C after SPS was enough to reach a \(BH_{\mathrm{max}}=36\) kJ/m\({}^{3}\), a value on the order of the highest-grade commercially available ferrite magnets,[37, 38, 39, 40] while avoiding the use of an external magnetic field. Applying this SPS protocol to SrM powders produced by synthesis methods other than HT did not yield such outstanding magnetic properties, due to an inferior particle orientation degree and, hence, a poorer magnetic alignment.[72, 74] A newer study by Saura-Muzquiz _et al._ confirmed that the degree of magnetic alignment using this preparation method could be tuned by modifying the aspect ratio of the initial powders, reaching almost fully-aligned pellets (\(M_{\mathrm{r}}/M_{\mathrm{s}}=0.9\)) with densities above 90%.[74] Higher alignment leads to higher squareness and thus a greater \(M_{\mathrm{r}}/M_{\mathrm{s}}\) ratio and \(BH_{\mathrm{max}}\), but it is accompanied by a reduction in \(H_{\mathrm{c}}\) due to the inversely proportional relationship that exists between magnetization and coercive field. Nonetheless, by reducing the degree of alignment they were able to obtain SrM magnets with a large \(H_{\mathrm{c}}\) of 412 kA/m, proving the potential of SPS to overcome the reduction of \(H_{\mathrm{c}}\) due to excessive crystallite growth.
Figure 6: (a) Magnetic hysteresis loop of HT-synthesized SrM nanoparticles and corresponding SPS pellet. (b) PXRD data along with Rietveld model of the same samples. (c) X-ray pole figure measurements and (d) oriented volume fraction of SPS pellet. Reproduced from Ref. [71] with permission from the Royal Society of Chemistry.
Recently, Vijayan _et al._ reported the use of SPS not only for densification of ferrite powders, but for the direct synthesis of aligned SrM magnets.[75, 76, 77] In this study, SrM is synthesized directly during the SPS process, using a precursor powder of antiferromagnetic six-line ferrihydrite (FeOOH) platelets mixed with SrCO\({}_{3}\). A low SPS temperature of \(\approx\)750 \({}^{\circ}\)C was sufficient to drive the reaction between the six-line phase and SrCO\({}_{3}\) to produce SrFe\({}_{12}\)O\({}_{19}\), while the anisotropic shape of the hydrothermally synthesized six-line phase ensured the alignment of the resulting SrM particles. Following this synthesis method, they were able to produce a dense SrM magnet with a \(BH_{\rm max}\) of 33(4) kJ/m\({}^{3}\), a \(M_{\rm r}\)/\(M_{\rm s}\) of 0.93 and a \(H_{\rm c}\) of 247 kA/m.
### 3.3 Microwave sintering
In the field of hexaferrite research, microwaves (MWs) have mainly been used for synthesis purposes, although a few sintering attempts using MWs have also been reported.[78, 79, 80, 81]

### 3.4 Cold sintering

In the cold sintering process (CSP), the starting SrM powders are mixed with a small amount of a transient solvent and subjected to a uniaxial pressure (\(\approx\)400 MPa) while heated at 190 \({}^{\circ}\)C.[82] After CSP, relative densities of about 85% are obtained, which can be driven up to 92% by subsequently treating the sintered piece at 1100 \({}^{\circ}\)C for 2 h. This last sintering step also has a beneficial effect on the magnetic properties (see Figure 7A). In particular, \(M_{\rm s}\) at 5 T increases from 49.2 to 73.7 Am\({}^{2}\)/kg and \(H_{\rm c}\) goes from 119 to 223 kA/m. For the final product, a \(M_{\rm r}/M_{\rm s}\) ratio of 0.68 was obtained. The density obtained by conventional sintering at 1100 \({}^{\circ}\)C for 4 h (no solvent, no hot compression) was only 77% and the magnetic properties slightly inferior (see Figure 7B). Conventional sintering at 1300 \({}^{\circ}\)C yielded higher density (97%) but very poor magnetic properties (\(H_{\rm c}=48\) kA/m, \(M_{\rm r}/M_{\rm s}=0.33\)), due to the dramatic grain growth caused by the high temperature (see bottom FE-SEM micrograph from Figure 7B).
Further investigations have been carried out using different organic solvents (_i.e._, oleic acid, oleylamine) and widening the pressure and temperature ranges explored (up to 270 \({}^{\circ}\)C and 670 MPa).[83] In all cases, the average grain size of the CSP ceramic magnet was about 1 \(\mu\)m (similar to the starting SrM powders) while similar conventional sintering processes typically yield average grain sizes above 3 \(\mu\)m.[62]
With the aim of further improving the density and magnetic properties of CSP magnets, the addition of a small amount (10 wt.%) of nanometric SrM to the original micrometric SrM powders was tested, moderately increasing \(H_{\rm c}\) (239 kA/m) and \(M_{\rm r}/M_{\rm s}\) (0.73), although the density remained at 92%.[84] These numbers are competitive in the context of commercial SrM magnets. As an example, Hitachi's NMF-7C series displays values of \(H_{\rm c}=220-260\) kA/m and \(M_{\rm s}=68\) Am\({}^{2}\)/kg.[40]
Figure 7: Magnetic hysteresis and FE-SEM micrographs corresponding to SrFe\({}_{12}\)O\({}_{19}\) magnets fabricated by A) CSP at 190 \({}^{\circ}\)C and CSP followed by annealing at 1100 \({}^{\circ}\)C, B) conventional sintering at 1100 and 1300 \({}^{\circ}\)C. Reprinted from [82] with permission from Elsevier.
## 4 Summary and perspective
In the present chapter, the main sintering approaches applied to manufacturing hard ferrite ceramic magnets have been reviewed. Table 2 summarizes the properties of top SrFe\({}_{\text{12}}\)O\({}_{\text{19}}\) magnets fabricated by the various discussed sintering strategies. Conventional sintering (CS) continues to be the quintessential industrial method for M-type hexaferrite PM fabrication, owing to its technical simplicity and the relatively good resulting properties. However, this approach is highly inefficient, as most of the energy employed is irreversibly dissipated as heat[85]. Therefore, the search for more energy-efficient methods continues to be an active field of research.
Multiple studies have demonstrated that spark plasma sintering (SPS) allows production of PMs with much higher \(M_{\text{r}}/M_{\text{s}}\) ratios than CS. However, the increase in texture comes at a cost of reduction in \(H_{\text{c}}\) values, which therefore still need to be improved. As a result, magnets made using SPS end up displaying a similar performance (\(BH_{\text{max}}\)) to the best CS examples. Additionally, technical challenges hinder the replacement of CS by SPS in the industrial production of magnetic ferrites, since current SPS machines only allow producing relatively small pieces with very few specific shapes (typically cylindrical pellets).
Only a few attempts have so far been made to densify SrM by the relatively new cold sintering process (CSP) and therefore, there is still much to explore and optimize. However, the CSP has already allowed preparation of hexaferrite magnets with magnetic properties comparable to medium-high grade commercial ferrites, while lowering the sintering temperature. This reduces the energy consumption by about 9 kWh/kg, which leads to energy savings of \(\approx\)29% compared to the sintering methods employed industrially at present.
The results obtained by microwave sintering (MWs) have been very satisfactory in terms of both density and \(H_{\text{c}}\), but the resulting \(M_{\text{s}}\) and \(M_{\text{r}}/M_{\text{s}}\) values are still insufficient to be commercially competitive. As with CSP, reports are scarce and further exploration is required.
Sintering has undergone significant innovation over the last decade[35], with the introduction of a number of new sintering technologies, such as flash sintering[86, 87], and various modified SPS methodologies, like flash SPS (FSPS)[88], deformable punch SPS (DP-SPS)[89], or cool-SPS[90]. As a result, there are more alternatives available for sintering ferrites with enhanced magnetic characteristics and microstructure. To our knowledge, none of the methods just mentioned has yet been examined on hard hexagonal ferrites, leaving ample room for further study in this area.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
 & \(M_{\text{s}}\) (Am\({}^{2}\)/kg) & \(M_{\text{r}}/M_{\text{s}}\) & \(M_{\text{r}}\) (Am\({}^{2}\)/kg) & \(H_{\text{c}}\) (kA/m) & \(BH_{\text{max}}\) (kJ/m\({}^{3}\)) & \(\rho_{\text{rel}}\) (\%) \\ \hline
**CS** & \(\approx\)68 & \(\approx\)1 & 68 & 328 & 37 & \(\geq\)99\% \\ \hline
**SPS** & 73 & 0.93 & & 225 & 36 & \textgreater{}95\% \\ \hline
**CSP** & 73 & 0.73 & & 239 & – & 92\% \\ \hline
**MWs** & 50 & \(\approx\)0.62 & & 445 & – & 95\% \\ \hline
\end{tabular}
*Approximate values (\(\approx\)) are graphically estimated from the article figures.
\end{table}
Table 2: Magnetic parameters and relative density for top representatives of SrFe\({}_{12}\)O\({}_{19}\) magnets manufactured following the different sintering approaches described in the present chapter, _i.e._, conventional sintering (CS)[108], spark plasma sintering (SPS)[88], cold sintering process (CSP)[88], and microwave sintering (MWs)[80].
## Acknowledgments
C.G.-M. acknowledges financial support from grant RYC2021-031181-I funded by MCIN/AEI/10.13039/501100011033 and by the "European Union NextGenerationEU/PRTR". M.S.-M. acknowledges the financial support from the Comunidad de Madrid, Spain, through an "Atracción de Talento Investigador" fellowship (2020-T2/IND-20581). H.L.A. acknowledges the financial support from the Spanish Ministry of Universities (Ministerio de Universidades) and the European Union NextGenerationEU through a María Zambrano fellowship for the attraction of international talent.
|
2307.06977 | Solar Submillimeter Telescope next generation | The Solar Submillimeter Telescope (SST) is a unique instrument that has been
observing the Sun daily since 2001 bringing a wealth of information and raising
new questions about the particle acceleration and transport, and emission
mechanisms during flares. We are now designing its successor, the SSTng, that
will expand the scientific goals of the instrument, including non-solar source
observations. | C. Guillermo Giménez de Castro, Jean-Pierre Raulin, Adriana Valio, Emilia Correia, Paulo J. A. Simoes, Sergio Szpigel | 2023-07-13T14:26:31Z | http://arxiv.org/abs/2307.06977v1 | # Solar Submillimeter Telescope next generation
###### Abstract
The Solar Submillimeter Telescope (SST) is a unique instrument that has been observing the Sun daily since 2001, bringing a wealth of information and raising new questions about particle acceleration and transport, and about emission mechanisms during flares. We are now designing its successor, the **SSTng**, which will expand the scientific goals of the instrument, including non-solar source observations.
## 1 Introduction
Submillimeter-wave (submm) observations, here considered for \(0.3\leq\lambda\leq 3\) mm, allow us to study the low atmospheric layers of the Sun, from the Transition Region to the Chromosphere [1]. During flares, the submm emission may originate from synchrotron emission of relativistic particles [2]. Therefore, we can track the energy transport from the acceleration to the emission sites. Moreover, Kaufmann et al. [3] have shown that some flares have a second spectral submm component (Figure 1) with a still unknown origin [4].
Despite the immense wealth of information that submm observations may bring to the understanding of the solar atmosphere and its dynamics, there is a lack of regular observations covering this wavelength range. The first efforts were carried out with the James Clerk Maxwell Telescope (JCMT) [5]. The Swiss KOSMA telescope also observed the Sun a few times in 2003/2004 before it was decommissioned [6, 7]. More recently, the Atacama Large Millimeter Array (ALMA) is revealing fine details of the quiet and quiescent solar behavior [8]. However, JCMT and KOSMA observed the Sun just a couple of times, and ALMA allocates only a small portion of its observing time to the Sun and is not the best instrument to catch fast transient phenomena, like solar flares.
Since 1999, the only solar-dedicated submm instrument has been the Solar Submillimeter Telescope (SST) [9], a single
Figure 1: Left: Time profiles of the SOL2003-11-04T1945 solar event at 212 GHz (blue) and 405 GHz (red) observed by the SST. Right: Spectra obtained at two different instants of the event (see vertical dashed lines on the left panel). Microwave data were obtained by the OVSA array. Observations at 44 GHz were carried out at Pierre Kaufmann Radio Observatory (ROPK) with its 14-m single dish antenna. This was the first event to show a second spectral submm component [3].
dish telescope with room temperature receivers operating at 212 GHz (\(\lambda=1.4\) mm) and 405 GHz (\(\lambda=0.7\) mm). After more than 20 years of excellent service, SST has to be updated in order to provide answers to the questions its observations have raised: what is the emission mechanism that creates a second spectral component \(>100\) GHz during flares? Does this component exist in "weak" flares? It may also bring more information about the 3-5 minute p-mode oscillations, the time evolution of the large-scale chromospheric structures and their relationship with the magnetic field, and the "slow" components at these frequencies, among other topics.
In the following lines we will present the general characteristics of the SST and introduce the SST next generation (**SSTng**), which is being designed at the Center for Radio Astronomy and Astrophysics Mackenzie (CRAAM) in Sao Paulo (Brazil).
## 2 SST
SST (Figure 2) is a product of _state-of-the-art_ technologies of the 1990s. It has a 1.5 m, \(f/D=8\), radome-enclosed single-dish aluminum reflector built at the Steward Observatory, University of Arizona, Tucson, USA. Its frontend has six room temperature radiometers that operate simultaneously: four receivers operate at 212 GHz and two at 405 GHz, with nominal beam sizes of 4 and 2 arc minutes, respectively. The six beams form two arrays separated by approximately 6 arc-minutes. The first array has three 212 GHz beams arranged in an equilateral triangle; at the center of this triangle there is one 405 GHz beam. The receiver horns have a taper that allows the beams to intersect at the 50% level (-3 dB). Although the taper reduces the efficiency and increases the spillover, it allows the use of the _multibeam_ method to instantly localize the emission centroid of point-like sources and to correct the flux for offset pointing [10, 11]. The second array has one 212 GHz and one 405 GHz beam with the same center and is used for reference. The radiometers have a \(\Delta\nu=8.5\) GHz bandwidth, temperatures of around 2000-3000 K, and were custom made by RPG-Radiometer GmbH, Meckenheim, Germany. The backend output signal is converted to 2-byte integer numbers. SST has an Alt-Azimuth mount with 3.6 milliarcsec resolution and a maximum speed of 3\({}^{\circ}\) s\({}^{-1}\). The output data is recorded in three different file structures: _sub-integrated_ with 5 ms time resolution, _integrated_ with 40 ms time resolution and _auxiliary_ with 1 s time resolution.
The telescope is installed in the El Leoncito Astronomical Complex (CASLEO, in Spanish) at 2550 m above sea level in the Argentinean Andes, Province of San Juan. First light was in July 1999, and since April 2001 it has performed daily observations. During the past 20+ years we have refined the measurement of the atmospheric optical depth with different techniques and gathered extensive statistics to understand the atmospheric transmission on the site at both frequencies [12, 13]. Median values of the opacity are 0.16 and 1.1 for 212 and 405 GHz, respectively [14]. That means that for more than 50% of the time, the atmosphere is nominally optically thick at 405 GHz.
## 3 SSTng
The **SSTng** will be more sensitive. Indeed, the SST noise flux densities, when observing the Sun, are 1 and 7 SFU\({}^{1}\) for 212 and 405 GHz, respectively, considering 40 ms integration time, the median values of the atmospheric opacities and a mean elevation angle of 60\({}^{\circ}\). With this sensitivity, the weakest flares we have observed are of GOES class M. By changing the receiver frequencies to 150 and 300 GHz we gain a factor \(>2\) in opacity: from our statistics and using the relationship obtained in [13] we derive median values of 0.07 and 0.4 for the atmospheric opacities at 150 and 300 GHz, respectively. In Figure 3 we show the histograms of the observed zenith opacities obtained between 2008 and 2012 using the skydip method. The same figure presents the expected histograms for 150 and 300 GHz, showing a substantial reduction.
Footnote 1: Solar Flux Unit \(\equiv 10^{4}\) Jy.
We also want to keep the same beam sizes; therefore, we plan to substitute the present reflector with a new one of 3-m diameter. Moreover, today's receivers have lower temperatures and larger bandwidths. For the present work we assume temperatures around 1000 and 2500 K for 150 and 300 GHz, respectively, and \(\Delta\nu=16\) GHz for both
Figure 2: The SST with the radome open for maintenance.
frequency bands. Everything combined, the lower opacities, larger reflector surface and receiver bandwidth, and lower receiver temperatures shall yield noise fluxes of 0.06 and 0.12 SFU, i.e., **SSTng** will be 15 and 55 times more sensitive than the SST 212 and 405 GHz observations, respectively. In terms of flares, this gain means that events of GOES class C, and maybe class B, will be detected, dramatically increasing the number of events to analyze. In terms of quiet Sun behavior, it will certainly be possible to detect the 3-5 minute oscillations and faint structures.
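As a rough cross-check of these numbers, the scaling of the single-dish radiometer equation can be evaluated directly. The Python sketch below is an approximation: it assumes a representative SST receiver temperature of 2500 K (within the quoted 2000-3000 K range) and neglects loss factors common to both instruments; the larger factor of 55 at 300 GHz additionally reflects 405 GHz losses not detailed here.

```python
import math

def sensitivity_gain(T_old, T_new, D_old, D_new, bw_old, bw_new,
                     tau_old, tau_new, elevation_deg=60.0):
    """Noise-flux ratio from the radiometer equation:
    noise ~ T_sys * exp(tau * airmass) / (dish area * sqrt(bandwidth))."""
    airmass = 1.0 / math.sin(math.radians(elevation_deg))
    return ((T_old / T_new) * (D_new / D_old) ** 2
            * math.sqrt(bw_new / bw_old)
            * math.exp((tau_old - tau_new) * airmass))

# SST 212 GHz -> SSTng 150 GHz, using the median opacities quoted above
gain = sensitivity_gain(T_old=2500, T_new=1000, D_old=1.5, D_new=3.0,
                        bw_old=8.5e9, bw_new=16e9, tau_old=0.16, tau_new=0.07)
print(f"expected gain at 150 GHz: ~{gain:.0f}x")   # ~15x, matching the text
```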
Polarization is key to discriminating the origin of the emission and to studying the ambient magnetic field; however, it has not yet been explored at submm wavelengths during flares. **SSTng** will be the first solar telescope to have circular polarization detectors for both frequency bands. We also plan to include a spectrometer to study the yet-to-be-observed large-\(n\) Rydberg hydrogen lines at these frequencies. The multibeam system will be maintained, with three receivers at 300 GHz and one at 150 GHz in a triangular array similar to that of the SST.
**SSTng** will be able to make observations of non-solar objects like H ii regions and QSOs. Indeed, for a 1-min integration time, the noise flux density of **SSTng** will be 3 and 12 Jy for 150 and 300 GHz, respectively, making night surveys possible.
## 4 Final remarks
As we said above, **SSTng** is more than an update of 1990s technology. It is intended to be a new instrument, based on our experience in this frequency range, that will enlarge the original scientific goals. At present we are finishing the scientific requirements; afterwards we will start to identify possible contractors for the different subsystems. By the end of 2023 we will submit projects to our funding agencies to obtain financial support. Construction should start in early 2025, and by 2027 it should have first light,
Figure 4: Provisional project schedule.
Figure 3: Top panel: Observed atmospheric opacity histogram at 212 GHz and expected opacity at 150 GHz. Bottom panel: Observed opacity histogram at 405 GHz and expected opacity for 300 GHz. The observed opacities were determined with skydips between 2008 and 2012.
starting the commissioning and the scientific observations (Figure 4).
## Acknowledgements
We acknowledge FAPESP and CAPES funding agencies through grants 2013/24155-3 and 88887.310385/2018-00, respectively, for their support to this scientific project.
|
2303.05751 | Irreducibility of Generalized Permutohedra, Supermodular Functions, and
Balanced Multisets | We study generalized permutohedra and supermodular functions. Specifically we
analyze decomposability and irreducibility for these objects and establish some
asymptotic behavior. We also study a related problem on irreducibility for
multisets. | Milan Haiman, Yuan Yao | 2023-03-10T07:25:32Z | http://arxiv.org/abs/2303.05751v1 | # Irreducibility of generalized permutohedra, supermodular functions, and balanced multisets
###### Abstract.
We study generalized permutohedra and supermodular functions. Specifically we analyze decomposability and irreducibility for these objects and establish some asymptotic behavior. We also study a related problem on irreducibility for multisets.
## 1. Introduction
A _permutohedron_ in \(\mathbb{R}^{n}\) is the \((n-1)\)-dimensional polytope obtained by taking the convex hull of all \(n!\) points obtained by permuting the coordinates of a point \((x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\). In [8], Postnikov introduced the _generalized permutohedron_, which is a deformation of a permutohedron obtained by translating the hyperplanes bounding each face. Generalized permutohedra were also further studied in [9]. In [8], Postnikov derives a volume formula for generalized permutohedra as a polynomial in the defining parameters. The proof of the formula requires a decomposition of some generalized permutohedra into weighted Minkowski sums of coordinate simplices. However, not all generalized permutohedra can be written in this form. Therefore, it is natural to ask whether all generalized permutohedra can be written as a weighted Minkowski sum of some fixed set of polytopes.
We define a generalized permutohedron to be _irreducible_ if it cannot be written as a weighted Minkowski sum of generalized permutohedra in a nontrivial way (any convex polytope is a Minkowski sum of smaller copies of itself). Then all generalized permutohedra can be decomposed as a Minkowski sum of irreducible generalized permutohedra.
In this paper we aim to understand the class of irreducible generalized permutohedra. We make use of connections to related problems.
Generalized permutohedra in \(\mathbb{R}^{n}\) are strongly related to supermodular functions on subsets of \([n]\), which are a discrete analog of convex functions. Supermodular functions are an important object in optimization and other fields (see [2]).
There is a direct bijection between irreducible generalized permutohedra and irreducible supermodular functions, which have been studied by several authors ([13, 10]). In [10], Promislow and Young determined the irreducible supermodular functions for \(n\leq 4\) and conjectured a simple characterization for \(n>4\). However, this conjecture was shown to be false by Zivny, Cohen, and
Jeavons in [13], and we further show that this characterization is far from capturing all irreducible supermodular functions.
To understand irreducible supermodular functions, we first study a related (and simpler) problem involving irreducibility. Given a multiset \(\mathcal{M}\) of subsets of \([n]\), we say that \(\mathcal{M}\) is _balanced_ if each element of \([n]\) appears the same number of times in \(\mathcal{M}\). We denote the number of times each element appears by the _complexity_\(m=m(\mathcal{M})\). The conditions for irreducibility of generalized permutohedra and supermodular functions can be formulated as a modification of those for irreducible balanced multisets.
We derive bounds on the complexity of irreducible balanced multisets and enumerate the number of irreducible balanced multisets up to lower order terms. Using similar ideas, we provide double-exponential upper bounds for the complexity and number of irreducible generalized permutohedra. We also obtain double-exponential lower bounds by relating a subclass of supermodular functions to matroids. The key asymptotic results are the following.
**Theorem 1.1**.: _The number of irreducible supermodular functions, up to equivalence, is bounded above by \(2^{O(n2^{n})}\) and bounded below by \(2^{\Omega(2^{n}/n^{3/2})}\)._
We also study a simple subclass of irreducible supermodular functions, which we enumerate precisely.
The paper is structured as follows. In Section 2 we establish the preliminary definitions related to generalized permutohedra and supermodular functions. In Section 3 we establish some conditions for a supermodular function to be irreducible. In Section 4 we explore the related problem of irreducible balanced multisets. In Section 5 we obtain upper bounds on the number and complexity of irreducible supermodular functions. In Section 6 we obtain lower bounds on the number of irreducible supermodular functions. In Section 7, we study supermodular functions with supermodularities on only two layers.
## 2. Preliminaries
There are several equivalent ways to define a generalized permutohedron, and here we present the one that is the most convenient for our purposes. For a subset \(I\subseteq[n]\), we let \(1_{I}\) denote the vector whose \(i\)-th coordinate is \(1\) if \(i\in I\) and \(0\) otherwise.
**Definition 2.1**.: A _generalized permutohedron_ in \(\mathbb{R}^{n}\) is a polytope of the form
\[\{x\in\mathbb{R}^{n}\colon x\cdot 1_{I}\geq z_{I},x\cdot 1_{[n]}=z_{[n]}\}\]
for reals \(z_{I}\) satisfying the _supermodularity condition_:
\[z_{I\cap J}+z_{I\cup J}\geq z_{I}+z_{J}\]
for all \(I,J\subseteq[n]\) (we set \(z_{\varnothing}=0\)).
This definition includes all ordinary permutohedra: for \(x_{1}\leq x_{2}\leq\cdots\leq x_{n}\), we can recover the permutohedron with vertices that are permutations of \((x_{1},\ldots,x_{n})\) by taking \(z_{I}=x_{1}+\cdots+x_{|I|}\) for each \(I\subseteq[n]\).
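To make the definitions concrete, here is a minimal Python sketch (with our own helper names, and subsets of a 0-indexed ground set encoded as frozensets) that builds these \(z_{I}\) for a sorted point and verifies the supermodularity condition by brute force; later sketches reuse these helpers.

```python
from itertools import chain, combinations

def subsets(n):
    """All subsets of {0, ..., n-1}, as tuples."""
    return chain.from_iterable(combinations(range(n), r) for r in range(n + 1))

def permutohedron_z(x):
    """z_I = x_1 + ... + x_{|I|} for sorted x, as described above."""
    x = sorted(x)
    return {frozenset(I): sum(x[:len(I)]) for I in subsets(len(x))}

def is_supermodular(z):
    """Brute-force check of z(I ∩ J) + z(I ∪ J) >= z(I) + z(J) for all pairs."""
    keys = list(z)
    return all(z[I & J] + z[I | J] >= z[I] + z[J] for I in keys for J in keys)

print(is_supermodular(permutohedron_z([0, 1, 3, 7])))  # True
```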
The supermodularity conditions \(z_{I\cap J}+z_{I\cup J}\geq z_{I}+z_{J}\) guarantees that a generalized permutohedron has the same face structure as a permutohedron, up to degeneracies that reduce the dimension of faces. In particular, we still have that all edges are parallel to \(e_{i}-e_{j}\) for some \(i,j\).
We define the _Minkowski sum_ of two subsets \(P\) and \(Q\) of \(\mathbb{R}^{n}\) to be the set \(P+Q=\{x+y\colon x\in P,y\in Q\}\). Note that the set of generalized permutohedra is closed under Minkowski sums, since taking a Minkowski sum simply results in adding the corresponding \(z_{I}\) parameters.
**Definition 2.2**.: We say that a generalized permutohedron \(P\) is _irreducible_ if whenever \(P\) is written as a Minkowski sum \(Q_{1}+Q_{2}\) of generalized permutohedra, \(Q_{1}\) and \(Q_{2}\) are both copies of \(P\) up to scaling and translation.
We can view the set of generalized permutohedra as a subset of \(\mathbb{R}^{2^{n}-1}\) by considering the vector of corresponding \(z_{I}\) parameters. This subset is a cone bounded by the hyperplanes corresponding to the supermodularity conditions. Then the irreducible generalized permutohedra correspond to the extreme rays of this cone. In particular, because we have finitely many conditions, there are finitely many irreducible generalized permutohedra, up to scaling and translation. So every generalized permutohedron can be written as a weighted Minkowski sum of irreducible generalized permutohedra. For example, a permutohedron is a weighted Minkowski sum of \(\binom{n}{2}\) line segments between the standard basis vectors in \(\mathbb{R}^{n}\).
Since the problem of determining the irreducible generalized permutohedra is equivalent to determining the extreme rays of a high-dimensional cone, we can apply standard algorithms to determine the answer for small \(n\). For \(n=3\) there are 5 irreducible supermodular functions. As generalized permutohedra, two of these are equilateral triangles (with opposite orientations), and the other three are line segments.
For \(n=4\) there are 37 irreducible supermodular functions. Ignoring lower dimensional examples and symmetries, we have 5 new irreducible generalized permutohedra, pictured below.
For \(n=5\) there are 117978 irreducible generalized permutohedra. Even accounting for lower dimensional examples and symmetries, we have many new polytopes that do not follow a clear pattern. Thus we aim to understand the number and complexity of irreducible generalized permutohedra for general \(n\) instead of a precise characterization.
**Definition 2.3**.: A function \(f:2^{[n]}\to\mathbb{R}\) is _supermodular_ if \(f(I\cap J)+f(I\cup J)\geq f(I)+f(J)\) for all \(I,J\subseteq[n]\). It is _modular_ if \(f(I\cap J)+f(I\cup J)=f(I)+f(J)\) for all \(I,J\subseteq[n]\).
Given a generalized permutohedron, we immediately obtain a supermodular function by taking \(f(I)=z_{I}\). In the other direction, any supermodular function \(f\) with \(f(\emptyset)=0\) gives a generalized permutohedron. Thus generalized permutohedra and supermodular functions are essentially the same object. In particular, note that modular functions correspond to a single point as a generalized permutohedron.
**Definition 2.4**.: We say that two supermodular functions are _equivalent_ if they differ by a modular function. We say that a supermodular function \(f\) is _irreducible_ if it is not modular and whenever \(f=g_{1}+g_{2}\) for supermodular functions \(g_{1}\) and \(g_{2}\), \(g_{1}\) and \(g_{2}\) are each equivalent to a function of the form \(cf\) for \(c\in\mathbb{R}_{\geq 0}\).
Note that each equivalence class of irreducible supermodular functions has a representative \(f\) with the following properties:
* \(f(I)=0\) for \(|I|\leq 1\)
* \(f\) takes nonnegative integer values with greatest common divisor \(1\).
As before, we see that the irreducible supermodular functions generate all supermodular functions by taking nonnegative linear combinations. Additionally, irreducible generalized permutohedra correspond to irreducible supermodular functions.
## 3. Analyzing Irreducible Supermodular Functions
As a motivating example, we first consider _nondecreasing_ functions on subsets of \([n]\), that is, functions \(f\colon 2^{[n]}\to\mathbb{R}\) where \(f(I)\geq f(J)\) whenever \(I\supseteq J\).
For a set \(S\) and \(i\in S\), let \(\partial_{i}\) be the _discrete derivative operator_ mappings functions \(f\colon 2^{S}\to\mathbb{R}\) to functions \(\partial_{i}f\colon 2^{S\setminus\{i\}}\to\mathbb{R}\), defined by \((\partial_{i}f)(I)=f(I\cup\{i\})-f(I)\). It is clear that \(f\) is nondecreasing if and only if \((\partial_{i}f)(I)\geq 0\) for every \(i\in[n]\) and \(I\subseteq[n]\setminus\{i\}\). We also have that \(f\) is supermodular if and only if \((\partial_{i}\partial_{j}f)(I)\geq 0\) for every pair of distinct \(i,j\in[n]\) and \(I\subseteq[n]\setminus\{i,j\}\). Thus we can think of supermodular functions as having nonnegative second derivatives everywhere, which makes the case of nondecreasing functions (nonnegative first derivatives) natural.
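The discrete derivative operators translate directly into the same encoding (set functions as dictionaries keyed by frozensets); a sketch of the second-derivative characterization, reusing `permutohedron_z` from the previous example:

```python
def d(i, f):
    """(d_i f)(I) = f(I ∪ {i}) - f(I), defined on subsets not containing i."""
    return {I: f[I | {i}] - f[I] for I in f if i not in I}

def supermodular_via_derivatives(f, n):
    """f is supermodular iff d_i d_j f is nonnegative for all distinct i, j."""
    return all(v >= 0 for i in range(n) for j in range(n) if i != j
               for v in d(j, d(i, f)).values())

print(supermodular_via_derivatives(permutohedron_z([0, 1, 3, 7]), 4))  # True
```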
Motivated by this parallel, we can define equivalence and irreducibility for nondecreasing functions.
**Definition 3.1**.: We say two nondecreasing functions are _equivalent_ if their difference is a constant function. A nondecreasing function \(f\) is _irreducible_ if it is not constant and not a nontrivial sum of nondecreasing functions. That is, if \(f=g_{1}+g_{2}\) for nondecreasing functions \(g_{1}\) and \(g_{2}\), then \(g_{1}\) and \(g_{2}\) are each equivalent to a nonnegative multiple of \(f\).
We can precisely characterize the irreducible nondecreasing functions. Given a nonempty antichain \(\mathcal{A}\) of subsets of \([n]\), we define the up function of \(\mathcal{A}\) to be \(u_{\mathcal{A}}(I)=1\) if \(I\supseteq J\) for some \(J\in\mathcal{A}\) and \(u_{\mathcal{A}}(I)=0\) otherwise.
**Lemma 3.2**.: _A nondecreasing function is irreducible if and only if it is equivalent to a function of the form \(cu_{\mathcal{A}}\) for some nonempty antichain \(\mathcal{A}\) of subsets of \([n]\) and some \(c\in\mathbb{R}_{\geq 0}\)._
Proof.: First we show the "if" direction. Let \(\mathcal{A}\) be a nonempty antichain of subsets of \([n]\). We will show that \(u_{\mathcal{A}}\) is irreducible. Let \(f_{1}\) and \(f_{2}\) be nondecreasing functions such that \(u_{\mathcal{A}}=f_{1}+f_{2}\). Now consider sets \(I\subseteq J\) such that \(u_{\mathcal{A}}(I)=u_{\mathcal{A}}(J)\). We have that \(f_{i}(I)\leq f_{i}(J)\) since each \(f_{i}\) is nondecreasing. However
\[u_{\mathcal{A}}(I)=f_{1}(I)+f_{2}(I)\leq f_{1}(J)+f_{2}(J)=u_{\mathcal{A}}(J).\]
Thus we must have equality, so \(f_{1}(I)=f_{1}(J)\) and \(f_{2}(I)=f_{2}(J)\).
This fact implies that \(f_{i}(I)=f_{i}([n])\) whenever \(I\supseteq A\) for some \(A\in\mathcal{A}\), and \(f_{i}(I)=f_{i}(\emptyset)\) otherwise. Thus
\[f_{i}(I)=f_{i}(\emptyset)+(f_{i}([n])-f_{i}(\emptyset))u_{\mathcal{A}}(I).\]
This shows that \(u_{\mathcal{A}}\) is irreducible.
Now we show the converse. Suppose that \(f\) is a nondecreasing function not equivalent to a multiple of \(u_{\mathcal{A}}\) for any antichain \(\mathcal{A}\). Consider the family \(\mathcal{F}\) of subsets \(I\subseteq[n]\) for which \(f(I)>f(\emptyset)\). Since \(f\) is not constant, \(\mathcal{F}\) is nonempty. Let \(\mathcal{A}\) be the family of minimal sets in \(\mathcal{F}\). Note that \(u_{\mathcal{A}}\) takes the value \(1\) on elements of \(\mathcal{F}\) and \(0\) elsewhere. Now, for sufficiently small \(c>0\), we have \(f(I)>c+f(\emptyset)\) for every \(I\in\mathcal{F}\). Thus \(f-cu_{\mathcal{A}}\) is nondecreasing for some \(c>0\). But \(f\) is not equivalent to \(cu_{\mathcal{A}}\). So \(f\) is reducible.
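The up functions are likewise easy to materialize; a short sketch (reusing `subsets` and the derivative `d` from the earlier examples) that builds \(u_{\mathcal{A}}\) for a small antichain and confirms it is nondecreasing:

```python
def up_function(antichain, n):
    """u_A(I) = 1 if I contains some member of the antichain A, else 0."""
    A = [frozenset(a) for a in antichain]
    return {frozenset(I): int(any(a <= frozenset(I) for a in A))
            for I in subsets(n)}

u = up_function([{0, 1}, {2}], 3)
# nondecreasing: every first discrete derivative d_i u is nonnegative
print(all(v >= 0 for i in range(3) for v in d(i, u).values()))  # True
```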
The number of antichains of subsets of \([n]\) is at least \(2^{\binom{n}{\lfloor n/2\rfloor}}\), obtained by taking arbitrary families of subsets of size \(\lfloor n/2\rfloor\). In fact such antichains describe most possibilities [5, 7].
Given this understanding of nondecreasing functions, we can attempt to use it to understand supermodular functions. If \(f\) is supermodular, then \(\partial_{i}f\) is nondecreasing. So we can construct supermodular functions by taking \(n\) nondecreasing functions \(g_{1},\ldots,g_{n}\) with \(g_{i}\colon 2^{[n]\setminus\{i\}}\to\mathbb{R}\). However, we are restricted by the fact that
\[\partial_{i}g_{j}=\partial_{i}\partial_{j}f=\partial_{j}\partial_{i}f= \partial_{j}g_{i}.\]
Therefore supermodular functions can be heuristically described as \(n\) weighted sums of antichains with a compatibility condition between the sums.
Another way to understand supermodular functions is to consider the supermodularity condition on certain pairs of \(I\) and \(J\).
**Definition 3.3**.: We say that an unordered pair of subsets \(\{I,J\}\) of \([n]\) is _close_ if \(|I|=|J|=|I\cap J|+1=|I\cup J|-1\). Let \(\mathcal{P}_{n}\) be the set of all close pairs. Note that \(|\mathcal{P}_{n}|=\binom{n}{2}2^{n-2}\).
Given a supermodular function \(f\), for each close pair \(\{I,J\}\), let the _supermodularity value_ of this pair be
\[s_{I,J}=f(I\cap J)+f(I\cup J)-f(I)-f(J).\]
Clearly, \(\{I,J\}\) is a close pair if and only if \(1_{I}\), \(1_{J}\), \(1_{I\cap J}\), \(1_{I\cup J}\) are the vertices of a square face in the boolean hypercube, so we will treat square faces and close pairs interchangeably.
It is sufficient to define \(s_{I,J}\) only when \(\{I,J\}\) is a close pair because of the following lemma. Let \(T\colon\mathbb{R}^{2^{[n]}}\to\mathbb{R}^{\mathcal{P}_{n}}\) denote the linear map sending \(f\) to \(s\).
**Lemma 3.4**.: _Let \(f\colon 2^{[n]}\to\mathbb{R}\) and let \(s=Tf\). Then \(f\) is supermodular if and only if \(s_{I,J}\geq 0\) for each close pair \(\{I,J\}\)._
Proof.: The "only if" direction is clear.
For the "if" direction, let \(f\colon 2^{[n]}\to\mathbb{R}\) and supppose \(s=Tf\) satisfies \(s_{I,J}\geq 0\) for each close pair \(\{I,J\}\). We will show that
\[f(I\cap J)+f(I\cup J)-f(I)-f(J)\geq 0\]
for all \(I,J\subseteq[n]\).
Fix subsets \(I,J\subseteq[n]\) and let \(I\setminus J=\{i_{1},\ldots,i_{\ell_{I}}\}\), \(J\setminus I=\{j_{1},\ldots,j_{\ell_{J}}\}\). Let \(K(a,b)=(I\cap J)\cup\{i_{1},\ldots,i_{a},j_{1},\ldots,j_{b}\}\) for \(0\leq a\leq\ell_{I}\) and \(0\leq b\leq\ell_{J}\). Note that \(\{K(a,b-1),K(a-1,b)\}\) is a close pair for \(a,b>0\). Additionally we have that
\[0\leq s_{K(a,b-1),K(a-1,b)}=f(K(a,b))+f(K(a-1,b-1))-f(K(a,b-1))-f(K(a-1,b)).\]
Now we sum this inequality over all \(1\leq a\leq\ell_{I}\) and \(1\leq b\leq\ell_{J}\). Most terms on the RHS cancel, leaving us with
\[0\leq f(K(\ell_{I},\ell_{J}))+f(K(0,0))-f(K(\ell_{I},0))-f(K(0,\ell_{J}))=f(I \cup J)+f(I\cap J)-f(I)-f(J).\]
Thus \(f\) is supermodular.
Note that the kernel of \(T\) is the space of modular functions, which has dimension \(n+1\). Thus the image of \(T\) has dimension \(2^{n}-n-1\).
We can determine the image of \(T\) in \(\mathbb{R}^{\mathcal{P}_{n}}\) by a set of \(\binom{n}{2}2^{n-2}-2^{n}+n+1\) linear conditions on \(s\). The possible vectors \(s\) obtained from supermodular functions are just the vectors satisfying these conditions with nonnegative entries. So the relevant subset of vectors is the intersection of \(\operatorname{im}T\) with the positive orthant, which is a cone. The irreducible supermodular functions then correspond to the extreme rays of this cone.
We now characterize the linear conditions determining \(\operatorname{im}T\). Given a permutation \(\sigma=(\sigma_{1},\ldots,\sigma_{n})\in S_{n}\), we let \(I_{r}(\sigma)=\{\sigma_{1},\ldots,\sigma_{r}\}\) and \(J_{r}(\sigma)=\{\sigma_{2},\ldots,\sigma_{r+1}\}\), for each \(1\leq r\leq n-1\). Also let \(I_{n}(\sigma)=[n]\) and \(J_{0}(\sigma)=\emptyset\), so that \(I_{r}\cup J_{r}=I_{r+1}\) and \(I_{r}\cap J_{r}=J_{r-1}\). We define the _path sum_ of \(s\) along \(\sigma\) to be
\[P_{\sigma}(s)=\sum_{r=1}^{n-1}s_{I_{r},J_{r}}.\]
Here \(\sigma\) corresponds to a maximal chain in the poset of square faces of the hypercube ordered by the relation \(\{I,J\}<\{I^{\prime},J^{\prime}\}\) when one of \(I^{\prime}\) and \(J^{\prime}\) contains both of \(I\) and \(J\) and the other contains at least one of \(I\) and \(J\). Additionally, we say that the path corresponding to \(\sigma\) has _color_\(\sigma_{1}\).
**Example 3.5**.: When \(n=4\) and \(\sigma=(2,4,1,3)\), we obtain the following path on square faces. The color of the path is \(\sigma_{1}=2\), which can be seen by each square face having a pair of opposite edges in the direction \(1_{\{2\}}\).
The following theorem explains the relevance of the color of a path and uses path sums to describe \(\operatorname{im}T\).
**Theorem 3.6**.: _The following are equivalent for any \(s\in\mathbb{R}^{\mathcal{P}_{n}}\):_
1. \(s\in\operatorname{im}T\)_._
2. _There exist_ \(m_{1},\dots,m_{n}\) _such that_ \(P_{\sigma}(s)=m_{\sigma_{1}}\) _for all_ \(\sigma\in S_{n}\)_. The value of_ \(m_{i}\) _will be referred to as the_ weight _of color_ \(i\)_._
3. _For all distinct_ \(i,j,k\in[n]\) _and_ \(I\subseteq[n]\setminus\{i,j,k\}\)_,_ \[s_{I\cup\{i\},I\cup\{j\}}+s_{I\cup\{i,j\},I\cup\{j,k\}}=s_{I\cup\{i\},I\cup\{k\} }+s_{I\cup\{i,k\},I\cup\{j,k\}}.\]
Proof.: We will show that \((1)\implies(2)\implies(3)\implies(1)\).
We first show \((1)\implies(2)\). Let \(f\) be a function with \(Tf=s\). We claim that
\[m_{i}=f([n])+f(\emptyset)-f(\{i\})-f([n]\setminus\{i\})=\partial_{i}f([n] \setminus\{i\})-\partial_{i}f(\emptyset)\]
satisfies condition (2).
Consider an arbitrary \(\sigma\in S_{n}\). Note that
\[s_{I_{r},J_{r}}=\partial_{\sigma_{1}}\partial_{\sigma_{r+1}}f(I_{r}\cap J_{r} )=\partial_{\sigma_{1}}\partial_{\sigma_{r+1}}f(J_{r-1})=\partial_{\sigma_{1}} f(J_{r})-\partial_{\sigma_{1}}f(J_{r-1}).\]
Thus the sum in \(P_{\sigma}(s)\) telescopes to \(\partial_{\sigma_{1}}f(J_{n-1})-\partial_{\sigma_{1}}f(J_{0})=m_{\sigma_{1}}\), as desired.
Next we show that \((2)\implies(3)\). Fix distinct \(i,j,k\in[n]\) and \(I\subseteq[n]\setminus\{i,j,k\}\). Let \(t=|I|\). Choose a \(\sigma\in S_{n}\) such that \(\sigma_{1}=i\), \(J_{t}(\sigma)=I\), \(\sigma_{t+2}=j\), and \(\sigma_{t+3}=k\). Let \(\sigma^{\prime}\in S_{n}\) be such that \(\sigma^{\prime}_{r}=\sigma_{r}\) for \(r\neq t+2,t+3\) and \(\sigma^{\prime}_{t+2}=k\), \(\sigma^{\prime}_{t+3}=j.\) Then we have that \(I_{r}(\sigma)=I_{r}(\sigma^{\prime})\) except when \(r=t+2\), and \(J_{r}(\sigma)=J_{r}(\sigma^{\prime})\) except when \(r=t+1\). Since \(\sigma_{1}=\sigma^{\prime}_{1}\), we have \(P_{\sigma}(s)=P_{\sigma^{\prime}}(s)\). Cancelling the common terms from the sum gives
\[s_{I_{t+1}(\sigma),J_{t+1}(\sigma)}+s_{I_{t+2}(\sigma),J_{t+2}(\sigma)}=s_{I_ {t+1}(\sigma^{\prime}),J_{t+1}(\sigma^{\prime})}+s_{I_{t+2}(\sigma^{\prime}),J _{t+2}(\sigma^{\prime})}.\]
After substituting for \(I_{r}\) and \(J_{r}\) we obtain
\[s_{I\cup\{i\},I\cup\{j\}}+s_{I\cup\{i,j\},I\cup\{j,k\}}=s_{I\cup\{i\},I\cup\{k\}} +s_{I\cup\{i,k\},I\cup\{j,k\}},\]
as desired.
Finally we show \((3)\implies(1)\). Let \(s\) be a function satisfying \((3)\). We will construct an \(f\) such that \(Tf=s\). To construct \(f\) we define \(f(J)\) inductively based on \(|J|\). If \(|J|<2\) we let \(f(J)=0\).
Now suppose that we have defined \(f(J)\) for all \(J\) with \(|J|<t\), for some \(t\in[2,n]\). Fix a \(J\) with \(|J|=t\). Choose \(i,j\in J\) arbitrarily and let \(I=J\setminus\{i,j\}\). We define
\[f(J)=s_{I\cup\{i\},I\cup\{j\}}+f(I\cup\{i\})+f(I\cup\{j\})-f(I).\]
This inductive procedure defines some function \(f\colon 2^{[n]}\to\mathbb{R}\). We claim that the choices of \(i,j\in J\) do not affect the function \(f\) defined by the procedure.
We prove that \(f(J)\) is uniquely determined by induction on \(|J|\). This is clear for \(|J|\leq 2\). Now suppose we know that \(f(J)\) is uniquely determined for all \(J\) with \(|J|<t\), for some \(t\in[3,n]\). Fix a \(J\) with \(|J|=t\). It suffices to show that for any distinct \(i,j,k\in J\) we obtain the same value for \(f(J)\) by recursing with \(\{i,j\}\) or \(\{i,k\}\), because applying this fact twice connects any two pairs \(\{i,j\}\) and \(\{i^{\prime},j^{\prime}\}\).
Applying condition \((3)\) with \(i,j,k\) and \(I=J\setminus\{i,j,k\}\) we obtain that
\[s_{I\cup\{i\},I\cup\{j\}}+s_{I\cup\{i,j\},I\cup\{j,k\}}=s_{I\cup\{i\},I\cup \{k\}}+s_{I\cup\{i,k\},I\cup\{j,k\}}.\]
By the inductive hypothesis we have that
\[s_{I\cup\{i\},I\cup\{j\}}=f(I\cup\{i,j\})+f(I)-f(I\cup\{i\})-f(I\cup\{j\}).\]
Similarly, we have that
\[s_{I\cup\{i\},I\cup\{k\}}=f(I\cup\{i,k\})+f(I)-f(I\cup\{i\})-f(I\cup\{k\}).\]
Substituting these values into condition \((3)\) gives
\[s_{I\cup\{i,j\},I\cup\{j,k\}}+f(I\cup\{i,j\})-f(I\cup\{j\})=s_{I\cup\{i,k\},I \cup\{j,k\}}+f(I\cup\{i,k\})-f(I\cup\{k\}).\]
Adding \(f(I\cup\{j,k\})\) to both sides gives that the two potential values for \(f(J)\) in question are in fact equal.
By the above theorem, we know that we can describe \(\operatorname{im}T\) using linear conditions of the form \(s\cdot v=0\), where \(v\) has all entries \(0\) except for \(2\) entries of \(+1\) and \(2\) entries of \(-1\). We will use this fact to understand the complexity of irreducible supermodular functions.
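Condition (2) is also straightforward to test numerically. The sketch below computes \(s=Tf\) on the square faces and checks that each path sum depends only on its color; any supermodular \(f\) can be fed in, and for the symmetric permutohedron example all colors happen to share the same weight.

```python
from itertools import permutations

def supermodularity_values(f):
    """s_{I,J} over all close pairs {I, J}, keyed by frozenset({I, J})."""
    s = {}
    for I in f:
        for J in f:
            if len(I) == len(J) == len(I & J) + 1:
                s[frozenset([I, J])] = f[I & J] + f[I | J] - f[I] - f[J]
    return s

def path_sum(s, sigma):
    """P_sigma(s): the sum of s_{I_r, J_r} along the maximal chain of sigma."""
    return sum(s[frozenset([frozenset(sigma[:r]), frozenset(sigma[1:r + 1])])]
               for r in range(1, len(sigma)))

s = supermodularity_values(permutohedron_z([0, 1, 3, 7]))
weights = {}
for sigma in permutations(range(4)):
    weights.setdefault(sigma[0], set()).add(path_sum(s, sigma))
print(all(len(w) == 1 for w in weights.values()))  # one weight per color: True
```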
## 4. Irreducibility of Balanced Multisets
For each square face of the hypercube, we have an associated supermodularity value \(s_{I,J}\). Additionally, we have some subset of our \(n!\) paths passing through this face. Our only condition on the supermodularity values is that their sum along each path of a given "color" is fixed.
So, choosing a supermodular function is equivalent to choosing a weight for each square face (and thus for the corresponding set of paths) such that each path of a given "color" has a fixed total weight. This is equivalent to choosing a collection of subsets of a set of size \(n!\) subject to the sum of the collection having a nice form, which is the same as the simplified irreducibility problem defined below, with two modifications. First, we are only allowed to use certain subsets in our collection (i.e., those corresponding to a square face). Second, the sum of the collection doesn't have to be a perfect multiple of the set of all \(n!\) paths; it only has to count paths of each color the same number of times.
**Definition 4.1**.: A multiset \(\mathcal{M}\) of subsets of \([N]\) is _balanced_ with _complexity_\(m\) if each \(i\in[N]\) appears in exactly \(m\) sets in \(\mathcal{M}\). We say that a balanced multiset is \(\mathbb{Z}\)_-irreducible_ if no proper nonempty subset is balanced.
**Example 4.2**.: When \(N=4\), the multiset \(\mathcal{M}=\{\{1\},\{1\},\{2,3\},\{2,4\},\{3,4\}\}\) is balanced with complexity \(2\) and is \(\mathbb{Z}\)-irreducible.
Given a multiset \(\mathcal{M}\) of subsets of \([N]\), we can construct a vector \(v=v(\mathcal{M})\in\mathbb{R}^{2^{[N]}}\) such that \(v_{I}\) is the number of times \(I\) appears in \(\mathcal{M}\). Then we have that \(\mathcal{M}\) is balanced (of complexity \(m\)) if and only if \(B_{N}v(\mathcal{M})=m1_{[N]}\), where \(B_{N}\) is the \(N\times 2^{N}\) matrix with columns \(1_{I}\) for each \(I\subseteq[N]\). This allows us to extend the definition of balanced multisets to all vectors \(v\in\mathbb{R}^{2^{[N]}}\) with nonnegative entries.
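A direct transcription of this setup, as a sketch with NumPy (0-indexed ground set, reusing `subsets` from the earlier example):

```python
import numpy as np

def indicator_columns(N):
    """All subsets of {0, ..., N-1} paired with the matrix B_N of columns 1_I."""
    cols = [frozenset(I) for I in subsets(N)]
    B = np.array([[int(i in I) for I in cols] for i in range(N)])
    return cols, B

def balance_check(multiset, N):
    """Return (is_balanced, complexity) for a multiset of subsets of [N]."""
    cols, B = indicator_columns(N)
    v = np.array([sum(S == I for S in multiset) for I in cols])
    counts = B @ v
    return bool(np.all(counts == counts[0])), int(counts[0])

# Example 4.2, shifted to 0-indexing
M = [frozenset({0}), frozenset({0}), frozenset({1, 2}),
     frozenset({1, 3}), frozenset({2, 3})]
print(balance_check(M, 4))  # (True, 2)
```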
**Definition 4.3**.: A vector \(v\in\mathbb{R}^{2^{[N]}}_{\geq 0}\) is _balanced_ if \(B_{N}v=m1_{[N]}\) for some \(m\in\mathbb{R}\). A balanced vector \(v\) is _irreducible_ if whenever \(v=u_{1}+u_{2}\) for balanced \(u_{1}\) and \(u_{2}\), both \(u_{1}\) and \(u_{2}\) are real multiples of \(v\). Equivalently, \(v\) is irreducible if it lies on an extreme ray of the cone of all balanced vectors in \(\mathbb{R}^{2^{[N]}}\).
Given a nonzero balanced vector \(v\in\mathbb{R}^{2^{[N]}}\) we can construct a balanced multiset \(\mathcal{M}\) by scaling \(v\) to have integer entries not sharing a common factor. We define the complexity of a balanced \(v\) to be the complexity of the multiset \(\mathcal{M}\) obtained in this way. We also say that a multiset \(\mathcal{M}\) is _irreducible_ if it is obtained from an irreducible \(v\) in this way.
Note that if \(\mathcal{M}\) is irreducible, then it is also \(\mathbb{Z}\)-irreducible. However, the reverse does not hold. For example, the multiset \(\mathcal{M}=\{1234,4,12,135,235,45\}\) is \(\mathbb{Z}\)-irreducible but not irreducible by Lemma 4.5.
To analyze irreducibility for balanced multisets and vectors, we will use some results from random matrix theory. Let \(M_{N}\) be the \(N\times N\) matrix with uniform and independent \(\pm 1\) entries. In [12], Tikhomirov showed that \(M_{N}\) is singular with probability \((1/2+o(1))^{N}\). We will only need that \(M_{N}\) is invertible with probability \(1-o(1/N)\).
By Hadamard's inequality, \(|\det M_{N}|\leq N^{N/2}\). Equality is attained when \(M_{N}\) is a Hadamard matrix. Additionally, in [11], Tao and Vu showed that \(|\det M_{N}|\geq(cN)^{N/2}\) with probability \(1-o(1)\) for fixed \(c<1/e\).
For our applications, we will need the following lemma, which follows by applying row operations to \(M_{N}\).
**Lemma 4.4**.: _Let \(A\) be a uniformly random \(N\times N\) matrix with \(\{0,1\}\) entries and let \(A_{i}\) be \(A\) with the \(i\)-th column replaced with all \(1\)'s. Then the distribution of \(|\det A|\) is \(2^{-N}\) times the distribution of \(|\det M_{N+1}|\) and the distribution of \(|\det A_{i}|\) is \(2^{-N+1}\) times the distribution of \(|\det M_{N}|\)._
Proof.: We first show the second claim. From \(A_{i}\), replace each column \(j\neq i\) with column \(i\) minus twice column \(j\) to obtain a matrix \(A^{\prime}_{i}\). We have that \(\det A^{\prime}_{i}=(-2)^{N-1}\det A_{i}\), and \(A^{\prime}_{i}\) has \(\pm 1\) entries. Next independently negate each row of \(A^{\prime}_{i}\) with probability \(1/2\) to obtain \(A^{\prime\prime}_{i}\). Then \(|\det A^{\prime\prime}_{i}|=|\det A^{\prime}_{i}|\) and \(A^{\prime\prime}_{i}\) is distributed identically to \(M_{N}\). So the distribution of \(|\det A_{i}|\) is \(2^{-N+1}\) times the distribution of \(|\det M_{N}|\).
Now we show the first claim. Consider the \((N+1)\times(N+1)\) matrix \(A^{\prime}\) with \(A\) in the bottom right \(N\times N\) block, all \(1\)'s in the first column, and all \(0\)'s in the rest of the top row. Then \(\det A^{\prime}=\det A\). Now, we construct a new matrix \(A^{\prime\prime}\), obtained from \(A^{\prime}\) as follows. Independently, for each \(i>1\), with probability \(1/2\), either keep column \(i\) the same or replace column \(i\) with column \(1\) minus column \(i\). Then \(|\det A^{\prime\prime}|=|\det A^{\prime}|\) and \(A^{\prime\prime}\) is distributed the same as \(A_{1}\) in the second claim with \(N\) replaced by \(N+1\). So the distribution of \(|\det A|\) is \(2^{-N}\) times the distribution of \(|\det M_{N+1}|\).
Now we analyze irreducible balanced vectors. First we prove the following lemma.
**Lemma 4.5**.: _Let \(v\) be an irreducible balanced vector in \(\mathbb{R}^{2^{[N]}}\). Then the set of vectors \(\{1_{I}\colon v_{I}>0\}\) is linearly independent._
Proof.: Suppose this is not the case. Let \(\{I_{1},\ldots,I_{\ell}\}=\{I\colon v_{I}>0\}\). Then there exist reals \(\alpha_{i}\), not all \(0\), such that \(\sum_{i=1}^{\ell}\alpha_{i}1_{I_{i}}=0\).
Now, let
\[t=\min\left\{\frac{v_{I_{i}}}{|\alpha_{i}|}\colon 1\leq i\leq\ell,\alpha_{i} \neq 0\right\}.\]
Define \(v^{+}\) by \(v^{+}_{I_{i}}=v_{I_{i}}+t\alpha_{i}\) and \(v^{+}_{I}=0\) if \(v_{I}=0\). Similarly define \(v^{-}\) by \(v^{-}_{I_{i}}=v_{I_{i}}-t\alpha_{i}\) and \(v^{-}_{I}=0\) if \(v_{I}=0\).
By definition of \(t\), \(v^{+}\) and \(v^{-}\) have nonnegative entries and \(v^{+}+v^{-}=2v\). Additionally, there exists \(i\) such that \(v_{I_{i}}^{+}=0\) or \(v_{I_{i}}^{-}=0\) but \(v_{I_{i}}\neq 0\). Thus \(v^{+}\) and \(v^{-}\) cannot both be real multiples of \(v\). So \(v\) is reducible.
We are now ready to bound the complexity of irreducible balanced vectors. The key idea is to consider the support of such a vector and think about solving for the values of the nonzero entries.
**Theorem 4.6**.: _Let \(v\) be an irreducible balanced vector in \(\mathbb{R}^{2^{[N]}}\) with complexity \(m\). Then_
\[m\leq\max_{A\in\{0,1\}^{N\times N}}\det A\leq(N+1)^{(N+1)/2}/2^{N}.\]
Proof.: Without loss of generality scale \(v\) so that it has relatively prime integer entries. In particular we have \(B_{N}v=m1_{[N]}\).
Let \(\{I_{1},\ldots,I_{\ell}\}=\{I\colon v_{I}>0\}\). Note that \(\ell\leq N\) by Lemma 4.5. If \(\ell<N\), choose \(N-\ell\) more sets \(J_{\ell+1},\ldots,J_{N}\) such that the set of vectors \(\{1_{I_{1}},\ldots,1_{I_{\ell}},1_{J_{\ell+1}},\ldots,1_{J_{N}}\}\) is linearly independent.
Let \(x\) be the \(N\times 1\) column vector with \(i\)-th entry \(v_{I_{i}}\) for \(i\leq\ell\) and all other entries \(0\). Let \(A\) be the \(N\times N\) matrix with \(i\)-th column \(1_{I_{i}}\) for \(i\leq\ell\) and \(i\)-th column \(1_{J_{i}}\) for \(i>\ell\). Then we have that
\[Ax=m1_{[N]}.\]
Now, consider solving for \(x\) by treating this as a linear system in the entries of \(x\). By Cramer's rule, we have that \(v_{I_{i}}=x_{i}=m\det(A_{i})/\det(A)\), where \(A_{i}\) is the matrix \(A\) with the \(i\)-th column replacing by \(1_{[N]}\). Since the values \(x_{i}\) are positive integers that are collectively relatively prime, we must have that
\[m=\frac{\det(A)}{\gcd(\det(A_{1}),\ldots,\det(A_{\ell}),\det(A))}\leq\det(A).\]
Now by Hadamard's inequality and Lemma 4.4, we have that \(m\leq(N+1)^{(N+1)/2}/2^{N}\), as desired.
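The proof is effectively an algorithm for recovering an irreducible balanced vector from its support. Below is a sketch using SymPy for exact rational arithmetic, under the assumptions of the proof (the support gives an invertible matrix and a positive solution); the function name is ours.

```python
from math import gcd
from functools import reduce
from sympy import Matrix, ones, ilcm

def vector_from_support(sets, N):
    """Solve A x = 1_[N], where the columns of A are the indicators of `sets`,
    then rescale x to coprime integers as in the proof of Theorem 4.6.
    Assumes len(sets) == N, A invertible, and x > 0."""
    A = Matrix(N, N, lambda i, j: int(i in sets[j]))
    x = A.solve(ones(N, 1))                  # exact rationals (Cramer's rule)
    L = ilcm(*[xi.q for xi in x])            # clear the denominators
    v = [int(xi * L) for xi in x]
    g = reduce(gcd, v)
    v = [vi // g for vi in v]
    m = sum(vi for vi, S in zip(v, sets) if 0 in S)  # multiplicity of element 0
    return v, m

# the support of Example 4.2, 0-indexed: {0}, {1,2}, {1,3}, {2,3}
print(vector_from_support([{0}, {1, 2}, {1, 3}, {2, 3}], 4))  # ([2, 1, 1, 1], 2)
```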
We can extend this result to \(\mathbb{Z}\)-irreducibility using Caratheodory's Theorem, losing a negligible factor of \(2^{N}\).
**Theorem 4.7**.: _Let \(\mathcal{M}\) be a \(\mathbb{Z}\)-irreducible multiset with complexity \(m\). Then \(m\leq(N+1)^{(N+1)/2}\)._
Proof.: Let \(v=v(\mathcal{M})\). By Caratheodory's Theorem, we can write \(v\) as a positive linear combination of irreducible balanced vectors \(v_{1},\ldots,v_{r}\) with \(r\leq 2^{N}\). Without loss of generality, each \(v_{j}\) has relatively prime integer entries.
Now, let \(v=\sum_{j=1}^{r}\lambda_{j}v_{j}\) for some reals \(\lambda_{j}\geq 0\). Note that \(\lambda_{j}<1\), because otherwise the decomposition \(v=v_{j}+(v-v_{j})\) would show that \(\mathcal{M}\) is \(\mathbb{Z}\)-reducible.
Now we have that
\[m(\mathcal{M})1_{[N]}=m(v)1_{[N]}=B_{N}v=B_{N}\sum_{j=1}^{r}\lambda_{j}v_{j}=\sum _{j=1}^{r}\lambda_{j}B_{N}v_{j}=\sum_{j=1}^{r}\lambda_{j}m(v_{j})1_{[N]}.\]
So, by Theorem 4.6 we have that
\[m(v)=\sum_{j=1}^{r}\lambda_{j}m(v_{j})<r(N+1)^{(N+1)/2}/2^{N}\leq(N+1)^{(N+1)/2}.\]
To construct a lower bound on the largest possible complexity \(m\), it suffices to construct a matrix \(A\) with large determinant such that the gcd factor is small. It is unlikely that all matrices \(A\) with large determinant also give a large gcd factor, but we have not established this yet. We expect that a lower bound is possible losing at most a factor of \(c^{N}\) for some \(c\in\mathbb{R}\).
**Theorem 4.8**.: _The number of distinct irreducible balanced vectors is \((1-o(1))2^{N^{2}}/N!\)._
Proof.: Recall that each irreducible balanced vector is supported on at most \(N\) distinct sets. For each choice of \(N\) distinct sets, we can solve for the unique irreducible balanced vector supported on a subset of those sets. So we have an upper bound of
\[\binom{2^{N}}{N}=(1-o(1))\frac{2^{N^{2}}}{N!}.\]
Now, consider sampling a matrix \(A\) from all \(N\times N\) matrices with \(\{0,1\}\) entries, uniformly at random. Then construct matrices \(A_{i}\) for \(1\leq i\leq N\) by replacing the \(i\)-th column of \(A\) with \(\vec{1}_{N}\).
Since \(M_{N}\) and \(M_{N+1}\) are singular with probability \(o(1/N)\), by Lemma 4.4 we have that \(A,A_{1},\ldots,A_{N}\) are each singular with probability \(o(1/N)\). So by a union bound we have that all of \(A,A_{1},\ldots,A_{N}\) are invertible with probability \(1-o(1)\). Now we focus on the \((1-o(1))2^{N^{2}}\) matrices \(A\) for which this holds.
When we solve \(Ax=\vec{1}_{N}\), all entries of \(x\) will be nonzero, since \(x_{i}=\det(A_{i})/\det(A)\neq 0\) by Cramer's rule. This gives us an irreducible balanced vector with exactly \(N\) distinct sets. Over all matrices \(A\), we will obtain each such vector exactly \(N!\) times, giving a lower bound of \((1-o(1))\frac{2^{N^{2}}}{N!}\).
This also shows that almost all irreducible balanced vectors are supported on exactly \(N\) distinct sets. Also note that we can obtain a smaller error term by using the full strength of Tikhomirov's result in [12].
Obtaining an upper bound for the number of \(\mathbb{Z}\)-irreducible multisets is more difficult. We can obtain a weak bound by counting all possible multisets with \(m\leq(N+1)^{(N+1)/2}\) or via Caratheodory's Theorem, but this is unlikely to be tight.
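The random-matrix facts used in this section are easy to probe empirically. An illustrative Monte Carlo sketch (not a proof, and only at one small fixed \(N\)) checking that \(A,A_{1},\ldots,A_{N}\) are simultaneously invertible for the vast majority of samples:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 12, 2000
good = 0
for _ in range(trials):
    A = rng.integers(0, 2, (N, N))
    mats = [A] + [A.copy() for _ in range(N)]
    for i in range(N):
        mats[i + 1][:, i] = 1          # A_i: i-th column replaced by all 1's
    # determinants are integers, so |det| >= 1 exactly when nonsingular
    good += all(abs(np.linalg.det(M)) > 0.5 for M in mats)
print(good / trials)                   # close to 1, as the proof requires
```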
## 5. Upper Bounds for Irreducible Supermodular Functions
In this section we analyze irreducible supermodular functions. We consider the vector of supermodularity values \(s\in\mathbb{R}^{\mathcal{P}_{n}}\), indexed by square faces of the hypercube. We have that \(s\) lies in a fixed subspace of dimension \(2^{n}-n-1\) given the restrictions from Section 3. For \(s\) to be irreducible, it must lie on an extreme ray of the cone \(s\geq 0\) in this subspace. In particular, any irreducible \(s\) must satisfy \(2^{n}-n-2\) linearly independent conditions of the form \(s_{I,J}=0\). These conditions allow us to solve for \(s\) up to a scaling factor. So, we obtain the following theorem.
**Theorem 5.1**.: _There are at most \(\binom{n^{2}2^{n}}{2^{n}}\) irreducible supermodular functions up to equivalence._
Proof.: For each irreducible \(s\), take \(2^{n}-n-2\) linearly independent conditions of the form \(s_{I,J}=0\) that determine \(s\). Thus the number of possible irreducible \(s\) is at most
\[\binom{\binom{n}{2}2^{n-2}}{2^{n}-n-2}\leq\binom{n^{2}2^{n}}{2^{n}}\]
This bound is clearly not tight. However, as we see in Section 6 the true growth rate is in fact exponential in \(2^{n}\).
As in the previous section, we can also obtain a bound on complexity. In this case, we define the _complexity_ of an irreducible supermodular \(f\) as the maximum weight of a color of \(s=Tf\) (defined in Theorem 3.6) after scaling \(s\) to have relatively prime integer entries.
**Theorem 5.2**.: _Let \(f\) be an irreducible supermodular function. Then the complexity of \(f\) is at most \(2^{n^{2}2^{n}}\)._
Proof.: As in the proof of Theorem 4.6, we consider solving for \(s\). Pick \(2^{n}-n-2\) pairs \(I,J\) with \(s_{I,J}=0\) which allow us to determine \(s\). Now consider the square matrix \(A\) in which the top \(\binom{n}{2}2^{n-2}-(2^{n}-n-1)\) rows determine the subspace \(\operatorname{im}T\), the next \(2^{n}-n-2\) rows enforce \(s_{I,J}=0\), and the last row has all \(1\)'s to account for scaling. For an irreducible \(s\) with integer entries, we have that \(As\) is a vector with all \(0\) entries except the last entry.
As in the proof of Theorem 4.6, it suffices to bound the determinant of \(A\). The first \(\binom{n}{2}2^{n-2}-(2^{n}-n-1)\) rows each have norm \(2\). The next \(2^{n}-n-2\) rows have norm \(1\). The last row has norm at most \(n2^{n/2}\). So by Hadamard's inequality we have that the determinant of \(A\) is at most
\[2^{\binom{n}{2}2^{n-2}-(2^{n}-n-1)}n2^{n/2}\leq 2^{n^{2}2^{n}}.\]
## 6. Lower bounds for irreducible supermodular functions
In this section we prove a double-exponential lower bound on the number of irreducible supermodular functions. We do this by relating a special class of supermodular functions to matroids.
**Definition 6.1**.: We say that a supermodular function \(f\) is _simple_ if the nondecreasing function \(\partial_{i}f\) is irreducible or constant for each \(i\in[n]\).
**Lemma 6.2**.: _Let \(f\) be a simple supermodular function. If \(f\) is not irreducible, then there exists a partition \([n]=S_{1}\cup S_{2}\) and simple functions \(g_{1}\colon 2^{S_{1}}\to\mathbb{R}\) and \(g_{2}\colon 2^{S_{2}}\to\mathbb{R}\) such that \(f(I)=g_{1}(I\cap S_{1})+g_{2}(I\cap S_{2})\) for all \(I\)._
Proof.: Suppose that \(f=g_{1}+g_{2}\). WLOG each of \(f,g_{1},g_{2}\) are standard. For each \(i\), \(\partial_{i}f=\partial_{i}g_{1}+\partial_{i}g_{2}\). Since \(\partial_{i}f\) is irreducible as a nondecreasing function, one of \(\partial_{i}g_{1}\) and \(\partial_{i}g_{2}\) is \(0\). Then we can take \(S_{1}=\{i\colon\partial_{i}g_{2}=0\}\) and \(S_{2}=[n]\setminus S_{1}\).
Recall that a _matroid_\(M\) consists of a ground set of elements \(E\) and a collection of bases \(\mathcal{B}\subseteq 2^{E}\) satisfying the following exchange axiom: for each pair of bases \(A,B\in\mathcal{B}\) and \(a\in A\setminus B\), there exists \(b\in B\setminus A\) such that \(A\cup\{b\}\setminus\{a\}\in\mathcal{B}\).
We will work with matroids \(M\) on the ground set \(E=[n]\). Recall that each matroid has a rank \(r\), which is the common size of its bases. Additionally, we can define a rank function on subsets \(I\) of \(E\) by \(\operatorname{rank}(I)=\max_{B\in\mathcal{B}}|I\cap B|\). We also have the nullity function given by \(\operatorname{null}(I)=|I|-\operatorname{rank}(I)\). We say that \(I\) is an independent set if \(\operatorname{null}(I)=0\).
Recall that a loop is an element of \(E\) that is in no base, and a coloop is an element that is in every base. We also say that a matroid \(M\) is _reducible_ if we can partition the ground set \(E\) into two nonempty sets \(E_{1}\) and \(E_{2}\) and construct matroids \(M_{i}\) on \(E_{i}\) with bases \(\mathcal{B}_{i}\) such that
\[\mathcal{B}=\{B_{1}\cup B_{2}\colon B_{1}\in\mathcal{B}_{1},B_{2}\in\mathcal{B }_{2}\}.\]
If there does not exist such a decomposition of \(M\), we say that \(M\) is _irreducible_. Note that if a matroid on a ground set of at least \(2\) elements has a loop or a coloop, then it is reducible.
Next, we define a polytope associated with a matroid. Given a matroid \(M\) on \([n]\), its matroid polytope is the convex hull of the indicator vectors for its bases. We use the following result of Gelfand, Goresky, MacPherson, and Serganov [3].
**Theorem 6.3** (Gelfand, Goresky, MacPherson, and Serganov).: _Let \(P\) be a polytope in \(\mathbb{R}^{n}\) with vertices in \(\{0,1\}^{n}\) such that each edge is a translate of \(e_{i}-e_{j}\) for some \(i,j\in[n]\). Then \(P\) is the matroid polytope of some matroid \(M\) on \([n]\)._
**Theorem 6.4**.: _There exists a bijection between equivalence classes of simple supermodular functions and loopless matroids on \([n]\). Furthermore, irreducible functions correspond to irreducible matroids._
Proof.: Let \(M\) be a matroid on \([n]\). Then we can construct a supermodular function \(f\) by defining \(f(I)=\operatorname{null}(I)\). Since \(M\) is loopless, we have \(\operatorname{null}(I)=0\) for \(|I|\leq 1\). So, \(f\) is the standard representative for its equivalence class. Additionally \(f\) is simple since for any \(i\in[n]\), \(\partial_{i}f\) takes values in \(\{0,1\}\).
Now suppose we have a simple function \(f\) that is the standard representative for its equivalence class. We will construct a matroid \(M\) on \([n]\) with \(f(I)=\operatorname{null}(I)\). Consider the generalized permutohedron \(P\) corresponding to \(f\). Let \(x\) be a vertex of \(P\). Since \(f\) takes integer values, \(x\) must have integer entries (otherwise we could perturb \(x\) along a line while keeping it inside \(P\)). We have \(x_{i}\geq f(\{i\})=0\), and \(x_{i}\leq f([n])-f([n]\setminus\{i\})\in\{0,1\}\). So, \(x_{i}\in\{0,1\}\).
Now, consider an edge of \(P\). It is parallel to \(e_{i}-e_{j}\) for some \(i,j\in[n]\). Since the vertices of \(P\) are in \(\{0,1\}^{n}\), the edge must be a translate of \(e_{i}-e_{j}\). Thus by Theorem 6.3 we have that \(P\) is the matroid polytope of a matroid \(M\). This gives that \(M\) maps to \(f\) by our map above.
By Lemma 6.2, a matroid \(M\) is irreducible exactly when the corresponding supermodular function \(f\) is irreducible.
Let \(m_{n}\) be the number of matroids on \([n]\). The following bounds for the asymptotics of \(m_{n}\) are known (all logarithms are in base 2).
**Theorem 6.5** (Bansal-Pendavingh-van der Pol [1]).: \[\log\log m_{n}\leq n-\frac{3}{2}\log n+\frac{1}{2}\log\frac{2}{\pi}+1+o(1).\]
**Theorem 6.6** (Knuth [4]).: \[\log\log m_{n}\geq n-\frac{3}{2}\log n+\frac{1}{2}\log\frac{2}{\pi}-o(1).\]
**Theorem 6.7** (Mayhew-Newman-Welsh-Whittle [6]).: _The number of matroids on \([n]\) with a loop or a coloop is \(o(m_{n})\)._
Using these results, we easily obtain the following.
**Theorem 6.8**.: _There are at least \(2^{\left(\sqrt{2/\pi}-o(1)\right)2^{n}/n^{3/2}}\) irreducible supermodular functions._
Proof.: It suffices to lower bound the number of irreducible matroids. The number of reducible matroids on \([n]\) is at most the number of matroids on \([n]\) with a loop or coloop plus
\[\sum_{t=2}^{\lfloor n/2\rfloor}\binom{n}{t}m_{t}m_{n-t}\leq 2^{n}m_{n-2}m_{\lfloor n/2\rfloor}=o(m_{n}).\]
Here we used Theorem 6.5 and Theorem 6.6. So, with Theorem 6.7 we have that the number of reducible matroids is \(o(m_{n})\). Thus the number of irreducible matroids is at least
\[(1-o(1))m_{n}\geq 2^{\left(\sqrt{2/\pi}-o(1)\right)2^{n}/n^{3/2}}.\]
## 7. Supermodularities on two layers
In this section, we analyze irreducible supermodular functions which are nearly modular, for some notion of "nearly". Let \(\mathcal{P}_{n,t}\) denote the set of close pairs \(\{I,J\}\) with \(|I|=|J|=t\), for \(t\in[n-1]\).
Suppose that \(f\) is a supermodular function and let \(s=Tf\) as in Section 3. Suppose that \(s\) is supported only on \(\mathcal{P}_{n,t}\) for some fixed \(t\in[n-1]\). Then by iterating condition (3) of Theorem 3.6, we have that \(s_{I,J}\) is constant over \(\mathcal{P}_{n,t}\). In particular, there is only one equivalence class of irreducible supermodular functions with supermodularities supported on a single layer. As a generalized permutohedron, this corresponds to the hypersimplex \(\Delta_{n,n-t}\). Let \(\alpha_{n,t}(I)=\max(0,|I|-t)\) be the corresponding standard supermodular function.
A natural next step is to consider the case when \(s\) is supported only on \(\mathcal{P}_{n,t}\cup\mathcal{P}_{n,t+1}\) for some fixed \(t\in[n-2]\). Here we allow the supermodularities to lie on two layers of the hypercube instead of just one. Let \(\mathcal{K}_{n,t}\) denote the set of standard irreducible supermodular functions of this form.
The hypersimplices \(\Delta_{n,n-t}\) and \(\Delta_{n,n-t-1}\) correspond to the elements \(\alpha_{n,t},\alpha_{n,t+1}\in\mathcal{K}_{n,t}\) as seen above. Additionally, we can lift the hypersimplex \(\Delta_{n-1,n-t+1}\) in \(n\) different ways to obtain a supermodular function in \(\mathcal{K}_{n,t}\). Specifically, for each \(k\in[n]\), the corresponding function is
\[\beta_{n,t,k}(I)=\max(0,|I\cap([n]\setminus\{k\})|-t).\]
We will use the \(n+2\) functions \(\mathcal{B}_{n,t}=\{\alpha_{n,t},\alpha_{n,t+1},\beta_{n,t,1},\ldots,\beta_{n, t,n}\}\) to describe \(\mathcal{K}_{n,t}\).
**Theorem 7.1**.: _The elements of \(\mathcal{K}_{n,t}\) other than \(\alpha_{n,t}\) and \(\alpha_{n,t+1}\) are in bijection with subsets \(S\subseteq[n]\) with \(|S|\in\{1,n-1\}\) or_
\[\min(t+1,n-t)<|S|<\max(t+1,n-t).\]
_The bijection is given by the map_
\[S\mapsto\sum_{k\in S}\beta_{n,t,k}-\max(0,|S|-(t+1))\alpha_{n,t}-\max(0,|S|-(n -t))\alpha_{n,t+1}.\]
Proof.: First, notice the identity
\[\sum_{k\in[n]}\beta_{n,t,k}=(n-t-1)\alpha_{n,t}+t\alpha_{n,t+1}. \tag{1}\]
Additionally, this is the only linear dependence in \(\mathcal{B}_{n,t}\). Thus \(\dim\operatorname{span}\mathcal{K}_{n,t}\geq n+1\).
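As a numerical sanity check of identity (1) (illustrative only, not part of the argument), one can tabulate both sides over all subsets for small \(n\) and \(t\):

```python
from itertools import combinations

def subsets(n):
    """All subsets of {0, ..., n-1} as frozensets."""
    return [frozenset(c) for k in range(n + 1) for c in combinations(range(n), k)]

def alpha(t, I):
    """alpha_{n,t}(I) = max(0, |I| - t)."""
    return max(0, len(I) - t)

def beta(t, k, I):
    """beta_{n,t,k}(I) = max(0, |I \\ {k}| - t)."""
    return max(0, len(I - {k}) - t)

n, t = 5, 2
for I in subsets(n):
    lhs = sum(beta(t, k, I) for k in range(n))
    rhs = (n - t - 1) * alpha(t, I) + t * alpha(t + 1, I)
    assert lhs == rhs
print(f"Identity (1) verified for n={n}, t={t} on all {2**n} subsets.")
```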
In fact, we claim that this is an equality. By Theorem 3.6, for any \(f\in\mathcal{K}_{n,t}\), we can solve for \(s=Tf\) from the color path sums \(m_{1},\ldots,m_{n}\) and any fixed supermodularity value on \(\mathcal{P}_{n,t}\), since
each path sum has only two nontrivial terms. Since \(f\) is standard, we can solve for \(f\) from \(s\). Thus \(\dim\operatorname{span}\mathcal{K}_{n,t}=n+1\).
Now, let \(f\in\mathcal{K}_{n,t}\) with \(f\neq\alpha_{n,t},\alpha_{n,t+1}\) and let \(s=Tf\). By Equation (1), we can write \(f\) as a linear combination of \(\mathcal{B}_{n,t}\) such that each \(\beta_{n,t,k}\) has a nonnegative coefficient \(x_{k}\) and at least one of these coefficients is \(0\). Now, note that there must exist a close pair \(\{I,J\}\in\mathcal{P}_{n,t}\) with \(s_{I,J}=0\). Otherwise, we could subtract a multiple of \(\alpha_{n,t}\) from \(f\) while leaving a supermodular function. Thus the coefficient of \(\alpha_{n,t}\) is determined by the coefficients \(x_{k}\); it is minus the minimum supermodularity value of \(\sum_{k\in[n]}x_{k}\beta_{n,t,k}\) on \(\mathcal{P}_{n,t}\). Note that \(\beta_{n,t,k}\) is supermodular on a close pair \(\{I,J\}\in\mathcal{P}_{n,t}\) if and only if \(k\not\in I\cup J\). So we obtain that the coefficient of \(\alpha_{n,t}\) is
\[y_{1}=-\min_{\{I,J\}\in\mathcal{P}_{n,t}}\sum_{k\not\in I\cup J}x_{k}=-\min_{ |K|=t+1}\sum_{k\not\in K}x_{k}.\]
Similarly, we obtain that the coefficient of \(\alpha_{n,t+1}\) is
\[y_{2}=-\min_{\{I,J\}\in\mathcal{P}_{n,t+1}}\sum_{k\in I\cap J}x_{k}=-\min_{|K |=t}\sum_{k\in K}x_{k}.\]
Next, we show that the coefficients \(x_{k}\) only have one distinct nonzero value. Consider \(y_{1}\) and \(y_{2}\) as functions of \(\vec{x}=\{x_{k}\}\). Note that both are piecewise linear. In particular, given \(\vec{x}\) and \(\vec{x}^{\prime}\) with coordinates sharing a (weak) relative order, we have that \(y_{1}(\vec{x}+\vec{x}^{\prime})=y_{1}(\vec{x})+y_{1}(\vec{x}^{\prime})\). Now, let \(S\subseteq[n]\) consist of all \(k\) such that \(x_{k}\) is maximal. Let \(x_{k}^{\prime}=1\) for \(k\in S\) and \(x_{k}^{\prime}=0\) for \(k\not\in S\). Then we have that
\[g=\left(\sum_{k\in S}\beta_{n,t,k}+y_{1}(\vec{x}^{\prime})\alpha_{n,t}+y_{2}( \vec{x}^{\prime})\alpha_{n,t+1}\right)\]
is supermodular, and for sufficiently small \(\varepsilon>0\), \(f-\varepsilon g\) is supermodular. Thus \(f\) must be a multiple of \(g\). In particular, we can assume \(x_{k}=x_{k}^{\prime}\in\{0,1\}\) for each \(k\). Then \(y_{1}=-\max(0,|S|-(t+1))\) and \(y_{2}=-\max(0,|S|-(n-t))\). Thus we have that each \(f\in\mathcal{K}_{n,t}\) with \(f\neq\alpha_{n,t},\alpha_{n,t+1}\) corresponds to an \(S\subseteq[n]\) as claimed. It suffices to check which choices of \(S\) give an irreducible \(f\). By symmetry, we only need to consider \(|S|\).
First, we consider the case \(|S|\leq\min(t+1,n-t)\). If \(|S|=0\), then \(f=0\), so it is not irreducible. If \(|S|=1\), then \(f=\beta_{n,t,k}\) for some \(k\), so it is irreducible. Otherwise, note that \(y_{1}=y_{2}=0\), so we already have a decomposition of \(f\). So, \(f\) is not irreducible.
Next, we consider the case \(|S|\geq\max(t+1,n-t)\). If \(|S|=n\), then \(f=0\) by Equation (1), so it is not irreducible. Now assume \(|S|<n\). By the above calculations for \(y_{1}\) and \(y_{2}\), we have that for each \(\ell\in[n]\),
\[\gamma_{n,t,\ell}=\sum_{k\in[n]\setminus\{\ell\}}\beta_{n,t,k}-(n-t-2)\alpha_{ n,t}-(t-1)\alpha_{n,t+1}\]
is supermodular. We claim that \(f=\sum_{\ell\in[n]\setminus S}\gamma_{n,t,\ell}\). This follows by applying Equation (1) and comparing coefficients. So, if \(|S|<n-1\), then \(f\) is not irreducible.
It remains to check that \(f\) is irreducible if \(|S|=n-1\) or
\[\min(t+1,n-t)<|S|<\max(t+1,n-t).\]
Now, since we know all possible elements of \(\mathcal{K}_{n,t}\), we just need to show that none of the claimed elements can be decomposed using the other claimed elements. It suffices to check that for each ordered pair of claimed elements, there exists a close pair on which the first is strictly supermodular while the second is modular. This is a simple calculation, and we omit the details.
## 8. Acknowledgements
We would like to thank Ashwin Sah and Mehtaab Sawhney for helpful comments on random matrix theory. This research was conducted at SPUR at MIT in Summer 2022. We would like to thank the SPUR Directors David Jerison and Ankur Moitra for organizing SPUR and for helpful conversations throughout the project.
|
2302.12104 | Ultrafast laser-induced spin-lattice dynamics in the van der Waals
antiferromagnet CoPS3 | CoPS3 stands out in the family of the van der Waals antiferromagnets XPS3
(X=Mn, Ni, Fe, Co) due to the unquenched orbital momentum of the magnetic Co2+
ions which is known to facilitate the coupling of spins to both electromagnetic
waves and lattice vibrations. Here, using a time-resolved magneto-optical
pump-probe technique we experimentally study the ultrafast laser-induced
dynamics of mutually correlated spins and lattice. It is shown that a
femtosecond laser pulse acts as an ultrafast heater and thus results in the
melting of the antiferromagnetic order. At the same time, the resonant pumping
of the 4T1g - 4T2g electronic transition in Co2+ ions effectively changes their
orbital momentum, giving rise to a mechanical force that moves the ions in the
direction parallel to the orientation of their spins, thus generating a
coherent Bg phonon mode at the frequency of about 4.7 THz. | D. Khusyainov, T. Gareev, V. Radovskaia, K. Sampathkumar, S. Acharya, M. Šiškins, S. Mañas-Valero, B. A. Ivanov, E. Coronado, Th. Rasing, A. V. Kimel, D. Afanasiev | 2023-02-23T15:42:08Z | http://arxiv.org/abs/2302.12104v1 | ## Ultrafast laser-induced spin-lattice dynamics in the van der Waals antiferromagnet CoPS\({}_{3}\)
## Abstract
_CoPS\({}_{3}\) stands out in the family of the van der Waals antiferromagnets XPS\({}_{3}\)(X=Mn, Ni, Fe, Co) due to the unquenched orbital momentum of the magnetic Co\({}^{2+}\) ions which is known to facilitate the coupling of spins to both electromagnetic waves and lattice vibrations. Here, using a time-resolved magneto-optical pump-probe technique we experimentally study the ultrafast laser-induced dynamics of mutually correlated spins and lattice. It is shown that a femtosecond laser pulse acts as an ultrafast heater and thus results in the melting of the antiferromagnetic order. At the same time, the resonant pumping of the \({}^{4}\)T\({}_{1g}\)\(\rightarrow\)\({}^{4}\)T\({}_{2g}\) electronic transition in Co\({}^{2+}\) ions effectively changes their orbital momentum, giving rise to a mechanical force that moves the ions in the direction parallel to the orientation of their spins, thus generating a coherent \(B_{g}\) phonon mode at the frequency of about 4.7 THz._
## Introduction
Since the seminal discovery of ultrafast demagnetization in Ni\({}^{1}\), the ultrafast manipulation of magnetism with ultrashort pulses of light has evolved into a fascinating research topic of nonequilibrium magnetism, with examples ranging from excitation of collective magnetic modes[2, 3, 4, 5] to light-driven magnetic phase transitions[6, 7, 8] and switching of spin orientation[9, 10, 11]. The recent resurgence of interest in two-dimensional (2D) van der Waals (vdW) materials hosting intrinsic long-range magnetic orders has offered a novel playground for investigating these phenomena in systems where the interplay between structural and magnetic orders plays a pivotal role[12, 13, 14, 15, 16]. Understanding the nonequilibrium dynamics in vdW magnets particularly promises to provide important insights into the fundamental physics of spin-lattice interactions and spin relaxation in ultimately thin magnets. Moreover, the inherently strong light-matter interactions, typical of vdW materials[12, 17, 18, 19, 20, 21, 22, 23], open up exciting possibilities for efficient manipulation of magnetism on the ultrafast timescale, promising novel energy-efficient data processing devices for future spintronics and magnonics applications[24, 25].
Among the various vdW magnets, transition metal thiophosphates, \(X\)PS\({}_{3}\) (\(X=\) Fe, Ni, Mn, and Co), form a unique class of 2D antiferromagnets (AFMs) with intralayer AFM order on a honeycomb lattice. In addition to the 2D AFM order, previous studies of \(X\)PS\({}_{3}\) have uncovered strongly coupled spin and charge orders[26, 27], highly anisotropic excitons[28], magneto
electric coupling [29, 30], signatures of the BKT transition [31] and strong electron correlations [32, 33]. The family has also recently attracted a lot of attention for ultrafast control of magnetism. In particular, it has been shown that a sudden perturbation of electron orbital momentum via resonant pumping of specific electronic transitions opens up new ways to control spins and lattice at an ultrafast timescale [18, 19, 33, 34]. CoPS\({}_{3}\) stands out in the _X_PS\({}_{3}\) family due to the Co\({}^{2+}\) ions, characterized by the large spin and unquenched orbital momentum. The presence of such ions in magnetic materials usually results in a strong coupling of the spins to the lattice [35, 36], large magnetocrystalline anisotropy [37], high frequencies of magnetic resonance [38], and strong photomagnetic effects [10, 35, 36]. Furthermore, it has recently been suggested that the high-spin \(d^{7}\) configuration of Co\({}^{2+}\)-based compounds may host a dominant Kitaev interaction [39, 40, 41]. However, despite intense studies of the _X_PS\({}_{3}\) family, ultrafast optical control of magnetism and lattice in CoPS\({}_{3}\) remains unexplored.
Here we address these shortcomings by first introducing optical magnetic linear dichroism (MLD) as an efficient means to probe the AFM order in CoPS\({}_{3}\) and then employing
Figure 1: **(a)** Schematics of the AFM spin and crystal structure of CoPS\({}_{3}\) in the antiferromagnetic and paramagnetic phases. **(b,c)** The normalized transmission of light as a function of linear polarization angle \(\vartheta\) measured below and above \(T_{\text{N}}\), respectively. The angle \(\vartheta\)=0 corresponds to the orientation of the polarization plane along the a-axis. **(d)** Magnetic linear dichroism (\(\alpha_{\text{MLD}}\)) of CoPS\({}_{3}\) measured as a function of temperature. Solid line is a guide to the eye. **(e)** Rotation of the light polarization plane \(\theta\) as a function of temperature. Solid line is a guide to the eye. The photon energy of the probe light is 1.55 eV.
time-resolved magneto-optical pump-probe spectroscopy to detect the ultrafast light-induced dynamics of spins and lattice in this AFM compound. Comparison of the results with earlier studies for MnPS\({}_{3}\) and FePS\({}_{3}\) reveals that the presence of the large unquenched orbital momentum in CoPS\({}_{3}\) results in a substantially higher MLD and much faster laser-induced melting of the spin order. Moreover, the unquenched momentum leads to exceptionally strong spin-lattice coupling and provides an opportunity for highly efficient optical control of the lattice dynamics by selective pumping of specific orbital transitions in magnetic Co\({}^{2+}\) ions.
## Sample and Experimental Procedure
CoPS\({}_{3}\) is a layered van der Waals AFM, characterized by a weak coupling between the adjacent crystal and magnetic layers, that can be viewed approximately as a quasi-2D antiferromagnet[42]. The intralayer spin ordering below \(T_{\rm N}\)=120 K results in the formation of ferromagnetic "zig-zag" chains along the \(a\)-axis, while the adjacent chains in the direction of the \(b\)-axis are AFM coupled (see Fig. 1a). Introducing individual magnetizations of the adjacent chains as \({\bf M}_{1}\) and \({\bf M}_{2}\), the AFM order in CoPS\({}_{3}\) can be naturally characterized by a Neel vector \({\bf L}\), such that \({\bf L}\) = \({\bf M}_{2}\)-\({\bf M}_{1}\). Inelastic neutron scattering shows that CoPS\({}_{3}\) has sizeable easy-axis single-ion anisotropy which defines the orientation of the spins and the resultant Neel vector along the crystallographic \(a\)-axis[39]. The formation of spin chains, combined with a strong spin-lattice interaction of Co\({}^{2+}\) ions, leads to a reduction of the crystal symmetry, such that the point group of a single CoPS\({}_{3}\) layer changes from \(D_{3\rm d}\) to \(C_{2\rm h}\)[15]. Above \(T_{\rm N}\) in the paramagnetic (PM) phase, the crystal lattice is characterized by a six-fold rotational symmetry. Below \(T_{\rm N}\), the ordering of spins in chains leads to a compression of \(a\) and elongation of \(b\) lattice parameters, effectively elongating the hexagons formed by the Co\({}^{2+}\) ions in the direction perpendicular to the spin chains. The structural changes that accompany the magnetic transition strongly affect the vibrational (phonon) spectrum of CoPS\({}_{3}\). Recent Raman studies show that upon spin ordering several otherwise doubly degenerate E\({}_{\rm g}\) phonon modes lose their degeneracy and split into a pair of nondegenerate A\({}_{\rm g}\) and B\({}_{\rm g}\) ones, the frequencies of which demonstrate anomalous behavior below \(T_{\rm N}\)[15].
To be able to employ an all-optical pump-and-probe technique for studies of ultrafast magnetism in CoPS\({}_{3}\), one must find an efficient method of optical detection of spin order in the AFM phase. Earlier it was reported that several _X_PS\({}_{3}\) materials possess a large magnetic linear dichroism (MLD) in the AFM phase[12, 43]. Moreover, according to Ref.[12], the strength of the dichroism seems to scale with the orbital moment of the magnetic ion. The MLD is the smallest in MnPS\({}_{3}\), where the orbital momentum of the magnetic Mn\({}^{2+}\) ions is quenched, and the largest in FePS\({}_{3}\), where the orbital momentum of Fe\({}^{2+}\) is known to be non-zero. To reveal the strength of this magneto-optical effect in CoPS\({}_{3}\), with an even larger orbital momentum associated with the Co\({}^{2+}\) ions, we measured the transmittance of linearly polarized probe light at the photon energy of 1.55 eV as a function of the angle \(\vartheta\) between the electric field of light and the \(a\)-axis of the crystal. Figures 1b and 1c show the polarization dependencies below (\(T\)=7 K) and above (\(T\)=300 K) \(T_{\rm N}\). It is seen that the normalized transmittance is strongly anisotropic if the material is in the AFM state. Measuring the difference between the intensities \(I_{\rm a}\) and \(I_{\rm b}\) of the transmitted light polarized along the \(a\)- and the \(b\)-axes, respectively, we define MLD as:
\[\alpha_{\rm MLD}=\frac{(I_{\rm a}-I_{\rm b})}{(I_{\rm a}+I_{\rm b})}\cdot 100\%. \tag{1}\]
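For concreteness, Eq. (1) translates into a one-line computation; a minimal Python sketch with purely illustrative intensity values (not measured data):

```python
def mld(I_a, I_b):
    """Magnetic linear dichroism in percent, Eq. (1)."""
    return (I_a - I_b) / (I_a + I_b) * 100.0

# Purely illustrative intensities for light polarized along the a- and b-axes:
print(f"alpha_MLD = {mld(1.3, 0.7):.1f}%")   # 30.0%, the magnitude seen below T_N
```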
Figure 2: **(a)** _Schematics of the time-resolved magneto-optical pump-probe experiment in CoPS\({}_{3}\). The pump pulse (blue) is delayed with respect to the probe pulse by a variable time delay \(\Delta t\). **(b)** Time-resolved rotation of the probe pulse polarization plane exemplifying both pump-induced quenching of the AFM order and excitation of the coherent \(B_{g}\) phonon mode (inset). The sample temperature is set to 77 K._
Although MLD, in general, strongly depends on the probe photon energy, already in our experiment its value reaches 30%. This is almost an order of magnitude larger than the largest MLD reported for FePS\({}_{3}\)[12]. To determine the origin of the MLD in CoPS\({}_{3}\) we studied the effect as a function of the probe photon energy. As shown in Supplementary Materials Fig. S1, two well-pronounced MLD bands appear with the maxima centered at the photon energies of about 1.69 and 2.34 eV. These energies closely match the energies of the so-called \(d\)-\(d\) transitions of Co\({}^{2+}\) ions that are responsible for the change of the orbital state of the magnetic ion[44]. This observation thus indicates that the \(d\)-\(d\) transitions are the origin of MLD in CoPS\({}_{3}\), in agreement with previous studies[12, 18, 19, 34].
Figure 1d shows MLD in CoPS\({}_{3}\) as a function of temperature \(T\). It is seen that \(\alpha_{\rm MLD}\) reduces significantly when approaching \(T_{\rm N}\) from below. To quantify the critical behavior of \(\alpha_{\rm MLD}\), we fitted the temperature dependence of \(\alpha_{\rm MLD}\) in the AFM phase using a power law \(\alpha_{\rm MLD}\propto|T_{\rm N}-T|^{2\beta}\). The results, shown in Fig. 1d, yield \(2\beta\)=\(0.60\pm 0.01\). This value agrees well with the critical exponent characterizing the temperature scaling of the decay in the intensity of the neutron Bragg diffraction peak from the underlying AFM order in CoPS\({}_{3}\), which was found to be \(2\beta\)=\(0.60\)[42]. As the Bragg intensity scales with the Neel order as \(L^{2}\), its comparison with our results suggests that \(\alpha_{\rm MLD}\) also scales as \(L^{2}\). In theory, MLD should be quadratic in **L**, but experimentally this is not always the case, especially in AFMs with strong piezomagnetism[45]. At the same time, MLD is known to be one of the most universal effects to probe the AFM order using light in a broad spectral range, including THz[46], visible[47, 48], and X-rays[49]. Yielding \(\beta\)\(\approx\)\(0.30\) for the critical exponent characterizing the temperature dependence of the AFM order
Figure 3: **| Ultrafast light-induced quenching of the Néel order in CoPS\({}_{3}\) (a) Time-resolved rotation of the probe polarization \(\Delta\theta\) showing pump-induced quenching of the AFM order for various temperatures across \(T_{N}\). The pump photon energy is 0.9 eV. (b) The magnitude of the quenching as a function of temperature T. (c) Quenching time as a function of T.**
parameter, our results further confirm that the spin order in CoPS\({}_{3}\) is best described by the 3D Ising model[50]. Note that the dichroism, as defined in this work, does not depend on the sample thickness, and thus can potentially be employed to probe the AFM spin order in CoPS\({}_{3}\) down to a single layer. At the same time, we observed that in the finite-size CoPS\({}_{3}\) samples studied in our experiments, the linear dichroism does not completely vanish above \(T_{\text{N}}\), see Fig. 1d. This is because the ideal six-fold rotational symmetry of the honeycomb lattice, present in a single-layer form, vanishes in bulk crystals due to the displacement of the stacked layers along the \(a\)-axis[30].
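As an illustration of the fitting procedure described above, a minimal scipy sketch of the one-parameter power-law fit. The generator exponent \(2\beta\)=0.60 and \(T_{\rm N}\)=120 K are taken from the text; the synthetic data merely stand in for the measured \(\alpha_{\rm MLD}(T)\):

```python
import numpy as np
from scipy.optimize import curve_fit

T_N = 120.0  # K, Neel temperature of CoPS3

def power_law(T, A, two_beta):
    return A * np.abs(T_N - T) ** two_beta

# Synthetic stand-in for the measured alpha_MLD(T), generated with 2*beta = 0.60.
T = np.linspace(10, 118, 30)
rng = np.random.default_rng(1)
alpha_mld = power_law(T, 2.0, 0.60) * (1 + 0.02 * rng.standard_normal(T.size))

popt, pcov = curve_fit(power_law, T, alpha_mld, p0=(1.0, 0.5))
print(f"2*beta = {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f}")
```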
In our experiments, to probe the anisotropy of the optical properties induced by the spin order, we employ the fact that if the polarization of the incoming light is not oriented along or perpendicular to the Neel vector (the \(a\)- and \(b\)-axis, respectively), the MLD will result in rotation of the polarization plane for light propagating through the sample. In our case, the light, initially polarized at 45 degrees with respect to both the \(a\)- and \(b\)-axes, upon propagation through a 4 \(\upmu\)m thick sample experiences a net polarization rotation by an angle reaching about 13 degrees (Fig. 1e). We note that, although the value of the angle is large, it is by far less than expected for an MLD as large as 30%. This difference can be explained by the fact that according to the Kramers-Kronig relations[51, 52], linear dichroism is always accompanied by linear birefringence. The latter can modify the polarization state of light from linear to elliptical
Figure 4: **| High frequency coherent spin-coupled THz phonon mode in CoPS\({}_{3}\).****(a)**_Time-resolved rotation of the probe polarization showing dynamics of the pump-excited coherent phonon plotted for various temperatures across \(T_{\text{N}}\). **(b)** Frequency f of the phonon mode as a function of temperature T. **(c)** Amplitude of the phonon mode as a function of temperature T. The solid lines are independent linear fits below and above \(T_{\text{N}}\)._
and even circular and thus substantially hampers the measurements of the polarization rotation[53].
To study light-induced ultrafast dynamics of spins and lattice in CoPS\({}_{3}\) we carried out an all-optical pump-probe experiment, see Fig. 2a. To excite the dynamics, we employed ultrashort (\(\sim\)50 fs) linearly polarized pulses of light with the photon energy \(h\nu\) tunable in a range from 0.7 to 2.5 eV. This energy range covers the vast majority of the orbital \(d\)-\(d\) transitions in Co\({}^{2+}\) ions[44]. To probe the light-driven spin dynamics, we relied on measuring the polarization rotation \(\Delta\theta\) providing, as we have already shown, access to the Neel order. The polarization rotation was also employed to detect the lattice vibrations (phonons), the dynamics of which are intrinsically highly anisotropic and thus contribute to the linear birefringence[17, 54]. Figure 2b shows an example of the light-induced dynamics triggered by the pump pulse at a temperature of \(T\)=77 K when the sample is in the AFM phase. It is seen that, after the excitation, the signal of the transient rotation \(\Delta\theta\) suddenly drops, indicating a suppression of the AFM order. Note that the suppression does not occur instantaneously within the duration of the pump pulse, but rather proceeds on a longer timescale \(\tau_{\mathrm{s}}\) of about 1.5 ps (Fig. 2b). The quenching of the AFM order is concomitant with a set of coherent high-frequency phonon oscillations featuring frequencies from 3 to 10 THz and dominated by a phonon mode at 4.74 THz, see Supplementary Materials Fig. S2.
### Spin dynamics
Figures 3a and 3b show the pump-induced polarization rotation \(\Delta\theta\), demonstrating the evolution of the light-induced quenching as a function of the temperature \(T\). It is seen that upon approaching \(T_{\rm N}\) the degree of the quenching grows gradually and peaks right below \(T_{\rm N}\). Above \(T_{\rm N}\) no significant light-induced magnetic dynamics is seen: the small residual signal is likely of non-magnetic origin and caused by pump-induced changes to the electronic part of the dielectric function[55]. The growth of the quenching amplitude is accompanied by the growth of the quenching time \(\tau_{\rm s}\). Upon approaching \(T_{\rm N}\) (Fig. 3c), this time increases from 1.6 ps at T=10 K to a maximum detected time of 5 ps at about T=118 K. We note that qualitatively similar behavior was also observed when the probing was performed in the reflection geometry, see Supplementary Materials S3.
To establish the origin of the AFM quenching, we varied the photon energy of the pump pulse in the range of several _d-d_ transitions of Co\({}^{2+}\) ions. We find that the amplitude of the quenching scales with the absorption and does not depend on the origin of the optical transition. The quenching is defined by the amount of heat deposited into CoPS\({}_{3}\) by a laser pulse. The origin of the laser-induced quenching of AFM order is thus similar to that reported earlier for other materials from the XPS\({}_{3}\) family[17]. Although laser-induced demagnetization is often described using a three-temperature 3_T_-model[1], CoPS\({}_{3}\) lacks free electrons and the model is not adequate in this case. Instead, the quenching of the AFM order in CoPS\({}_{3}\) can be described by a two-temperature 2T model, where the pump photons increase the potential energy of the excited electrons without making them hot. Upon non-radiative recombination of the excited electrons, the latter promptly (\(<\)1 ps) transfer the gained photon energy to the lattice[56]. The subsequent heat exchange via spin-lattice interaction leads to an increase of the effective spin temperature and consequently to a melting of the spin order[57]. This mechanism can explain a substantial increase in the magnitude of the quenching as the temperature approaches _T_\({}_{\mathrm{N}}\). Indeed, at higher temperatures the derivative of the Neel order parameter \(L\) with respect to the temperature increases, and the spins become more susceptible to temperature variations, see Fig. 1d. Remarkably, not only the amplitude of the quenching goes up, but the quenching time
\(\tau_{\rm s}\) also experiences a substantial increase in the vicinity of \(T_{\rm N}\). Such a critical slowing down of the transient spin dynamics induced by light is seen in many AFM compounds, including in recently published results on MnPS\({}_{3}\) and FePS\({}_{3}\)[18, 34]. It has been shown[17] that in magnetic insulators the rate of the spin-lattice coupling defining \(\tau_{\rm s}\) scales with the spin specific heat \(C_{\rm s}\), such that \(\tau_{\rm s}\)=\(C_{\rm s}/g_{\rm sl}\), where \(g_{\rm sl}\) is the spin-lattice relaxation rate representing the strength of the spin-lattice interaction. As the heat capacity near the Neel temperature is expected to diverge, assuming a weak temperature dependence of \(g_{\rm sl}\) near \(T_{\rm N}\), the characteristic time \(\tau_{\rm s}\) is expected to follow the divergence of \(C_{\rm s}\). Indeed, our experiment shows that \(\tau_{\rm s}\propto|T_{\rm N}-T|^{-\alpha}\) with a critical exponent \(\alpha\sim 0.1\). Remarkably, this value closely matches the theoretical one of \(\alpha\)=0.1 characterizing the critical scaling of the heat capacity in the 3D Ising model[50, 58].
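For illustration, a minimal numerical sketch of such a two-temperature picture, in which the spin temperature relaxes toward the laser-heated lattice with the time constant \(\tau_{\rm s}=C_{\rm s}/g_{\rm sl}\). All parameter values are placeholders chosen for readability, not values fitted to CoPS\({}_{3}\):

```python
import numpy as np

# d(T_s)/dt = (g_sl / C_s) * (T_l - T_s): spins relax toward the laser-heated
# lattice with tau_s = C_s / g_sl. All numbers are illustrative placeholders.
C_s, g_sl = 1.0, 0.5           # spin specific heat and relaxation rate (arb. units)
T_l, T_s = 90.0, 77.0          # lattice temperature after the pump, initial spin T (K)

dt, steps = 0.01, 1000         # time step (ps) and number of steps
trace = np.empty(steps)
for i in range(steps):
    trace[i] = T_s
    T_s += dt * g_sl / C_s * (T_l - T_s)

print(f"tau_s = C_s/g_sl = {C_s/g_sl:.1f} ps; T_s at 5 ps = {trace[int(5/dt)]:.1f} K")
```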
We would like to emphasize that the characteristic time for the quenching of the AFM order in CoPS\({}_{3}\) (\(\tau_{\rm s}\)=2 ps) is much shorter than the one reported for other \(X\)PS\({}_{3}\) compounds, MnPS\({}_{3}\) in particular (\(\tau_{\rm s}\)=30 ps) [18]. It is well known that the heat capacity \(C_{\rm s}\) scales with the strength of the exchange interaction, which is proportional to the Neel temperature. As CoPS\({}_{3}\) and MnPS\({}_{3}\) are characterized by similar values of \(T_{\rm N}\), the difference in the quenching time can only be explained by a difference in the magnitude of the spin-lattice relaxation rate \(g_{\rm sl}\). Using the available data for the specific heat capacity[59, 15], we estimate \(g_{\rm sl}\) to be about \(1\cdot 10^{14}\) W m\({}^{-3}\)K\({}^{-1}\) and \(5\cdot 10^{11}\) W m\({}^{-3}\)K\({}^{-1}\) for CoPS\({}_{3}\) and MnPS\({}_{3}\), respectively. Our experiment thus shows that the strength of the coupling can be effectively changed by more than two orders of magnitude by inducing an orbital momentum at the ground state of the magnetic ion. We note that, although the obtained values of the spin-lattice relaxation rates in CoPS\({}_{3}\) due to the unquenched momentum of the Co\({}^{2+}\) ions are high, this is only in comparison with electronically similar materials lacking mobile electrons (dielectrics and semiconductors). In metals, free electrons
Figure 5: **(a)** _The amplitude of the \(B_{g}\) phonon mode as a function of the pump photon energy (left axis). Optical absorption of CoPS\({}_{3}\) in the near IR to the visible range: the absorption line at around 0.9 eV corresponds to the \({}^{4}T_{\rm 1g}\)\(\rightarrow\)\({}^{4}T_{\rm 2g}\) orbital transition (right axis). The data are taken from Ref. [44]. The sample temperature is 77 K. Inset: Amplitude of the phonon as a function of the pump polarization angle \(\gamma\) with respect to the \(a\) crystal axis. The signs “+” and “−” indicate the relative phase of the oscillations and correspond to 0\({}^{\circ}\) and 180\({}^{\circ}\), respectively. **(b)** Schematics of the orbital excitation of the \(B_{g}\) phonon, where blue and red ions are Co\({}^{2+}\) ions with antiparallel spins._
serve as an additional reservoir of energy that can increase the rate of laser-induced demagnetization by another two orders of magnitude[60].
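The order-of-magnitude comparison of \(g_{\rm sl}\) above can be cross-checked by inverting \(\tau_{\rm s}=C_{\rm s}/g_{\rm sl}\); a short sketch of the arithmetic using only the values quoted in the text (no new data):

```python
# Invert tau_s = C_s / g_sl using only the values quoted in the text, to see
# which spin specific heats they imply (pure arithmetic, no new data).
cases = {
    "CoPS3": {"tau_s": 2e-12, "g_sl": 1e14},     # s, W m^-3 K^-1
    "MnPS3": {"tau_s": 30e-12, "g_sl": 5e11},
}
for name, p in cases.items():
    C_s = p["g_sl"] * p["tau_s"]                  # J m^-3 K^-1
    print(f"{name}: implied C_s = {C_s:.0f} J m^-3 K^-1")
```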
### Lattice dynamics
To further understand the nature and excitation mechanism of the light-driven coherent phonon mode that dominates the \(\Delta\theta\) signal, we studied its dynamics as a function of both temperature and photon energy. Figure 4a shows the oscillations measured at various \(T\). It is seen that the excitation is the most efficient below \(T_{\rm N}\), where both the amplitude and the frequency of the oscillations are strongly dependent on \(T\). Using the Fourier transform we retrieved the frequency and amplitude of the oscillations and plotted them as functions of \(T\) in Fig. 4b and Fig. 4c, respectively. Upon temperature increase, the phonon frequency \(f\) softens until it reaches \(T_{\rm N}\), where it stabilizes at \(f_{0}\)=4.64 THz. Fitting the temperature evolution of the frequency shift \(\Delta f\)=\(f\)-\(f_{0}\) to a critical law similar to the one used for \(\alpha_{\rm MLD}\), we find that their critical exponents closely match each other, thus indicating that \(\Delta f\) also follows \(L^{2}\). In accordance with Ref.[15], we assign the oscillations to the B\({}_{\rm g}\) phonon mode that involves antiphase motions of the magnetic Co\({}^{2+}\) ions in the direction parallel to the orientation of their spins (Fig. 5b). The comparison of the temperature behavior of the amplitude of the phonon mode in the AFM and PM phases shows that in the PM phase (\(T\)>\(T_{\rm N}\)) there is no significant temperature dependence, while in the AFM phase (\(T\)\(\leq\)\(T_{\rm N}\)) the amplitude rises linearly as the temperature decreases. The rise, concomitant with the onset of the AFM order, clearly indicates that the establishment of the Neel order facilitates the excitation of the lattice dynamics.
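For reference, the frequency and amplitude retrieval reduces to a Fourier transform of the oscillatory part of the signal; a minimal sketch on a synthetic 4.74 THz trace (all parameters illustrative; the 10 ps window limits the frequency resolution to 0.1 THz):

```python
import numpy as np

# Synthetic stand-in for the oscillatory part of the pump-probe trace.
dt = 10e-15                              # 10 fs sampling step
t = np.arange(0, 10e-12, dt)             # 10 ps window
signal = np.exp(-t / 3e-12) * np.cos(2 * np.pi * 4.74e12 * t)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, dt)
peak = freqs[np.argmax(spectrum)]        # ~4.7 THz within the 0.1 THz resolution
print(f"peak at {peak/1e12:.2f} THz, amplitude {spectrum.max():.1f} (arb. units)")
```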
Figure 5a shows the amplitude of the B\({}_{\rm g}\) phonon mode as a function of the pump photon energy \(h\nu\). Unlike the light-induced spin quenching, the laser-induced lattice dynamics does not correlate with the absorption coefficient but instead is most efficient if the photon energy is in resonance with the \(d\)-\(d\) transition \({}^{4}T_{\rm 1g}\)\(\rightarrow\)\({}^{4}T_{\rm 2g}\) in the Co\({}^{2+}\) ions at 0.87 eV [44]. We have also found that the efficiency of the excitation is strongly dependent on the incoming pump polarization. The inset in Fig. 5a shows how the amplitude of the phonon oscillations depends on the angle \(\gamma\) the pump polarization plane forms with the \(a\)-axis along which the spin chains are formed. It is seen that the excitation is most efficient if the pump is polarized at 45\({}^{\circ}\) with respect to the spins, while no oscillations are excited if the polarization is oriented either along or perpendicular to them.
To explain the unusual temperature behavior of the B\({}_{\rm g}\) phonon mode and its excitation mechanism we first refer to Ref.[61], which demonstrates that in FePS\({}_{3}\), also characterized by an unquenched angular momentum, the spin-lattice interaction leads to hybridization of the spin and lattice dynamics accompanied by a renormalization of their frequencies. The energy of such coupling to the lowest order acquires the following form:
\[\mathcal{Q}^{\rm sp}=U\cdot Q\cdot lL \tag{2}\]
where \(U\) is a phenomenological coupling parameter, \(Q\) is the normal coordinate of the B\({}_{\rm g}\) mode, and \(l\ll L\) is the component of the Neel vector along the \(b\)-axis, describing its deviation from the ground state \(\mathbf{L}=L\hat{\mathbf{a}}\). It is thus clear that, to enable the spin-lattice coupling in CoPS\({}_{3}\), the spins have to be brought out of equilibrium. Next, we consider the energy of the light-matter interaction enabling excitation of both the Neel vector component \(l\) and the B\({}_{\rm g}\) phonon mode. The energy describing this interaction reads:
\[\mathcal{Q}^{\rm lm}=(\alpha Q+\beta lL)\cdot E_{a}E_{b}, \tag{3}\]
where \(E_{a,b}\) are time-dependent electric field components of the pump pulse, and \(\alpha\), \(\beta\) are phenomenological parameters that describe the strength of the light-matter coupling in CoPS\({}_{3}\). It is seen that the pump can be coupled not only to the phonon mode but also to the Neel vector. Remarkably, the efficiency of this coupling is proportional to \(E_{a}E_{b}\propto\sin 2\gamma\), and thus perfectly agrees with the pump polarization dependence of the phonon amplitude, see inset in Fig. 5a. In Supplementary Materials we demonstrate that a sufficiently short pump pulse can impulsively excite coherent dynamics of both \(Q\) and \(l\) in accordance with a mechanism known as impulsive stimulated Raman scattering (ISRS)[62, 63]. We also show that the inclusion of the spin-lattice coupling hybridizes their dynamics and may explain the enhancement of the phonon amplitude in the AFM phase. Indeed, the hybridization renormalizes the phonon coordinate \(\tilde{Q}=Q+\zeta l\). The parameter \(\zeta\) characterizes the degree of the hybridization and is proportional to the product \(UL\). Therefore, it may naturally explain the enhancement of the phonon excitation in the AFM phase, where \(L\neq 0\). Moreover, the hybridization causes a renormalization of the phonon frequency \(\Delta f\) in the AFM phase:
\[\Delta f\propto\zeta^{2}\propto(UL)^{2}. \tag{4}\]
which once again lines up with our experimental findings. We note that although the suggested theory explains the experimental findings remarkably well, it still relies on the excitation of the coherent spin dynamics, which is not observed in our experiments. This is rather surprising as excitation and magneto-optical detection of the coherent spin dynamics with femtosecond optical pulses has been reported recently in both MnPS\({}_{3}\) and NiPS\({}_{3}\)[18, 19]. The analogy with the latter is particularly striking as NiPS\({}_{3}\) is magnetically isomorphous to the CoPS\({}_{3}\) studied here. Future probes of the laser-induced spin dynamics in CoPS\({}_{3}\), e.g., time-resolved THz spectroscopy, may help to elucidate this issue[64]. In accordance with these theoretical considerations, the resonant dependence of the amplitude of the phonon mode on the pump photon energy, shown in Fig. 5a, implies that one or both of the phenomenological parameters \(U\), \(\beta\) are maximized at this photon energy. This implies that, among all possible \(d\)-\(d\) electronic transitions in the range from 0.5 eV to 2.2 eV, the excitation of the B\({}_{\rm g}\) mode is most affected by the \({}^{4}T_{1\rm g}\) to \({}^{4}T_{2\rm g}\) transition, which can be seen as a change of the effective orbital momentum of the magnetic Co\({}^{2+}\) ions, see Fig. 5b. The corresponding changes to the interionic potential, mediated by the strong spin-lattice coupling, are likely the source of the mechanical force that moves the ions in the direction parallel to the orientation of their spins.
## Conclusions
To conclude, we have shown that in the vdW AFM CoPS\({}_{3}\), characterized by strong spin-lattice coupling, the AFM order can be effectively probed with MLD. Using MLD we detected the ultrafast dynamics of spins and lattice induced by ultrashort pulses of light. We showed that light can suddenly heat the magnetic system, leading to a substantial loss (\(\sim\)1%) of the spin ordering within nearly a single picosecond. Resonantly pumping \(d\)-\(d\) transitions in magnetic Co\({}^{2+}\) ions, we effectively change the orbital momentum of Co\({}^{2+}\) ions and show that this excitation, mediated by the spin-lattice coupling, brings Co\({}^{2+}\) ions in a coherent motion in the direction of the AFM Neel vector. Our experiments not only elucidate the nature of the ultrafast spin-lattice coupling in 2D vdW AFMs but also lay the ground for future ultrafast pump-probe experiments, particularly those aimed at resonant pumping of infrared-active structural phonon modes[6, 65].
## Acknowledgments
The authors are grateful to M. Matthiessen, J.R. Hortensius, and A.D. Caviglia for fruitful discussions and to S. Semin and C. Berkhout for technical support. This work was funded by the Netherlands Organization for Scientific Research (NWO), the European Union Horizon 2020 research and innovation program under the European Research Council ERC grant agreement no. 856538 (3D-MAGiC) and under the Marie Sklodowska-Curie grant agreement no. 861300 (COMRAD), the National Research Fund of Ukraine within project no. 2020.02/026, the Gravitation program of the Dutch Ministry of Education, Culture and Science (OCW) under the research program "Materials for the Quantum Age" (QuMat), registration number 024.005.006, and the ERC (Grant No. 101078206, ASTRAL). S.M.V. thanks the Generalitat Valenciana for the postdoctoral fellowship APOSTD-CIAPOS2021/215.
## References
* [1] E. Beaurepaire, J.-C. Merle, A. Daunois, and J.-Y. Bigot, Phys. Rev. Lett. **76**, 4250 (1996).
* [2] A. V. Kimel, A. Kirilyuk, P.A. Usachev, R. V. Pisarev, A.M. Balbashov, and T. Rasing, Nature **435**, 655 (2005).
* [3] D. Bossini and T. Rasing, Phys. Scr. **92**, 024002 (2017).
* [4] A.M. Kalashnikova, A. V. Kimel, R. V. Pisarev, V.N. Gridnev, A. Kirilyuk, and T. Rasing, Phys. Rev. Lett. **99**, 1 (2007).
* [5] S. Baierl, M. Hohenleutner, T. Kampfrath, A.K. Zvezdin, A. V. Kimel, R. Huber, and R. V. Mikhaylovskiy, Nat. Photonics **10**, 715 (2016).
* [6] D. Afanasiev, J.R. Hortensius, B.A. Ivanov, A. Sasani, E. Bousquet, Y.M. Blanter, R. V. Mikhaylovskiy, A. V. Kimel, and A.D. Caviglia, Nat. Mater. **20**, 607 (2021).
* [7] A. V. Kimel, A. Kirilyuk, A. Tsvetkov, R. V. Pisarev, and T. Rasing, Nature **429**, 850 (2004).
* [8] P. Beaud, A. Caviezel, S.O. Mariager, L. Rettig, G. Ingold, C. Dornes, S.-W. Huang, J.A. Johnson, M. Radovic, T. Huber, T. Kubacka, A. Ferrer, H.T. Lemke, M. Chollet, D. Zhu, J.M. Glownia, M. Sikorski, A. Robert, H. Wadati, M. Nakamura, M. Kawasaki, Y. Tokura, S.L. Johnson, and U. Staub, Nat. Mater. **13**, 923 (2014).
* [9] C.D. Stanciu, F. Hansteen, A. V. Kimel, A. Kirilyuk, A. Tsukamoto, A. Itoh, and T. Rasing, Phys. Rev. Lett. **99**, 047601 (2007).
* [10] A. Stupakiewicz, K. Szerenos, D. Afanasiev, A. Kirilyuk, and A. V. Kimel, Nature **542**, 71 (2017).
* [11] A.V. Kimel, A.M. Kalashnikova, A. Pogrebna, and A.K. Zvezdin, Phys. Rep. **852**, 1 (2020).
* [12] Q. Zhang, K. Hwangbo, C. Wang, Q. Jiang, J.-H. Chu, H. Wen, D. Xiao, and X. Xu, Nano Lett. **21**, 6938 (2021).
* [13] B.H. Zhang, Y.S. Hou, Z. Wang, and R.Q. Wu, Phys. Rev. B **100**, 224427 (2019).
* [14] A. Ghosh, M. Palit, S. Maity, V. Dwij, S. Rana, and S. Datta, Phys. Rev. B **103**, 064431 (2021).
* [15] Q. Liu, L. Wang, Y. Fu, X. Zhang, L. Huang, H. Su, J. Lin, X. Chen, D. Yu, X. Cui, J.-W. Mei, and J.-F. Dai, Phys. Rev. B **103**, 235411 (2021).
* [16] Y.-J. Sun, J.-M. Lai, S.-M. Pang, X.-L. Liu, P.-H. Tan, and J. Zhang, J. Phys. Chem. Lett. **13**, 1533 (2022).
* [17] X.-X. Zhang, S. Jiang, J. Lee, C. Lee, K.F. Mak, and J. Shan, Nano Lett. **21**, 5045 (2021).
* [18] M. Matthiesen, J.R. Hortensius, S. Manas-Valero, M. Siskins, B.A. Ivanov, H.S.J. van der Zant, E. Coronado, D. Afanasiev, and A.D. Caviglia, e-print arXiv: 2204.10574, 2022.
* [19] D. Afanasiev, J.R. Hortensius, M. Matthiesen, S. Manas-Valero, M. Siskins, M. Lee, E. Lesne, H.S.J. van der Zant, P.G. Steeneken, B.A. Ivanov, E. Coronado, and A.D. Caviglia, Sci. Adv. **7**, (2021).
* [20] P. Zhang, T.-F. Chung, Q. Li, S. Wang, Q. Wang, W.L.B. Huey, S. Yang, J.E. Goldberger, J. Yao, and X. Zhang, Nat. Mater. **21**, 1373 (2022).
* [21] S. Li, L. Zhou, T. Frauenheim, and J. He, J. Phys. Chem. Lett. **13**, 6223 (2022).
* [22] H. Ling and A.R. Davoyan, Nat. Photonics **16**, 259 (2022).
* [23] B. Liu, S. Liu, L. Yang, Z. Chen, E. Zhang, Z. Li, J. Wu, X. Ruan, F. Xiu, W. Liu, L. He, R. Zhang, and Y. Xu, Phys. Rev. Lett. **125**, 267205 (2020).
* [24] A. Barman, G. Gubbiotti, S. Ladak, A.O. Adeyeye, M. Krawczyk, J. Grafe, C. Adelmann, S. Cotofana, A. Naeemi, V.I. Vasyuchka, B. Hillebrands, S.A. Nikitov, H. Yu, D. Grundler, A. V Sadovnikov, A.A. Grachev, S.E. Sheshukova, J.-Y. Duquesne, M. Marangolo, G. Csaba, W. Porod, V.E. Demidov, S. Urazhdin, S.O. Demokritov, E. Albisetti, D. Petti, R. Bertacco, H. Schultheiss, V. V Kruglyak, V.D. Poimanov, S. Sahoo, J. Sinha, H. Yang, M. Munzenberg, T. Moriyama, S. Mizukami, P. Landeros, R.A. Gallardo, G. Carlotti, J.-V. Kim, R.L. Stamps, R.E. Camley, B. Rana, Y. Otani, W. Yu, T. Yu, G.E.W. Bauer, C. Back, G.S. Uhrig, O. V Dobrovolskiy, B. Budinska, H. Qin, S. van Dijken, A. V Chumak, A. Khitu, D.E. Nikonov, I.A. Young, B.W. Zingsem, and M. Winklhofer, J. Phys. Condens. Matter **33**, 413001 (2021).
* [25] A. Hirohata, H. Sukegawa, H. Yanagihara, I. Zutic, T. Seki, S. Mizukami, and R. Swaminathan, IEEE Trans. Magn. **51**, 1 (2015).
* [26] S.Y. Kim, T.Y. Kim, L.J. Sandilands, S. Sinn, M.-C. Lee, J. Son, S. Lee, K.-Y. Choi, W. Kim, B.-G. Park, C. Jeon, H.-D. Kim, C.-H. Park, J.-G. Park, S.J. Moon, and T.W. Noh, Phys. Rev. Lett. **120**, 136402 (2018).
* [27] R. Galceran, B. Tian, J. Li, F. Bonell, M. Jamet, C. Vergnaud, A. Marty, J.H. Garcia, J.F. Sierra, M. V. Costache, S. Roche, S.O. Valenzuela, A. Manchon, X. Zhang, and U. Schwingenschlogl, APL Mater. **9**, 100901 (2021).
* [28] K. Hwangbo, Q. Zhang, Q. Jiang, Y. Wang, J. Fonseca, C. Wang, G.M. Diederich, D.R. Gamelin, D. Xiao, J.-H. Chu, W. Yao, and X. Xu, Nat. Nanotechnol. **16**, 655 (2021).
* [29] E. Ressouche, M. Loire, V. Simonet, R. Ballou, A. Stunault, and A. Wildes, Phys. Rev. B **82**, 100408 (2010).
* [30] H. Chu, C.J. Roh, J.O. Island, C. Li, S. Lee, J. Chen, J.-G. Park, A.F. Young, J.S. Lee, and D. Hsieh, Phys. Rev. Lett. **124**, 027601 (2020).
* [31] U.F.P. Seifert, M. Ye, and L. Balents, Phys. Rev. B **105**, 155138 (2022).
* [32] F. Wang, N. Mathur, A.N. Janes, H. Sheng, P. He, X. Zheng, P. Yu, A.J. DeRuiter, J.R. Schmidt, J. He, and S. Jin, Sci. Adv. **7**, (2021).
* [33] E. Ergecen, B. Ilyas, D. Mao, H.C. Po, M.B. Yilmaz, J. Kim, J.-G. Park, T. Senthil, and N. Gedik, Nat. Commun. **13**, 98 (2022).
* [34] F. Mertens, D. Monkebuscher, U. Parlak, C. Boix-Constant, S. Manas-Valero, M. Matzer, R. Adhikari, A. Bonanni, E. Coronado, A.M. Kalashnikova, D. Bossini, and M. Cinchetti, Adv. Mater. **35**, 2208355 (2023).
* [35] E.A. Mashkovich, K.A. Grishunin, R.M. Dubrovin, A.K. Zvezdin, R. V. Pisarev, and A. V. Kimel, Science **374**, 1608 (2021).
* [36] T. Satoh, R. Iida, T. Higuchi, Y. Fujii, A. Koreeda, H. Ueda, T. Shimura, K. Kuroda, V.I. Butrim, and B.A. Ivanov, Nat. Commun. **8**, 638 (2017).
* [37] Y. Suzuki, G. Hu, R.B. van Dover, and R.J. Cava, J. Magn. Magn. Mater. **191**, 1 (1999).
* [38] F. Formisano, R.M. Dubrovin, R. V Pisarev, A.M. Kalashnikova, and A. V Kimel, J. Phys. Condens. Matter **34**, 225801 (2022).
* [39] C. Kim, J. Jeong, P. Park, T. Masuda, S. Asai, S. Itoh, H.-S. Kim, A. Wildes, and J.-G. Park, Phys. Rev. B **102**, 184429 (2020).
* [40] C. Kim, H.-S. Kim, and J.-G. Park, J. Phys. Condens. Matter **34**, 023001 (2022).
* [41] G. Jackeli and G. Khaliullin, Phys. Rev. Lett. **102**, 017205 (2009).
* [42] A.R. Wildes, V. Simonet, E. Ressouche, R. Ballou, and G.J. McIntyre, J. Phys. Condens. Matter **29**, 455801 (2017).
* [43] H. Zhang, Z. Ni, C.E. Stevens, A. Bai, F. Peiris, J.R. Hendrickson, L. Wu, and D. Jariwala, Nat. Photonics **16**, 311 (2022).
* [44] E.J.K.B. Banda, Phys. Status Solidi **135**, K43 (1986).
* [45] A.S. Borovik-Romanov, N.M. Kreines, A.A. Pankov, and M.A. Talalaev, Sov. Phys. JETP **37**, 890 (1973).
* [46] T.W.J. Metzger, K.A. Grishunin, D. Afanasiev, R.M. Dubrovin, E.A. Mashkovich, R. V. Pisarev, and A. V. Kimel, Appl. Phys. Lett. **121**, 252403 (2022).
* [47] G.A. Smolenskii, R. V Pisarev, and I.G. Sini, Sov. Phys. Uspekhi **18**, 410 (1975).
* [48] J. Wagner, A. Sahasrabudhe, R.B. Versteeg, L. Wysocki, Z. Wang, V. Tsurkan, A. Loidl, D.I. Khomskii, H. Hedayat, and P.H.M. van Loosdrecht, Npj Quantum Mater. **7**, 28 (2022).
* [49] J. Stohr and S. Hans Christoph, _Magnetism_ (Springer Berlin Heidelberg, Berlin, Heidelberg, 2006).
* [50] M.F. Collins, _Magnetic Critical Scattering_ (Oxford University Press Inc, New York, 1989).
* [51] N. Tesarova, T. Ostatnicky, V. Novak, K. Olejnik, J. Subrt, H. Reichlova, C.T. Ellis, A. Mukherjee, J. Lee, G.M. Sipahi, J. Sinova, J. Hamrle, T. Jungwirth, P. Nemec, J. Cerne, and
* [52] A. V. Kimel, G. V. Astakhov, A. Kirilyuk, G.M. Schott, G. Karczewski, W. Ossau, G. Schmidt, L.W. Molenkamp, and T. Rasing, Phys. Rev. Lett. **94**, 227203 (2005).
* [53] A.J. Kurtzig, R. Wolfe, R.C. LeCraw, and J.W. Nielsen, Appl. Phys. Lett. **14**, 350 (1969).
* [54] J.R. Hortensius, D. Afanasiev, A. Sasani, E. Bousquet, and A.D. Caviglia, Npj Quantum Mater. **5**, 95 (2020).
* [55] I.R. Jahn, Phys. Status Solidi **57**, 681 (1973).
* [56] A. Kirilyuk, A. V. Kimel, and T. Rasing, Rev. Mod. Phys. **82**, 2731 (2010).
* [57] A. V. Kimel, R. V. Pisarev, J. Hohlfeld, and T. Rasing, Phys. Rev. Lett. **89**, 287401 (2002).
* [58] R.K. Pathria and P.D. Beale, _Statistical Mechanics_ (Elsevier, 2011).
* [59] Y. Takano, N. Arai, A. Arai, Y. Takahashi, K. Takase, and K. Sekizawa, J. Magn. Magn. Mater. **272-276**, E593 (2004).
* [60] M. Pankratova, I.P. Miranda, D. Thonig, M. Pereiro, E. Sjoqvist, A. Delin, O. Eriksson, and A. Bergman, Phys. Rev. B **106**, 174407 (2022).
* [61] S. Liu, A. Granados del Aguila, D. Bhowmick, C.K. Gan, T. Thu Ha Do, M.A. Prosnikov, D. Sedmidubsky, Z. Sofer, P.C.M. Christianen, P. Sengupta, and Q. Xiong, Phys. Rev. Lett. **127**, 097401 (2021).
* [62] R. Merlin, Solid State Commun. **102**, 207 (1997).
* [63] D.M. Juraschek and S.F. Maehrlein, Phys. Rev. B **97**, 174302 (2018).
* [64] C.A. Belvin, E. Baldini, I.O. Ozel, D. Mao, H.C. Po, C.J. Allington, S. Son, B.H. Kim, J. Kim, I. Hwang, J.H. Kim, J.-G. Park, T. Senthil, and N. Gedik, Nat. Commun. **12**, 4837 (2021).
* [65] A.S. Disa, M. Fechner, T.F. Nova, B. Liu, M. Forst, D. Prabhakaran, P.G. Radaelli, and A. Cavalleri, Nat. Phys. **16**, 937 (2020).
Supplementary Materials
Ultrafast laser-induced spin-lattice dynamics in the van der Waals antiferromagnet CoPS\({}_{3}\)
D. Khusyainov*, T. Gareev, V. Radovskaia, K. Sampathkumar, S. Acharya, M. Siskins, S. Manas-Valero, B.A. Ivanov, E. Coronado, Th. Rasing, A.V. Kimel and D. Afanasiev
*Corresponding author. Email: [email protected]
## 1 Spectral dependence and origin of MLD in CoPS\({}_{3}\)
Spectral measurements of magnetic linear dichroism (MLD) have been performed on an individual 100 nm thick CoPS\({}_{3}\) flake, different from the one used in our time-resolved experiments. The flake was exfoliated from the very same bulk crystal as the one used in the time-resolved experiments. Figure S1a shows a schematic of the experimental setup. The polarization-sensitive absorption measurements have been performed in the transmission geometry. A halogen lamp light source has been employed as a broadband radiation source for spectral measurements. Figure S1b shows the spectrally resolved absorption of the sample for light polarized along the \(a\)- and \(b\)-axes. A clear difference in the optical absorption of light polarized along the two axes is seen in our experiments. Figure S1c demonstrates the spectral dependence of \(\alpha_{\rm{MLD}}\), as defined in the manuscript by Eq. (1), for various temperatures \(T\) below and above \(T_{\rm{N}}\). Two pronounced bands in \(\alpha_{\rm{MLD}}\) are seen at \(\lambda_{1}\)=530 nm and \(\lambda_{2}\)=730 nm, respectively. Their spectral weight rapidly goes down with temperature and nearly disappears above \(T_{\rm{N}}\)=120 K, signifying their sensitivity to the spin order.
To infer the physical origin of the bands responsible for the MLD we compare their positions to the electronic transitions defining the optical absorption in the visible range. It is known that in the XPS\({}_{3}\) family, similarly to other transition metal complexes, the optical absorption in the visible range is dominated by so-called \(d\)-\(d\) transitions[1, 2, 3]. These transitions are electronic transitions that occur between the molecular orbitals of the transition metal ions, like Co\({}^{2+}\) in CoPS\({}_{3}\). Indeed, the position of the first MLD band centered at \(\lambda_{1}\)=530 nm closely matches the energy of the orbital \(d\)-\(d\) transition from the \({}^{4}T_{\rm{1g}}\)(F) to \({}^{4}A_{\rm{2g}}\)(F) states[4]. The position of the second MLD band centered at \(\lambda_{2}\)=730 nm matches the energy of another \(d\)-\(d\) transition from \({}^{4}T_{\rm{1g}}\)(F) to \({}^{4}T_{\rm{1g}}\)(P) states. Therefore, we conclude that the physical origin of the MLD in CoPS\({}_{3}\) lies in the \(d\)-\(d\) orbital transitions in the transition metal ion (Co\({}^{2+}\)), widely known to be responsible for the MLD in the visible part of the optical spectrum[5].
We note that the absolute value of the MLD at 800 nm is somewhat smaller than the one reported in our manuscript. This difference can be explained by the linear birefringence that dramatically influences the polarization state of the light as it propagates along the crystal. For instance, it can modify the polarization state of light from linear to elliptical and even circular, thus substantially hampering the polarization-sensitive measurements of the optical absorption[6].
**Figure S1.** Spectral dependence of magnetic linear dichroism (MLD) in CoPS\({}_{3}\). **(a)** Schematic diagram of the MLD experiment. The inset shows an optical image of the sample with a probed area marked by a glowing star. **(b)** Spectrally resolved absorption of the sample for the probe light polarized along both the \(a\)- and \(b\)- axes. **(c)** MLD spectrum plotted for various temperatures across \(T_{\text{N}}\).
## 2 FFT spectrum of the coherent high-frequency oscillations
## 3 Quenching time and polarization rotation as a function of temperature as measured in the reflection geometry
**Figure S3. Quenching of the AFM order as measured in the reflection geometry. (a)** Time-resolved quenching of the antiferromagnetic order as a function of the temperature \(T\). No quenching is seen above \(T_{\rm N}\)=120 K. (**b**) Quenching time \(\tau_{\rm s}\) as a function of \(T\). (c) Amplitude of the quenching as a function of \(T\).
## 5 Spin-phonon coupling mechanism.
As CoPS\({}_{3}\) is a complex system, whose magnetic order is described by three exchange interaction integrals, we will use a simple, intuitive phenomenological model based on a symmetry analysis of the spin-phonon coupling after excitation with short laser pulses. In this model we include, in the linear approximation, the normal coordinate \(Q\) of the Co\({}^{2+}\) ion vibrations along the \(a\) crystal axis (the B\({}_{\rm g}\) phonon) and the sigma model for the oscillations of the Neel vector **L**. For the description of \(Q\), it is convenient to use displacements of the ion normalized by the atomic spacing. Further, we introduce the coupling term, written as the product of the component \(l\) of **L** along the \(b\) crystal axis (obtained after linearization of **L**), the normal coordinate \(Q\), and the phenomenological constant \(U\) that describes the spin-phonon coupling (see main text).
By adding a term for optical excitation using the Raman scattering tensor for the B\({}_{\rm g}\) phonon for the C\({}_{\rm 2h}\) point group, our system can be described using the following Lagrange function:
\[\mathcal{L}=\tfrac{M}{2}\big{(}\dot{Q}^{2}-\omega_{0}^{2}Q^{2}\big{)}+\tfrac{m}{2}\big{(}\dot{l}^{2}-\omega_{l0}^{2}l^{2}\big{)}-UQlL-(\alpha\cdot Q+\beta\cdot lL)E_{a}E_{b}, \tag{1}\]
where the first two terms describe the motion of \(Q\) and \(l\); \(M\) is the effective mass of the \(Q\) mode, \(m\) is the effective mass of the \(l\) mode, \(\omega_{0}\) is the frequency of the phonon mode without coupling, i.e., above the Neel temperature, \(\omega_{l0}\) is the frequency of the \(\mathbf{L}\) oscillations without spin-phonon coupling, i.e., when \(L\)=0, \(\alpha\), \(\beta\) and \(U\) are phenomenological constants, \(E_{a},E_{b}\) are the time-dependent electric field components along the \(a\) and \(b\) crystal axes, \(L\) is the component of \(\mathbf{L}\) along the \(a\) crystal axis, and \(l\ll L\) (see main text). The electric field components can be written as \(E_{a}E_{b}=E_{0}^{2}(t)\cdot\sin 2\gamma\), where \(E_{0}(t)\) is the time-dependent amplitude of the electric field of the laser pulse and \(\gamma\) is the angle between the electric field direction and the \(a\) crystal axis. Using this Lagrangian we obtain the differential equations of motion:
\[\begin{array}{l}M\big{(}\ddot{Q}+\omega_{0}^{2}Q\big{)}+ULl=-\alpha E_{0}^{2}(t)\cdot\sin 2\gamma,\\ m\big{(}\ddot{l}+\omega_{l0}^{2}l\big{)}+ULQ=-\beta LE_{0}^{2}(t)\cdot\sin 2\gamma.\end{array} \tag{2}\]
Let us start with the analysis of the free oscillations of the lattice and spins, which occur after the laser pulse action. The general dynamics of the coupled spin-lattice system can be described as a superposition of two normal modes with frequencies \(\omega\), which satisfy the linear homogeneous equations:
\[\begin{array}{l}(\omega^{2}-\omega_{0}^{2})Q=ULl/M,\\ (\omega^{2}-\omega_{l0}^{2})l=ULQ/m,\end{array} \tag{3}\]
where the frequencies \(\omega\) of the normal modes are determined by an equation quadratic in \(\omega^{2}\),
\[\big{(}\omega^{2}-\omega_{0}^{2}\big{)}\big{(}\omega^{2}-\omega_{l0}^{2}\big{)}-\frac{(UL)^{2}}{mM}=0, \tag{4}\]
with the solution
\[\omega^{2}=\tfrac{1}{2}(\omega_{0}^{2}+\omega_{l0}^{2})\pm\tfrac{1}{2}\sqrt{( \omega_{0}^{2}-\omega_{l0}^{2})^{2}+4\frac{(UL)^{2}}{mM}}, \tag{5}\]
where the plus and minus signs correspond to the frequency of the lattice-dominated (phonon) mode \(\omega_{ph}\) and the spin-dominated (magnon) mode \(\omega_{l}\), respectively. According to the experimental data, the frequency shift \(\omega_{ph}-\omega_{0}\) is relatively small, and we can use the approximate expression:
\[\omega_{ph}\cong\omega_{0}+\frac{(UL)^{2}}{2\omega_{0}mM(\omega_{0}^{2}- \omega_{l0}^{2})}, \tag{6}\]
where the dependence on \(L\) coincides with that found in experiment, see Fig. 4b in the main text. Then, using Eq. (4) and the experimental data for \(\omega_{ph}\) at low temperatures (\(L\)=1) and above the Neel temperature (\(L\)=0), we can estimate the coupling parameter \(U\). To do this, we use \(\omega_{ph}=4.75\) THz at low temperatures and the value \(\omega_{0}=4.64\) THz, take \(\omega_{l}=3.1\) THz according to Ref. [8] as the magnon gap in CoPS\({}_{3}\), and use the effective masses \(m=\frac{\hbar}{\omega_{ex}}\) (see e.g. Ref. [9]) with the exchange frequency \(\omega_{ex}=7\) THz for CoPS\({}_{3}\) [1]; we estimate \(M\sim\frac{\hbar}{\omega_{D}}\), where \(\omega_{D}\) is the Debye frequency, of the order of \(\omega_{D}\sim 2\omega_{0}\) [1]. Finally, we arrive at the estimate \(U/\hbar\approx 0.42\) THz.
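As a numerical cross-check, Eq. (4) can be solved for \(U\) with the values quoted above. The short Python sketch below assumes all frequencies in THz and the effective-mass conventions \(m=\hbar/\omega_{ex}\), \(M=\hbar/\omega_{D}\), so that the factors of \(\hbar\) cancel:

```python
import numpy as np

# frequencies in THz; hbar factors cancel in the final ratio
w_ph, w_0, w_l0 = 4.75, 4.64, 3.1   # coupled phonon, bare phonon, magnon gap
w_ex, w_D = 7.0, 2 * w_0            # exchange and Debye frequencies
L = 1.0                             # normalized Neel vector at low temperature

# Eq. (4): (UL)^2 = mM (w_ph^2 - w_0^2)(w_ph^2 - w_l0^2), mM = hbar^2/(w_ex*w_D)
U_over_hbar = np.sqrt((w_ph**2 - w_0**2)*(w_ph**2 - w_l0**2)/(w_ex*w_D)) / L
print(f"U/hbar ~ {U_over_hbar:.2f} THz")   # ~0.45 THz, of the order of 0.42 THz
```

The residual difference from the quoted value reflects the order-of-magnitude choice \(\omega_{D}\sim 2\omega_{0}\).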
Now we can describe \(Q\) and \(l\) including the coupling contribution. To this end, we introduce renormalized normal coordinates \(\tilde{Q}\) and \(\tilde{l}\) with a mixing parameter \(\zeta\):
\[\begin{array}{c}\tilde{Q}\cong Q+\zeta l,\\ \tilde{l}\cong l+\zeta Q,\end{array} \tag{7}\]
where the parameter \(\zeta\) is given by the amplitude ratio:
\[\frac{l}{Q}=\sqrt{\left(\frac{M}{m}\right)\frac{\omega_{ph}^{2}-\omega_{0}^{2} }{\omega_{ph}^{2}-\omega_{l}^{2}}}\simeq\sqrt{\frac{2M\omega_{ph}\left(\omega_ {ph}-\omega_{0}\right)}{m\left(\omega_{ph}^{2}-\omega_{l}^{2}\right)}}. \tag{8}\]
We estimate \(\zeta=\frac{l}{Q}\approx 0.3\); the coupling is thus quite strong (the standard dimensionless constant of magnon-phonon coupling rarely exceeds \(10^{-4}\), see e.g. [10]). Considering that \(E_{0}^{2}(t)\) is a strongly localized function, we can replace it with a delta function. In this approximation, the dynamics of the lattice and spins after the pulse action reduces to free oscillations, which can be described by the non-zero initial conditions \(-M\dot{Q}(+0)=\alpha E_{0}^{2}\cdot\tau\cdot\sin 2\gamma\); \(-m\dot{l}(+0)=\beta LE_{0}^{2}\cdot\tau\cdot\sin 2\gamma\), where \(E_{0}^{2}\cdot\tau=\int_{-\infty}^{\infty}E_{0}^{2}(t)dt\). Thus, the presence of spin-lattice hybridization leads to the excitation of both modes even if only one of the two phenomenological constants in Eq. (1), \(\alpha\) or \(\beta\), is non-zero. If the interaction of the pump pulse with the medium below T\({}_{\rm N}\) is dominated by the constant \(\beta\), the amplitude of the excited phonon should be proportional to the "fraction" of spin oscillations involved in the normal coordinate, i.e., to the parameter \(\zeta\sim L\), and the phonon amplitude is proportional to \(L^{2}\). The interaction of the probe pulse with the medium contains an additional multiplier \(L\), so the polarization rotation is expected to be proportional to \(L^{3}\sim(T_{N}-T)^{0.9}\), consistent with our observation, see Fig. 3c.
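The estimate \(\zeta\approx 0.3\) can be checked the same way from Eq. (8); the sketch below approximates \(\omega_{l}\) by the bare magnon gap of 3.1 THz, which is an assumption:

```python
import numpy as np

w_ph, w_0, w_l = 4.75, 4.64, 3.1   # THz; w_l approximated by the bare magnon gap
w_ex, w_D = 7.0, 2 * w_0           # effective masses: M/m = w_ex/w_D

zeta = np.sqrt((w_ex/w_D) * 2*w_ph*(w_ph - w_0) / (w_ph**2 - w_l**2))
print(f"zeta ~ {zeta:.2f}")        # ~0.25, of the order of the quoted ~0.3
```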
|
2308.07234 | UniWorld: Autonomous Driving Pre-training via World Models | In this paper, we draw inspiration from Alberto Elfes' pioneering work in
1989, where he introduced the concept of the occupancy grid as World Models for
robots. We imbue the robot with a spatial-temporal world model, termed
UniWorld, to perceive its surroundings and predict the future behavior of other
participants. UniWorld involves initially predicting 4D geometric occupancy as
the World Models for foundational stage and subsequently fine-tuning on
downstream tasks. UniWorld can estimate missing information concerning the
world state and predict plausible future states of the world. Besides,
UniWorld's pre-training process is label-free, enabling the utilization of
massive amounts of image-LiDAR pairs to build a Foundational Model. The proposed
unified pre-training framework demonstrates promising results in key tasks such
as motion prediction, multi-camera 3D object detection, and surrounding
semantic scene completion. When compared to monocular pre-training methods on
the nuScenes dataset, UniWorld shows a significant improvement of about 1.5% in
IoU for motion prediction, 2.0% in mAP and 2.0% in NDS for multi-camera 3D
object detection, as well as a 3% increase in mIoU for surrounding semantic
scene completion. By adopting our unified pre-training method, a 25% reduction
in 3D training annotation costs can be achieved, offering significant practical
value for the implementation of real-world autonomous driving. Codes are
publicly available at https://github.com/chaytonmin/UniWorld. | Chen Min, Dawei Zhao, Liang Xiao, Yiming Nie, Bin Dai | 2023-08-14T16:17:13Z | http://arxiv.org/abs/2308.07234v1 | # UniWorld: Autonomous Driving Pre-training via World Models
###### Abstract
In this paper, we draw inspiration from Alberto Elfes' pioneering work in 1989, where he introduced the concept of the occupancy grid as World Models for robots [1]. We imbue the robot with a spatial-temporal world model, termed UniWorld, to perceive its surroundings and predict the future behavior of other participants. UniWorld involves initially predicting 4D geometric occupancy as the World Models for foundational stage and subsequently fine-tuning on downstream tasks. UniWorld can estimate missing information concerning the world state and predict plausible future states of the world. Besides, UniWorld's pre-training process is label-free, enabling the utilization of massive amounts of image-LiDAR pairs to build a Foundational Model. The proposed unified pre-training framework demonstrates promising results in key tasks such as motion prediction, multi-camera 3D object detection, and surrounding semantic scene completion. When compared to monocular pre-training methods on the nuScenes dataset, UniWorld shows a significant improvement of about 1.5% in IoU for motion prediction, 2.0% in mAP and 2.0% in NDS for multi-camera 3D object detection, as well as a 3% increase in mIoU for surrounding semantic scene completion. By adopting our unified pre-training method, a 25% reduction in 3D training annotation costs can be achieved, offering significant practical value for the implementation of real-world autonomous driving. Codes are publicly available at [https://github.com/chaytonmin/UniWorld](https://github.com/chaytonmin/UniWorld).
## 1 Introduction
The multi-camera 3D perception systems in autonomous driving offer a cost-effective solution to gather \(360^{\circ}\) environmental information around vehicles, making it a hot research area recently [2; 3; 4; 5; 6]. However, current multi-camera 3D perception models [7; 8; 9; 10; 11; 12] usually rely on pre-trained ImageNet models [13] or depth estimation models [7] on monocular images. These models fail to take into account the inherent spatial and temporal correlations presented in multi-camera systems. Additionally, while monocular pre-training enhances the capability of image feature extraction, it does not address the pre-training requirements of subsequent tasks. Autonomous driving vehicles collect vast amounts of image-LiDAR pairs, which contain valuable spatial and temporal information. Thus, effectively utilizing these unlabeled image-LiDAR pairs can be beneficial for enhancing the pre-training performance of autonomous driving systems.
Recent studies, such as BEVDepth [10] and DD3D [14], have underscored the significance of depth estimation in visual-based perception algorithms. Monocular depth estimation plays a crucial role in acquiring spatial position information for objects. However, depth estimation methods typically focus on estimating the depth of object surfaces, neglecting the holistic 3D structure of objects and occluded elements. For robot systems, a geometric occupancy grid provides a unified and consistent world model for robotic tasks, such as obstacle avoidance, path planning, and navigation [1]. Achieving precise geometric occupancy prediction is instrumental in enhancing the overall 4D perception accuracy within multi-camera perception systems [15]. Hence, in the field of autonomous driving, as illustrated in Figure 1, the pre-training of models would yield greater benefits by prioritizing the reconstruction of the 4D geometric occupancy of the surrounding scene, rather than solely emphasizing depth prediction.
With an inherent World Model, humans possess the remarkable ability to mentally reconstruct the complete 3D geometry of occluded scenes and anticipate the future motion trajectories of objects in the surrounding scene, which is crucial for recognition and understanding. To imbue the perception system of autonomous vehicles with similar spatial-temporal world models, we propose a multi-camera unified pre-training method, called UniWorld. Our approach leverages the intuitive concept of using the multi-camera system to learn a compressed spatial and temporal representation of the environment (i.e., World models) as the foundational stage, followed by fine-tuning downstream tasks. In the case of multi-camera BEV perception, the input multi-camera images are transformed to the BEV space using advanced techniques like LSS [16] or Transformer [7], and then a geometric occupancy prediction head is incorporated to learn the 4D occupancy distribution, thereby enhancing the model's understanding of the 4D surrounding scene. Due to the sparsity of single-frame point clouds, we employed multi-frame point cloud fusion as the ground truth for 4D occupancy label generation. The decoder was solely used for pre-training, while the well-trained model was utilized to initialize the multi-camera perception models. By designing an effective multi-camera unified pre-training method, we enable the pre-trained model to exploit the rich spatial and temporal information inherent in the unlabeled data. This not only improves the model's ability to understand complex 4D scenes but also reduces reliance on costly and time-consuming manual 3D annotation.
To evaluate the effectiveness of our approach, we conducted extensive experiments using the widely used autonomous driving dataset nuScenes [17]. The experimental results demonstrate the superiority of our multi-camera unified pre-trained model compared to existing monocular pre-training methods across various 3D perception tasks, including motion prediction, 3D object detection and semantic scene completion. For the motion prediction task, our multi-camera unified pre-training algorithm exhibits a 1.8% increase in IoU and a 1.7% improvement in VPQ compared to monocular approaches. This indicates that our algorithm is capable of learning future information by constructing World models. In the 3D object detection task, the proposed UniWorld-3D achieves a significant improvement of 2.0% in mAP and 2.0% in NDS when compared to monocular pre-training methods. This indicates that our model is better equipped to accurately detect and localize objects in a 3D environment. For the semantic scene completion task, UniWorld-3D demonstrates a noteworthy improvement of approximately 3% in mIoU, indicating that our model is more effective in reconstructing and predicting the semantic labels of the surrounding environment. Besides, through the implementation of our integrated pre-training approach, a noteworthy 25% reduction in costs related to 3D training annotations can be realized. This achievement holds considerable practical significance for the seamless integration of autonomous driving technology into real-world scenarios. The superior performance of our model can be attributed to its ability to effectively leverage unlabeled data, as well as its consideration of spatial and temporal correlations. By incorporating information from multiple camera views, our model can better capture the rich contextual and temporal information present in the scene, leading to enhanced perception capabilities in autonomous driving scenarios.

Figure 1: Comparison between monocular pre-training and our unified multi-camera pre-training. Monocular pre-training only enhances the capability of the feature extraction from a single view, whereas our proposed multi-view unified pre-training enables the incorporation of temporal and spatial information from multi-view images through World Models for pre-training.
The main contributions of this work are listed below:
* We propose to learn spatial-temporal World Models for unified autonomous driving pre-training, which involves initially reconstructing the 4D surrounding scene as the foundational stage and subsequently fine-tuning on downstream tasks.
* UniWorld has the capability to estimate missing information concerning the 3D world state and predict plausible future states of the 4D world.
* UniWorld's pre-training process is label-free, enabling the utilization of massive amounts of image-LiDAR pairs collected by autonomous vehicles to build a Foundational Model.
* By adopting our unified pre-training method, a 25% reduction in costly 3D annotation costs can be achieved, offering significant practical value for the implementation of real-world autonomous driving.
## 2 Related Work
### Multi-Camera 3D Perception
In the field of autonomous driving, vision-based 3D perception conducted in bird's eye view has gained significant attention in recent years [18; 19; 20; 21; 22]. Learning-based BEV perception methods, based on 2D-to-3D view transformation, can be broadly categorized into geometry-based and Transformer-based approaches [23; 24; 25; 26; 27]. One of the early geometry-based methods is LSS [16], which lifts each individual image into a frustum of features for each camera and then combines them into a rasterized BEV grid. Building upon LSS, BEVDet [9] introduces image-view and BEV space data augmentation techniques. BEVDepth [10] demonstrates the significance of depth and improves the quality of BEV features by incorporating explicit depth supervision from LiDAR. BEVStereo [28] and STS [29] leverage temporal multi-view stereo methods to enhance depth precision. SOLOFusion [30] and VideoBEV [31] explore long-term temporal fusion for multi-view 3D perception. DETR3D [7] is the first Transformer-based BEV method, which defines object queries in 3D space and learns from multi-view image features using a transformer decoder. Building upon DETR3D, PETR [32] enhances the approach with position embedding transformations, while BEVFormer [8] introduces temporal self-attention to fuse historical BEV features. UniAD [33] extends BEVFormer to enable multi-task learning in BEV space. Although the existing BEV perception methods have shown promising performance, they are typically initialized with ImageNet pre-trained models [13] or depth pre-trained models [14] trained on monocular images. However, there is a lack of unified pre-training methods that effectively leverage the geometric structure of multi-camera inputs.

Figure 2: The overall architecture of the proposed multi-camera unified pre-training method UniWorld. We first transform the multi-frame large-scale irregular LiDAR point clouds into volumetric representations as the 4D geometric occupancy labels, then add an occupancy decoder with some layers of 3D convolutions to the BEV encoder. We apply binary occupancy classification as the pretext task to distinguish whether the 4D voxel contains points. After pre-training, the lightweight decoder is discarded, and the encoder is used to warm up the backbones of downstream tasks.
### Label-free Pre-training
Label-free pre-training has gained significant popularity in recent years as it eliminates the need for expensive data annotation. For instance, the method presented in [34] focuses on predicting the relative location of image patches as the pretext task. Another approach, as described in [35], tackles a jigsaw puzzle prediction task, which demonstrates strong generalization capabilities for domain adaptation in object recognition. DeepCluster [36] and SwAV [37] leverage k-means clustering to obtain pseudo-labels, which are then used to train the network. Moco [38] and BYOL [39] construct contrastive views for self-supervised learning. Additionally, methods like MAE [40] and BEiT [41] employ a random patch masking approach where missing pixels or features are reconstructed using a simple autoencoder framework. In the context of automated vehicle perception, DD3D utilizes monocular depth estimation for pre-training. Voxel-MAE [42] and ALSO [43] propose predicting occupancy for LiDAR perception as the pretext task. Our previous work Occ-BEV [44] defines the task of multi-camera unified pre-training and reconstructs the 3D static surrounding scene as the foundational stage. In this work, we extend Occ-BEV to utilize 4D geometric occupancy prediction to learn spatial-temporal world models for vision-based perception.
### World Models
The concept of employing world models by humans, animals, and intelligent systems has a historical foundation in psychology, dating back to the work of Craik in 1943 [45]. Alberto Elfes proposed the geometric occupancy grid as a world model for robot perception and navigation in 1989 [1]. David Ha proposed that a world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment [46]. Methods in [47; 48; 49] presuppose access to rewards and online interaction with the environment, acting from predictions in the compact latent space of a world model in reinforcement learning. [50; 51; 52] learned the latent dynamics of a world model from image observations via video prediction. MILE [53] proposed to build the world model by predicting the future BEV segmentation from high-resolution videos of expert demonstrations for autonomous driving. In this paper, we follow the concept of the occupancy grid as a robot's world model introduced in 1989 [1], and propose a label-free spatiotemporal fused world model for autonomous driving by integrating future prediction techniques inspired by methods in reinforcement learning [46] and MILE [53].
## 3 Methodology
This section provides a detailed description of the network architecture employed by UniWorld, depicted in Figure 2. We commence by examining the vision-based bird's eye view (BEV) perception methods in Section 3.1. Following that, in Section 3.1.2, we present the proposed 4D world model utilized for pre-training. Additionally, we compare this model to existing monocular pre-training, knowledge distillation, and 3D static pre-training methods in Section 3.2.
### Review of BEV Perception
As discussed in the related works, there are two primary learning-based approaches for transforming 2D images into 3D space: LSS-based [16] and Transformer-based [7] view transformations. However, our method is not restricted to any specific view transformation technique. In the subsequent sections, we will provide a comprehensive outline of the workflow for multi-camera perception algorithms based on the bird's eye view.
The input images from multiple cameras, denoted as \(I=\{I_{i},i=1,2,...,N_{view}\}\), are initially processed by an image backbone network, such as ResNet-101 [54], which generates feature maps \(F_{2d}=\{F_{2d}^{i}\}_{i=1}^{N_{view}}\) for each camera view. These features are then passed through a 2D-to-3D view transformation operation, projecting them onto a unified bird's eye view representation denoted as \(F_{bev}\in\mathbb{R}^{C\times H\times W}\). By incorporating specific heads, various autonomous driving perception tasks can be performed on the bird's eye view, including 3D object detection, map segmentation, object tracking, and more [33].
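To make the tensor shapes concrete, a minimal PyTorch-style sketch of this pipeline follows; the single-convolution backbone and the pooling-based view transform are illustrative placeholders, not the actual ResNet-101, LSS or Transformer modules:

```python
import torch

N_view, C, H, W = 6, 256, 128, 128           # cameras; BEV channels and grid
images = torch.randn(N_view, 3, 900, 1600)   # multi-camera inputs I_1..I_Nview

# 1) image backbone -> per-view 2D feature maps F_2d^i
backbone = torch.nn.Conv2d(3, 64, kernel_size=7, stride=16, padding=3)
F_2d = backbone(images)                      # (N_view, 64, 57, 100)

# 2) 2D-to-3D view transformation (LSS- or Transformer-based in practice);
#    a pooling projection stands in, producing F_bev in R^{C x H x W}
view_transform = torch.nn.Sequential(
    torch.nn.AdaptiveAvgPool2d((H, W)),
    torch.nn.Conv2d(64, C, kernel_size=1))
F_bev = view_transform(F_2d).sum(dim=0)      # fuse the N_view views

# 3) task heads (3D detection, map segmentation, tracking, ...) consume F_bev
print(F_bev.shape)                           # torch.Size([256, 128, 128])
```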
Current bird's eye view perception algorithms [8; 7; 9] often rely on feature extraction models (e.g., ImageNet [13]) or depth estimation models (e.g., V2-99 [14]) trained on monocular images. However, these approaches do not consider the interplay and correlation between images captured from different camera views and frames, leading to a lack of a unified pre-training model that utilizes the spatial and temporal relationships between different camera views. In order to fully exploit these relationships, we propose a multi-camera unified pre-training model.
Methods such as BEVDepth [10] and DD3D [14] demonstrate the importance of depth estimation for visual-based perception algorithms. However, depth estimation can only estimate the position of the object's surface, ignoring the occluded parts of objects. For multi-camera systems, precise occupancy prediction is beneficial to the accuracy of perception. To enable the model to possess the capabilities of occupancy scene completion and future prediction simultaneously, we propose to build world models for multi-camera unified pre-training via 4D geometric occupancy prediction.
#### 3.1.1 4D Geometric Occupancy Decoder
In order to predict 4D geometric occupancy using the BEV features \(F_{bev}\), we begin by transforming the BEV features into \(F_{bev}^{{}^{\prime}}\in\mathbb{R}^{C^{\prime}\times D\times H\times W}\), where D represents the number of height channels, and \(C=C^{{}^{\prime}}\times D\). Subsequently, we utilize a 3D decoder specifically designed to generate 4D geometric occupancy. This decoder consists of lightweight 3D convolution layers, with the final layer providing the probability of each voxel containing points. The output of the decoder is denoted as \(P=\{P^{i}\in\mathbb{R}^{D\times H\times W\times 1},i=1,2,...,m\}\). During the pre-training phase, the main objective of the decoder is to reconstruct occupied voxels.
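A minimal sketch of such a decoder is shown below; the layer widths and the number of height channels \(D\) are illustrative assumptions, not the exact configuration used in the experiments:

```python
import torch
import torch.nn as nn

class OccupancyDecoder(nn.Module):
    """BEV features -> per-voxel occupancy probability via light 3D convolutions."""
    def __init__(self, C=256, D=16):
        super().__init__()
        assert C % D == 0
        self.D, self.Cp = D, C // D          # C = C' x D
        self.net = nn.Sequential(            # a few lightweight 3D conv layers
            nn.Conv3d(self.Cp, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 1, 3, padding=1))  # last layer: occupancy logit

    def forward(self, F_bev):                # F_bev: (B, C, H, W)
        B, C, H, W = F_bev.shape
        F_3d = F_bev.view(B, self.Cp, self.D, H, W)   # reshape to (B, C', D, H, W)
        return torch.sigmoid(self.net(F_3d))          # P in (B, 1, D, H, W)

P = OccupancyDecoder()(torch.randn(2, 256, 128, 128))
print(P.shape)   # torch.Size([2, 1, 16, 128, 128])
```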
#### 3.1.2 Pre-training Target
Considering the sparsity of single-frame LiDAR point clouds and the potential inaccuracies that arise from fusing a large number of frames due to dynamic objects, we adopt a strategy of fusing LiDAR point clouds from selected keyframes to generate occupancy labels. Following the standard practice in 3D perception models [55; 56; 57; 58], the LiDAR point clouds are divided into evenly spaced voxels. For the dimensions of the LiDAR point clouds along \(Z\times Y\times X\) (denoted as \(D\times H\times W\)), the voxel size is determined as \(v_{Z}\times v_{H}\times v_{W}\) respectively. The 4D ground truth \(T=\{T^{i}\in\{0,1\}^{D\times H\times W\times 1},i=1,2,...,m\}\) is generated based on the occupancy of the voxels, indicating whether each voxel contains points or not. A value of 1 represents an occupied voxel, while a value of 0 indicates a free voxel.
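A NumPy sketch of this label generation is given below; the point-cloud range and voxel sizes are illustrative values rather than the exact nuScenes settings:

```python
import numpy as np

def occupancy_labels(points, pc_range, voxel_size):
    """Binary occupancy grid T in {0,1}^{D x H x W} from fused keyframe points.

    points:     (N, 3) fused x, y, z coordinates in the ego frame
    pc_range:   (x_min, y_min, z_min, x_max, y_max, z_max)
    voxel_size: (v_W, v_H, v_Z) spacing along x, y, z
    """
    lo, hi = np.asarray(pc_range[:3]), np.asarray(pc_range[3:])
    vs = np.asarray(voxel_size)
    dims = np.round((hi - lo) / vs).astype(int)       # (W, H, D)
    keep = np.all((points >= lo) & (points < hi), axis=1)
    idx = ((points[keep] - lo) / vs).astype(int)      # voxel index (x, y, z)
    T = np.zeros(dims[::-1], dtype=np.uint8)          # (D, H, W), free = 0
    T[idx[:, 2], idx[:, 1], idx[:, 0]] = 1            # occupied voxels = 1
    return T

pts = np.random.uniform([-25.6, -25.6, -3], [25.6, 25.6, 5], size=(1000, 3))
T = occupancy_labels(pts, (-25.6, -25.6, -3, 25.6, 25.6, 5), (0.4, 0.4, 0.4))
print(T.shape, int(T.sum()))   # (20, 128, 128) and the occupied-voxel count
```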
We propose the binary geometric occupancy classification task as part of the pre-training process for multi-camera perception models. The goal of this task is to train the network to accurately predict the distribution of geometric occupancy in a 4D scene based on multi-view images. However, due to the significant number of empty voxels, predicting occupancy grids presents an imbalanced binary classification challenge. To address this, we employ focal loss for binary occupancy classification, leveraging the predicted 4D occupied values \(P\) and the 4D ground truth occupied voxels \(T\)
\[loss=-\frac{1}{m}\frac{1}{n}\sum_{i=1}^{m}\sum_{j=1}^{n}\alpha_{t}\left(1-P_{t}^{ ij}\right)^{\gamma}\log(P_{t}^{ij}), \tag{1}\]
where \(P^{ij}\) is the predicted probability of voxel \(j\) in the \(i\)-th occupancy grid, \(n=D\times H\times W\) is the total number of voxels, and \(m\) is the number of occupancy grids in the batch. The weighting factor \(\alpha\) for positive/negative examples is set to 0.25, and the focusing parameter \(\gamma\) for easy/hard examples to 2. \(\alpha_{t}=\alpha\) and \(P_{t}^{ij}=P^{ij}\) for class 1; \(\alpha_{t}=1-\alpha\) and \(P_{t}^{ij}=1-P^{ij}\) for class 0.
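A minimal PyTorch sketch of this voxel-wise focal loss (with the \(\alpha=0.25\), \(\gamma=2\) values above as defaults) reads:

```python
import torch

def occupancy_focal_loss(P, T, alpha=0.25, gamma=2.0):
    """Binary focal loss of Eq. (1) over per-voxel occupancy probabilities.

    P: (m, D, H, W) predicted probabilities; T: (m, D, H, W) binary labels.
    """
    P = P.clamp(1e-6, 1 - 1e-6)              # numerical safety for the log
    P_t = torch.where(T == 1, P, 1 - P)      # P_t^{ij}
    a_t = torch.where(T == 1, torch.full_like(P, alpha),
                      torch.full_like(P, 1 - alpha))
    return -(a_t * (1 - P_t) ** gamma * P_t.log()).mean()   # 1/(mn) sum

loss = occupancy_focal_loss(torch.rand(2, 16, 128, 128),
                            (torch.rand(2, 16, 128, 128) > 0.9).float())
print(loss.item())
```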
Currently, the 4D geometry occupancy labels used in the algorithm are obtained from multi-frame LiDAR point clouds. In the future, it is also feasible to utilize point clouds generated from 3D scene reconstructions using techniques such as NeRF [59; 60; 61; 62] or MVS [63; 64; 65; 66].
#### 3.1.3 Pre-training for Surrounding Semantic Occupancy Prediction
Recently, several algorithms, such as TPVFormer [67], OpenOccupancy [68], and Occ3D [69], have expanded the scope of multi-camera BEV perception to include the task of surrounding semantic scene completion [70; 71; 72; 73; 74]. However, directly predicting the 3D semantics of multi-view images requires a significant amount of 3D semantic annotations for training, which can be expensive and time-consuming. To overcome this challenge, we propose to extend our multi-camera unified pre-training algorithm to include the surrounding semantic scene completion task. This involves initially performing geometric occupancy prediction and subsequently fine-tuning the model on the semantic scene completion task.
### Comparison with Existing Methods
#### 3.2.1 Comparison with Monocular Pre-training
Currently, multi-camera perception algorithms predominantly utilize either monocular image pre-training on ImageNet [13] or depth estimation pre-training [14]. In Figure 1, we highlight several advantages of our proposed multi-camera unified pre-training model over monocular pre-training: (1) **Spatial-Temporal Integration**: By leveraging spatial and temporal information from multiple camera views, the model gains a better understanding of the dynamic nature of the environment, leading to more accurate predictions. (2) **Unified Representation**: The unified pre-training approach enables the model to learn a shared representation across different camera views, facilitating improved knowledge transfer and reducing the necessity for task-specific pre-training. (3) **Perception of Occluded Areas**: Monocular depth estimation can only predict surface positions of objects, while our proposed multi-camera unified pre-training method enables comprehensive 3D reconstruction of occluded objects.
#### 3.2.2 Comparison with Knowledge Distillation
Recently, there have been advancements in knowledge distillation algorithms such as BEVDistill [75], TiG-BEV [76] and GeoMIM [77], which aim to transfer knowledge from well-established 3D LiDAR models like CenterPoint [57] to multi-camera object detection algorithms. Similarly, our approach aims to leverage the rich spatial information present in 3D point clouds and transfer it to multi-camera algorithms. Our unique pre-training algorithm eliminates the need for annotations or pre-trained LiDAR detection models, significantly reducing the 3D annotation requirements.
## 4 Experiments
### Experimental Setup
We conducted extensive experiments on the nuScenes dataset [17]. We adopted the training settings from the existing methods: DETR3D [7] and BEVFormer [8], which are two Transformer-based methods, and BEVerse [78], BEVDet [9], BEVDepth [10], and BEVStereo [28], which are four
LSS-based methods. We performed pre-training for a total of 24 epochs. The occupancy decoder consists of two 3D convolutional layers. For more detailed information about the parameter setups, please refer to the papers of DETR3D, BEVFormer, BEVerse, BEVDet, BEVDepth and BEVStereo. All experiments were conducted using 8 Nvidia Tesla A40 GPU cards.
### Results on Downstream Tasks
#### 4.2.1 Multi-Camera 3D Object Detection
We first comprehensively validate the performance of the proposed multi-camera unified pretraining algorithm on a multi-task model, i.e., BEVerse [78]. As shown in Table 1, compared to the baseline, both Occ-BEV [44] and UniWorld significantly enhance the performance of 3D object detection, semantic map construction, and motion prediction. In comparison to Occ-BEV's 3D reconstruction pre-training, UniWorld introduces a 4D occupancy prediction auxiliary task, enabling the model to learn 4D motion information and construct a more comprehensive 4D world model. In the motion prediction task, UniWorld's improvement over Occ-BEV results in a 1% increase in IoU and VPQ.
We subsequently performed an assessment of the performance of multi-camera unified pre-training models in the 3D object detection task using the validation set of nuScenes. As shown in Table 2, our multi-camera unified pre-training methods Occ-BEV and UniWorld exhibited significant improvements over monocular FCOS3D [79]. Occ-BEV surpassed FCOS3D [79] on DETR3D [7] by achieving a 2.7% increase in NDS and 1.1% in mAP. Additionally, it outperformed BEVFormer [8] with a 2.6% improvement in NDS and 2.1% in mAP for ImageNet baseline, a 1.7% improvement in NDS and 2.2% in mAP for FCOS3D baseline. We present the convergence curve of BEVFormer [8] in Figure 3 for FCOS3D baseline. Our unified pre-training Occ-BEV significantly enhances BEVFormer [8] at the initial epoch, achieving a 4% increase in NDS. This demonstrates that our unified pre-training method delivers accurate object position information from a global perspective.
Compared to the 3D reconstruction pretraining of Occ-BEV [44], the introduction of 4D occupancy prediction in UniWorld resulted in slightly inferior performance in the 3D object detection task. This discrepancy is likely attributed to the fact that 4D prediction allows the model to learn future information but introduces uncertainty in the object's position. Further resolution of this issue is required.
For further validation, we conducted additional experiments on the nuScenes test set to validate the effectiveness of our proposed multi-camera unified pre-training method Occ-BEV via 3D scene reconstruction compared to pre-training based on monocular depth estimation. As presented in Table 3, our multi-camera unified pre-training method demonstrated a significant improvement of about 1.8% in both mAP and NDS compared to the DETR3D [7] pre-trained on DD3D [14] for depth estimation.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Pre-train**} & \multicolumn{2}{c|}{**Detection**} & \multicolumn{4}{c|}{**Semantic map**} & \multicolumn{2}{c}{**Motion**} \\ & & **NDS\({}^{\dagger}\)** & **MAP\({}^{\dagger}\)** & **Dibider\({}^{\dagger}\)** & **Ped Cross\({}^{\dagger}\)** & **Boundary\({}^{\dagger}\)** & **mIoU\({}^{\dagger}\)** & **IoU\({}^{\dagger}\)** & **VPQ\({}^{\dagger}\)** \\ \hline \multirow{4}{*}{BEVPerse [78]} & ImageNet [13] & 0.466 & 0.321 & 53.2 & 39.0 & 53.9 & 48.7 & 38.7 & 33.3 \\ \cline{2-11} & ImageNet + UniWorld-3D & \(0.483^{+1.75}\) & **0.341\({}^{+2.75}\)** & **54.81\({}^{+2.75}\)** & **40.41\({}^{+1.5}\)** & \(55.0^{+1.1}\) & **50.0\({}^{+1.3}\)** & \(33.4^{+0.7}\) & \(34.1^{+0.8}\) \\ \cline{1-1} \cline{2-11} & ImageNet + UniWorld-4D & **0.484\({}^{+1.50}\)** & \(0.331^{+1.05}\) & \(54.7^{+1.5}\) & \(40.1^{+1.1}\) & \(54.9^{+1.0}\) & \(49.9^{+1.2}\) & **40.5\({}^{+1.3}\)** & **35.0\({}^{+1.7}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative multi-task learning performance on the nuScenes validation set.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline
**Type** & **Method** & **Pre-train** & **Benchiner** & **Image Size** & **CIGS** & **mAP\({}^{\dagger}\)** & **NDS\({}^{\dagger}\)** & **mAP\({}^{\dagger}\)** & **mASE\({}^{\dagger}\)** & **mAGE\({}^{\dagger}\)** & **mAVE\({}^{\dagger}\)** & **mALE\({}^{\dagger}\)** \\ \hline \multirow{4}{*}{Transformer [7]} & DETR3D [7] & FCOS3D [79] & 80.101-DCN & 900.1600 & \(\sim\) & 0.39 & 0.434 & 0.716 & 0.288 & 0.379 & 0.842 & 0.200 \\ \cline{2-11} & \multirow{4}{*}{TENOSD [7]} & TENOSD + UnWorld-3D & 80.101-DCN & 900.1600 & \(\sim\) & **4.300\({}^{+1.07}\)** & **0.441\({}^{+1.75}\)** & 0.30 & 0.320 & 0.372 & 0.130 & 0.188 \\ \cline{1-1} \cline{2-11} & & ImageNet [13] & R101-DCN & 900.1600 & \(\sim\) & 0.371 & 0.479 & 0.688 & 0.275 & 0.441 & 0.382 & 0.195 \\ \cline{1-1} \cline{2-11} & \multirow{4}{*}{BEVFormer [8]} & ImageNet + UniWorld-3D & R101-DCN & 900.1600 & \(\sim\) & **0.397\({}^{+1.05}\)** & 0.409\({}^{+1.05}\)** & 0.680 & 0.272 & 0.962 & 0.300 & 0.193 \\ \cline{1-1} \cline{2-11} & & BEVFormer [8] & ImageNet + UniWorld-4D & R101-DCN & 900.1600 & \(\sim\) & 0.3020\({}^{+1.05}\)** & **0.501\({}^{+1.05}\)** & 0.683 & 0.208 & 0.304 & 0.374 & 0.192 \\ \cline{1-1} \cline{2-11} & \multirow{4}{*}{BEVFormer [8]} & FCOS3D [79] & R101-DCN & 900.1600 & \(\sim\) & 0.416 & 0.517 & 0.479 & 0.247 & 0.372 & 0.394 & 0.198 \\ \cline{1-1} \cline{2-11} & & FCOSD + UnWorld-3D & R101-DCN & 900.1600 & \(\sim\) & **0.448\({}^{+1.05}\)** & **0.534\({}^{+1.05}\)** & 0.650 & 0.271 & 0.371 & 0.348 & 0.183 \\ \cline{1-1} \cline{2-11} & & FCOSD + UnWorld-4D & R101-DCN & 900.1600 & \(\sim\) & 0.422\({}^{+1.05}\)** & 0.539\({}^{+1.05}\)** & 0.659 & 0.274 & 0.375 & 0.344 & 0.188 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative multi-camera 3D object detection performance on the nuScenes validation set.
This highlights the effectiveness and superiority of our pre-training approach in enhancing the performance of 3D perception tasks. Compared to the monocular depth estimation approach of DD3D [14], our pre-training method considers the complete 3D structure of objects, beyond the partial surfaces captured by LiDAR. Moreover, it incorporates the learning of multi-view and temporal information, allowing for a more comprehensive understanding of the scene. The above results indicate that our proposed Occ-BEV model has a promising application in autonomous driving. As UniWorld's performance in the 3D object detection task is slightly inferior to Occ-BEV's, we do not present its results on the test set here.
We also compared our proposed multi-camera unified pre-training method Occ-BEV with the knowledge distillation approach BEVDistill [75]. As shown in Table 6, our method demonstrates comparable performance to the knowledge distillation method trained on annotated LiDAR point cloud data. It is worth noting that our approach offers higher efficiency and broader applicability, since it does not rely on data annotation or on training LiDAR point cloud models as BEVDistill [75] does.
#### 4.2.2 Multi-Camera Semantic Occupancy Prediction
We also evaluated the performance of our proposed multi-camera unified pre-training method Occ-BEV on the task of multi-camera semantic scene completion. Compared to BEV perception, the task of predicting semantic labels for each voxel in 3D space, known as surrounding semantic scene completion, is more challenging. To tackle this challenge, we decomposed the task into two steps: first reconstructing the 3D scene as the fundamental model and then simultaneously reconstructing and predicting semantics. As shown in Table 4, on the 3D occupancy prediction challenge [80], our algorithm achieved a 3% improvement in mIoU compared to BEVStereo [28], highlighting the effectiveness of our approach in addressing the complexities of surrounding semantic occupancy prediction.
### Ablation Studies
In this section, we perform thorough ablation experiments to investigate the individual components of our unified pre-training method UniWorld-3D with multi-camera 3D object detector BEVFormer [8] on nuScenes _val_ set.
#### 4.3.1 Data-efficient Learner
Fine-tuning models using limited labeled data is made possible through pre-training. In order to evaluate the data efficiency of UniWorld-3D, we conducted experiments using varying amounts of labeled data for fine-tuning. BEVFormer [8] was utilized as the backbone, and the detection performance of the model was assessed on the nuScenes validation set. The results, as depicted in Figure 4, demonstrate that when BEVFormer is trained with 75% of the labeled data, it achieves the same performance as when trained on the complete dataset. Moreover, even with only 25% of the samples available for fine-tuning, our UniWorld model outperforms BEVFormer by 1% in mAP, highlighting its remarkable data efficiency and its potential to reduce the reliance on expensive human-annotated 3D data.
#### 4.3.2 Multi-frame Fusion
We conducted an analysis of the influence of the number of fused LiDAR frames on the pre-trained model. As more frames were fused, the density of the point clouds increased. Our comparison included single-frame fusion, 3-frame fusion and 5-frame fusion (including the corresponding non-key frames) and the results are presented in Table 5. It is clear that the model's accuracy initially improved with an increasing number of fused point clouds but started to decline afterwards. This finding suggests that fusing multiple frames of point clouds can enhance the effectiveness of the pre-trained model. However, it is important to note that excessive fusion of frames introduces uncertainty due to the presence of dynamic objects. This uncertainty can lead to errors in the fusion process and subsequently lower the accuracy of the model.
#### 4.3.3 Explicit Semantic Supervision
Labeled 3D data can be utilized to handle dynamic objects separately during the point cloud fusion process, resulting in more precise occupancy grid ground truth for multi-frame fusion. Subsequently, we examined the impact of explicit occupancy grid prediction on the model's performance. The results in Table 7 demonstrate that incorporating explicit supervision leads to a notable improvement of 3% in mAP and NDS compared to BEVFormer [8]. Furthermore, when compared to unlabeled multi-frame fused point cloud pre-training, there is a 1% increase in mAP. These findings highlight the potential of leveraging labeled data for explicit occupancy prediction supervision. Moreover, they further support the proposition that occupancy prediction enables the model to learn the data distribution of the entire 3D scene, thereby enhancing the accuracy of downstream tasks.
### Qualitative Evaluation
As shown in Figure 5, we present several reconstructed scenes. It can be observed that using single-frame point clouds as the supervision for occupancy grid generation results in incomplete reconstructions due to the sparse nature of the LiDAR point clouds. On the other hand, using three keyframes and their corresponding non-key frames as the supervision information allows for more complete reconstructions of the scenes.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Methods** & **mAP\(\uparrow\)** & **NDS\(\uparrow\)** \\ \hline BEVFormer [8] & 0.352 & 0.423 \\ BEVDistill [75] & 0.386 & 0.457 \\ \hline UniWorld-3D & **0.389** & **0.459** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison with BEVDistill [75].
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline
**Methods** & **mAP\(\uparrow\)** & **NDS\(\uparrow\)** & **mATE\(\downarrow\)** \\ \hline BEVFormer [8] & 0.416 & 0.517 & 0.673 \\ UniWorld-3D & 0.438 & 0.534 & 0.656 \\ \hline Supervision & **0.445** & **0.544** & **0.648** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Semantic occupancy supervision.
**Limitations.** Although our multi-camera unified pre-training approach has demonstrated promising results, there are several limitations to consider: (1) Currently, UniWorld-4D's performance in the 3D object detection task is slightly inferior to UniWorld-3D's. It is necessary to address the decrease in 3D static detection performance caused by temporal information prediction. (2) The 3D convolutions in the decoder limit its applicability to tasks requiring high-resolution occupancy reconstruction. We will explore a cascade refinement strategy. (3) We rely on LiDAR to obtain ground truth occupancy grids. In the future, we will explore NeRF [59, 60, 61, 62] and MVS [63, 64, 65, 66] algorithms to reconstruct 3D scenes solely from multi-view images and obtain ground truth data.
## 5 Conclusion
We introduce a unified pre-training algorithm founded on 4D occupancy prediction to estimate missing information concerning the 3D world state and predict plausible future states of the 4D world. This approach has showcased remarkable efficacy across diverse autonomous driving tasks, such as motion prediction, multi-camera 3D object detection and surrounding semantic scene completion. Pre-training via World Models with unlabeled image-LiDAR pairs offers promising opportunities for reducing dependence on annotated 3D data by 25% and for establishing a Foundational Model for autonomous driving and robotics. Future work should focus on addressing the limitations mentioned and further improving the performance and applicability of our approach in real-world autonomous driving scenarios.
|
2310.12315 | Gravitational form factors of light nuclei: Impulse approximation | The gravitational form factors of light nuclei are evaluated up to momenta of
the order of the nucleon mass, using the impulse approximation. The nucleon
gravitational form factors are reduced non-relativistically, and used to derive
the gravitational form factors of light nuclei. The deuteron gravitational form
factors are analysed using the Reid soft core potential. The helium-4
gravitational form factors are assessed using the K-harmonics method, and
compared to those following from a mean-field approximation with a Woods-Saxon
potential. The importance of removing the center of mass motion for the ensuing
form factors is emphasized. The mass radii of these light nuclei are extracted
and compared to their charge radii counterparts. The details of their pressure
and shear distributions are discussed. | Fangcheng He, Ismail Zahed | 2023-10-18T20:38:14Z | http://arxiv.org/abs/2310.12315v2 | # Gravitational form factors of light nuclei: Impulse approximation
###### Abstract
The gravitational form factors of light nuclei are evaluated up to momenta of the order of the nucleon mass, using the impulse approximation. The nucleon gravitational form factors are reduced non-relativistically, and used to derive the gravitational form factors of light nuclei. The deuteron gravitational form factors are analysed using the Reid soft core potential. The Helium-4 gravitational form factors are assessed using the K-harmonics method, and compared to those following from a mean-field approximation with a Woods-Saxon potential. The importance of removing the center of mass motion for the ensuing form factors is emphasized. The mass radii of these light nuclei are extracted and compared to their charge radii counterparts. The details of their pressure and shear distributions are discussed.
## I Introduction
The gravitational form factors of the nucleon carry important information on its mass distribution, most of which is carried by constitutive gluons. Recently, threshold photoproduction of charmonium at JLab [1] has opened the possibility of measuring the gluonic component of the nucleon gravitational form factors. The high statistics results reported by the E12-007 collaboration [2] suggest a smaller mass radius for the proton in comparison to its electromagnetic radius.
Threshold electromagnetic production of charmonium off light nuclei could open the possibility of understanding the nuclear effects on the gravitational form factors. The nucleus is a collection of nucleons (protons and neutrons) bound by strong QCD interactions. Most of what is known about nuclei has been gleaned using electromagnetic probes at intermediate energies [3], where the nucleons appear as rigid but extended bodies exchanging mesons, albeit mostly pions [4] (and references therein). The disparity between the fundamental and unconfined degrees of freedom of QCD (quarks and gluons) and the observed but confined degrees of freedom (mesons and nucleons) calls for novel probes. The ultimate goal is to understand the composition of the nucleons, and how the nuclear interactions emerge in a nucleus.
The difficult character of the strong nuclear interaction has required the use of approximate models to account for the motion of the nucleons in a bound nucleus. Mean field models, of which the shell model is the ultimate realization, have proven successful in interpreting many aspects of low and intermediate energy nuclear physics. However, much is still needed for a theory to be sufficiently accurate and predictive. For this reason, the study of simpler nuclei such as deuterium, triton and Helium-3,4 should prove useful for the study of novel probes, such as the one provided by the EMT.
The simplest nuclear system is of course the deuteron. Its binding energy (2.225 MeV), charge radius (2.13 fm) and magnetic moment (0.857 in nuclear magnetons) are well established, and strongly constrain the nucleon-nucleon pair interaction [5]. The deuteron's large size and weak binding suggest that the nuclear interaction is due to single pion exchange between almost on-shell nucleons. The deuteron is a diffuse nucleus.
The purpose of this work is to provide the starting framework for the nuclear effects on the deuteron EMT. We will derive in detail its gravitational form factors using the impulse approximation. The results are readily extended to spherically symmetric and light nuclei such as Helium-4, which is the prototype nucleus par excellence, given that its binding energy per particle is close to the saturation one.
For completeness, we note that the gravitational D-form factors for nuclei were initially discussed using a liquid drop model in [6], relativistic nuclear potentials in [7], and more recently the generalized Skyrme model in [8]. Also, an estimate of the mass radius of Helium-4 was suggested recently using \(\phi\)-meson photoproduction [9].
The outline of this paper is as follows: in section II we briefly review the chief aspects of the deuteron S,D contributions using the Reid soft core potential. In section III we summarize the relevant aspects of the nucleon gravitational \(A,B,C=\frac{1}{4}D\) invariant form factors. To use them for low and intermediate energies up to the nucleon mass scale, we give their explicit non-relativistic expansion. In section IV we derive the deuteron gravitational form factors in the impulse approximation, and in leading order in the recoil momentum of the spectator nucleon. In section V we extend our results to Helium-4, using both the K-harmonics method, and the mean-field approximation with a Woods-Saxon potential. The importance of removing the spurious center of mass motion, while addressing the form factors of light nuclei, is emphasized. In section VI we detail the extraction of the mass radii from the pertinent EMT form factors. Our conclusions are in section VII. In Appendix A, we compare our deuteron and Helium-4 charge form factors with the existing data. In Appendix B, we briefly recall the pressure and shear used.
## II Deuteron state
The deuteron with the tiny 2.2 MeV binding, is a loosely bound light nucleus, composed of almost quasi-free proton plus neutron held together by a long range pion-exchange interaction. In the non-relativistic approximation, the deuteron wavefunction is a mixture of \({}^{3}S_{1}+{}^{3}D_{1}\),
\[\Phi_{m}(r)=\left(\frac{u}{r}+\frac{1}{\sqrt{8}}\frac{w}{r}\,S_{12}\right) \frac{\chi_{m}}{\sqrt{4\pi}} \tag{1}\]
with the deuteron quadrupole operator
\[S_{12}=6S\cdot\hat{r}S\cdot\hat{r}-2S^{2}=6Q^{ij}\hat{r}^{i}\hat{r}^{j} \tag{2}\]
with total spin \(\vec{S}\), where \(Q^{ij}\) is the quadrupole operator \(Q^{ij}=\frac{1}{2}(S^{i}S^{j}+S^{j}S^{i})-\frac{2}{3}\delta^{ij}\). The reduced radial wavefunctions \(u,w\) are normalized,
\[\int_{0}^{\infty}dr(u^{2}+w^{2})=1 \tag{3}\]
The coupled \(u,w\) reduced radial components \({}^{3}S_{1}\) and \({}^{3}D_{1}\) of the deuteron wavefunction will be sought using central, tensor and spin-orbit interactions [10]
\[u^{\prime\prime}+\frac{m}{\hbar^{2}}(-E-V_{C}(r))\,u-\sqrt{8} \frac{m}{\hbar^{2}}V_{T}\,w=0\] \[w^{{}^{\prime\prime}}+\frac{m}{\hbar^{2}}\bigg{(}-E-V_{C}(r)- \frac{6\hbar^{2}}{mr^{2}}+2V_{T}(r)+3V_{LS}(r)\bigg{)}\,w-\sqrt{8}\frac{m}{ \hbar^{2}}V_{T}\,u=0 \tag{4}\]
with the Reid soft core potential \(V_{R}=V_{C}+V_{T}S_{12}+V_{LS}\,L\cdot S\)
\[V_{C} = -h\frac{e^{-x}}{x}+105.468\,\frac{e^{-2x}}{x}-3187.8\,\frac{e^{-4 x}}{x}+9924.3\,\frac{e^{-6x}}{x}\] \[V_{T} = -h\bigg{(}\bigg{(}\frac{1}{x}+\frac{3}{x^{2}}+\frac{3}{x^{3}} \bigg{)}\,e^{-x}-\left(\frac{12}{x^{2}}+\frac{3}{x^{3}}\right)e^{-4x}\bigg{)}+3 51.77\,\frac{e^{-4x}}{x}-1673.5\,\frac{e^{-6x}}{x}\] \[V_{LS} = 708.91\,\frac{e^{-4x}}{x}-2713.1\,\frac{e^{-6x}}{x} \tag{5}\]
Here \(x=\mu r\) is the dimensionless distance in units of the pion range, fixed by \(\mu=0.7\,\mathrm{fm}^{-1}\), as illustrated in Fig. 1 (top). Also \(h=10.463\,\mathrm{MeV}\), and \(\hbar^{2}/m\) is taken to be \(41.47\,\mathrm{MeV}\,\mathrm{fm}^{2}\), with \(m\) the nucleon mass, i.e. twice the reduced pn mass. The numerical S- and D-wavefunctions, solutions to the coupled equations (4) valid for \(x<10.01\), are shown in Fig. 1 (bottom). For \(x>10.01\), the explicit solutions are [10],
\[u(r) = 0.87758e^{-\alpha\mu r},\] \[w(r) = 0.0023e^{-\alpha\mu r}\left(1+\frac{3}{\alpha\mu r}+\frac{3}{( \alpha\mu r)^{2}}\right), \tag{6}\]
with \(\alpha=(mE_{D})^{1/2}/(\mu\hbar)\). The deuteron solution in Fig 1 (bottom) carries binding energy \(E_{D}=2.2246\,\mathrm{MeV}\), and a quadrupole moment
\[Q_{D}^{E}=\frac{1}{4}\int d^{3}r|\Phi_{1}(r)|^{2}\,(3z^{2}-r^{2})\approx 0.31 \,\mathrm{fm}^{2} \tag{7}\]
in the z-direction for the maximally stretched spin state. The deuteron is mostly cigar-shaped. This deformation amounts to \(p_{D}=6.53\%\), the percentage of admixture of the D-state in the deuteron [10].
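For orientation, the coupled equations (4) can be integrated numerically. The Python sketch below integrates inward from the asymptotic forms (6) at \(x=10.01\) with the quoted binding energy, then evaluates the normalization (3), the D-state probability and the quadrupole moment; the reduced formula \(Q_{D}=\frac{1}{20}\int_{0}^{\infty}dr\,r^{2}(\sqrt{8}\,uw-w^{2})\) is used as the standard equivalent of (7) (an assumption about conventions), and a production calculation would also enforce \(u(0)=w(0)=0\):

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

hbar2_m = 41.47                    # MeV fm^2
mu, h, E_D = 0.7, 10.463, 2.2246   # fm^-1, MeV, MeV

def potentials(r):                 # Reid soft core, Eq. (5)
    x = mu * r
    e = np.exp(-x)
    VC = -h*e/x + 105.468*e**2/x - 3187.8*e**4/x + 9924.3*e**6/x
    VT = (-h*((1/x + 3/x**2 + 3/x**3)*e - (12/x**2 + 3/x**3)*e**4)
          + 351.77*e**4/x - 1673.5*e**6/x)
    VLS = 708.91*e**4/x - 2713.1*e**6/x
    return VC, VT, VLS

def rhs(r, y):                     # Eq. (4) as a first-order system
    u, up, w, wp = y
    VC, VT, VLS = potentials(r)
    upp = ((E_D + VC)*u + np.sqrt(8)*VT*w) / hbar2_m
    wpp = ((E_D + VC - 2*VT - 3*VLS)*w + np.sqrt(8)*VT*u) / hbar2_m + 6*w/r**2
    return [up, upp, wp, wpp]

kappa = np.sqrt(E_D / hbar2_m)     # = alpha*mu in Eq. (6)
r1 = 10.01 / mu                    # matching radius x = 10.01
c = 1 + 3/(kappa*r1) + 3/(kappa*r1)**2
u1, w1 = 0.87758*np.exp(-kappa*r1), 0.0023*np.exp(-kappa*r1)*c
w1p = 0.0023*np.exp(-kappa*r1)*(-kappa*c - 3/(kappa*r1**2) - 6/(kappa**2*r1**3))
sol = solve_ivp(rhs, [r1, 0.02], [u1, -kappa*u1, w1, w1p],
                dense_output=True, rtol=1e-9, atol=1e-12)

r = np.linspace(0.02, r1, 4000)
u, w = sol.sol(r)[0], sol.sol(r)[2]
norm = trapezoid(u**2 + w**2, r)
pD = trapezoid(w**2, r) / norm
QD = trapezoid(r**2*(np.sqrt(8)*u*w - w**2), r) / (20*norm)
print(f"norm = {norm:.3f}, p_D = {100*pD:.2f}%, Q_D = {QD:.3f} fm^2")
# expect norm ~ 1, p_D ~ 6.5% and Q_D ~ 0.28 fm^2 for the Reid soft core
```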
## III Nucleon EMT
The standard decomposition of the energy-momentum (EMT) form factor in a nucleon state is [11; 12; 13]
\[T_{N}^{\mu\nu}(p_{2},p_{1})=\langle p_{2}|T^{\mu\nu}(0)|p_{1}\rangle=\overline{ u}(p_{2})\left(A(k)\gamma^{(\mu}P^{\nu)}+B(k)\frac{iP^{(\mu}\sigma^{\nu) \alpha}k_{\alpha}}{2m_{N}}+C(k)\frac{k^{\mu}k^{\nu}-\eta^{\mu\nu}k^{2}}{m_{N} }\right)u(p_{1})\,, \tag{8}\]
with \(a^{(\mu}b^{\nu)}=\frac{1}{2}(a^{\mu}b^{\nu}+a^{\nu}b^{\mu})\), \(k^{2}=(p_{2}-p_{1})^{2}=t\), \(P=(p_{1}+p_{2})/2\) and the normalization \(\overline{u}u=1\). (8) is conserved and traceful. Note that in other conventions, the C-form factor is also referred to as \(D(k)=4C(k)\).
The gluonic gravitational form factors were analyzed both analytically and numerically, and more recently extracted empirically with overall good agreements. For the numerical analyses below, we will use a tripole parametrization of the holographic results for the gluon GFFs, and a dipole parametrization for the lattice quark GFFs with \(k^{2}=-Q^{2}\) space-like,
\[\begin{array}{ll}A_{g}(k)=\dfrac{A_{g}(0)}{\left(1+\frac{Q^{2}}{m_{TT,g}^{2}}\right)^{3}}&A_{q}(k)=\dfrac{A_{q}(0)}{\left(1+\frac{Q^{2}}{m_{TT,q}^{2}}\right)^{2}}\\ C_{g}(k)=\dfrac{\frac{1}{4}D_{g}(0)}{\left(1+\frac{Q^{2}}{m_{SS,g}^{2}}\right)^{3}}&C_{q}(k)=\dfrac{\frac{1}{4}D_{q}(0)}{\left(1+\frac{Q^{2}}{m_{SS,q}^{2}}\right)^{2}}\end{array} \tag{9}\]
with \(m_{TT,g}=1.612\,\mathrm{GeV}\), \(m_{TT,q}=1.477(44)\,\mathrm{GeV}\), \(m_{SS,g}=0.963\,\mathrm{GeV}\), \(m_{SS,q}=0.81(14)\,\mathrm{GeV}\), \(A_{g}(0)=0.430\), \(A_{q}(0)=0.510(25)\), \(D_{g}(0)=-1.275\) and \(D_{q}(0)=-1.30(49)\). The parameters of the gluon GFFs are from the holographic model [15], in overall agreement with those recently reported by the E12-007 collaboration [2]. The quark GFFs are obtained from recent lattice results in [14], as illustrated in Fig. 2. To fix the sum rule \(A_{q}(0)+A_{g}(0)=1\), we set \(A_{q}(0)=0.57\), which is slightly larger than the lattice results. The remaining EMT form factor \(B\) is null.

Figure 1: Top: Reid soft core potentials \(V_{C,T,LS}\); Bottom: Deuteron S,D wavefunctions in (4).
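For reference, (9) with the quoted parameters (and \(A_{q}(0)=0.57\) as set above) transcribes directly into a small Python routine:

```python
import numpy as np

# Eq. (9): tripoles for the gluon GFFs [15], dipoles for the quark GFFs [14];
# Q^2 = -k^2 in GeV^2, and C = D/4.
PARAMS = {   # name: (value at Q^2 = 0, multipole mass in GeV, power)
    "A_g": (0.430, 1.612, 3),
    "A_q": (0.570, 1.477, 2),        # A_q(0) = 0.57 enforces A_q(0) + A_g(0) = 1
    "C_g": (-1.275/4, 0.963, 3),
    "C_q": (-1.30/4, 0.81, 2),
}

def gff(name, Q2):
    g0, m, p = PARAMS[name]
    return g0 / (1.0 + Q2/m**2)**p

Q2 = np.linspace(0.0, 2.0, 5)        # GeV^2
for name in PARAMS:
    print(name, np.round(gff(name, Q2), 3))
print("A_q(0) + A_g(0) =", gff("A_q", 0.0) + gff("A_g", 0.0))   # -> 1.0
```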
Although the holographic EMT form factors are given in terms of hypergeometric functions [15], (9) provides a good approximation for a wide range of momenta. They are in agreement with the hard scattering rules asymptotically. We will not consider the additional \(\bar{C}_{q,g}\) form factors, as they are absent in the holographic construction and add to zero in physical observables. Alternative discussions of some of these form factors can be found in [16; 17; 18; 19; 20; 21].
For on-shell nucleons, we can use the Gordon identity to recast (8) in a form more amenable to the non-relativistic reduction. More specifically, we have
\[T_{N}^{\mu\nu}(p_{2},p_{1})=\overline{u}(p_{2})\bigg{(}A(k)\frac{P^{\mu}P^{ \nu}}{m_{N}}+(A(k)+B(k))\frac{iP^{(\mu}\sigma^{\nu)\alpha}k_{\alpha}}{2m_{N}}+ C(k)\frac{k^{\mu}k^{\nu}-\eta^{\mu\nu}k^{2}}{m_{N}}\bigg{)}u(p_{1})\,, \tag{10}\]
To probe the EMT in the deuteron at low and intermediate momentum transfers, we will use a non-relativistic reduction, with the assumption that it holds for momenta \(k\) up to about the nucleon mass, i.e., of order 1 GeV. The justification for this assumption can only be made a posteriori, by comparing to possible future diffractive experiments. We recall that a similar assumption works reasonably well for the electromagnetic probes in the deuteron, at the nucleon mass scale [4]. With this in mind, the non-relativistic reduction of (10) reads
\[T_{N}^{00}(k) = \bigg{(}A(k)\,m_{N}+\bigg{(}\frac{1}{8}A(k)-\frac{1}{4}B(k)+C(k) \bigg{)}\frac{\vec{k}^{2}}{m_{N}}\bigg{)}+\bigg{(}\frac{1}{2}A(k)+B(k)\bigg{)} \frac{(\sigma\times ik)\cdot P}{2m_{N}}+\mathcal{O}\bigg{(}\frac{\vec{k}^{3}}{ m_{N}^{2}},\frac{P^{2}}{m_{N}}\bigg{)}\] \[T_{N}^{0i}(k) = (A(k)+B(k))\,\frac{(\sigma\times ik)^{i}}{4}+A(k)\,P^{i}+ \mathcal{O}\bigg{(}\frac{\vec{k}^{3}}{m_{N}^{2}},\frac{P^{2}}{m_{N}}\bigg{)}\] \[T_{N}^{ij}(k) = (A(k)+B(k))\frac{(\sigma\times ik)^{(i}P^{j)}}{2m_{N}}+C(k)\frac {k^{i}k^{j}-\delta^{ij}\vec{k}^{2}}{m_{N}}+\mathcal{O}\bigg{(}\frac{\vec{k}^{ 3}}{m_{N}^{2}},\frac{P^{2}}{m_{N}}\bigg{)} \tag{11}\]
where only terms linear in the nucleon recoil momentum \(P^{i}\) are retained. This additional assumption is justified in the analysis of the EMT of the deuteron to follow. Indeed, the higher order terms in the expansion, when evaluated in a deuteron state, are controlled by the binding energy \(E_{D}=2.225\,\mathrm{MeV}\), which is small,
\[\frac{\langle P^{2}\rangle}{m_{N}^{2}}\approx\frac{E_{D}}{m_{N}}\approx 10^{-3}\]
With this in mind, and dropping the \(\mathcal{O}\) notations for convenience, (11) yields
\[T_{N}^{00}(k) = T_{M}(k)+T_{SP}(k)\frac{(\sigma\times ik)\cdot P}{2m_{N}^{2}}\] \[T_{N}^{0i}(k) = T_{S}(k)\frac{(\sigma\times ik)^{i}}{4m_{N}}+A(k)P^{i}\] \[T_{N}^{ij}(k) = T_{S}(k)\frac{(\sigma\times ik)^{(i}P^{j)}}{2m_{N}^{2}}+C_{M}(k) \frac{k^{i}k^{j}-\delta^{ij}\vec{k}^{2}}{m_{N}^{2}} \tag{12}\]
with the invariant form factors \(T_{M},T_{S}\) (together with \(T_{SP}\) and \(C_{M}\)) obtained by matching (12) to (11), and induced by the mass and spin distributions in the nucleon.
Figure 2: Nucleon GFFs: \(A_{q},C_{q}\) (quarks) are from the recent lattice results [14], and \(A_{g},C_{g}\) (gluons) are from the holographic model [15].
## IV Deuteron EMT in the impulse approximation
In the first approximation, we can treat the proton and neutron in the deuteron as quasi-free. In the impulse approximation, the EMT form factors are the expectation values of (11) in the deuteron state. Since the EMT is isoscalar, the contributions of the proton and neutron add equally.
The matrix elements (13) in the deuteron states can be simplified using symmetry arguments and the conservation of the EMT. Their physical interpretation is clearest in the Breit (brick-wall) frame illustrated in Fig. 3, with \(k^{0}=0\) and \(\vec{k}\cdot\vec{P}=0\).
The simplest matrix elements to evaluate do not involve the total momentum \(P\); they will be evaluated first, followed by those with single and double powers of the total momentum. For the latter, we will rely on a wavefunction prescription to avoid issues of hermiticity.
### Matrix entry 1
The details of the matrix entry **1** will be provided in full to show how all matrix elements are evaluated. More specifically and following the kinematics depicted in Fig. 3, the matrix element can be defined as [22]
\[\big{\langle}+\frac{k}{2}m^{\prime}\big{|}\textbf{1}\big{|}-\frac{k}{2}m \big{\rangle}=\int d^{3}P\,\Phi^{\dagger}_{m^{\prime}}\bigg{(}P+\frac{k}{4} \bigg{)}\,\textbf{1}\Phi_{m}\bigg{(}P-\frac{k}{4}\bigg{)}=\int d^{3}r\,e^{ \frac{i}{2}k\cdot r}\varphi^{\dagger}_{m^{\prime}}(r)\,\textbf{1}\,\varphi_{m }(r) \tag{14}\]
where we used the wavepacket form for the deuteron in-out states
\[\Phi_{m}\big{(}P\pm\frac{k}{4}\big{)}=\int d^{3}r\,e^{-i(P\pm\frac{k}{4})\cdot r}\,\varphi_{m}(r) \tag{15}\]
Inserting (1) into (14) gives
\[\big{\langle}+\frac{k}{2}m^{\prime}\big{|}\textbf{1}\big{|}-\frac{k}{2}m\big{\rangle}=C_{E}(k^{2})\delta_{mm^{\prime}}-2C_{Q}(k^{2})\langle m^{\prime}|(S\cdot\hat{k})^{2}-\frac{1}{3}S^{2}|m\rangle \tag{16}\]

Figure 3: A graviton through \(T^{\mu\nu}(k)\), striking a nucleon in a deuteron, with a recoiling spectator of momentum \(-P\).
with the deuteron total spin and angular momentum
\[\vec{S} = \frac{1}{2}(\vec{\sigma}_{p}+\vec{\sigma}_{n})=\vec{\sigma}\] \[\vec{L} = \vec{L}_{p}+\vec{L}_{n}=\vec{r}\times\vec{P} \tag{17}\]
and the form factors

\[C_{E}(k^{2}) = \int_{0}^{\infty}dr\,(u^{2}+w^{2})\,j_{0}\bigg{(}\frac{kr}{2}\bigg{)}\] \[C_{Q}(k^{2}) = \frac{3}{\sqrt{2}}\int_{0}^{\infty}dr\bigg{(}uw-\frac{w^{2}}{2\sqrt{2}}\bigg{)}\,j_{2}\bigg{(}\frac{kr}{2}\bigg{)} \tag{18}\]
The deviation from spherical symmetry follows from the D-wave content of the deuteron wavefunction. We note that \(C_{E}(0)=1\) as expected from the deuteron charge normalization, and that near the forward limit
\[C_{Q}(k)\approx\frac{Q_{D}^{E}}{4}\,\vec{k}^{2} \tag{19}\]
with the quadrupole moment \(Q_{D}^{E}\approx 0.31\,\mathrm{fm}^{2}\), and \(m_{D}\approx 2m_{N}\).
### Matrix entry \(S\times ik\)
Similarly, the spin contribution can be obtained by symmetry using the Wigner-Eckart theorem
\[\big{\langle}+\frac{k}{2}m^{\prime}\big{|}(S\times ik)^{i}\big{|}- \frac{k}{2}m\big{\rangle}=\int d^{3}r\,e^{\frac{i}{2}k\cdot r}\varphi_{m^{ \prime}}^{\dagger}(r)\,(S\times ik)^{i}\,\varphi_{m}(r)=C_{S}(k^{2})\langle m ^{\prime}|(S\times ik)^{i}|m\rangle \tag{20}\]
with the result
\[C_{S}(k^{2}) = \int_{0}^{\infty}drj_{0}\bigg{(}\frac{kr}{2}\bigg{)}\bigg{(}u^{2 }-\frac{w^{2}}{2}\bigg{)}+\frac{1}{\sqrt{2}}\int_{0}^{\infty}drj_{2}\bigg{(} \frac{kr}{2}\bigg{)}\bigg{(}wu+\frac{w^{2}}{\sqrt{2}}\bigg{)} \tag{21}\]
or by direct computation, by specializing to \(i=2\), \(m=1\) and \(m^{\prime}=0\), and choosing \(\vec{k}=k\hat{3}\). In the forward limit

\[C_{S}(0)=C_{E}(0)-4C_{P}(0)=1-\frac{3}{2}p_{D}=1-\frac{3}{2}\times 6.53\% \tag{22}\]
with \(C_{P}(0)\) given in (29) below, and \(p_{D}=6.53\%\) the percentage of D-admixture in the deuteron.
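Numerically, the D-wave admixture therefore reduces the forward normalization by about ten percent:

\[C_{S}(0)=1-\frac{3}{2}\times 0.0653\approx 0.90\,.\]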
### Matrix entry \(P^{i}\)
Owing to the off-shell character of the struck nucleon in the deuteron, the recoil corrections are best evaluated in the Breit frame. To enforce the Breit frame condition \(\vec{k}\cdot\vec{P}=0\) in the matrix elements, we will make the operator substitution
\[P^{\mu}\rightarrow\tilde{P}^{\mu}=P^{\mu}-\frac{(k\cdot P)}{k^{2}}\,k^{\mu} \tag{23}\]
when evaluating the matrix elements of the EMT in the deuteron state. The upshot of this substitution is the manifest conservation of the recoil corrections in the deuteron EMT.
The first recoil contribution \(\tilde{P}^{i}\) to the energy momentum tensor can be obtained from the symmetrized matrix element of \(P^{i}\), followed by the projection (23). More specifically, we have
\[\big{\langle}+\frac{k}{2}m^{\prime}\big{|}P^{i}\big{|}-\frac{k}{2}m\big{\rangle} =\int d^{3}P\,\Phi_{m^{\prime}}^{\dagger}(P+\frac{1}{4}k)P^{i}\Phi_{m}(P- \frac{1}{4}k)=\frac{i}{2}\int d^{3}r\,e^{\frac{i}{2}k\cdot r}\,\big{(}\partial _{i}\varphi_{m^{\prime}}^{\dagger}(r)\varphi_{m}(r)-\varphi_{m^{\prime}}^{ \dagger}(r)\partial_{i}\varphi_{m}(r)\big{)} \tag{24}\]
Inserting the explicit derivative of the deuteron wavefunction

\[\partial_{i}\varphi_{m}(r) = \bigg{(}\left(\frac{u^{\prime}(r)}{r}-\frac{u(r)}{r^{2}}-\frac{w(r)}{\sqrt{2}r^{2}}\right)\hat{r}^{i}+\left(\frac{w^{\prime}(r)}{\sqrt{8}r}-\frac{3w(r)}{\sqrt{8}r^{2}}\right)S_{12}(\hat{r})\hat{r}^{i}+\frac{3w(r)}{\sqrt{8}r}\left(\frac{\sigma_{1}^{i}\sigma_{2}\cdot\hat{r}+\sigma_{2}^{i}\sigma_{1}\cdot\hat{r}}{r}\right)\bigg{)}\chi_{m} \tag{25}\]
in (24), and using the identities
\[\frac{j_{1}(\frac{kr}{2})}{kr}=\frac{j_{0}(\frac{kr}{2})+j_{2}(\frac{kr}{2})}{6},\qquad\qquad\int d^{3}r\,e^{\frac{i}{2}k\cdot r}\,\hat{r}^{i}=4\pi\int r^{2}dr\,j_{1}\bigg{(}\frac{kr}{2}\bigg{)}\,i\hat{k}^{i} \tag{26}\]
we can finally reduce (24) to
\[\big{\langle}+\frac{k}{2}m^{\prime}\big{|}P^{i}\big{|}-\frac{k}{2}m\big{\rangle} =C_{P}(k^{2})\,\langle m^{\prime}|(S\times ik)^{i}|m\rangle \tag{27}\]
with
\[C_{P}(k^{2})=\int dr\frac{3w^{2}}{8}\bigg{(}j_{0}\bigg{(}\frac{kr}{2}\bigg{)} +j_{2}\bigg{(}\frac{kr}{2}\bigg{)}\bigg{)} \tag{28}\]
Its forward contribution is readily tied to the admixture of D-state in the deuteron
\[C_{P}(0)=\frac{3}{8}\,p_{D}=\frac{3}{8}\times 6.53\% \tag{29}\]
(27) is manifestly transverse. The Breit frame projection through \(P^{i}\rightarrow\tilde{P}^{i}\) leaves it unchanged.
### Matrix entry \((S\times ik)^{(i}P^{j)}\)
This matrix entry can be obtained by evaluating first
\[t^{ij} = \bigg{\langle}+\frac{k}{2}m^{\prime}\Big{|}\frac{1}{2}\bigg{(}P^{i}(\vec{S}\times i\vec{k})^{j}+P^{j}(\vec{S}\times i\vec{k})^{i}\bigg{)}\bigg{|}-\frac{k}{2}m\bigg{\rangle} \tag{30}\]
followed by the Breit frame substitution (23), which amounts to the projection
\[\tilde{t}^{ij}=t^{ij}-\frac{k^{i}t^{jk}k^{k}+k^{j}t^{ik}k^{k}}{k^{2}}+k^{i}k^ {j}\frac{k^{k}t^{kl}k^{l}}{k^{4}} \tag{31}\]
More specifically, the reduction of (30) follows the same reasoning as above, with
\[t^{ij} = -\frac{1}{4}\int d^{3}re^{\frac{i}{2}k\cdot r}\left(\partial^{i} \varphi_{m}^{\prime\dagger}(r)(\vec{S}\times\vec{k})^{j}\varphi_{m}(r)-\varphi _{m}^{\prime\dagger}(r)(\vec{S}\times\vec{k})^{j}\partial^{i}\varphi_{m}(r) \right)+(i\leftrightarrow j)\] \[= -\int d^{3}re^{\frac{i}{2}k\cdot r}\left(\frac{\sqrt{2}r\left(u^{ \prime}w-w^{\prime}u\right)-w^{2}+2\sqrt{2}uw}{16r^{3}}\right)(\hat{r}^{j}( \vec{S}\times\vec{k})^{i}S_{12}-\hat{r}^{j}S_{12}(\vec{S}\times\vec{k})^{i})\] \[- \int d^{3}re^{\frac{i}{2}k\cdot r}\frac{3wu}{4\sqrt{8}r^{3}}\left( \big{(}\sigma_{1}^{j}\sigma_{2}\cdot\hat{r}+\sigma_{2}^{j}\sigma_{1}\cdot\hat{ r})(\vec{S}\times\vec{k})^{i}-(\vec{S}\times\vec{k})^{i}(\sigma_{1}^{j}\sigma_{2} \cdot\hat{r}+\sigma_{2}^{j}\sigma_{1}\cdot\hat{r})\right)\] \[- \int d^{3}re^{\frac{i}{2}k\cdot r}\frac{3w^{2}}{32r^{3}}\left(( \sigma_{1}^{j}\sigma_{2}\cdot\hat{r}+\sigma_{2}^{j}\sigma_{1}\cdot\hat{r})( \vec{S}\times\vec{k})^{i}S_{12}-S_{12}(\vec{S}\times\vec{k})^{i}(\sigma_{1}^{j }\sigma_{2}\cdot\hat{r}+\sigma_{2}^{j}\sigma_{1}\cdot\hat{r})\right)\] \[+ (i\leftrightarrow j)\] \[= \frac{1}{k}\int dr\left(\frac{\sqrt{2}r\left(u^{\prime}w-w^{ \prime}u\right)-w^{2}+2\sqrt{2}uw}{16r}\right)\Bigg{[}\left(\frac{10}{kr}j_{2} \left(\frac{kr}{2}\right)-j_{1}\left(\frac{kr}{2}\right)\right)\] \[\times \left(24\frac{k^{i}k^{j}}{\vec{k}^{2}}Q^{\alpha\beta}k_{\alpha}k _{\beta}-12\left(Q^{j\beta}k_{i}k_{\beta}+Q^{i\beta}k_{j}k_{\beta}\right) \right)-\frac{48j_{2}\left(\frac{kr}{2}\right)}{kr}(\delta^{ij}Q^{\alpha \beta}k_{\alpha}k_{\beta}-\vec{k}^{2}Q^{ij})\Bigg{]}\] \[+ \frac{1}{k}\int dr\left(\frac{wu}{\sqrt{8}r}-\frac{w^{2}}{8r} \right)j_{1}\left(\frac{kr}{2}\right)\left(6\delta^{ij}Q^{\alpha\beta}k_{ \alpha}k_{\beta}-6\vec{k}^{2}Q^{ij}\right)\] \[+ \frac{1}{k}\int dr\frac{9w^{2}}{8r}j_{1}\left(\frac{kr}{2}\right) \left((\vec{S}\times k)^{i}(\vec{S}\times k)^{j}+(\vec{S}\times k)^{j}(\vec{S} \times k)^{i}\right)\] \[+ \frac{1}{k}\int dr\frac{3w^{2}}{2r}\frac{j_{2}\left(\frac{kr}{2} \right)}{kr}\Bigg{(}4(\delta^{ij}\vec{k}^{2}-k^{i}k^{j})-3(\vec{S}\times k)^ {i}(\vec{S}\times k)^{j}-3(\vec{S}\times k)^{j}(\vec{S}\times k)^{i}\]
Putting the above matrix elements together, the deuteron EMT in the impulse approximation reads

\[T_{D}^{00}(k,m^{\prime},m)= 2T_{M}(k)\bigg{(}C_{E}(k)\delta_{mm^{\prime}}-2C_{Q}(k)\,\hat{k}_{\alpha}\hat{k}_{\beta}\langle m^{\prime}|Q^{\alpha\beta}|m\rangle\bigg{)}\] \[+ 2T_{SP}(k)\bigg{(}-D_{0}^{SP}(k)\delta_{mm^{\prime}}-\left(D_{2}^{SP}(k)+2D_{3}^{SP}(k)\right)\,\hat{k}_{\alpha}\hat{k}_{\beta}\langle m^{\prime}|Q^{\alpha\beta}|m\rangle\bigg{)}\,\frac{\vec{k}^{2}}{2m_{N}^{2}}\] \[= m_{D}A^{D}(k)\delta_{mm^{\prime}}+Q^{D}(k)\frac{k_{\alpha}k_{\beta}}{2m_{D}}\langle m^{\prime}|Q^{\alpha\beta}|m\rangle\] \[T_{D}^{0i}(k,m^{\prime},m)= 2T_{S}(k)\,C_{S}(k)\,\langle m^{\prime}|\frac{(S\times ik)^{i}}{4m_{N}}|m\rangle+2m_{N}A(k)\,C_{P}(k)\,\langle m^{\prime}|\frac{(S\times ik)^{i}}{m_{N}}|m\rangle\] \[= J^{D}(k)\frac{\langle m^{\prime}|\vec{S}\times i\vec{k}|m\rangle}{2}\] \[T_{D}^{ij}(k,m^{\prime},m)= 2T_{S}(k)\left(\frac{(k^{i}k^{j}-\delta^{ij}\,\vec{k}^{2})}{2}D_{0}^{SP}(k)\delta_{mm^{\prime}}+(k^{i}k^{j}-\delta^{ij}\vec{k}^{2})Q^{\alpha\beta}\hat{k}_{\alpha}\hat{k}_{\beta}D_{3}^{SP}(k)\right.\] \[\left.+\ \langle m^{\prime}|(k^{j}k^{\alpha}Q^{i\alpha}+k^{i}k^{\alpha}Q^{j\alpha}-\vec{k}^{2}Q^{ij}-\delta^{ij}Q^{\alpha\beta}k_{\alpha}k_{\beta})|m\rangle\,D_{2}^{SP}(k)\right)\frac{1}{2m_{N}^{2}}\] \[+ 2C_{M}(k)\frac{C_{E}(k)(k^{i}k^{j}-\delta^{ij}\vec{k}^{2})\delta_{mm^{\prime}}-2C_{Q}(k)(k^{i}k^{j}-\delta^{ij}\vec{k}^{2})\hat{k}_{\alpha}\hat{k}_{\beta}\langle m^{\prime}|Q^{\alpha\beta}|m\rangle}{m_{N}^{2}}\] \[= D_{0}^{D}(k)\frac{k^{i}k^{j}-\delta^{ij}\vec{k}^{2}}{4m_{D}}\delta_{m^{\prime}m}+D_{3}^{D}(k)\frac{(k^{i}k^{j}-\delta^{ij}\vec{k}^{2})\hat{k}_{\alpha}\hat{k}_{\beta}\langle m^{\prime}|Q^{\alpha\beta}|m\rangle}{4m_{D}}\] \[+ D_{2}^{D}(k)\frac{\langle m^{\prime}|(k^{j}k^{\alpha}Q^{i\alpha}+k^{i}k^{\alpha}Q^{j\alpha}-\vec{k}^{2}Q^{ij}-\delta^{ij}Q^{\alpha\beta}k_{\alpha}k_{\beta})|m\rangle}{2m_{D}} \tag{36}\]

Figure 4: The deuteron invariant EMT form factors (37) in the impulse approximation: gluon (red-dashed), quark (blue-dotted) and gluon+quark (black-solid), compared to the nucleon \(A^{N}\) and \(D^{N}\) (thinner dashed, dotted and solid).
Our conventions for the deuteron EMT invariant form factors follow the general spin-1 conventions introduced in [23]. In the impulse approximation and to linear order in the recoil momentum of the spectator nucleon (in shorthand notation), they are
\[A^{D} = \frac{2}{m_{D}}\left(T_{M}C_{E}-\frac{\vec{k}^{2}}{2m_{N}^{2}}T_{ SP}D_{0}^{SP}\right)\] \[Q^{D} = -\frac{2m_{D}}{\vec{k}^{2}}\left(4T_{M}C_{Q}+\frac{\vec{k}^{2}}{m _{N}^{2}}T_{SP}(D_{2}^{SP}+2D_{3}^{SP})\right)\] \[J^{D} = \frac{T_{S}C_{S}}{m_{N}}+4AC_{P}\] \[D_{0}^{D} = \frac{4m_{D}}{m_{N}^{2}}\left(\frac{1}{2}T_{S}D_{0}^{SP}+2C_{M}C _{E}\right)\] \[D_{2}^{D} = \frac{2m_{D}}{m_{N}^{2}}T_{S}D_{2}^{SP}\] \[D_{3}^{D} = \frac{4m_{D}}{m_{N}^{2}}\left(T_{S}D_{3}^{SP}-4C_{M}C_{Q}\right) \tag{37}\]
with the deuteron quadrupole form factor
\[Q(k)=-\frac{4m_{D}^{2}}{\vec{k}^{2}}C_{Q}(k)\to Q_{D} \tag{38}\]
that reduces to the deuteron quadrupole moment in the forward direction.
The numerical results for low and intermediate momenta \(k\leq m_{N}\) are shown in Fig. 4. Note that the quark and gluon contributions in \(D_{0}\) and \(D_{3}\) are comparable, since \(C_{g}(k)\) and \(C_{q}(k)\) are very similar; see (9). For \(B(k)\approx 0\) and at low momenta \(k\ll m_{N}\), we have
\[T_{M}(k)\approx T_{S}(k)\approx 2T_{SP}(k)\approx m_{N}A(k)\]
The deuteron invariant EMT form factors (37) then simplify to (shorthand notation)
\[A^{D} \approx A(k)C_{E}\] \[Q^{D} \approx -\frac{4m_{D}^{2}}{\vec{k}^{2}}A(k)C_{Q}-2A(k)(D_{2}^{SP}+2D_{3}^ {SP})\] \[J^{D} \approx A(k)(C_{S}+4C_{P}) \tag{39}\]
for the mass \(A^{D}\), quadrupole \(Q^{D}\) and momentum \(J^{D}\), respectively. For the deuteron D-terms, we have (shorthand notation)
\[D_{0}^{D} \approx 4A(k)D_{0}^{SP}+16C(k)C_{E}\] \[D_{2}^{D} \approx 4A(k)D_{2}^{SP}\] \[D_{3}^{D} \approx 8A(k)D_{3}^{SP}-32C(k)C_{Q} \tag{40}\]
for the standard tensor \(D_{0}^{D}\), tensor spin-spin \(D_{2}^{D}\) and tensor-quadrupole \(D_{3}^{D}\), respectively.
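As a consistency check of (39), note that the D-wave admixture cancels in the forward limit of the momentum form factor: combining \(C_{S}(0)=1-\frac{3}{2}p_{D}\) from (22) with \(C_{P}(0)=\frac{3}{8}p_{D}\) from (29),

\[J^{D}(0)\approx A(0)\big{(}C_{S}(0)+4C_{P}(0)\big{)}=A(0)\Big{(}1-\frac{3}{2}p_{D}+\frac{3}{2}p_{D}\Big{)}=A(0),\]

so the deuteron momentum form factor inherits the nucleon normalization, independently of the D-state percentage.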
The three deuteron D-form factors in the impulse approximation can be used to describe the spatial distribution of the pressure and shear inside the deuteron, as probed by a graviton or graviton-like probe. Using the conventions for the pressure and shear introduced in [6], we show in Fig. 5 their distribution inside the deuteron. The formulas are given in Appendix B. We have separated the quark and gluon contributions, following their separation in the nucleon form factors (9). Here \(p_{\{0,2,3\},\{g,q\}}\) refer to the pressure distributions carried by the quarks and gluons separately, and \(s_{\{0,2,3\},\{g,q\}}\) refer to the shear distributions, also separately. We note that the pressure distributions integrate to zero, as expected.
## V Helium-4 EMT in the impulse approximation
We will start with the simplest \(0^{++}\) nucleus, Helium-4, a scalar particle both in spin and isospin. The ground state of Helium-4 is composed of 2 protons and 2 neutrons in a purely S-wave state. Its \(0^{++}\) EMT is characterized by two invariant form factors [11]
\[\langle p_{2}|T^{\mu\nu}|p_{1}\rangle = \frac{P^{\mu}P^{\nu}}{m_{\alpha}}A^{H}(k)+\frac{1}{4m_{\alpha}}(k^{\mu}k^{\nu}-g^{\mu\nu}k^{2})D^{H}(k) \tag{41}\]
In the impulse approximation, most of the EMT results for Helium-4 can be inferred from those of the deuteron presented above, with many simplifications. The same observations can be extended to heavier \(0^{++}\) magic nuclei.
### Helium-4 state
To construct the Helium-4 state, we will use the K-harmonics method to factor out the spurious center of mass motion [24; 25]. The method works well for few-particle systems, where the multi-dimensional Schrödinger equation can be reduced to a one-dimensional equation in the hyper-radial distance, times the lowest K-harmonic.
The K-harmonics method becomes increasingly involved for heavier nuclei, where the mean-field single-particle approximation is more appropriate. However, the removal of the spurious center of mass motion is more challenging in the mean-field approach. We will present both methods when addressing Helium-4, for comparison.
#### K-harmonics method
The ground state of Helium-4 (alpha particle) is a spin-isospin singlet, and reads
\[\Phi_{H}[1,...,4]=\varphi[r_{1},..,r_{4}][\chi_{12}]_{nn}[\chi_{34}]_{pp} \tag{42}\]
with \(\chi\) the anti-symmetric spin singlet for \(nn\) (12) and \(pp\) (34). The state also contains a small admixture of D-wave [26] that will not be considered.
To remove the spurious center of mass motion in (42) using the K-harmonics method, the pertinent Jacobi coordinates are [25]
\[\vec{\xi}_{1} = \frac{1}{\sqrt{2}}(\vec{r}_{2}-\vec{r}_{1})\] \[\vec{\xi}_{2} = \frac{1}{\sqrt{6}}(\vec{r}_{1}+\vec{r}_{2}-2\vec{r}_{3})\] \[\vec{\xi}_{3} = \frac{1}{2\sqrt{3}}(\vec{r}_{1}+\vec{r}_{2}+\vec{r}_{3}-3\vec{r} _{4})\] \[\vec{R}_{C} = \frac{1}{4}(\vec{r}_{1}+\vec{r}_{2}+\vec{r}_{3}+\vec{r}_{4}) \tag{43}\]
the radial hyperdistance is
\[R^{2}=\frac{1}{4}\sum_{i\neq j}(\vec{r}_{i}-\vec{r}_{j})^{2}=\vec{\xi}_{1}^{2} +\vec{\xi}_{2}^{2}+\vec{\xi}_{3}^{2} \tag{44}\]
and the center of mass factors out of the 4-particle kinetic contribution
\[\mathbb{K}=-\sum_{i=1}^{4}\frac{\nabla_{i}^{2}}{2m_{N}}\rightarrow-\frac{1}{2m _{N}}\bigg{(}\frac{d^{2}}{dR^{2}}+\frac{8}{R}\frac{d}{dR}-\frac{K_{N}^{2}}{R^{ 2}}\bigg{)} \tag{45}\]
The hyper-spherical harmonics (HHs) are the eigenstates of the grand-angular momentum [25]

\[K_{N}^{2}\,\mathcal{Y}_{[K]}^{KLM_{L}}(\Omega_{\tilde{N}})=(K(K+3N-2))\,\mathcal{Y}_{[K]}^{KLM_{L}}(\Omega_{\tilde{N}}) \tag{46}\]
for mass number \(A\), with \(N=A-1\). The \(\tilde{N}=3N-1\) angles \(\Omega_{\tilde{N}}=(\theta_{1},..,\theta_{\tilde{N}-1},\phi)\) are those of the individual Jacobi coordinates \(\hat{\xi}_{i}\), with hyperangles \(\cos\theta_{j}=\xi_{j}/R\). \(L,M_{L}\) are the standard quantum numbers of the total orbital angular momentum \(L^{2},L_{z}\). The angular volume for \(A=4\) and \(\tilde{N}=8\) is
\[\Omega_{9}=\int_{0}^{2\pi}d\phi\int_{0}^{\pi}\prod_{i=1}^{7}d\theta_{i}\,\sin^{7}\theta_{1}\,\sin^{6}\theta_{2}\,\sin^{5}\theta_{3}\,\sin^{4}\theta_{4}\,\sin^{3}\theta_{5}\,\sin^{2}\theta_{6}\,\sin\theta_{7}=\frac{32\pi^{4}}{105} \tag{47}\]

Figure 5: The gluon and quark pressure \(p_{0,2,3}\) and shear \(s_{0,2,3}\) distributions inside the deuteron, in the impulse approximation.
The specific form of the HHs follows by recoupling the individual angular momenta \(l_{i}\). They are normalized as
\[\int d\Omega_{\tilde{N}}\,\mathcal{Y}_{[K]}^{KLM_{L}}\ ^{*}(\Omega_{\tilde{N}})\, \mathcal{Y}_{[K^{\prime}]}^{K^{\prime}L^{\prime}M_{L}^{\prime}}(\Omega_{\tilde{N }})=\delta_{[K],[K^{\prime}]} \tag{48}\]
and their total number is
\[d_{K}=(2K+3N-2)\frac{(K+3N-3)!}{K!(3N-2)!} \tag{49}\]
For Helium-4 with \(A=4\) and \(N=3\), the \(K=0\) HH has degeneracy \(d_{0}=1\), and the \(K=1\) HHs have degeneracy \(d_{1}=9\).
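Both values follow directly from (49) with \(3N-2=7\):

\[d_{0}=(0+7)\,\frac{(0+6)!}{0!\,7!}=7\times\frac{720}{5040}=1\,,\qquad d_{1}=(2+7)\,\frac{(1+6)!}{1!\,7!}=9\times\frac{5040}{5040}=9\,.\]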
The general form of (42) in hyper-spherical form modulo the spin factors, is
\[\varphi_{[K]}(R)\mathcal{Y}_{[K]}^{KLM_{L}}(\Omega_{\tilde{8}}) \tag{50}\]
with the S-wave solutions for Helium-4
\[\varphi_{[0]}(R)\mathcal{Y}_{[0]}^{000}(\Omega_{\tilde{8}})=\frac{\varphi(R )}{\sqrt{\Omega_{9}}} \tag{51}\]
To eliminate the linear derivative in the hyperdistance in the Schrödinger equation, we will seek the radial wavefunction in the form
\[\varphi(R)=\frac{u(R)}{R^{4}} \tag{52}\]
with the reduced wavefunction satisfying
\[u^{\prime\prime}-\frac{12}{R^{2}}u-\frac{2m_{N}}{\hbar^{2}}(W(R)+V_{C}(R)-E)u=0 \tag{53}\]
subject to the normalization
\[\int_{0}^{\infty}dR\,|u(R)|^{2}=1 \tag{54}\]
A large centrifugal barrier emerges following the reduction to the hyperdistance. Here \(W(R)\) is the projection of the pair potential \(V(r_{ij})\) on the \(K=0\) harmonic (see Ref. [14] in [27])
\[W(R)=\frac{315}{4}\int_{0}^{1}\,dx\,(1-x^{2})^{2}x^{2}\,V(\sqrt{2}Rx) \tag{55}\]
and \(V_{C}\) is the Coulomb repulsion between the two protons in Helium-4
\[V_{C}(R)=\frac{2.23}{R} \tag{56}\]
with the numerator in units of MeV fm. A simple nucleon pair interaction for Helium-4 is (see Ref. [14] in [27])
\[V(r)=-83.34\,e^{-(r/1.6)^{2}}+144.86\,e^{-(r/0.82)^{2}} \tag{57}\]
with the overall scale in MeV, and the distance in the exponent in fm. We note the recent applications of this method to the clustering of light nuclei in heavy-ion collisions [27], and to charmed tetraquark states [28].
The reduced S-wave solution to (53) is shown in Fig. 6 versus the hyperdistance. It corresponds to the ground state of Helium-4 with binding energy \(-27.75\,\mathrm{MeV}\). The large centrifugal barrier induced by the projection on the hyperdistance causes it to peak at \(2.5\,\mathrm{fm}\).
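As an illustration, the reduced equation (53) can be solved numerically by discretizing the hyper-radial Hamiltonian on a grid and extracting its lowest eigenvalue. The Python sketch below uses the potentials (55)-(57); the value of \(\hbar^{2}/m_{N}\), the grid size and cutoff are assumptions of this sketch, which is meant as a minimal illustration rather than the production calculation behind Fig. 6.

```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eigh_tridiagonal

HBAR2_OVER_MN = 41.47  # hbar^2 / m_N in MeV fm^2 (assumed value)

def V(r):
    # nucleon pair interaction (57), in MeV, with r in fm
    return -83.34 * np.exp(-(r / 1.6) ** 2) + 144.86 * np.exp(-(r / 0.82) ** 2)

def W(R):
    # projection (55) of the pair potential on the K = 0 harmonic
    integrand = lambda x: (1 - x**2) ** 2 * x**2 * V(np.sqrt(2) * R * x)
    return 315.0 / 4.0 * quad(integrand, 0.0, 1.0)[0]

# uniform grid in the hyperdistance R, enforcing u(0) = u(Rmax) = 0
N, Rmax = 1500, 25.0
R = np.linspace(Rmax / N, Rmax, N)
h = R[1] - R[0]

# effective potential: W(R) + Coulomb (56) + centrifugal barrier 12/R^2 from (53)
Veff = np.array([W(Ri) for Ri in R]) + 2.23 / R + 0.5 * HBAR2_OVER_MN * 12.0 / R**2

# tridiagonal discretization of H = -(hbar^2 / 2 m_N) d^2/dR^2 + Veff
diag = HBAR2_OVER_MN / h**2 + Veff
off = -0.5 * HBAR2_OVER_MN / h**2 * np.ones(N - 1)
E0, _ = eigh_tridiagonal(diag, off, select="i", select_range=(0, 0))
print(E0[0])  # ground-state energy in MeV, to be compared with -27.75 MeV
```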
#### Woods-Saxon
Figure 6: Top: the potential versus the hyperdistance; Bottom: the reduced S-wave solution of (53).

For heavier nuclei the use of single-particle states in the mean-field approximation is more convenient, modulo the center of mass motion. In this case, the radial part of (42) will be sought using the independent particle states
\[\varphi[r_{1},...,r_{4}]=\prod_{i=1}^{4}\frac{u(r_{i})}{r_{i}} \tag{58}\]
The reduced wavefunction \(u\) is the solution to
\[u^{{}^{\prime\prime}}(r)-\frac{m_{N}}{2\hbar^{2}}(E_{H}+V_{WS}(r))u(r)=0 \tag{59}\]
in the Woods-Saxon potential
\[V_{WS}(r)=-\frac{V_{0}}{1+e^{(r-R)/a}}\equiv-V_{0}y(r) \tag{60}\]
and normalized as \(\int dr\,u^{2}=1\). The depth \(V_{0}\), range \(R\) and skin \(a\) of the potential are fixed to reproduce the Helium-4 binding energy per particle \(\frac{1}{4}E_{H}=7.1\,\mathrm{MeV}\) and radius \(r_{H}=1.7\,\mathrm{fm}\). In general, the solution to (59) can be obtained in closed form, in terms of a generalized hypergeometric function [29]
\[u(r)= Cy(r)^{\nu}(1-y(r))^{\mu}\] \[\times_{2}F_{1}(\mu+\nu,\mu+\nu+1,2\nu+1,y(r)) \tag{61}\]
with \(C\) fixed by the normalization. Here we have set \(\mu=i(\gamma^{2}-\nu^{2})^{\frac{1}{2}}\) and \(\nu>0\) with
\[\nu^{2}=\frac{a^{2}E_{H}m_{N}}{2\hbar^{2}}\qquad\gamma^{2}=\frac{a^{2}V_{0}m_ {N}}{2\hbar^{2}}\]
For light nuclei in general, \(V_{0}=50\,\mathrm{MeV}\), \(a=0.51\,\mathrm{fm}\) and \(R=r_{0}A^{\frac{1}{3}}\) with \(r_{0}=1.25\,\mathrm{fm}\).
In Fig. 7 we show the potential for \(A=4\) (top), and the single particle state wavefunction for Helium-4 (bottom). The numerical binding energy per particle is \(\frac{1}{4}E_{H}=7.1\,\mathrm{MeV}\), with a radius of \(1.6\,\mathrm{fm}\) in agreement with the measured charge radius in [30].
### Helium-4 EMT
The way we have presented the derivation of the deuteron EMT results in the impulse approximation can be applied verbatim to Helium-4 using the wavefunction (42), with many simplifications and minor changes, thanks to the absence of a D-wave admixture in Helium-4. With this in mind, the results for Helium-4 follow from (36) by inspection,
\[T_{H}^{00}(k)= 4T_{M}(k)\,\bar{C}_{E}(k)=\left(m_{\alpha}+\frac{\vec{k}^{2}}{4 m_{\alpha}}\right)A^{H}(k)+\frac{\vec{k}^{2}}{4m_{\alpha}}D^{H}(k)\] \[T_{H}^{0i}(k)= 0\] \[T_{H}^{ij}(k)= 4C_{M}(k)\bar{C}_{E}(k)\,\frac{k^{i}k^{j}-\delta^{ij}\vec{k}^{2 }}{m_{N}^{2}}=D^{H}(k)\,\frac{k^{i}k^{j}-\delta^{ij}\vec{k}^{2}}{4m_{\alpha}} \tag{62}\]
with

\[A^{H}(k) = \frac{4T_{M}(k)}{m_{\alpha}}\bar{C}_{E}(k)-\frac{\vec{k}^{2}}{m_{\alpha}^{3}}(T_{M}+64C_{M})\bar{C}_{E}(k)\approx A(k)\bar{C}_{E}(k)\] \[D_{0}^{H}(k) = 16\frac{m_{\alpha}}{m_{N}^{2}}C_{M}(k)\bar{C}_{E}(k)\approx 64C(k)\bar{C}_{E}(k) \tag{63}\]

with the normalization \(\bar{C}_{E}(0)=1\).

Figure 7: Top: Woods-Saxon potentials \(V_{WS}\) for Helium-4; Bottom: Helium-4 S-wave.
**1. K-harmonic:**
For the reduced S-wave solution (51) we have
\[\bar{C}_{E}(k) = \int dR\,d\Omega_{9}\bigg{(}\frac{1}{4}\sum_{i=1}^{4}\,e^{ik\cdot (r_{i}-R_{C})}\,\frac{|u(R)|^{2}}{\Omega_{9}}\bigg{)} \tag{64}\] \[= \int dR\,d\Omega_{9}\bigg{(}\frac{1}{4}\sum_{i=1}^{4}\,j_{0}(k|r_ {i}-R_{C}|)\,\frac{|u(R)|^{2}}{\Omega_{9}}\bigg{)}\rightarrow\int_{0}^{ \infty}dR\,|u(R)|^{2}\,j_{0}\bigg{(}\frac{1}{2}kR\bigg{)}\]
The last relation follows from an estimate of the multi-dimensional angular integral, where the nucleons forming Helium-4 are assumed in a maximally symmetric tetrahedral configuration. In this case the hyperdistance \(R\) relates to the side \(r\) of the tetrahedron by \(R=\sqrt{3/2}r\). All four corners are equidistant from the centroid (center of mass) \(|r_{i}-R_{C}|=\sqrt{3/8}\,r\), hence the relation to the hyperdistance \(|r_{i}-R_{C}|=\frac{1}{2}R\).
**2. Woods-Saxon:**
For the Woods-Saxon potential we have,
\[\bar{C}_{E}(k)=\int_{0}^{\infty}dr\,u^{2}(r)\,j_{0}(kr) \tag{65}\]
Note that \(kr\) instead of \(\frac{1}{2}kr\) appears in (65), as the reduced wavefunctions in the Woods-Saxon potential have coordinates measured from the center of mass.
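As a simple numerical illustration of (64) and (65), the form factor is a Bessel-weighted moment of \(|u|^{2}\). The Python fragment below uses a placeholder profile \(u(R)\propto R^{4}e^{-1.6R}\), which is a hypothetical stand-in chosen only because it vanishes as \(R^{4}\) at the origin, consistent with (52)-(53), and peaks near \(2.5\,\mathrm{fm}\); in practice \(u\) is the numerical ground state obtained above.

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import trapezoid

R = np.linspace(1e-3, 25.0, 5000)      # hyperdistance grid, in fm
u = R**4 * np.exp(-1.6 * R)            # placeholder profile, u ~ R^4 at small R
u /= np.sqrt(trapezoid(u**2, R))       # normalization (54)

def CE_bar(k, half=True):
    """Form factor: argument kR/2 for K-harmonics (64), kR for Woods-Saxon (65)."""
    arg = 0.5 * k * R if half else k * R   # k in 1/fm
    return trapezoid(u**2 * spherical_jn(0, arg), R)

print(CE_bar(0.0))   # -> 1.0, recovering the charge normalization C_E(0) = 1
```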
In Fig. 8 we show the A- and D-form factors for Helium-4 using the K-harmonic method for the wavefunction, with the spurious center of mass removed. In Fig. 9 we show the A- and D-form factors for Helium-4 using the Woods-Saxon mean-field approach, without the removal of the spurious center of mass. While the mass radii appear to be reproduced similarly by both approaches (see below), the behavior of the form factors is quantitatively different in the intermediate momentum range, with the A- and D-form factors free of the center of mass motion crossing the zero line at about the peak of the corresponding wavefunction in Fig. 6. The differences between the two constructions illustrate the importance of removing the center of mass motion when describing the form factors of light nuclei. This point is further illustrated in our analysis of the charge form factors in Appendix A. The gluon and quark contributions to the pressure \(p_{g,q}\) and shear \(s_{g,q}\) distributions are shown in Fig. 10.

Figure 8: A- and D-form factors for Helium-4 obtained using the K-harmonic method.

Figure 9: A- and D-form factors for Helium-4 obtained using the Woods-Saxon potential.
The results for Helium-4, the lightest \(0^{++}\) magic nucleus, carry to heavier magic nuclei in the impulse approximation, with general \(A\) in the Woods-Saxon potential. In particular we have \(A^{H}(0)\approx A^{0}A(0)=1\) and \(D^{H}(0)\approx A^{2}D(0)\) in the impulse approximation, which is to be compared to the scaling \(A^{\frac{7}{3}}\) suggested using a liquid drop model [6], \(A^{2.26}\) using relativistic nuclear potentials [7], and more recently \(A^{1.7-1.8}\) reported in the generalized Skyrme model [8]. In Fig. 11 we compare the D-form factor per nucleon, for the nucleon (red-solid), deuteron (green-solid) and Helium-4 with K-harmonic method (black-solid) and Woods-Saxon potential (blue-solid).
## VI Scalar and mass radii
We now extend the proton definitions of the scalar radius \(r_{S}\) and the mass radius \(r_{M}\) to light nuclei, by defining the scalar and mass form factors
\[\mathbb{G}_{S}(k) = T^{00}(k)-T^{ii}(k)\] \[\mathbb{G}_{M}(k) = T^{00}(k) \tag{66}\]
for each of the deuteron and Helium-4, with
\[r_{S,M}^{2}=-6\bigg{(}\frac{d\ln\mathbb{G}_{S,M}(k)}{d\vec{k}^{2}}\bigg{)}_{ \vec{k}^{2}=0} \tag{67}\]
The quark and gluon radii in light nuclei are presented in Table 1, and compared to the charge radii following from Appendix A using the same wavefunctions. The quark and gluon separated radii in light nuclei are comparable, owing to the similarity of these radii in the nucleon following from (9). Overall, the difference between the scalar and mass radii seen in the nucleon persists in light nuclei, with the gluonic scalar radii larger than the mass radii, but both appear close to the computed charge radii.
## VII Conclusions
We have analyzed the gravitational form factors for the deuteron in the context of the impulse approximation. The proton and neutron inside the deuteron were assumed non-relativistic, with the recoil of the spectator nucleon retained only to linear order. These approximations limit our gravitational form factors to momenta of the order of the nucleon mass.

Figure 10: The gluon and quark pressure \(p_{g,q}\) and shear \(s_{g,q}\) distributions in Helium-4 using the K-harmonics method (top) and the Woods-Saxon potential (bottom), in the impulse approximation.

Figure 11: The spin-averaged D form factor normalized by the baryon number A in the nucleon (N), Deuteron (D) and Helium-4 (HE), with sub-labels for K-harmonic and Woods-Saxon.
The deuteron gravitational form factors \(A^{D},Q^{D},J^{D}\) capture the mass, quadrupole and momentum distributions, supplemented by three additional form factors \(D_{0}^{D},D_{2}^{D},D_{3}^{D}\) reflecting the standard tensor, spin-tensor and mixed spin-tensor contributions, respectively. Using the nucleon gravitational form factors, we have made explicit both the gluonic and fermionic contributions to each of the form factors. This budgeting reflects the quantum delocalization of the quarks and gluons in constituent bound states at low energy.
The deuteron scalar and mass radii from either the quarks or gluons are comparable to the deuteron electromagnetic radius. In contrast, the spin-averaged quadrupole scalar and mass radii carried by the quarks and gluons are substantially smaller than the deuteron electromagnetic radius.
Our analysis of the deuteron readily extends to Helium-4, a much more compact nucleus. To describe the \(0^{++}\) ground state of Helium-4, we have used both the K-harmonic method, where the spurious center of mass motion is explicitly removed, and a Woods-Saxon mean-field approximation with the spurious center of mass motion present. While the radii appear to be similar for both constructions, the ensuing form factors are substantially different, showing the importance of removing the spurious center of mass motion.
In the zero momentum limit, the mean-field approximation appears reliable in the determination of the mass radii, even with the unsubtracted center of mass motion. This observation allows for the extension of the mean-field approach to the heavier \(0^{++}\) magic nuclei O\({}^{16}\), Ca\({}^{40}\),... In particular, the D-form factor for these heavier nuclei appears to scale as \(D^{A}(0)\approx A^{2}D(0)\) for a large mass number \(A\), in the impulse approximation. Although the non-relativistic reduction holds in heavier nuclei, Fermi motion requires that we include the next-to-leading order corrections in the spectator recoil. Also, exchange current corrections may be important.
While our analysis has been considerably simplified by treating the light nuclei constituents non-relativistically, limiting the range of the invariant form factors to about the nucleon mass, we plan to extend it to the relativistic case at least for the deuteron. Also, our analysis was limited to first order in the recoil of the struck nucleon or core. We plan to pursue the analysis to second order in the spectator recoil momentum, and investigate the importance of the exchange current contributions.
Our construction can be extended to analyze the GPDs for light nuclei, to understand the particular role played by the nucleon pair-interaction, as well as exchange currents. Our gravitational form factors should prove useful for assessing diffractive photo- and electro-production of heavy quarkonia on light nuclei.
The current effort at JLab to measure near-threshold heavy quarkonia production on nucleons should be extended to light nuclei, to shed light on how the formation of nuclei may affect our understanding of mass and charge distributions, and the nature of the quantum delocalization of the quarks and gluons in bound states at low energy. Clearly, with the advent of the EIC with higher energy and luminosity, threshold photoproduction of quarkonia such as \(J/\Psi,\Upsilon\) on light nuclei should prove useful for addressing these issues.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c} \hline \hline Nuclei & \(r_{S}^{g}\) & \(r_{S}^{q}\) & \(r_{M}^{g}\) & \(r_{M}^{q}\) & \(r_{S,Q}^{g}\) & \(r_{S,Q}^{q}\) & \(r_{M,Q}^{g}\) & \(r_{M,Q}^{q}\) & \(r_{E}\) \\ \hline proton (experiment) & 1.07 [2] & – & 0.76 [2] & – & – & – & – & – & 0.84 [31] \\ \hline proton (input) & 0.93 [15] & 0.82 & 0.68 [15] & 0.60 & – & – & – & – & 0.8 \\ \hline Deuteron & 2.16 & 2.11 & 2.06 & 2.04 & 0.97 & 0.97 & 0.97 & 0.97 & 2.12 (2.13 [30]) \\ \hline Helium-4 (K) & 1.70 & 1.66 & 1.58 & 1.56 & – & – & – & – & 1.70 (1.68 [32]) \\ \hline Helium-4 (WS) & 1.79 & 1.75 & 1.68 & 1.66 & – & – & – & – & 1.79 (1.68 [32]) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The quark and gluon EMT radii (fm) of light nuclei following from (66): scalar radii \(r_{S}\), mass radii \(r_{M}\), spin-averaged quadrupole radii \(r_{S/M,Q}\), and charge radii \(r_{E}\). For Helium-4 we have listed the results from the K-harmonic (K) and Woods-Saxon (WS) methods, with the bracketed results referring to experiment.
### Acknowledgements
We thank Zein-Eddine Meziani and Bao-Dong Sun for discussions. This work is supported by the Office of Science, U.S. Department of Energy under Contract No. DE-FG-88ER40388. This research is also supported in part within the framework of the Quark-Gluon Tomography (QGT) Topical Collaboration, under contract no. DE-SC0023646.
## Appendix A Light nuclei charge form factors
To compare the mass radii from the gravitational form factors to the charge radii for light nuclei, we provide here a simple estimate of their charge form factors, also in the impulse approximation. Since data are available, this allows us to gauge the validity of our method. With this in mind, and for a single nucleon, the electromagnetic current reads
\[J_{N}^{\mu}(k)=\bar{u}(p_{2})\,e\left(F_{1}(k)\gamma^{\mu}+F_{2}(k)\frac{i\sigma^{\mu\nu}q_{\nu}}{2m_{N}}\right)u(p_{1}) \tag{11}\]
\(F_{1},F_{2}\) are the Dirac and Pauli form factors, which are related to the electric and magnetic Sachs form factors as
\[G_{E}(k) = F_{1}(k)-\frac{\vec{k}^{2}}{4m_{N}^{2}}F_{2}(k)\] \[G_{M}(k) = F_{1}(k)+F_{2}(k) \tag{12}\]
The age-old Rosenbluth analysis of the electron scattering data up to \(10\,\mathrm{GeV}^{2}\) shows that the Sachs form factors for the proton are well approximated by dipoles
\[G_{D}(k) = \left(1+\frac{\vec{k}^{2}}{0.71}\right)^{-2}\] \[G_{E}^{p}(k) = G_{D}(k)\] \[G_{E}^{n}(k) = 0\] \[G_{M}^{p,n}(k) = \mu_{p,n}G_{D}(k) \tag{13}\]
with the \(p,n\) empirical magnetic moments \(\mu_{p,n}=2.79,-1.91\) (in nuclear magnetons). For completeness, we note that the JLab analysis, based on the ratio of the polarization of the scattered proton, shows \(G_{E}^{p}\) falling faster than \(G_{M}^{p}\) [33]
\[G_{E}^{p}(k)=\left(1-0.13\,(\vec{k}^{2}-0.04)\right)G_{D}(k) \tag{14}\]
The leading non-relativistic and recoil contributions to the nucleon charge form factor are
\[eJ_{N}^{0}(k)=e\left(G_{E}(k)+\frac{(\sigma\times ik)\cdot P}{4m_{N}^{2}}(2G_{ M}(k)-G_{E}(k))+\mathcal{O}\bigg{(}\frac{\vec{k}^{4}}{m_{N}^{4}},\frac{P^{2}}{m _{N}^{2}}\bigg{)}\right) \tag{15}\]
We now proceed to use (15) for the modifications to the charge density in light nuclei, using the impulse approximation.
**Deuteron charge form factor:**
Since the deuteron wavefunction is symmetric under spin exchange and, independently, space exchange of \(p,n\), it follows that only the singlet combination of the electric and magnetic form factors contributes to the deuteron charge form factor (15), through the substitution
\[G_{E}^{S}(k) = \frac{G_{E}^{p}(k)+G_{E}^{n}(k)}{2}\] \[G_{M}^{S}(k) = \frac{G_{M}^{p}(k)+G_{M}^{n}(k)}{2} \tag{16}\]
with \(\sigma\to S\). With this in mind, and using some of the matrix elements developed for the EMT form factors earlier, we obtain for the charge density in the impulse approximation
\[J_{D}^{0}(k,m^{\prime},m)= 2G_{E}^{S}(k)\left(C_{E}(k)\delta_{m^{\prime}m}-2C_{Q}(k)\langle m ^{\prime}|(S\cdot\hat{k})^{2}-\frac{1}{3}S^{2}|m\rangle\right)\] \[- \frac{\vec{k}^{2}}{2m_{N}^{2}}\left(2G_{M}^{S}(k)-G_{E}^{S}(k) \right)\left(D_{0}^{SP}(k)\delta_{m^{\prime}m}+(D_{2}^{SP}(k)+2D_{3}^{SP}(k)) \langle m^{\prime}|Q^{ij}\hat{k}^{i}\hat{k}^{j}|m\rangle\right)\]
The squared electric charge radius of the deuteron is the sum of the nucleon contribution plus the intrinsic \(C_{E}\)-form factor contribution
\[\langle r^{2}\rangle_{D}=\langle r^{2}\rangle_{N}+\langle r^{2}\rangle_{C_{E}} \tag{10}\]
The results for the electric charge form factor of the deuteron \(|F_{C}^{D}|\) in (11) are shown in Fig. 12 (top), and compared to the empirical data in [34]. The impulse approximation works reasonably well in this momentum range, although the diffractive dip is slightly off to the right of the empirical values. The diffractive pattern, with a first zero at about \(k_{D}^{2}\approx 0.75\,\mathrm{GeV}^{2}\), reflects the good deuteron S-wavefunction from the soft Reid potential in Fig. 1, with a peak at \(a_{D}\approx 1.5\,\mathrm{fm}\) (diffraction disc size).
**Helium-4 charge form factor:**
For Helium-4, the charge density in the impulse approximation reads
\[J_{He}^{0}(k)=4G_{E}^{S}(k)\,\bar{C}_{E}(k)=2F_{C}^{\mathrm{He}}(k) \tag{11}\]
where the analogue of the singlet substitution (10) applies, because of the spin and space symmetry of the underlying wavefunction. Note that at low momentum transfer (11) simplifies
\[F_{C}^{\mathrm{He}}(k)\approx F_{C}^{D}(k) \tag{12}\]
The squared electric charge radius of Helium-4 is the sum of the nucleon plus the intrinsic \(\bar{C}_{E}\) form factor contribution
\[\langle r^{2}\rangle_{H}=\langle r^{2}\rangle_{N}+\langle r^{2}\rangle_{C_{E}} \tag{13}\]
The results for the electric charge form factor of Helium-4 \(|F_{C}^{He}|\) in (11) are shown in Fig. 12 (bottom) for the Woods-Saxon (dotted-orange) and the K-harmonic (dotted-blue) methods, and compared to the empirical data in [36]. The Woods-Saxon result does not display a diffraction pattern, since the single-particle wavefunction in Fig. 7 shows no plateau or disc. The K-harmonic result does, a reflection of the plateau in the wavefunction shown in Fig. 6 at a hyperdistance of \(a_{H}\approx 2.5\,\mathrm{fm}\), hence a first minimum at about \(\vec{k}_{H}^{2}\) (Fresnel diffraction)
\[\frac{\vec{k}_{H}^{2}}{\vec{k}_{D}^{2}}\approx\frac{a_{D}^{2}}{a_{H}^{2}} \approx\left(\frac{1.5}{2.5}\right)^{2}\approx\frac{1}{3} \tag{14}\]
This estimate for the ratio of the locations of the diffractive minima in the deuteron and Helium-4 electric form factors is consistent with our results.
## Appendix B Pressure and shear force
Following [23], the stress tensor, given by the ij-components of the EMT, is defined as (\(a=q,g\))
\[T_{a}^{ij}(\vec{r},m^{\prime},m) = \int\frac{d^{3}k}{(2\pi)^{3}}\frac{m_{D}}{E}e^{-ik\cdot r}\langle+ \frac{k}{2}m^{\prime}\,|\tilde{T}_{a}^{ij}(0)|-\frac{k}{2}m\rangle \tag{14}\] \[= (p_{0}(r)\delta^{ij}+s_{0}(r)Y_{2}^{ij})\delta_{m^{\prime}m}+p_{2 }(r)\langle m^{\prime}|Q^{ij}|m\rangle+2s_{2}(r)\langle m^{\prime}|\hat{Q}^{ip} Y_{2}^{pj}+\hat{Q}^{jp}Y_{2}^{pi}-\delta^{ij}\hat{Q}^{pq}Y_{2}^{pq}|m\rangle\] \[- \frac{1}{m_{D}^{2}}\langle m^{\prime}|Q^{kl}|m\rangle\partial_{k }\partial_{l}[p_{3}(r)\delta^{ij}+s_{3}(r)Y_{2}^{ij}],\]
with \(Y_{2}^{ij}=\hat{r}^{i}\hat{r}^{j}-\frac{1}{3}\delta^{ij}\). The pressure and shear force follow as
\[p_{i} = \frac{1}{3}\frac{1}{r^{2}}\frac{d}{dr}r^{2}\frac{d}{dr}\tilde{D} _{i}(r),\] \[s_{i} = -\frac{1}{2}r\frac{d}{dr}\frac{1}{r}\frac{d}{dr}\tilde{D}_{i}(r), \tag{15}\]
Here \(\tilde{D}_{i}\) are the Fourier transforms of the deuteron form factors \(D_{i}\) defined in (36),
\[\tilde{D}_{0,2}(r)=\int\frac{d^{3}k}{2E(2\pi)^{3}}e^{-ik\cdot r}D_{0,2}(k),\]
\[\tilde{D}_{3}(r)=\int\frac{d^{3}k}{2E(2\pi)^{3}}e^{-ik\cdot r}\frac{m_{D}^{2}}{\vec{k}^{2}}D_{3}(k) \tag{16}\]
For Helium-4, without the small D-wave admixture, the pressure and shear force receive contributions only from \(D_{0}\).
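The earlier statement that the pressure distributions integrate to zero can be checked directly from these definitions: since \(p_{i}\) is a total derivative,

\[\int_{0}^{\infty}dr\,r^{2}\,p_{i}(r)=\frac{1}{3}\int_{0}^{\infty}dr\,\frac{d}{dr}\bigg{(}r^{2}\,\frac{d}{dr}\tilde{D}_{i}(r)\bigg{)}=\frac{1}{3}\Big{[}r^{2}\,\tilde{D}_{i}^{\prime}(r)\Big{]}_{0}^{\infty}=0,\]

provided \(r^{2}\tilde{D}_{i}^{\prime}\) vanishes at the origin and at infinity, as is the case for regular and localized form factors.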
# Iterative solution to the biharmonic equation in mixed form discretized by the Hybrid High-Order method

Paola F. Antonietti, Pierre Matalon, Marco Verani (2023-08-21, http://arxiv.org/abs/2308.10748v3)
###### Abstract
We consider the solution to the biharmonic equation in mixed form discretized by the Hybrid High-Order (HHO) methods. The two resulting second-order elliptic problems can be decoupled via the introduction of a new unknown, corresponding to the boundary value of the solution of the first Laplacian problem. This technique yields a global linear problem that can be solved iteratively via a Krylov-type method. More precisely, at each iteration of the scheme, two second-order elliptic problems have to be solved, and a normal derivative on the boundary has to be computed. In this work, we specialize this scheme for the HHO discretization. To this aim, an explicit technique to compute the discrete normal derivative of an HHO solution of a Laplacian problem is proposed. Moreover, we show that the resulting discrete scheme is well-posed. Finally, a new preconditioner is designed to speed up the convergence of the Krylov method. Numerical experiments assessing the performance of the proposed iterative algorithm on both two- and three-dimensional test cases are presented.
**Keywords:** partial differential equations, biharmonic equation, hybrid high-order.
## 1 Introduction
Let \(\Omega\subset\mathbb{R}^{d}\), \(d\in\{2,3\}\), be an open, bounded, polyhedral domain with smooth boundary \(\partial\Omega\). In this work, we address the numerical approximation of the solution to the biharmonic equation
\[\Delta^{2}\psi=\mathsf{f}\quad\text{in }\Omega, \tag{1a}\] \[\psi=\mathsf{g}_{\mathrm{D}},\ \partial_{\mathrm{n}}\psi= \mathsf{g}_{\mathrm{N}}\quad\text{on }\partial\Omega, \tag{1b}\]
where the load function \(\mathsf{f}\in L^{2}(\Omega)\), \(\mathsf{g}_{\mathrm{D}}\in H^{1/2}(\partial\Omega)\) and \(\mathsf{g}_{\mathrm{N}}\in H^{-1/2}(\partial\Omega)\) are prescribed. As usual, for \(X\subset\overline{\Omega}\) and \(s\in\mathbb{R}\), we denote by \(H^{s}(X)\) the standard Sobolev space of index \(s\). For all \(\mathsf{g}\colon\partial\Omega\to\mathbb{R}\), we define the subspace \(H^{s}_{\mathsf{g}}(\Omega):=\{v\in H^{s}(\Omega)\mid v_{|\partial\Omega}=\mathsf{g}\}\). We denote by \((\cdot,\cdot)\) (resp. \(\langle\cdot,\cdot\rangle\)) the standard \(L^{2}\)-inner product in \(\Omega\) (resp. on \(\partial\Omega\)). Moreover, \(\partial_{\mathrm{n}}\) denotes the outer normal derivative on \(\partial\Omega\).
Equation (1) typically models the bending of a clamped plate supporting a load. By introducing the unknown \(\omega:=-\Delta\psi\), (1a) can be rewritten into two second-order elliptic equations, yielding the mixed formulation
\[-\Delta\omega =\mathsf{f}\quad\quad\text{in }\Omega,\] \[-\Delta\psi =\omega\quad\quad\text{in }\Omega, \tag{2}\] \[\psi=\mathsf{g}_{\mathrm{D}},\ \partial_{\mathrm{n}}\psi =\mathsf{g}_{\mathrm{N}}\quad\text{on }\partial\Omega.\]
This mixed form naturally arises in fluid dynamics, where \(\psi\) represents the stream function and \(\omega\) the vorticity. In the plate bending model, \(\psi\) represents the deflection and \(\omega\) the bending moment or shear resultant force. There are several advantages in employing the mixed formulation (2) over the primal form (1). First, while the weak solution of the primal form (1) is to be found in \(H^{2}(\Omega)\), that of the mixed one lies in \(H^{1}(\Omega)\), for which approximation spaces are easier to be constructed. Second, the splitting (2) allows, after introducing an additional unknown, the use of fast, scalable solvers available for second-order elliptic equations. In particular, we will consider here the Hybrid High-Order (HHO)
method [9, 10], a non-conforming polyhedral discretization allowing arbitrary polynomial degrees of approximation, and exhibiting optimal convergence rates. In this context, the mixed form (2) allows to take advantage of the recent works on multigrid methods designed for the diffusion equation discretized by HHO schemes; see [11, 12, 13, 14].
Recently, ad-hoc HHO discretizations of the primal formulation (1) have been designed in [17], and [16] generalizes the methods to other boundary conditions. See also [15] for HHO discretizations of singularly perturbed fourth-order problems. Previously, related models have also been tackled with HHO methods, such as the Kirchhoff-Love plate bending model problem in [2], and the Cahn-Hilliard equation in [4]. The mixed form (2) leads to a saddle-point algebraic system, for which special solvers and preconditioners have been proposed, e.g., in [20, 21, 25]. Nonetheless, in this form, the Laplacian equations are coupled, preventing the use of fast and scalable solvers specifically designed for symmetric positive-definite (SPD) matrices. To address this issue, techniques introducing new variables have been designed in order to transform (2) into a series of decoupled problems, whose associated algebraic formulations lead to symmetric and positive-definite algebraic problems. Among them, we mention the method proposed by Glowinski, Ciarlet, Raviart and Pironneau [5, 6, 19], where the authors introduce the unknown \(\lambda\coloneqq\omega_{|\partial\Omega}\). In that setting, one solves a sequence of Dirichlet problems and iteratively improves the solution as to enforce the prescribed Neumann condition. In a symmetric fashion, the technique proposed by Falk [18] introduces \(\lambda\coloneqq\partial_{n}\omega\), and one solves a sequence of Neumann problems while iteratively improving the solution as to enforce the prescribed Dirichlet condition. In both schemes, \(\lambda\) is the solution of a linear, symmetric, elliptic equation of the form \(\mathcal{L}(\lambda)=b\), in which the evaluation of \(\mathcal{L}\) involves the solution of two Laplacian problems. In the discrete setting, solving this equation then corresponds to solving an _implicit_ linear system, i.e. whose matrix is not explicitly known but one can compute its action on a vector. This specific configuration is well suited for the use of iterative methods where the operator is applied to a vector at each iteration, without requiring explicit knowledge of the matrix coefficients. Gradient descent algorithms and, more specifically, Krylov methods such as conjugate gradients are ideal candidates in this setting.
In this work, we focus on the approach of Glowinski _et al._[5, 6, 19], that we recall in Section 2. The use of an iterative method for the solution of the global linear problem yields an iterative scheme where each iteration consists of three steps: (i) the solution of a Laplace problem; (ii) the solution of a subsequent Laplacian problem, using the solution of step (i) as a source term; (iii) the computation of the normal derivative on the boundary of the solution of step (ii). In Section 3, we recall the HHO discretization of the Laplacian problem with Dirichlet boundary conditions. Then, Section 4 proposes a computable algorithm for the discrete normal derivative of the HHO solution of a diffusion problem. This step is indeed the main ingredient to define the global discrete scheme.
The work of Glowinski _et al._ was applied to the standard Finite Element Method (FEM). While that setting does not raise any issue regarding the well-posedness of the discrete problem, as we will show in Section 5, in the context of HHO the resulting problem requires stabilization. A stabilization method is proposed and the well-posedness of the discrete problem is proved.
In the context of two-dimensional FEMs, a preconditioner ensuring a convergence rate independent of the mesh size was proposed in [22]. It is, however, restricted to two-dimensional problems. In Section 6, we therefore propose a novel preconditioner, applicable to general polytopal meshes. The main idea consists in building an approximate, sparse matrix of the problem, where each column \(j\) is computed by solving the two Laplacian problems and evaluating the normal derivative only in a restricted neighbourhood of the \(j^{th}\) degree of freedom (DoF). Finally, numerical experiments are reported in Section 7. Various types of two- and three-dimensional meshes, including polygonal meshes, are used. The scheme exhibits a convergence rate scaling as \(\mathcal{O}(h^{k+2})\) in \(L^{2}\)-norm, where \(k\) denotes the polynomial degree corresponding to the face unknowns of the HHO method.
## 2 The continuous splitting of the biharmonic problem
Following [5, 6, 19], we start from equation (2) and we introduce the new unknown \(\lambda\coloneqq\omega_{|\partial\Omega}\in H^{\nicefrac{{1}}{{2}}}(\partial\Omega)\). Supposing \(\lambda\) known, \(\omega\) and \(\psi\) are successively recovered by solving the following Dirichlet problems:
\[\begin{cases}-\Delta\omega=\mathsf{f}&\text{ in }\Omega,\\ \quad\omega=\lambda&\text{ on }\partial\Omega,\end{cases} \tag{3a}\] \[\begin{cases}-\Delta\psi=\omega&\text{ in }\Omega,\\ \quad\psi=\mathsf{g}_{\mathrm{D}}&\text{ on }\partial\Omega.\end{cases} \tag{3b}\]
While the Dirichlet datum \(\mathsf{g}_{\mathrm{D}}\) is explicitly enforced on the solution in (3b), the enforcement of the Neumann condition \(\partial_{n}\psi=\mathsf{g}_{\mathrm{N}}\) defines a problem in the unknown \(\lambda\), which is derived in the following way. For all \(\mu\in H^{\nicefrac{{1}}{{2}}}(\partial\Omega)\), we denote by \((\omega(\mu),\psi(\mu))\) the solution of
\[\begin{cases}-\Delta\omega(\mu)=\mathsf{f}&\text{ in }\Omega,\\ \quad\omega(\mu)=\mu&\text{ on }\partial\Omega,\end{cases}\qquad\begin{cases}-\Delta\psi(\mu)= \omega(\mu)&\text{ in }\Omega,\\ \quad\psi(\mu)=\mathsf{g}_{\mathrm{D}}&\text{ on }\partial\Omega.\end{cases}\]
By construction, the solution \((\omega,\psi)\) of (3) corresponds to \((\omega(\lambda),\psi(\lambda))\). Solving (3) then boils down to finding \(\lambda\in H^{\nicefrac{{1}}{{2}}}(\partial\Omega)\) such that
\[\partial_{n}\psi(\lambda)=\mathsf{g}_{\mathrm{N}}. \tag{4}\]
In order to derive a linear problem from (4), the constraints related to \(\mathsf{f}\) and \(\mathsf{g}_{\mathrm{D}}\) are eliminated through the introduction of \((\omega_{0},\psi_{0}):=(\omega(0),\psi(0))\) (the choice of \(\mu=0\) is arbitrary). For all \(\mu\in H^{\nicefrac{{1}}{{2}}}(\partial\Omega)\), we then denote by \((\hat{\omega}(\mu),\hat{\psi}(\mu))\) the solution of the following sequence of problems, with vanishing load and Dirichlet function:
\[\begin{cases}-\Delta\hat{\omega}(\mu)=0&\text{ in }\Omega,\\ \quad\hat{\omega}(\mu)=\mu&\text{ on }\partial\Omega,\end{cases}\qquad\begin{cases}-\Delta\hat{\psi}(\mu)=\hat{\omega}(\mu)&\text{ in }\Omega,\\ \quad\hat{\psi}(\mu)=0&\text{ on }\partial\Omega,\end{cases} \tag{5}\]
respectively. Equation (4) can now be reformulated as the linear problem
\[\mathcal{L}(\lambda)=b,\qquad\qquad b:=\partial_{n}\psi_{0}-\mathsf{g}_{\mathrm{N}}, \tag{6}\]
where \(\mathcal{L}\colon H^{\nicefrac{{1}}{{2}}}(\partial\Omega)\to H^{-\nicefrac{{ 1}}{{2}}}(\partial\Omega)\) is the linear operator defined such that for all \(\mu\in H^{\nicefrac{{1}}{{2}}}(\partial\Omega)\),
\[\mathcal{L}(\mu):=-\partial_{n}\hat{\psi}(\mu). \tag{7}\]
The operator \(\mathcal{L}\) is proved to be continuous, symmetric and positive-definite in [19, Lem. 2.1].
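To make the structure of (6)-(7) concrete, the following minimal one-dimensional sketch (Python, with simple second-order finite differences) manufactures the exact solution \(\psi(x)=\sin^{2}(\pi x)\) on \(\Omega=(0,1)\), for which \(\mathsf{f}=\psi^{\prime\prime\prime\prime}=-8\pi^{4}\cos(2\pi x)\), \(\mathsf{g}_{\mathrm{D}}=\mathsf{g}_{\mathrm{N}}=0\) and \(\lambda=\omega_{|\partial\Omega}=-2\pi^{2}\) at both endpoints. It is an illustration of the continuous splitting only, not of the HHO discretization discussed below; all names and discretization choices are ours.

```python
import numpy as np

n = 400                                    # number of sub-intervals
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
f = -8 * np.pi**4 * np.cos(2 * np.pi * x)  # f = psi'''' for psi = sin^2(pi x)

def solve_dirichlet(rhs_all, g0, g1):
    """Solve -u'' = rhs with u(0) = g0, u(1) = g1 (2nd-order finite differences)."""
    A = (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h**2
    rhs = rhs_all[1:-1].copy()
    rhs[0] += g0 / h**2
    rhs[-1] += g1 / h**2
    return np.concatenate(([g0], np.linalg.solve(A, rhs), [g1]))

def normal_derivative(u):
    """Outward normal derivative at x = 0 and x = 1 (one-sided differences)."""
    return np.array([-(u[1] - u[0]) / h, (u[-1] - u[-2]) / h])

def apply_L(mu):
    omega_hat = solve_dirichlet(np.zeros(n + 1), mu[0], mu[1])  # step (i)
    psi_hat = solve_dirichlet(omega_hat, 0.0, 0.0)              # step (ii)
    return -normal_derivative(psi_hat)                          # step (iii): L(mu)

omega0 = solve_dirichlet(f, 0.0, 0.0)      # omega_0, with mu = 0
psi0 = solve_dirichlet(omega0, 0.0, 0.0)   # psi_0, with g_D = 0
b = normal_derivative(psi0)                # b = dn(psi_0) - g_N, here g_N = 0

# The 1D boundary carries only two unknowns, so L can be assembled by applying
# it to the unit vectors; in 2D/3D one applies a Krylov method to the
# matrix-free operator instead, which is the strategy adopted in this work.
Lmat = np.column_stack([apply_L(e) for e in np.eye(2)])
lam = np.linalg.solve(Lmat, b)
print(lam)  # both entries approach the exact lambda = -2*pi**2 ~ -19.74 as h -> 0
```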
## 3 HHO discretization of the Laplacian problem
In this section, we briefly recall (see, e.g., [8, Chap. 2] for extended details) the discrete HHO formulation of the following problem: find \(u\colon\Omega\to\mathbb{R}\) such that
\[\begin{cases}-\Delta u=f&\text{ in }\Omega,\\ \quad u=g_{\mathrm{D}}&\text{ on }\partial\Omega,\end{cases} \tag{8}\]
where \(f\colon\Omega\to\mathbb{R}\) and \(g_{\mathrm{D}}\in H^{\nicefrac{{1}}{{2}}}(\partial\Omega)\). A weak solution of (8) is obtained via the variational formulation: find \(u\in H^{1}_{g_{\mathrm{D}}}(\Omega)\) such that
\[a(u,v)=(f,v)\qquad\forall v\in H^{1}_{0}(\Omega), \tag{9}\]
where the bilinear form \(a\) is such that \(a(v,w)=(\nabla v,\nabla w)\), for all \(v,w\in H^{1}(\Omega)\).
### Mesh definition and notation
Let the couple \((\mathcal{T}_{h},\mathcal{F}_{h})\) define a mesh of the domain \(\Omega\subset\mathbb{R}^{d}\), \(d\in\{1,2,3\}\): \(\mathcal{T}_{h}\) is a set of disjoint, open, polyhedral elements such that \(\bigcup_{T\in\mathcal{T}_{h}}\overline{T}=\overline{\Omega}\); \(\mathcal{F}_{h}\) is the set of element faces; \(h:=\max_{T\in\mathcal{T}_{h}}h_{T}\) with \(h_{T}\) denoting the diameter of \(T\in\mathcal{T}_{h}\). The mesh is assumed to match the geometrical requirements of [8, Def. 1.4]. Let \(\mathcal{T}_{h}^{\mathrm{B}}\) be the subset of \(\mathcal{T}_{h}\) collecting the elements located at the boundary of the domain. We also define the following subsets of \(\mathcal{F}_{h}\):
* \(\mathcal{F}_{h}^{\mathrm{I}}\), collecting the interior faces;
* \(\mathcal{F}_{h}^{\mathrm{B}}\), collecting the boundary faces;
* \(\mathcal{F}_{T}\), collecting the faces of \(T\), for all \(T\in\mathcal{T}_{h}\).
We denote by \(\mathbf{n}_{\partial T}\) the unit normal vector to \(\partial T\) pointing outward of \(T\). For all \(T\in\mathcal{T}_{h}\) (resp. \(F\in\mathcal{F}_{h}\)), we denote by \((\cdot,\cdot)_{T}\) (resp. \(\langle\cdot,\cdot\rangle_{F}\)) the standard inner product of \(L^{2}(T)\) (resp. \(L^{2}(F)\)) or \(L^{2}(T)^{d}\). We define \((\cdot,\cdot)_{\mathcal{T}_{h}}:=\sum_{T\in\mathcal{T}_{h}}(\cdot,\cdot)_{T}\), and for all \(\mathcal{G}_{h}\subset\mathcal{F}_{h}\), \(\langle\cdot,\cdot\rangle_{\mathcal{G}_{h}}:=\sum_{F\in\mathcal{G}_{h}} \langle\cdot,\cdot\rangle_{F}\). Finally, we define \(|\mathcal{T}_{h}|:=\sum_{T\in\mathcal{T}_{h}}|T|\) and for all \(\mathcal{G}_{h}\subset\mathcal{F}_{h}\), \(|\mathcal{G}_{h}|:=\sum_{F\in\mathcal{G}_{h}}|F|\), where \(|\cdot|\) is the Hausdorff measure.
### Local and broken polynomial spaces
The HHO method hinges on discrete unknowns representing polynomial functions local to elements and faces. So, for all \(m\in\mathbb{N}_{0}\) and all \(T\in\mathcal{T}_{h}\) (resp. \(F\in\mathcal{F}_{h}\)), we denote by \(\mathbb{P}^{m}(T)\) (resp. \(\mathbb{P}^{m}(F)\)) the space spanned by the restriction to \(T\) (resp. \(F\)) of \(d\)-variate polynomials of total degree \(\leq m\). From these local polynomial spaces, we can construct the following broken polynomial spaces supported by the mesh and its skeleton:
\[\mathbb{P}^{m}(\mathcal{T}_{h}):=\left\{v_{\mathcal{T}_{h}}:=(v_{T} )_{T\in\mathcal{T}_{h}}\mid v_{T}\in\mathbb{P}^{m}(T)\right.\ \forall T\in\mathcal{T}_{h}\right\},\] \[\mathbb{P}^{m}(\mathcal{F}_{h}):=\left\{v_{\mathcal{F}_{h}}:=(v_{F} )_{F\in\mathcal{F}_{h}}\mid v_{F}\in\mathbb{P}^{m}(F)\right.\ \forall F\in\mathcal{F}_{h}\right\},\]
respectively. The local space \(\mathbb{P}^{m}(\mathcal{F}_{T})\) is defined analogously for all \(T\in\mathcal{T}_{h}\). For all cell or face \(X\), we denote by \(\pi_{X}^{m}\colon L^{2}(X)\to\mathbb{P}^{m}(X)\) the local \(L^{2}\)-orthogonal projector onto the space \(\mathbb{P}^{m}(X)\). By patching up those local projectors, we denote by \(\pi_{h}^{m}\colon L^{2}(\Omega)\to\mathbb{P}^{m}(\mathcal{T}_{h})\) the piecewise \(L^{2}\)-orthogonal projector onto \(\mathbb{P}^{m}(\mathcal{T}_{h})\), and by \(\pi_{\mathcal{F}_{h}^{m}}^{m}\colon L^{2}(\partial\Omega)\to\mathbb{P}^{m}( \mathcal{F}_{h}^{\mathrm{B}})\) the piecewise \(L^{2}\)-orthogonal projector onto \(\mathbb{P}^{m}(\mathcal{F}_{h}^{\mathrm{B}})\).
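As these projectors are used throughout the construction, it may help to see how a local \(L^{2}\)-projection is computed in practice: one solves a small mass-matrix system in a basis of the target polynomial space. The following Python fragment is a minimal one-dimensional sketch; the element, degree, basis and test function are illustrative placeholders, not part of the method's specification.

```python
import numpy as np

a, b, m = 0.0, 1.0, 2                          # element T = (a, b), degree m
xq, wq = np.polynomial.legendre.leggauss(8)    # Gauss quadrature on (-1, 1)
xq = 0.5 * (b - a) * xq + 0.5 * (a + b)        # map nodes to T
wq = 0.5 * (b - a) * wq                        # map weights to T
phi = np.array([xq**i for i in range(m + 1)])  # monomial basis of P^m(T) at nodes

M = (phi * wq) @ phi.T                         # mass matrix M_ij = (phi_j, phi_i)_T
rhs = (phi * wq) @ np.sin(np.pi * xq)          # moments (v, phi_i)_T for v = sin(pi x)
coeffs = np.linalg.solve(M, rhs)               # coefficients of pi_T^m v in the basis
```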
### Discrete hybrid formulation
Given the polynomial degrees \(k\in\mathbb{N}_{0}\) and \(l\in\{k,k+1\}\), the global and local spaces of _hybrid_ variables are defined as
\[\underline{U}_{h} :=\left\{\underline{v}_{h}:=(v_{\mathcal{T}_{h}},v_{\mathcal{F}_{h} })\in\mathbb{P}^{l}(\mathcal{T}_{h})\times\mathbb{P}^{k}(\mathcal{F}_{h}) \right\},\] \[\underline{U}_{T} :=\left\{\underline{v}_{T}:=(v_{T},v_{\partial T})\in\mathbb{P}^ {l}(T)\times\mathbb{P}^{k}(\mathcal{F}_{T})\right\}\quad\forall T\in\mathcal{T }_{h},\]
respectively. For any \(\underline{v}_{h}\in\underline{U}_{h}\), we denote by \(\underline{v}_{T}\in\underline{U}_{T}\) its restriction to \(T\in\mathcal{T}_{h}\). Boundary data are strongly accounted for in the following subspaces:
\[\mathbb{P}^{k,g_{\mathrm{D}}}(\mathcal{F}_{h}):=\left\{v_{\mathcal{F}_{h}}\in \mathbb{P}^{k}(\mathcal{F}_{h})\mid v_{F}=\pi_{F}^{k}g_{\mathrm{D}}\ \ \forall F\in\mathcal{F}_{h}^{\mathrm{B}}\right\},\qquad\underline{U}_{h,g_{ \mathrm{D}}}:=\mathbb{P}^{l}(\mathcal{T}_{h})\times\mathbb{P}^{k,g_{\mathrm{D }}}(\mathcal{F}_{h}).\]
In particular, homogeneous Dirichlet conditions are strongly enforced in \(\underline{U}_{h,0}\). The global HHO bilinear form associated to the variational formulation of problem (8) is defined as \(a_{h}\colon\underline{U}_{h}\times\underline{U}_{h}\to\mathbb{R}\) such that \(a_{h}(\underline{v}_{h},\underline{w}_{h}):=\sum_{T\in\mathcal{T}_{h}}a_{T}(\underline{v}_{T},\underline{w}_{T})\) where the local bilinear form \(a_{T}\colon\underline{U}_{T}\times\underline{U}_{T}\to\mathbb{R}\) is defined as
\[a_{T}(\underline{v}_{T},\underline{w}_{T}):=(\nabla p_{T}^{k+1}\underline{v}_{ T},\nabla p_{T}^{k+1}\underline{w}_{T})_{T}+s_{T}(\underline{v}_{T},\underline{w}_{T}) \qquad\forall T\in\mathcal{T}_{h}, \tag{10}\]
In this expression, the first term is responsible for consistency while the second is required to ensure stability of the scheme. The consistency term involves the _local potential reconstruction operator_ \(p_{T}^{k+1}\colon\underline{U}_{T}\to\mathbb{P}^{k+1}(T)\) defined such that, for all \(\underline{v}_{T}\in\underline{U}_{T}\), it satisfies

\[\begin{cases}(\nabla p_{T}^{k+1}\underline{v}_{T},\nabla w)_{T}=-(v_{T},\Delta w)_{T}+\langle v_{\partial T},\nabla w\cdot\mathbf{n}_{\partial T}\rangle_{\partial T}&\forall w\in\mathbb{P}^{k+1}(T),\\ (p_{T}^{k+1}\underline{v}_{T},1)_{T}=(v_{T},1)_{T}.\end{cases} \tag{11a}\]

Given the local interpolant \(\underline{v}_{T}\in\underline{U}_{T}\) of a function \(v\in L^{2}(\Omega)\), \(p_{T}^{k+1}\) reconstructs an approximation of \(v\) of degree \(k+1\). The stabilization bilinear form \(s_{T}\) depends on its argument only through the _difference operators_ \(\delta_{T}\colon\underline{U}_{T}\to\mathbb{P}^{l}(T)\) and \(\delta_{TF}\colon\underline{U}_{T}\to\mathbb{P}^{k}(F)\) for all \(F\in\mathcal{F}_{T}\), defined such that, for all \(\underline{v}_{T}\in\underline{U}_{T}\),

\[\delta_{T}\underline{v}_{T}:=\pi_{T}^{k}(p_{T}^{k+1}\underline{v}_{T}-v_{T})\quad\text{and}\quad\delta_{TF}\underline{v}_{T}:=\pi_{F}^{k}(p_{T}^{k+1}\underline{v}_{T}-v_{F})\quad\forall F\in\mathcal{F}_{T}. \tag{12}\]
These operators capture the higher-order correction that the reconstruction \(p_{T}^{k+1}\) adds to the element and face unknowns, respectively. A classical expression for \(s_{T}\colon\underline{U}_{T}\times\underline{U}_{T}\to\mathbb{R}\) is
\[s_{T}(\underline{v}_{T},\underline{w}_{T}):=\sum_{F\in\mathcal{F}_{T}}h_{F}^{-1 }\ \mathfrak{s}_{TF}(\underline{v}_{T},\underline{w}_{T}),\qquad\qquad\mathfrak{s} _{TF}(\underline{v}_{T},\underline{w}_{T}):=\langle(\delta_{TF}-\delta_{T}) \underline{v}_{T},(\delta_{TF}-\delta_{T})\underline{w}_{T}\rangle_{F}. \tag{13}\]
If \(l=k+1\), one can also use the simpler formula (used, e.g., in [4])
\[\mathfrak{s}_{TF}(\underline{v}_{T},\underline{w}_{T}):=\langle\pi_{F}^{k}(v_{T}-v_{F}),\pi_{F}^{k}(w_{T}-w_{F})\rangle_{F}. \tag{14}\]
The global HHO problem reads: find \(\underline{u}_{h}\in\underline{U}_{h,g_{\mathrm{D}}}\) such that
\[a_{h}(\underline{u}_{h},\underline{v}_{h})=(f,v_{\mathcal{T}_{h}})_{\mathcal{T}_{h}}\qquad\forall\underline{v}_{h}\in\underline{U}_{h,0}. \tag{15}\]
The final approximation is obtained through the post-processing step
\[u_{h}:=p_{h}^{k+1}\underline{u}_{h}\in\mathbb{P}^{k+1}(\mathcal{T}_{h}). \tag{16}\]
### Local problems and cell unknown recovery operator
It follows from this local construction that the cell unknowns are only locally coupled. For all \(T\in\mathcal{T}_{h}\), we define the linear operator \(\Theta_{T}\colon\mathbb{P}^{k}(\mathcal{F}_{T})\to\mathbb{P}^{l}(T)\) such that for all \(v_{\partial T}\in\mathbb{P}^{k}(\mathcal{F}_{T})\), \(\Theta_{T}v_{\partial T}\) is the unique solution of the local problem
\[a_{T}((\Theta_{T}v_{\partial T},0),(w_{T},0))=-a_{T}((0,v_{\partial T}),(w_{T},0)) \qquad\forall w_{T}\in\mathbb{P}^{l}(T). \tag{17}\]
We denote by \(\Theta_{T}\) the _cell unknown recovery operator_. In order to shorten the notation of the hybrid couple \(\underline{v}_{T}:=(\Theta_{T}v_{\partial T},v_{\partial T})\), we define the operator \(\underline{\Theta}_{T}\colon\mathbb{P}^{k}(\mathcal{F}_{T})\to\underline{U}_{T}\) such that for all \(v_{\partial T}\in\mathbb{P}^{k}(\mathcal{F}_{T})\),
\[\underline{\Theta}_{T}v_{\partial T}:=(\Theta_{T}v_{\partial T},v_{\partial T}). \tag{18}\]
Finally, the associated global operators \(\Theta_{\mathcal{T}_{h}}\colon\mathbb{P}^{k}(\mathcal{F}_{h})\to\mathbb{P}^{l}(\mathcal{T}_{h})\) and \(\underline{\Theta}_{h}\colon\mathbb{P}^{k}(\mathcal{F}_{h})\to\underline{U}_{h}\) are defined locally such that for all \(T\in\mathcal{T}_{h}\):
\[(\Theta_{\mathcal{T}_{h}}v_{\mathcal{F}_{h}})|_{T} :=\Theta_{T}v_{\partial T}\qquad\forall v_{\mathcal{F}_{h}}\in \mathbb{P}^{k}(\mathcal{F}_{h}), \tag{19}\] \[(\underline{\Theta}_{h}v_{\mathcal{F}_{h}})|_{T\times\partial T} :=\underline{\Theta}_{T}v_{\partial T}\qquad\forall v_{\mathcal{F}_{h}} \in\mathbb{P}^{k}(\mathcal{F}_{h}).\]
\(\underline{\Theta}_{h}\) verifies the following useful property:
**Lemma 1**.: _For all \(v_{\mathcal{F}_{h}}\in\mathbb{P}^{k}(\mathcal{F}_{h})\) and \(\underline{w}_{h}\in\underline{U}_{h}\), it holds that_
\[a_{h}(\underline{\Theta}_{h}v_{\mathcal{F}_{h}},\underline{w}_{h})=a_{h}(\underline{\Theta}_{h}v_{\mathcal{F}_{h}},\underline{\Theta}_{h}w_{\mathcal{F}_{h}}). \tag{20}\]
Proof.: See Appendix A.
**Remark 1**.: _(Static condensation) In practice, problem (15) is solved through the equivalent, condensed formulation: find \(u_{\mathcal{F}_{h}}\in\mathbb{P}^{k,g_{\mathrm{D}}}(\mathcal{F}_{h})\) such that_
\[\widehat{a}_{h}(u_{\mathcal{F}_{h}},v_{\mathcal{F}_{h}})=(f,\Theta_{\mathcal{T}_{h}}v_{\mathcal{F}_{h}})_{\mathcal{T}_{h}}\qquad\forall v_{\mathcal{F}_{h}}\in\mathbb{P}^{k,0}(\mathcal{F}_{h}), \tag{21}\]
_where \(\widehat{a}_{h}\colon\mathbb{P}^{k}(\mathcal{F}_{h})\times\mathbb{P}^{k}( \mathcal{F}_{h})\to\mathbb{R}\) is such that for all \(v_{\mathcal{F}_{h}},w_{\mathcal{F}_{h}}\in\mathbb{P}^{k}(\mathcal{F}_{h})\),_
\[\widehat{a}_{h}(v_{\mathcal{F}_{h}},w_{\mathcal{F}_{h}}):=a_{h}(\underline{ \Theta}_{h}v_{\mathcal{F}_{h}},\underline{\Theta}_{h}w_{\mathcal{F}_{h}}). \tag{22}\]
_Refer to [7, Prop. 4] for the proof._
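To make the algebra of Remark 1 concrete, here is a minimal NumPy sketch of static condensation on a generic symmetric positive-definite hybrid system, followed by cell unknown recovery; the block names (`A_TT`, `A_TF`, ...) and the random test data are illustrative assumptions, not taken from any actual HHO implementation.

```python
import numpy as np

# Hypothetical hybrid system, unknowns split into cell (T) and face (F) blocks:
#   [A_TT  A_TF] [u_T]   [b_T]
#   [A_FT  A_FF] [u_F] = [b_F]
rng = np.random.default_rng(0)
n_T, n_F = 6, 4
M = rng.standard_normal((n_T + n_F, n_T + n_F))
A = M @ M.T + (n_T + n_F) * np.eye(n_T + n_F)   # SPD matrix, mimicking a_h
A_TT, A_TF = A[:n_T, :n_T], A[:n_T, n_T:]
A_FT, A_FF = A[n_T:, :n_T], A[n_T:, n_T:]
b_T, b_F = rng.standard_normal(n_T), rng.standard_normal(n_F)

# Schur complement: matrix of the condensed form, cell unknowns eliminated.
S = A_FF - A_FT @ np.linalg.solve(A_TT, A_TF)
rhs = b_F - A_FT @ np.linalg.solve(A_TT, b_T)
u_F = np.linalg.solve(S, rhs)

# Cell unknown recovery, the algebraic analogue of Theta (local solves only;
# with a zero source this reduces to u_T = -A_TT^{-1} A_TF u_F).
u_T = np.linalg.solve(A_TT, b_T - A_TF @ u_F)

# Consistency with the monolithic solve.
u = np.linalg.solve(A, np.concatenate([b_T, b_F]))
assert np.allclose(np.concatenate([u_T, u_F]), u)
print("condensed and monolithic solutions agree")
```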
## 4 Discrete normal derivative
In this section, we derive an approximation, in the HHO context, of the normal derivative on \(\partial\Omega\). This formula will play a crucial role in the approximation scheme of the biharmonic problem.
Let \(u\) be the solution of the boundary value problem (8) and let \(\mathcal{H}\colon H^{\nicefrac{{1}}{{2}}}(\partial\Omega)\to H^{1}(\Omega)\) be a linear operator such that, for all \(v\in H^{\nicefrac{{1}}{{2}}}(\partial\Omega)\), \(\mathcal{H}v\) extends \(v\) to the interior of \(\Omega\). By Green's formula, it holds that
\[\langle\partial_{n}u,v\rangle=(\nabla u,\nabla\mathcal{H}v)+(\Delta u, \mathcal{H}v)\qquad\forall v\in H^{\nicefrac{{1}}{{2}}}(\partial\Omega).\]
Using the bilinear form \(a(\cdot,\cdot)\), and given that \(-\Delta u=f\), the above equation becomes
\[\langle\partial_{n}u,v\rangle=a(u,\mathcal{H}v)-(f,\mathcal{H}v)\qquad \forall v\in H^{\nicefrac{{1}}{{2}}}(\partial\Omega). \tag{23}\]
In the literature, the operator \(g_{\mathrm{D}}\mapsto\partial_{n}u\) is called the Poincaré-Steklov operator, or Dirichlet-to-Neumann map; see, e.g., [24]. Equation (23) is the well-known variational formula for the computation of the normal derivative. Now, given the solution \(\underline{u}_{h}\) of the discrete problem (15), we define its normal derivative on the boundary faces, denoted by \(\partial_{n,h}(\underline{u}_{h})\in\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\), through a discrete counterpart of (23) in the HHO setting. Namely, \(\partial_{n,h}(\underline{u}_{h})\) verifies
\[\langle\partial_{n,h}(\underline{u}_{h}),v_{\mathcal{F}_{h}^{\rm B}}\rangle_{ \mathcal{F}_{h}^{\rm B}}=a_{h}(\underline{u}_{h},\underline{\mathcal{H}}_{h} v_{\mathcal{F}_{h}^{\rm B}})-(f,\mathcal{H}_{\mathcal{T}_{h}}v_{\mathcal{F}_{h}^{\rm B }})_{\mathcal{T}_{h}}\qquad\forall v_{\mathcal{F}_{h}^{\rm B}}\in\mathbb{P}^{k }(\mathcal{F}_{h}^{\rm B}), \tag{24}\]
where \(\underline{\mathcal{H}}_{h}\) is an extension/lifting operator expressed in the hybrid setting, i.e.
\[\underline{\mathcal{H}}_{h}:=(\mathcal{H}_{\mathcal{T}_{h}},\mathcal{H}_{ \mathcal{F}_{h}}),\qquad\mathcal{H}_{\mathcal{T}_{h}}\colon\mathbb{P}^{k}( \mathcal{F}_{h}^{\rm B})\to\mathbb{P}^{l}(\mathcal{T}_{h}),\qquad\mathcal{H}_{ \mathcal{F}_{h}}\colon\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\to\mathbb{P}^{k }(\mathcal{F}_{h}).\]
For all \(v_{\mathcal{F}_{h}^{\rm B}}\in\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\) and \(F\in\mathcal{F}_{h}\), we define
\[(\mathcal{H}_{\mathcal{F}_{h}}v_{\mathcal{F}_{h}^{\rm B}})_{|F}:=\begin{cases}(v _{\mathcal{F}_{h}^{\rm B}})_{|F}&\text{ if }F\in\mathcal{F}_{h}^{\rm B},\\ 0&\text{ otherwise.}\end{cases}\]
Then, we set
\[\mathcal{H}_{\mathcal{T}_{h}}:=\Theta_{\mathcal{T}_{h}}\mathcal{H}_{\mathcal{F}_{h}},\]
where \(\Theta_{\mathcal{T}_{h}}\) is defined as in (19).
**Remark 2**.: _(Condensed formula) In order to facilitate the practical computation of \(\partial_{n,h}(\underline{u}_{h})\), equation (24) can be rewritten in terms of the condensed bilinear form \(\widehat{a}_{h}\) (cf. (22)). First of all, it follows from the definition of \(\underline{\Theta}_{h}\) that \(\underline{\mathcal{H}}_{h}=\underline{\Theta}_{h}\mathcal{H}_{\mathcal{F}_{h}}\). Then, by using property (20) and the definition (22) of \(\widehat{a}_{h}\), we have_
\[a_{h}(\underline{u}_{h},\underline{\mathcal{H}}_{h}v_{\mathcal{F}_{h}^{\rm B}}) =a_{h}(\underline{u}_{h},\underline{\Theta}_{h}\mathcal{H}_{\mathcal{F}_{h}}v_{ \mathcal{F}_{h}^{\rm B}})=a_{h}(\underline{\Theta}_{h}u_{\mathcal{F}_{h}}, \underline{\Theta}_{h}\mathcal{H}_{\mathcal{F}_{h}}v_{\mathcal{F}_{h}^{\rm B}})= \widehat{a}_{h}(u_{\mathcal{F}_{h}},\mathcal{H}_{\mathcal{F}_{h}}v_{\mathcal{F }_{h}^{\rm B}}).\]
_Equation (24) then becomes_
\[\langle\partial_{n,h}(\underline{u}_{h}),v_{\mathcal{F}_{h}^{\rm B}}\rangle_{ \mathcal{F}_{h}^{\rm B}}=\widehat{a}_{h}(u_{\mathcal{F}_{h}},\mathcal{H}_{ \mathcal{F}_{h}}v_{\mathcal{F}_{h}^{\rm B}})-(f,\mathcal{H}_{\mathcal{T}_{h}}v_{ \mathcal{F}_{h}^{\rm B}})_{\mathcal{T}_{h}}\qquad\forall v_{\mathcal{F}_{h}^{\rm B }}\in\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B}). \tag{25}\]
_Notice that compared to (24), which requires the knowledge of \(\underline{u}_{h}\), formula (25) only involves \(u_{\mathcal{F}_{h}}\). Consequently, the following hold: (i) \(u_{\mathcal{T}_{h}}\) does not have to be computed, (ii) the matrix used to compute the first term is smaller, as the matrix representation of \(\widehat{a}_{h}\) is a Schur complement where the cell unknowns have been eliminated._
The computation of the discrete normal derivative as proposed here is numerically validated in Section 7.2. The experiments show a convergence \(\mathcal{O}(h^{k+1})\) in \(L^{2}\)-norm.
## 5 Discrete HHO problem
In this section, the global, discrete realization of problem (6) is formulated.
### Bilinear form
In order to devise a stable problem, we will make use of the following bilinear form in the hybrid space \(\underline{U}_{h}\). For all \(\underline{v}_{h},\underline{w}_{h}\in\underline{U}_{h}\), we introduce
\[(\underline{v}_{h},\underline{w}_{h})^{\star}_{\mathcal{T}_{h}}:=(v_{\mathcal{T}_{h}},w_{\mathcal{T}_{h}})_{\mathcal{T}_{h}}+\sum_{T\in\mathcal{T}_{h}^{\rm B}}\sum_{F\in\mathcal{F}_{T}}h_{F}\;\mathfrak{s}_{TF}(\underline{v}_{T},\underline{w}_{T}). \tag{26}\]
Notice that (26) defines an inner product-like bilinear form based on the \(L^{2}\)-inner product of the cell unknowns, to which a stabilizing term has been added. The stabilizing term is inspired by [4, Eq. (26)]. Notwithstanding the exponent of the \(h_{F}\) factor, the latter reproduces, locally in \(T\), the stabilization term (13) of the Laplacian bilinear form. The scaling factor \(h_{F}\) is selected to ensure dimensional homogeneity with the consistency term it is added to. Note that \((\cdot,\cdot)^{\star}_{\mathcal{T}_{h}}\) does not define an inner product in \(\underline{U}_{h}\), due to the fact that only the boundary cells are involved in the stabilizing term. In spite of that, we allow ourselves the use of an inner product notation, since \((\cdot,\cdot)^{\star}_{\mathcal{T}_{h}}\) shall undertake the role of the \(L^{2}\)-inner product in the discrete problem and, thus, carries the same semantics.
As the stabilizing term of (26) is based on the same local bilinear form \(\mathfrak{s}_{TF}\) used to stabilize the bilinear form \(a_{h}\), it naturally confers to \((\cdot,\cdot)^{\star}_{\mathcal{T}_{h}}\) some useful properties. First, \((\cdot,\cdot)^{\star}_{\mathcal{T}_{h}}\) remains symmetric and positive semi-definite. Second, while definiteness is not globally ensured, it is ensured "locally" in the cells where the stabilization term is applied. One might say that \((\cdot,\cdot)^{\star}_{\mathcal{T}_{h}}\) enjoys a property of _local stability_ in the boundary cells, in the sense that
\[(\underline{v}_{h},\underline{v}_{h})^{\star}_{\mathcal{T}_{h}}=0\qquad \Longrightarrow\qquad\underline{v}_{T}=0\quad\forall T\in\mathcal{T}_{h}^{ \rm B}. \tag{27}\]
Let us now describe the discrete, variational form of the continuous operator (7). For all \(\mu\in\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\), we denote by \((\underline{\dot{\omega}}_{h}^{\mu},\underline{\dot{\psi}}_{h}^{\mu})\in \underline{U}_{h,\mu}\times\underline{U}_{h,0}\) the solution of the discrete problems
\[a_{h}(\underline{\dot{\omega}}_{h}^{\mu},\underline{v}_{h}) =0 \qquad\forall\underline{v}_{h}\in\underline{U}_{h,0}, \tag{28a}\] \[a_{h}(\underline{\dot{\psi}}_{h}^{\mu},\underline{v}_{h}) =(\underline{\dot{\omega}}_{h}^{\mu},\underline{v}_{h})^{\star}_{\mathcal{T}_{h}} \qquad\forall\underline{v}_{h}\in\underline{U}_{h,0}. \tag{28b}\]
For \(\mu,\eta\in\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\), and \((\underline{\dot{\omega}}_{h}^{\mu},\underline{\dot{\psi}}_{h}^{\mu})\) the solution of (28) associated with \(\mu\), we define the bilinear form \(\ell_{h}\colon\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\times\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\to\mathbb{R}\) as the computation of \(-\partial_{n,h}(\underline{\dot{\psi}}_{h}^{\mu})\) via formula (24), such that
\[\ell_{h}(\mu,\eta):=-a_{h}(\underline{\dot{\psi}}_{h}^{\mu},\underline{\mathcal{H}}_{h}\eta)+(\underline{\dot{\omega}}_{h}^{\mu},\underline{\mathcal{H}}_{h}\eta)^{\star}_{\mathcal{T}_{h}}. \tag{29}\]
Remark that we have used \((\cdot,\cdot)^{\star}_{\mathcal{T}_{h}}\) in (28b) and (29) instead of \((\cdot,\cdot)_{\mathcal{T}_{h}}\) in the reference formulas (15) and (24), respectively. Given that formula (29) does not explicitly exhibit symmetry or positive-definiteness, we prove an equivalent reformulation, easier to analyze.
**Lemma 2**.: _(Reformulation of \(\ell_{h}\)) For \(\mu\) and \(\eta\in\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\), denote by \(\underline{\dot{\omega}}_{h}^{\mu}\) and \(\underline{\dot{\omega}}_{h}^{\eta}\) their associated solutions of (28a), respectively. Then_
\[\ell_{h}(\mu,\eta):=(\underline{\dot{\omega}}_{h}^{\mu},\underline{\dot{\omega }}_{h}^{\eta})^{\star}_{\mathcal{T}_{h}}. \tag{30}\]
Proof.: For \(\mu\) and \(\eta\in\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\), let \((\underline{\dot{\omega}}_{h}^{\mu},\underline{\dot{\psi}}_{h}^{\mu})\) and \((\underline{\dot{\omega}}_{h}^{\eta},\underline{\dot{\psi}}_{h}^{\eta})\) be their respective, associated solutions of (28). Let us start with the definition (29) of \(\ell_{h}\):
\[\ell_{h}(\mu,\eta)=-a_{h}(\underline{\dot{\psi}}_{h}^{\mu},\underline{\mathcal{H}}_{h}\eta)+(\underline{\dot{\omega}}_{h}^{\mu},\underline{\mathcal{H}}_{h}\eta)^{\star}_{\mathcal{T}_{h}}. \tag{31}\]
By writing \(\underline{\mathcal{H}}_{h}\eta=(\underline{\mathcal{H}}_{h}\eta-\underline{\dot{\omega}}_{h}^{\eta})+\underline{\dot{\omega}}_{h}^{\eta}\), the first term becomes
\[a_{h}(\underline{\dot{\psi}}_{h}^{\mu},\underline{\mathcal{H}}_{h}\eta)=a_{h}(\underline{\dot{\psi}}_{h}^{\mu},\underline{\mathcal{H}}_{h}\eta-\underline{\dot{\omega}}_{h}^{\eta})+a_{h}(\underline{\dot{\psi}}_{h}^{\mu},\underline{\dot{\omega}}_{h}^{\eta}). \tag{32}\]
As \(\underline{\mathcal{H}}_{h}\) does not modify the boundary face unknowns, we have \(\underline{\mathcal{H}}_{h}\eta\in\underline{U}_{h,\eta}\). Additionally, \(\underline{\dot{\omega}}_{h}^{\eta}\in\underline{U}_{h,\eta}\) by definition, so the difference \((\underline{\mathcal{H}}_{h}\eta-\underline{\dot{\omega}}_{h}^{\eta})\in\underline{U}_{h,0}\). Consequently, as \(\underline{\dot{\psi}}_{h}^{\mu}\) verifies (28b), it holds that
\[a_{h}(\underline{\dot{\psi}}_{h}^{\mu},\underline{\mathcal{H}}_{h}\eta-\underline{\dot{\omega}}_{h}^{\eta})=(\underline{\dot{\omega}}_{h}^{\mu},\underline{\mathcal{H}}_{h}\eta-\underline{\dot{\omega}}_{h}^{\eta})^{\star}_{\mathcal{T}_{h}}=(\underline{\dot{\omega}}_{h}^{\mu},\underline{\mathcal{H}}_{h}\eta)^{\star}_{\mathcal{T}_{h}}-(\underline{\dot{\omega}}_{h}^{\mu},\underline{\dot{\omega}}_{h}^{\eta})^{\star}_{\mathcal{T}_{h}}. \tag{33}\]
For the second term of (32), since \(\underline{\dot{\omega}}_{h}^{\eta}\) is solution of (28a) and \(\underline{\dot{\psi}}_{h}^{\mu}\in\underline{U}_{h,0}\), we use \(\underline{\dot{\psi}}_{h}^{\mu}\) as test function in (28a) to show that
\[a_{h}(\underline{\dot{\psi}}_{h}^{\mu},\underline{\dot{\omega}}_{h}^{\eta})=0. \tag{34}\]
Now, plugging (33) and (34) into (32) gives
\[a_{h}(\underline{\dot{\psi}}_{h}^{\mu},\underline{\mathcal{H}}_{h}\eta)=(\underline{\dot{\omega}}_{h}^{\mu},\underline{\mathcal{H}}_{h}\eta)^{\star}_{\mathcal{T}_{h}}-(\underline{\dot{\omega}}_{h}^{\mu},\underline{\dot{\omega}}_{h}^{\eta})^{\star}_{\mathcal{T}_{h}}. \tag{35}\]
Finally, plugging (35) into (31) yields (30).
Using formulation (30), we clearly have
**Theorem 1**.: \(\ell_{h}\)_, defined by (29), is symmetric and positive-definite._
Proof.: Formulation (30) clearly shows symmetry. Additionally, for all \(\mu\in\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\), we have
\[\ell_{h}(\mu,\mu)=(\underline{\dot{\omega}}_{h}^{\mu},\underline{\dot{\omega }}_{h}^{\mu})^{\star}_{\mathcal{T}_{h}}. \tag{36}\]
Since \((\cdot,\cdot)^{\star}_{\mathcal{T}_{h}}\) is positive semi-definite, so is \(\ell_{h}\). Now, assume \(\ell_{h}(\mu,\mu)=0\), which, by (36), means \((\underline{\dot{\omega}}_{h}^{\mu},\underline{\dot{\omega}}_{h}^{\mu})^{\star}_{\mathcal{T}_{h}}=0\). The property of local stability (27) implies that, for all \(T\in\mathcal{T}_{h}^{\rm B}\), we have \(\underline{\dot{\omega}}_{T}^{\mu}=0\), i.e. \(\dot{\omega}_{T}^{\mu}=0\) and \(\dot{\omega}_{\partial T}^{\mu}=0\). In particular, for all \(F\in\mathcal{F}_{h}^{\rm B}\), \(\mu_{F}=\dot{\omega}_{F}^{\mu}=0\). This being true for all \(F\in\mathcal{F}_{h}^{\rm B}\), we have \(\mu=0\), which proves that \(\ell_{h}\) is positive-definite.
### Discrete scheme
Let \((\underline{\omega}_{h}^{0},\underline{\psi}_{h}^{0})\in\underline{U}_{h,0}\times\underline{U}_{h,\mathsf{g}_{\rm D}}\) be the solution of
\[a_{h}(\underline{\omega}_{h}^{0},\underline{v}_{h}) =(\mathsf{f},v_{\mathcal{T}_{h}})_{\mathcal{T}_{h}} \qquad\forall\underline{v}_{h}\in\underline{U}_{h,0}, \tag{37a}\] \[a_{h}(\underline{\psi}_{h}^{0},\underline{v}_{h}) =(\omega_{\mathcal{T}_{h}}^{0},v_{\mathcal{T}_{h}})_{\mathcal{T}_{h}} \qquad\forall\underline{v}_{h}\in\underline{U}_{h,0}. \tag{37b}\]
The discrete formulation of problem (6) reads: find \(\lambda_{\mathcal{F}_{h}^{\rm B}}\in\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\) such that
\[\ell_{h}(\lambda_{\mathcal{F}_{h}^{\rm B}},\mu)=\langle\partial_{n,h}(\underline{\psi}_{h}^{0})-\mathsf{g}_{\rm N},\;\mu\rangle_{\mathcal{F}_{h}^{\rm B}}\qquad\forall\mu\in\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B}). \tag{38}\]
We also define the linear operator \(\mathcal{L}_{h}\colon\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\to\mathbb{P}^{k}( \mathcal{F}_{h}^{\rm B})\) associated to the bilinear form \(\ell_{h}\) such that, for all \(\mu\in\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\),
\[\langle\mathcal{L}_{h}\mu,\eta\rangle_{\mathcal{F}_{h}^{\rm B}}:=\ell_{h}(\mu,\eta)\qquad\forall\eta\in\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B}). \tag{39}\]
Since \(\ell_{h}\) is positive-definite, problem (38) is well-posed: its solution exists and is unique. Once it is solved, we compute \((\underline{\omega}_{h},\underline{\psi}_{h})\in\underline{U}_{h,\lambda_{\mathcal{F}_{h}^{\rm B}}}\times\underline{U}_{h,\mathsf{g}_{\rm D}}\), solution of
\[a_{h}(\underline{\omega}_{h},\underline{v}_{h}) =(\mathsf{f},v_{\mathcal{T}_{h}})_{\mathcal{T}_{h}} \qquad\forall\underline{v}_{h}\in\underline{U}_{h,0}, \tag{40a}\] \[a_{h}(\underline{\psi}_{h},\underline{v}_{h}) =(\omega_{\mathcal{T}_{h}},v_{\mathcal{T}_{h}})_{\mathcal{T}_{h}} \qquad\forall\underline{v}_{h}\in\underline{U}_{h,0}. \tag{40b}\]
Finally, the discrete approximation of \((\omega,\psi)\) is given by
\[(\omega_{h},\psi_{h}):=(p_{h}^{k+1}\underline{\omega}_{h},p_{h}^{k+1} \underline{\psi}_{h}).\]
Note that while \((\cdot,\cdot)^{\star}_{\mathcal{T}_{h}}\) is required within \(\ell_{h}\) for problem (38) to be well-posed, it has to be used neither for the enforcement of the source functions in (37) and (40), nor for the computation of \(\partial_{n,h}(\underline{\psi}_{h}^{0})\) (via formula (24)) in the right-hand side of (38).
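To summarize the control flow of the scheme, the following is a runnable 1D sketch in which finite differences stand in for the HHO discretization (so that the two Laplacian solves and the normal derivative fit in a few lines); the function names and the manufactured test case are ours, chosen only to expose the structure of steps (37), (38) and (40). In 2D/3D the boundary operator would instead be applied matrix-free inside a Krylov solver, cf. Section 6.

```python
import numpy as np

# 1D stand-in: psi'''' = f on (0,1), psi = g_D and d_n psi = g_N at the
# boundary, split into two Poisson solves with the Neumann datum recovered
# through the boundary unknown lambda = omega|_{boundary}.
n = 200                      # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2   # -d^2/dx^2 with Dirichlet BCs

def solve_poisson(source, left, right):
    """Solve -u'' = source, u(0)=left, u(1)=right; values incl. endpoints."""
    rhs = source.copy()
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.concatenate([[left], np.linalg.solve(A, rhs), [right]])

def normal_derivative(u):
    """Outward normal derivatives (-u'(0), u'(1)), one-sided 2nd-order stencils."""
    return np.array([-(-3*u[0] + 4*u[1] - u[2]) / (2*h),
                     (3*u[-1] - 4*u[-2] + u[-3]) / (2*h)])

def dn_psi(lam, f, g_D):
    """lambda -> d_n psi: the affine map whose linear part plays the role of l_h."""
    omega = solve_poisson(f, lam[0], lam[1])           # -omega'' = f, omega = lam
    psi = solve_poisson(omega[1:-1], g_D[0], g_D[1])   # -psi'' = omega, psi = g_D
    return normal_derivative(psi), psi

# Manufactured solution psi = sin(pi x): f = pi^4 sin(pi x), g_D = 0,
# d_n psi = (-pi, -pi), and the exact lambda = omega|_{boundary} = 0.
f = np.pi**4 * np.sin(np.pi * x)
g_D, g_N = np.zeros(2), np.array([-np.pi, -np.pi])

d0, _ = dn_psi(np.zeros(2), f, g_D)                    # analogue of step (37)
Lmat = np.column_stack([dn_psi(e, 0*f, 0*g_D)[0] for e in np.eye(2)])
lam = np.linalg.solve(Lmat, g_N - d0)                  # 1D analogue of (38)
_, psi = dn_psi(lam, f, g_D)                           # analogue of step (40)
err = np.abs(psi - np.sin(np.pi * np.linspace(0, 1, n + 2))).max()
print("lambda =", lam, " max error:", err)
```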
## 6 Preconditioner
In this section, we introduce the preconditioned iterative strategy for the solution of the algebraic realization of (38).
### Algebraic setting
For a fixed basis for the discrete HHO spaces, let \(N_{\rm B}:=\dim\left(\mathbb{P}^{k}(\mathcal{F}_{h}^{\rm B})\right)\). Introduce the algebraic counterpart \(\mathbf{L}_{h}\colon\mathbb{R}^{N_{\rm B}}\to\mathbb{R}^{N_{\rm B}}\) of the linear operator \(\mathcal{L}_{h}\) defined by (39). We recall that \(\mathbf{L}_{h}\) can be viewed as a matrix in \(\mathbb{R}^{N_{\rm B}\times N_{\rm B}}\), whose coefficients are not explicitly known, but whose application to a vector can be computed. The algebraic realization of problem (38) reads: find \(\boldsymbol{\lambda}\in\mathbb{R}^{N_{\rm B}}\) such that
\[\mathbf{L}_{h}(\boldsymbol{\lambda})=\mathbf{b}, \tag{41}\]
where \(\mathbf{b}\in\mathbb{R}^{N_{\rm B}}\) is an algebraic representation of \(\partial_{n,h}(\underline{\psi}_{h}^{0})-\mathsf{g}_{\rm N}\) in the chosen basis.
### An approximate, sparse matrix
The implicit system (41) is solved using a preconditioned, flexible conjugate gradient (PFCG) algorithm. This section describes the construction of the preconditioner. The use of a flexible version of the Krylov method is justified by the preconditioner being non-symmetric [1, 3]. Refer to Section 6.4 for a discussion about the symmetry properties of the preconditioner.
Introducing \(\left(\mathbf{e}_{j}\right)_{j=1,\ldots,N_{\mathrm{B}}}\) the canonical basis of \(\mathbb{R}^{N_{\mathrm{B}}}\), the \(j^{\mathrm{th}}\) column of \(\mathbf{L}_{h}\) is given by the evaluation of \(\mathbf{L}_{h}(\mathbf{e}_{j})\). Note that \(\mathbf{L}_{h}(\mathbf{e}_{j})\) is a dense vector. Consequently, the explicit computation of \(\mathbf{L}_{h}\) would result in a dense matrix. The preconditioner proposed here consists in the explicit computation of a _sparse_ matrix \(\widetilde{\mathbf{L}}_{h}\) approximating \(\mathbf{L}_{h}\). In a nutshell, while the computation of each column \(\mathbf{L}_{h}(\mathbf{e}_{j})\) involves solving two Laplacian problems in the whole domain, our approximation \(\widetilde{\mathbf{L}}_{h}(\mathbf{e}_{j})\) consists in solving those problems in a restricted neighbourhood of the \(j^{th}\) DoF.
Let \(j\in\{1,\ldots,N_{\mathrm{B}}\}\) be a fixed unknown index. Let \(F^{j}\in\mathcal{F}_{h}^{\mathrm{B}}\) be the face supporting the DoF associated to \(j\), and \(T^{j}\in\mathcal{T}_{h}\) the unique element owning \(F^{j}\). Define \(\mathcal{T}_{h}^{j}\subset\mathcal{T}_{h}\) as a neighbourhood of \(T^{j}\): roughly, a set of elements around \(T^{j}\), including \(T^{j}\). Let \(\Omega^{j}\) be the open, connected set such that \(\overline{\Omega^{j}}:=\cup_{T\in\mathcal{T}_{h}^{j}}\overline{T}\). Figure 1a illustrates (in grey) such a neighbourhood, where \(F^{j}\) is represented by a thick line and \(T^{j}\) by a darker triangle. Relatively to \(\Omega^{j}\), one defines the sets of interior and boundary faces by \((\mathcal{F}_{h}^{\mathrm{I}})^{j}:=\{F\in\mathcal{F}_{h}\mid F\subset\Omega^{j}\}\) and \((\mathcal{F}_{h}^{\mathrm{B}})^{j}:=\{F\in\mathcal{F}_{h}\mid F\subset\partial\Omega^{j}\}\), respectively.
Consider the operator \(\widetilde{\mathcal{L}}_{h}^{j}\), counterpart of \(\mathcal{L}_{h}\) in \(\Omega^{j}\), in which \((\mathcal{T}_{h},\mathcal{F}_{h}^{\mathrm{I}},\mathcal{F}_{h}^{\mathrm{B}})\) is replaced with \((\mathcal{T}_{h}^{j},(\mathcal{F}_{h}^{\mathrm{I}})^{j},(\mathcal{F}_{h}^{\mathrm{B}})^{j})\) for the solution of the Laplacian subproblems, and \(\mathcal{F}_{h}^{\mathrm{B}}\) is replaced with \(\mathcal{F}_{h}^{\mathrm{B}}\cap(\mathcal{F}_{h}^{\mathrm{B}})^{j}\) for the computation of the normal derivative. This latter point indicates that the normal derivative is computed on \(\partial\Omega^{j}\cap\partial\Omega\) only. For both subproblems, homogeneous Dirichlet conditions are enforced on the part of \(\partial\Omega^{j}\) interior to \(\Omega\). Define \(N_{\mathrm{B}}^{j}:=\dim\mathbb{P}^{k}((\mathcal{F}_{h}^{\mathrm{B}})^{j})\), \((\widetilde{\mathbf{e}}_{i})_{i=1,\ldots,N_{\mathrm{B}}^{j}}\) the canonical basis of \(\mathbb{R}^{N_{\mathrm{B}}^{j}}\), and \(\widetilde{\mathbf{L}}_{h}^{j}\colon\mathbb{R}^{N_{\mathrm{B}}^{j}}\to\mathbb{R}^{N_{\mathrm{B}}^{j}}\) the algebraic representation of \(\widetilde{\mathcal{L}}_{h}^{j}\).
Supposing that the \(j^{\mathrm{th}}\) basis function of \(\mathbb{P}^{k}(\mathcal{F}_{h}^{\mathrm{B}})\) corresponds to the \(i^{\mathrm{th}}\) basis function of \(\mathbb{P}^{k}((\mathcal{F}_{h}^{\mathrm{B}})^{j})\), we compute \(\widetilde{\mathbf{L}}_{h}^{j}(\widetilde{\mathbf{e}}_{i})\). As \(\widetilde{\mathbf{L}}_{h}^{j}(\widetilde{\mathbf{e}}_{i})\) yields the normal derivative only on \(\partial\Omega^{j}\cap\partial\Omega\), we build the final, approximate column \(\widetilde{\mathbf{L}}_{h}(\mathbf{e}_{j})\) from \(\widetilde{\mathbf{L}}_{h}^{j}(\widetilde{\mathbf{e}}_{i})\) by setting zero coefficients where the domain boundary is not covered by the neighbourhood boundary.
We claim that \(\widetilde{\mathbf{L}}_{h}(\mathbf{e}_{j})\) yields a sparse approximation of \(\mathbf{L}_{h}(\mathbf{e}_{j})\). To support our claim, in Figures 1b and 1c we plot, in iso-value curves, the solutions to the first and second Laplacian problems computed in \(\Omega^{j}\). Moreover, Figures 1d and 1e plot the solutions of the Laplacian subproblems computed in the whole domain. We observe that, owing to the homogeneity of the first equation and the homogeneous Dirichlet condition everywhere except on one face, the solution peaks on that face and decreases towards zero as it goes further away. Roughly speaking, our strategy consists in approximating the solution locally in \(\Omega^{j}\) while imposing it to be zero in the rest of the domain. Then, this first truncated solution is used as the source function in the second problem, also solved in \(\Omega^{j}\). Here, note that the quantity of interest is not the solution itself, but its normal derivative on the boundary. In particular, the neighbourhood of the considered face concentrates the most information, insofar as the solution flattens as it goes away from it. In that sense, \(\widetilde{\mathbf{L}}_{h}(\mathbf{e}_{j})\) yields a reasonable approximation of \(\mathbf{L}_{h}(\mathbf{e}_{j})\).
### Computation in practice
Regarding the actual computation of \(\widetilde{\mathbf{L}}_{h}\), one can note that the HHO matrix blocks used to solve the Laplacian problems in \(\Omega^{j}\) and to compute the normal derivative can be extracted from the global ones. The sparsity of \(\widetilde{\mathbf{L}}_{h}\) is controlled by the number of boundary faces in the chosen neighbourhood, while the accuracy of the approximation depends on the size of the neighbourhood. In particular, remark that choosing \(\mathcal{T}_{h}^{j}:=\mathcal{T}_{h}\) for all \(j\) yields the actual matrix \(\mathbf{L}_{h}\). In practice, we construct \(\mathcal{T}_{h}^{j}\) by adding successive layers of neighbours around \(T^{j}\). The number of layers is defined by the parameter \(\alpha\in\mathbb{N}_{0}\). \(\mathcal{T}_{h}^{j}\) is defined as \(\mathcal{T}_{h}^{j}(\alpha)\), where \(\mathcal{T}_{h}^{j}(0):=\{T^{j}\}\) and \(\mathcal{T}_{h}^{j}(\alpha):=\mathcal{T}_{h}^{j}(\alpha-1)\cup\{T\in\mathcal{T}_{h}\mid\exists T^{\prime}\in\mathcal{T}_{h}^{j}(\alpha-1)\text{ s.t. }\overline{T}\cap\overline{T^{\prime}}\neq\emptyset\}\) for \(\alpha\geq 1\). Note that this definition understands the neighbouring relationship as having at least one vertex in common. The neighbourhood represented in Figure 1a corresponds to \(\alpha=3\).
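As a small illustration, the layered construction of \(\mathcal{T}_{h}^{j}(\alpha)\) amounts to a breadth-first expansion over a vertex-sharing adjacency; the toy adjacency below (a chain of six elements) is an assumption made purely for demonstration.

```python
# Sketch of the layered neighbourhood T_h^j(alpha) of Section 6.3, on a toy
# mesh given as a vertex-sharing adjacency: `adjacency` maps each element to
# the elements sharing at least one vertex with it (an assumed input format).
def neighbourhood(adjacency, Tj, alpha):
    """Return T_h^j(alpha): alpha layers of vertex-neighbours around Tj."""
    layer = {Tj}
    for _ in range(alpha):
        layer = layer | {S for T in layer for S in adjacency[T]}
    return layer

# Tiny example: a chain of elements 0-1-2-3-4-5 (vertex neighbours = +-1).
adjacency = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 5} for i in range(6)}
print(neighbourhood(adjacency, 2, 0))   # {2}
print(neighbourhood(adjacency, 2, 2))   # {0, 1, 2, 3, 4}
```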
### Discussion on symmetry
Choosing a constant \(\alpha\) for all \(j\) makes it possible to preserve a symmetric sparsity pattern. Indeed, considering \((i,j)\in\{1,\ldots,N_{\mathrm{B}}\}^{2}\), \((\widetilde{\mathbf{L}}_{h})_{ij}\neq 0\) implies that \(F^{i}\subset\partial\Omega^{j}\), and, reciprocally, \((\widetilde{\mathbf{L}}_{h})_{ji}\neq 0\) implies that \(F^{j}\subset\partial\Omega^{i}\). However, symmetry itself is not preserved, inasmuch as \((\widetilde{\mathbf{L}}_{h})_{ij}\) results from computations in the neighbourhood \(\Omega^{j}\) whereas \((\widetilde{\mathbf{L}}_{h})_{ji}\) results from computations in \(\Omega^{i}\neq\Omega^{j}\).
To enforce the symmetry of the matrix \(\widetilde{\mathbf{L}}_{h}\), one must ensure that \(\Omega^{i}=\Omega^{j}\) for every couple \((i,j)\in\{1,\ldots,N_{\mathrm{B}}\}^{2}\). This requirement leads to a patch-based method: the set of boundary faces must be split into _patches_, i.e. connected subsets of boundary faces. For each patch, one single neighbourhood must be defined to process all DoFs included in that patch. This yields a block-diagonal matrix \(\widetilde{\mathbf{L}}_{h}\), one block corresponding to each patch. Algebraically, such a preconditioner is in fact an approximate block Jacobi preconditioner, insofar as each diagonal block approximates
the corresponding diagonal block of the exact matrix \(\mathbf{L}_{h}\). This symmetric preconditioner was numerically tested. Even though a non-flexible Krylov method can then be used and the matrix \(\widetilde{\mathbf{L}}_{h}\) is easier to factorize, the convergence of the method is significantly slower than with the non-symmetric preconditioner defined in Section 6.3. Therefore, only the latter has been used in the numerical experiments of Section 7.
## 7 Numerical experiments
### Experimental setup
The implicit system (41) is solved iteratively by the PFCG method, using the preconditioning technique described in Section 6. Given the iterate \(\widetilde{\boldsymbol{\lambda}}\in\mathbb{R}^{N_{\mathbb{B}}}\), the corresponding residual vector is defined as \(\mathbf{r}:=\mathbf{b}-\mathbf{L}_{h}(\widetilde{\boldsymbol{\lambda}})\). The PFCG algorithm stops when \(||\mathbf{r}||_{2}/||\mathbf{b}||_{2}<\varepsilon\), where \(\varepsilon>0\) is a fixed tolerance and \(||\cdot||_{2}\) denotes the Euclidean norm on \(\mathbb{R}^{N_{\mathbb{B}}}\). All experiments are conducted choosing \(l=k\), but we stress that the same qualitative results are obtained with \(l=k+1\). In 2D, the linear systems corresponding to the Laplacian subproblems are solved by Cholesky factorization; in 3D, we use a \(p\)-multigrid algorithm on top of the algebraic multigrid method designed in [11]. In that latter case, the same tolerance \(\varepsilon\) must be used to stop the Laplacian solvers and the global PFCG algorithm. To apply the preconditioner, we solve systems with \(\widetilde{\mathbf{L}}_{h}\) using the BiCGSTAB algorithm with tolerance \(\varepsilon\). The computations are run on an 8-core processor (Apple M1 Pro) clocked at 3228 MHz.
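For reference, here is a minimal Python sketch of a preconditioned flexible CG with the stopping criterion above (Polak-Ribiere-type \(\beta\), which tolerates a varying or non-symmetric preconditioner); the Jacobi preconditioner and the random SPD test system are stand-ins of our own, not the actual \(\widetilde{\mathbf{L}}_{h}\).

```python
import numpy as np

def pfcg(A, b, prec, tol=1e-8, maxit=500):
    """Preconditioned flexible CG; A and prec are callables (matrix-free)."""
    x = np.zeros_like(b)
    r = b - A(x)
    z = prec(r)
    p = z.copy()
    nb = np.linalg.norm(b)
    for k in range(maxit):
        if np.linalg.norm(r) / nb < tol:        # stop when ||r||/||b|| < eps
            return x, k
        Ap = A(p)
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        z_new = prec(r_new)
        beta = (z_new @ (r_new - r)) / (r @ z)  # flexible (Polak-Ribiere) beta
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxit

# Toy check: SPD system with Jacobi preconditioning as a stand-in.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
S = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x, its = pfcg(lambda v: S @ v, b, lambda v: v / np.diag(S))
print(its, np.linalg.norm(S @ x - b) / np.linalg.norm(b))
```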
### Preliminary: convergence of the discrete normal derivative
We first assess the validity of the discrete normal derivative proposed in Section 4 by evaluating its order of convergence. Let \(\Omega:=(0,1)^{2}\) and let \(u\colon(x,y)\mapsto\sin(4\pi x)\sin(4\pi y)\) be the manufactured solution of the boundary value problem (8), where \(f\) and \(g_{\mathrm{D}}\) are defined accordingly. The problem is discretized on a sequence of successively refined meshes, and we consider \(\partial_{n,h}(\underline{u}_{h})\) computed by (25). The exact normal derivative \(\partial_{n}u\) is known, and we assess the relative \(L^{2}\)-error \(\|\partial_{n}u-\partial_{n,h}(\underline{u}_{h})\|_{L^{2}(\partial\Omega)}/\|\partial_{n}u\|_{L^{2}(\partial\Omega)}\) for \(k\in\{0,1,2,3\}\). Figures 2a and 2b present the experimental results on uniform Cartesian meshes and on unstructured, triangular Delaunay meshes, respectively. These experiments confirm a convergence of order \(\mathcal{O}(h^{k+1})\) (also observed for \(l=k+1\)). Note that in the case of a non-convex domain with re-entrant corners, those convergence orders are expected to be reduced due to the corner singularities [23]. One way to mitigate this issue is the use of graded meshes (cf. [23]).
Figure 1: Illustration of the action of the preconditioner.
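The orders of convergence reported here and in Section 7.3 can be estimated from consecutive refinements in the usual way; the error values in the snippet below are synthetic (the real ones are read off the figures) and only illustrate the computation.

```python
import numpy as np

def eoc(h, err):
    """Estimated orders of convergence between consecutive refinements."""
    h, err = np.asarray(h, float), np.asarray(err, float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# Illustrative numbers only: an error sequence behaving like C*h^(k+1)
# with k = 2 yields estimated orders close to 3.
h = np.array([1/8, 1/16, 1/32, 1/64])
err = 5.0 * h**3 * (1 + 0.05 * np.random.default_rng(2).standard_normal(4))
print(eoc(h, err))
```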
### Convergence rate of the scheme
On the unit square, we consider \(\psi(x,y)=x\sin(\pi y)e^{-xy}\) and its corresponding \(\omega=-\Delta\psi\), the manufactured solution of (2), with \(\mathsf{f},\mathsf{g}_{\text{D}}\) and \(\mathsf{g}_{\text{N}}\) determined accordingly. The square is decomposed into a sequence of successively finer meshes, and the numerical scheme is applied for \(k\in\{0,1,2,3\}\). Figure 3 presents the evolution of the relative \(L^{2}\)-error achieved by the approximation \((\omega_{h},\psi_{h})\) with respect to \((\omega,\psi)\), on a sequence of Cartesian meshes. Similarly, Figure 4 shows the analogous results obtained with a sequence of polygonal meshes, each constructed from an initial Cartesian mesh by agglomeration and face collapsing. One such mesh is represented in Figure 6. We empirically interpret these results in light of Glowinski _et al._'s theoretical findings in [19] for the continuous high-order FEM. According to [19, Section 3.3], since \(\psi\) is regular enough, the FEM solution \((\omega_{h}^{\text{FEM}},\psi_{h}^{\text{FEM}})\) of order \(p\geq 3\) verifies
\[\|\psi_{h}^{\text{FEM}}-\psi\|_{L^{2}(\Omega)}+h^{2}\|\omega_{h}^{\text{FEM} }-\omega\|_{L^{2}(\Omega)}\leq Ch^{p+1}\|\psi\|_{H^{p+1}(\Omega)}, \tag{42}\]
where \(C\) is a constant independent of \(h\) and \(\psi\). In our HHO setting, the above error estimate would translate in the following one: if \(k\geq 2\), then
\[\|\psi_{h}-\psi\|_{L^{2}(\Omega)}+h^{2}\|\omega_{h}-\omega\|_{L^{2}(\Omega)} \leq Ch^{k+2}\|\psi\|_{H^{k+2}(\Omega)}. \tag{43}\]
The experimental results of Figure 3 validate the estimate (43), not only for \(k\geq 2\), but also for \(k\in\{0,1\}\). More precisely, while Figure 3 shows that the estimate is sharp for \(\psi\) (i.e. \(\mathcal{O}(h^{k+2})\)), it also shows, for \(\omega\), a faster convergence than indicated by (43). Namely, the convergence seems to be \(\mathcal{O}(h^{k+1})\) for \(k=0\) and \(\mathcal{O}(h^{k+1/2})\) for \(k\geq 1\). The experiments of Figure 4 on polygonal meshes exhibit the same convergence orders, except for \(\omega_{h}\) with \(k=0\), where the observed order reduces to \(\mathcal{O}(h^{k+1/2})\), in accordance with the other values of \(k\). Based on these experiments, we conjecture that the following error estimate is sharp for all \(k\geq 0\):
\[\|\psi_{h}-\psi\|_{L^{2}(\Omega)}+h^{3/2}\|\omega_{h}-\omega\|_{L^{2}(\Omega) }\leq Ch^{k+2}\|\psi\|_{H^{k+2}(\Omega)}. \tag{44}\]
Nonetheless, we keep in mind that \(k=0\) seems to be a special case for \(\omega_{h}\), where superconvergence may be observed: besides the order \(1\) obtained on Cartesian meshes (Figure 3), we observe the order \(2\) if we choose as an exact solution the polynomial function \(\psi(x,y)=x^{4}(x-1)^{2}y^{4}(y-1)^{2}\), as illustrated in Figure 5.
**Remark 3**.: _(Comparison with [17]) The HHO discretizations of [17] rely on the primal formulation (1) of the equation. They hinge on the local approximation space \(\mathbb{P}^{k+2}(T)\times\mathbb{P}^{k+1}(\mathcal{F}_{T})\times\mathbb{P}^{k}(\mathcal{F}_{T})\) in 2D (resp. \(\mathbb{P}^{k+2}(T)\times\mathbb{P}^{k+2}(\mathcal{F}_{T})\times\mathbb{P}^{k}(\mathcal{F}_{T})\) in 3D), where the associated set of DoFs aims at approximating \((\psi_{|T},\psi_{|\partial T},\nabla\psi\cdot\mathbf{n}_{|\partial T})\). With this setting, the methods achieve a convergence order of \(k+3\) in \(L^{2}\)-norm. In comparison, the present method requires the space \(\mathbb{P}^{k+1}(T)\times\mathbb{P}^{k+1}(\mathcal{F}_{T})\) to obtain the same convergence order, i.e. one less polynomial degree for the cell unknowns and no DoF dedicated to the approximation of \(\nabla\psi\cdot\mathbf{n}_{|\partial T}\). However, while our method is structurally lighter in DoFs than [17], one cannot conclude regarding the computational cost, since those DoFs have to be solved for multiple times as part of an iterative process (vs. only once for [17])._
Figure 2: Convergence of the discrete normal derivative \(\partial_{n,h}(\underline{u}_{h})\) for a smooth solution.
Figure 4: Convergence in \(L^{2}\)-norm of the discrete solution \((\omega_{h},\psi_{h})\) with respect to the exact solution \(\psi(x,y)=x\sin(\pi y)e^{-xy}\). Square domain discretized by polygonal meshes.
Figure 5: Convergence in \(L^{2}\)-norm of the discrete solution \((\omega_{h},\psi_{h})\) with respect to the exact solution \(\psi(x,y)=x^{4}(x-1)^{2}y^{4}(y-1)^{2}\). Square domain discretized by Cartesian meshes.
Figure 3: Convergence in \(L^{2}\)-norm of the discrete solution \((\omega_{h},\psi_{h})\) with respect to the exact solution \(\psi(x,y)=x\sin(\pi y)e^{-xy}\). Square domain discretized by Cartesian meshes.
### Preconditioner
We evaluate in this section the efficiency of the preconditioner designed in Section 6. We recall that the parameter \(\alpha\in\mathbb{N}_{0}\) is an indicator of the size of the neighbourhoods used for the approximation of the matrix.
#### 7.4.1 Convergence vs. cost: the parameter \(\alpha\)
The considered test case is the square domain partitioned into \(256\times 256\) Cartesian elements, with \(k=1\). The tolerance is set to \(\varepsilon=10^{-14}\). Let us assess how the preconditioner improves the speed of convergence. Figure 7a plots the decay of the algebraic residual \(||\mathbf{r}||_{2}/||\mathbf{b}||_{2}\) without preconditioner (in red) and with preconditioner (in blue), considering increasing values of \(\alpha\). One can confirm that, as expected, the higher the value of \(\alpha\), the better the convergence rate. However, increasing \(\alpha\) increases the setup cost of the preconditioner. Indeed, (i) the Laplacian problems are solved in larger subdomains, (ii) the approximate matrix \(\widetilde{\mathbf{L}}_{h}\) is denser, which increases the cost of its factorization. Figure 7b presents the computational cost, measured in CPU time, to reach the algebraic solution. One can see that a trade-off between cost and convergence rate must be made to achieve optimal performance. While raising \(\alpha\) is a good strategy up to \(\alpha=6\), above that threshold the improved convergence rate does not make up for the cost of the setup phase, hence making the overall CPU time slightly increase. We stress that other choices of \(k\) yield the same qualitative results.
#### 7.4.2 PFCG convergence rate: \(k\)-independence and \(h\)-dependency assessment
Problem (2) on the unit square with exact solution \(\psi(x,y)=x\sin(\pi y)e^{-xy}\) is solved on a sequence of 2D grids composed of \(N^{2}\) Cartesian elements, \(N\in\{32,64,128,256,512\}\). Table 1 presents, for \(k\in\{0,1,2,3\}\), the number of PFCG iterations executed to reach the convergence criterion with \(\varepsilon=10^{-8}\). The parameter of the preconditioner is set to \(\alpha=8\). For comparison, the number of iterations without preconditioning is reported in brackets. Firstly, one can remark that the use of the preconditioner allows a convergence rate independent of \(k\), while a dependency is observed without preconditioner. Secondly, we approximate the asymptotic convergence rate with respect to the problem size by using the number of iterations measured on the two finer meshes. This yields a dependency in \(\mathcal{O}(h^{-0.41})\). Additionally, Table 2 reports the CPU times of the setup and iteration phases. The setup includes the Cholesky factorization of the Laplacian matrix and the assembly of the preconditioning matrix. In brackets, the same
Figure 6: Example of a polygonal mesh of the unit square \(\Omega=(0,1)^{2}\).
Figure 7: (a) compares the speed of convergence of the (P)FCG w.r.t. the normalized algebraic residual \(||\mathbf{r}||_{2}/||\mathbf{b}||_{2}\). (b) compares the CPU time consumed to reach a tolerance of \(10^{-14}\), in function of \(\alpha\). The test case is the square domain partitioned into \(256\times 256\) Cartesian elements, \(k=1\).
information without preconditioning is displayed. Based on these results, one can see that the compensation of setup cost by the gain in iteration time is more and more efficient as the problem grows larger.
The equivalent experiment is performed in 3D (exact solution \(\psi(x,y,z)=xz\sin(\pi y)e^{-xy}\)) with unstructured tetrahedral meshes. As the preconditioner is costlier in 3D, we choose \(\alpha=2\), and set the tolerance to \(\varepsilon=10^{-5}\). The number of PFCG iterations (independent of \(k\)) is presented in Table 3, which exhibits a dependency in \(\mathcal{O}(h^{-0.37})\). The CPU times of the setup and iteration phases are reported for \(k=0\).
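For transparency, the quoted growth rates follow directly from the iteration counts on the two finest meshes:

```python
import numpy as np
# Halving h multiplies the iteration count by the ratio below, hence O(h^-rate):
print(np.log2(44 / 33))   # 2D, Table 1, N = 256 -> 512: ~0.41
print(np.log2(22 / 17))   # 3D, Table 3, h0/4 -> h0/8: ~0.37
```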
## 8 Conclusion
In this work, we extended the scheme of Glowinski _et al._ to HHO discretizations, yielding an iterative method for the mixed solution of the biharmonic equation. Its main advantage lies in the fact that it can be implemented from an existing diffusion code with limited development costs. Namely, notwithstanding the PFCG solver, one requires only the implementation of (29) and the right-hand side of (28b). Additionally, for large problems, it allows the use of fast, elliptic solvers for an enhanced time to solution. Future work will focus on the a priori error analysis of the proposed method and on improving the preconditioner, in order to obtain a convergence rate of the PFCG that is independent of the mesh size, and achieve a scalable behaviour.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(N=\) & 32 & 64 & 128 & 256 & 512 \\ \hline \(k=0\) & 13 (19) & 19 (25) & 26 (33) & 33 (42) & 44 (56) \\ \(k=1\) & 13 (30) & 19 (36) & 26 (47) & 33 (58) & 44 (75) \\ \(k=2\) & 13 (37) & 19 (47) & 26 (62) & 33 (77) & 44 (101) \\ \(k=3\) & 13 (44) & 19 (58) & 26 (75) & 33 (96) & 44 (122) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test case: \(N^{2}\) Cartesian elements, \(\varepsilon=10^{-8}\). Number of PFCG(\(\alpha=8\)) iterations. In brackets, number of unpreconditioned FCG iterations.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline & \(N=\) & \multicolumn{2}{c}{32} & \multicolumn{2}{c}{64} & \multicolumn{2}{c}{128} & \multicolumn{2}{c}{256} & \multicolumn{2}{c}{512} \\ \hline \multirow{2}{*}{\(k=0\)} & setup & 0.4 & (0.0) & 0.9 & (0.0) & 1.9 & (0.0) & 3.9 & (0.0) & 10.7 & (2.0) \\ & iter. & 0.1 & (0.1) & 0.3 & (0.4) & 1.8 & (2.3) & 9.4 & (11.8) & 50.8 & (66.7) \\ \hline \multirow{2}{*}{\(k=1\)} & setup & 1.2 & (0.0) & 2.8 & (0.0) & 5.7 & (0.2) & 13.9 & (1.7) & 41.0 & (17.2) \\ & iter. & 0.2 & (0.5) & 1.1 & (2.2) & 6.3 & (11.5) & 34.1 & (60.1) & 194.9 & (336.3) \\ \hline \multirow{2}{*}{\(k=2\)} & setup & 4.8 & (0.0) & 10.1 & (0.0) & 22.0 & (0.6) & 51.0 & (5.6) & 147.8 & (57.0) \\ & iter. & 0.6 & (1.7) & 3.5 & (8.7) & 20.1 & (48.6) & 109.1 & (251.9) & 646.2 & (1450.4) \\ \hline \multirow{2}{*}{\(k=3\)} & setup & 18.2 & (0.0) & 40.5 & (0.1) & 86.8 & (1.2) & 188.4 & (13.4) & 508.0 & (155.7) \\ & iter. & 1.8 & (6.3) & 10.8 & (33.2) & 61.4 & (177.3) & 319.4 & (1028.9) & 2028.4 & (6271.7) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test case: \(N^{2}\) Cartesian elements, \(\varepsilon=10^{-8}\). CPU times (in seconds) of the setup and iteration phases. In brackets: analogous quantity without preconditioning.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(h\approx\) & \(h_{0}\) & \(h_{0}/2\) & \(h_{0}/4\) & \(h_{0}/8\) \\ Elements & 3373 & 22,869 & 162,167 & 1,224,468 \\ \hline iter. counts & 9 & 12 & 17 & 22 \\ setup time & 5.9 & 24.4 & 98.3 & 426.3 \\ iter. time & 0.2 & 1.5 & 17.8 & 263.5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test case: Cubic domain, unstructured tetrahedral mesh, \(\varepsilon=10^{-5}\). Number of PFCG(\(\alpha=2\)) iterations, along with the setup and iteration CPU times (in seconds) for \(k=0\). |
2302.04766 | Interference and reflection from the event horizon of a quantum
corrected black hole | In this work, we calculate the Hawking temperature for a quantum corrected
black hole geometry using the $reflection$ $from$ $the$ $horizon$ method. We
observe that quantum gravity corrections indeed show up in the Hawking
temperature formula of the quantum corrected black hole. It is important to
notice that the quantum gravity corrections arise in the Hawking temperature
formula only due to the underlying quantum gravity corrections to the lapse
function of the black hole metric rather than the semi-classical methods used
in the analysis. We also substantiate our result by computing the Hawking
temperature using the tunneling approach. | Sunandan Gangopadhyay, Soham Sen, Rituparna Mandal | 2023-02-09T16:59:44Z | http://arxiv.org/abs/2302.04766v2 | # Interference and reflection from the event horizon of a quantum corrected black hole
###### Abstract
In this work, we calculate the Hawking temperature for a quantum corrected black hole geometry using the _reflection from the horizon_ method. We observe that quantum gravity corrections indeed show up in the Hawking temperature formula of the quantum corrected black hole. It is important to notice that the quantum gravity corrections arise in the Hawking temperature formula only due to the underlying quantum gravity corrections to the lapse function of the black hole metric rather than the semi classical methods used in the analysis. We also substantiate our result by computing the Hawking temperature using the tunneling approach.
1
Footnote 1: [email protected]
\(\dagger\)[email protected], [email protected]
\(\ddagger\)[email protected]
## 1 Introduction
The general theory of relativity developed by Albert Einstein is considered the most accurate theory describing the large scale structure of the universe [1, 2]. Einstein's field equations admit solutions with singularities, which are called black holes. Classically, any object that falls through the event horizon of a black hole can never escape from it; black holes therefore act as perfect absorbers. This classical picture has known limitations. From the viewpoint of thermodynamics, a black hole should have entropy and a temperature. It was shown in [3] that if one tries to extract energy from a Kerr black hole, there is a quantity which never decreases, and it was later found to be proportional to the area of the black hole. Further investigations revealed that the area of the black hole is its physical entropy [4, 5, 6]. Some groundbreaking works also revealed that black holes have their own laws of thermodynamics [7]. It was first shown by Stephen Hawking that if one considers quantum fluctuations in the curved background, the black hole emits radiation [8, 9, 10]. Hawking further showed that this radiation is identical to that of a black body with temperature \(T=\frac{\kappa}{2\pi}\), where \(\kappa\) is the surface gravity of the black hole. There have been several attempts to give alternative derivations of the Hawking radiation [11, 12, 13]. Two of the best known approaches to understanding the origin of the Hawking temperature are the _reflection from the horizon_ method [14] and the well known tunneling approach [15, 16]. The radial null geodesic approach has later been used to compute the Hawking temperature for different black hole geometries [17, 18, 19, 20, 21]. The tunneling of Dirac particles through the event horizon has also been considered in [22, 23, 24, 25, 26, 27]. It was shown in [14] that if a particle is falling into the event horizon of a black hole, there is a finite probability that the particle will be reflected back from the event horizon. If one considers particle trajectories around the black hole, classically there are ingoing trajectories and outgoing trajectories. The outgoing trajectories lead a particle away from the centre of the black hole, whereas the ingoing trajectories lead particles towards the singularity of the black hole. Once a particle enters the event horizon of a black hole, there is classically no way to escape. However, if one performs a quantum mechanical analysis in the vicinity of the event horizon, it can be observed that the solution for the incoming particle consists of both ingoing and outgoing trajectories, allowing superposition between the two. The outgoing part of the solution implies a finite probability for a particle to be reflected back from the event horizon of the black hole, making the absorption cross-section finite and proportional to the event horizon area in the infrared region [28, 29]. The Hawking radiation process consists of radiation being emitted from the event horizon of the black hole. It was shown in [14] that if one calculates the temperature of the black hole in this way for simple black hole geometries, it coincides with the Hawking temperature of the black hole, implying that this _reflection from the horizon_ can serve as an alternative origin of the Hawking radiation for a general black hole geometry.
The tunneling approach serves as an alternative description of the source of the Hawking radiation, attributing it to the tunneling of scalar waves across the event horizon. The core idea of this approach lies in the fact that there is an abrupt change in the sign of the energy of a particle when it crosses the event horizon of a black hole. If a pair is created outside or inside the event horizon of the black hole, then after one member of the pair has tunneled to the opposite side, it can materialize with zero total energy. The mass of the black hole decreases as it radiates, so as to maintain conservation of energy. From a quantum gravity point of view, these black holes are therefore in highly excited states. The more popular approach to treat Hawking radiation is to consider the black hole immersed in a thermal bath in which equilibrium is possible. Hence, by the principle of detailed balance, there must be emission from the black hole itself.
In this work, our main motivation is to investigate the Hawking temperature of quantum gravity corrected black hole geometries via both the _reflection from the horizon_ method and the tunneling approach. As we are working with quantum gravitational black holes, one expects to find quantum gravitational corrections to the classical form of the Hawking temperature. Here we have considered the quantum corrected black hole geometry obtained from the flow of Newton's gravitational constant and the _Garfinkle-Horowitz-Strominger_ (GHS) black hole.
In section 2 we obtain the scalar field solution of the covariant Klein-Gordon equation using the Hamilton-Jacobi approach. In sections 3 and 4, we obtain the Hawking temperatures for the quantum corrected black hole and the GHS black hole using the method of _reflection from the horizon_.
## 2 Hamilton-Jacobi method for obtaining the scalar field solution
In the \(s\)-wave approximation, the spacetime structure of a spherically symmetric black hole is effectively \(1+1\) - dimensional. The generic metric structure for such a static spherically symmetric black hole reads
\[ds^{2}=-f(r)dt^{2}+g(r)^{-1}dr^{2}. \tag{1}\]
We shall now consider a scalar field in the above background. It satisfies the Klein-Gordon quantum field equation for a scalar field with rest mass \(m\) in two spacetime dimensions
\[-\frac{\hbar^{2}}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{ \nu}\right)\Psi=m^{2}\Psi \tag{2}\]
where \(\sqrt{-g}=\sqrt{-\det\left(g_{\mu\nu}\right)}=\sqrt{\frac{f(r)}{g(r)}}\) and \(\Psi\) is the scalar field. We will be considering massless scalar fields in our analysis, and using the metric structure in eq.(1), we can express eq.(2) as follows
\[-\frac{\partial_{t}^{2}\Psi}{f(r)}+\left(\frac{f^{\prime}(r)}{2f(r)}+\frac{g^ {\prime}(r)}{2g(r)}\right)g(r)\partial_{r}\Psi+g(r)\partial_{r}^{2}\Psi=0. \tag{3}\]
In order to obtain the solution of eq.(3), we take an ansatz of the form
\[\Psi(t,r)=\exp\left(-\frac{i}{\hbar}I(t,r)\right). \tag{4}\]
Substituting the ansatz in eq.(4) for \(\Psi(t,r)\) in eq.(3), we obtain the following equation involving \(I(t,r)\)
\[\begin{split}&\frac{i}{f(r)}\left(\frac{\partial I}{\partial t} \right)^{2}-\frac{\hbar}{f(r)}\frac{\partial^{2}I}{\partial t^{2}}-ig(r)\left( \frac{\partial I}{\partial r}\right)^{2}\\ &+\hbar g(r)\frac{\partial^{2}I}{\partial r^{2}}+\hbar g(r)\left( \frac{f^{\prime}(r)}{2f(r)}+\frac{g^{\prime}(r)}{2g(r)}\right)\frac{\partial I }{\partial r}=0\.\end{split} \tag{5}\]
It can be inferred from the forms of eq.(s)(3,5) that we can separate the equations involving the time coordinate (\(t\)) and the radial coordinate (\(r\)). Therefore, we can consider the form of \(I(r,t)\) to be
\[I(t,r)=\varepsilon t+\tilde{S}(r) \tag{6}\]
where \(\varepsilon\) denotes the energy of the particle.
Now substituting the form of \(I(r,t)\) from eq.(6) in eq.(5), we get
\[\begin{split}&\frac{i\varepsilon^{2}}{f(r)}-ig(r)\left(\frac{ \partial\tilde{S}}{\partial r}\right)^{2}+\hbar g(r)\frac{\partial^{2}\tilde{ S}}{\partial r^{2}}\\ &+\hbar g(r)\left(\frac{f^{\prime}(r)}{2f(r)}+\frac{g^{\prime}(r )}{2g(r)}\right)\frac{\partial\tilde{S}}{\partial r}=0\.\end{split} \tag{7}\]
We now expand \(\tilde{S}(r)\) in a power series in \(\hbar\) as
\[\tilde{S}(r)=S_{0}(r)+\hbar S_{1}(r)+\hbar^{2}S_{2}(r)+\cdots. \tag{8}\]
Using this form of \(\tilde{S}(r)\) from eq.(8), we can recast eq.(7) in the following form
\[\begin{split}&\frac{i\varepsilon^{2}}{f(r)}-ig(r)\bigg{(}\frac{ \partial S_{0}}{\partial r}\bigg{)}^{2}+\hbar g(r)\bigg{[}-2i\frac{\partial S _{0}}{\partial r}\frac{\partial S_{1}}{\partial r}+\frac{\partial^{2}S_{0}}{ \partial r^{2}}\\ &+\left[\frac{f^{\prime}(r)}{2f(r)}+\frac{g^{\prime}(r)}{2g(r)} \right]\frac{\partial S_{0}}{\partial r}\bigg{]}+\hbar^{2}g(r)\bigg{[}-2i\frac{ \partial S_{0}}{\partial r}\frac{\partial S_{2}}{\partial r}\\ &-i\left(\frac{\partial S_{1}}{\partial r}\right)^{2}+\frac{ \partial^{2}S_{1}}{\partial r^{2}}+\left[\frac{f^{\prime}(r)}{2f(r)}+\frac{g^{ \prime}(r)}{2g(r)}\right]\frac{\partial S_{1}}{\partial r}\bigg{]}+\cdots=0\.\end{split} \tag{9}\]
Equating the terms involving equal powers of the reduced Planck's constant (\(\hbar\)) from eq.(9) to zero, we obtain the following set of equations
\[\hbar^{0}:\quad\frac{i\varepsilon^{2}}{f(r)}-ig(r)\left(\frac{\partial S_{0}}{\partial r}\right)^{2}=0, \tag{10}\] \[\hbar^{1}:\quad-2i\frac{\partial S_{0}}{\partial r}\frac{\partial S_{1}}{\partial r}+\frac{\partial^{2}S_{0}}{\partial r^{2}}+\left[\frac{f^{\prime}(r)}{2f(r)}+\frac{g^{\prime}(r)}{2g(r)}\right]\frac{\partial S_{0}}{\partial r}=0, \tag{11}\] \[\hbar^{2}:\quad-2i\frac{\partial S_{0}}{\partial r}\frac{\partial S_{2}}{\partial r}-i\left(\frac{\partial S_{1}}{\partial r}\right)^{2}+\left[\frac{f^{\prime}(r)}{2f(r)}+\frac{g^{\prime}(r)}{2g(r)}\right]\frac{\partial S_{1}}{\partial r}+\frac{\partial^{2}S_{1}}{\partial r^{2}}=0. \tag{12}\] \[\vdots\]
In order to obtain the complete form of \(\tilde{S}(r)\) in eq.(8), we need to solve the complete set of equations given by eq.(s)(10-12) and the higher order equations in \(\hbar\) as well. From eq.(10), we obtain the following solution
\[\frac{\partial S_{0}}{\partial r}=\pm\frac{\varepsilon}{\sqrt{f(r)g(r)}}. \tag{13}\]
Integrating the above equation with respect to the radial coordinate, we obtain
\[S_{0}=\pm\varepsilon\int^{r}\frac{dr}{\sqrt{f(r)g(r)}}. \tag{14}\]
Substituting the form of \(\frac{\partial S_{0}}{\partial r}\) from eq.(13) in eq.(11), we obtain the following relation
\[\frac{\partial S_{1}}{\partial r}=0. \tag{15}\]
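This cancellation can be verified symbolically: substituting eq.(13) into eq.(11), the last two terms of eq.(11) cancel identically for arbitrary \(f(r)\) and \(g(r)\), forcing \(\partial S_{1}/\partial r=0\). A short SymPy check of our own, for illustration:

```python
import sympy as sp

# With S0' = eps/sqrt(f g) from eq.(13), the sum S0'' + (f'/2f + g'/2g) S0'
# vanishes for arbitrary f(r), g(r); eq.(11) then reduces to S0' S1' = 0.
r, eps = sp.symbols('r varepsilon', positive=True)
f, g = sp.Function('f')(r), sp.Function('g')(r)
S0p = eps / sp.sqrt(f * g)
residue = sp.diff(S0p, r) + (sp.diff(f, r)/(2*f) + sp.diff(g, r)/(2*g)) * S0p
print(sp.simplify(residue))   # 0
```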
Eq.(15) implies that \(S_{1}\)=constant. Substituting eq.(15) back in eq.(12), we obtain
\[\frac{\partial S_{2}}{\partial r}=0. \tag{16}\]
Following a similar procedure, we find that
\[\frac{\partial S_{n}}{\partial r}=0\,\ \forall\ n\in\{1,2,3,\ldots\}. \tag{17}\]
Eq.(17) indicates that \(S_{n}=\) constant \(\forall\ n\in\{1,2,3,\ldots\}\) and therefore only \(S_{0}\) survives while calculating the energy of the system. Using eq.(14) in eq.(6), we obtain
\[I(t,r)=\varepsilon t\pm\varepsilon\int\frac{dr}{\sqrt{f(r)g(r)}}. \tag{18}\]
Using the form of \(I(t,r)\) from eq.(18), the solution for the massless scalar field takes the form
\[\Psi=\exp\left(-\frac{i\varepsilon t}{\hbar}\mp\frac{i\varepsilon}{\hbar} \int\frac{dr}{\sqrt{f(r)g(r)}}\right). \tag{19}\]
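As a sanity check of eq.(19), one can substitute it back into eq.(3) for a concrete choice of metric functions. The SymPy snippet below (an illustrative check of ours, not part of the original derivation) takes \(f=g=1-2M/r\), for which the tortoise coordinate is explicit, and verifies one branch of eq.(19); the other sign works identically.

```python
import sympy as sp

# Concrete check that eq.(19) solves eq.(3) with f = g = 1 - 2M/r, for which
# r_* = r + 2M ln(r - 2M) satisfies d(r_*)/dr = 1/f exactly.
t, r, M, eps, hbar = sp.symbols('t r M varepsilon hbar', positive=True)
fr = 1 - 2*M/r
rstar = r + 2*M*sp.log(r - 2*M)
Psi = sp.exp(-sp.I*eps/hbar * (t + rstar))   # one branch of eq.(19)
eq3 = (-sp.diff(Psi, t, 2)/fr
       + (sp.diff(fr, r)/(2*fr) + sp.diff(fr, r)/(2*fr)) * fr * sp.diff(Psi, r)
       + fr * sp.diff(Psi, r, 2))
print(sp.simplify(eq3))   # 0
```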
In the next section, we will be considering reflection of the massless scalar field from the event horizon of a quantum corrected black hole.
## 3 Reflection from the horizon of a quantum corrected black hole
The metric of a quantum corrected black hole in \((3+1)\)-dimensions (obtained from the flow of the Newton's gravitational constant) reads [30, 31, 32, 33, 34]
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d \phi^{2} \tag{20}\]
where
\[f(r)=1-\frac{2G(r)M}{r} \tag{21}\]
and the form of \(G(r)\) is given by (in natural units)
\[G(r)=\frac{G}{1+\frac{\tilde{\omega}G}{r^{2}}}. \tag{22}\]
In eq.(22), \(\tilde{\omega}\) is a constant denoting the quantum gravity corrections to the black hole geometry. From the lapse function in eq.(21), we obtain the inner and outer horizon radii of this quantum corrected black hole geometry as follows
\[r_{\pm}=GM\pm\sqrt{G^{2}M^{2}-\tilde{\omega}G}. \tag{23}\]
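A quick symbolic check of ours confirms that the lapse function built from eq.(s)(21,22) indeed vanishes at the radii in eq.(23):

```python
import sympy as sp

# The symbol `omega` below stands for the constant written tilde-omega in the text.
r, G, M, w = sp.symbols('r G M omega', positive=True)
f = 1 - 2*(G/(1 + w*G/r**2))*M/r        # lapse from eq.(21) with G(r) of eq.(22)
r_plus = G*M + sp.sqrt(G**2*M**2 - w*G)
r_minus = G*M - sp.sqrt(G**2*M**2 - w*G)
print(sp.simplify(f.subs(r, r_plus)), sp.simplify(f.subs(r, r_minus)))   # 0 0
```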
For black holes with \(f(r)=g(r)\), the scalar field solution in eq.(19) reduces to the following form
\[\Psi=\exp\left(-\frac{i\varepsilon t}{\hbar}\mp\frac{i\varepsilon}{\hbar} \int\frac{dr}{f(r)}\right). \tag{24}\]
In eq.(24), \(\int^{r}\frac{dr}{f(r)}=r_{*}\) denotes the tortoise coordinate. For the metric structure in eq.(21), we obtain the form of the tortoise coordinate in the vicinity of the event horizon radius as
\[\int_{r_{+}}^{r}\frac{dr}{f(r)} \cong GM\ln\left[r^{2}-2GMr+\tilde{\omega}G\right]+\] \[\frac{G^{2}M^{2}}{\sqrt{G^{2}M^{2}-\tilde{\omega}G}}\ln\left[ \frac{r-GM-\sqrt{G^{2}M^{2}-\tilde{\omega}G}}{r-GM+\sqrt{G^{2}M^{2}-\tilde{ \omega}G}}\right]. \tag{25}\]
Using the forms of \(r_{\pm}\) from eq.(23), we can recast the above equation as
\[\int_{r_{+}}^{r}\frac{dr}{f(r)}\cong\frac{2GM}{r_{+}-r_{-}}\left(r_{+}\ln \left[r-r_{+}\right]-r_{-}\ln\left[r-r_{-}\right]\right). \tag{26}\]
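Eq.(26) can be verified by a partial fraction decomposition (an intermediate step we include for clarity). Using the factorized lapse function \(f(r)=\frac{(r-r_{+})(r-r_{-})}{r^{2}+\tilde{\omega}G}\) together with \(r_{+}+r_{-}=2GM\),

\[\frac{1}{f(r)}=1+\frac{2GMr}{(r-r_{+})(r-r_{-})}=1+\frac{2GM}{r_{+}-r_{-}}\left[\frac{r_{+}}{r-r_{+}}-\frac{r_{-}}{r-r_{-}}\right]\,\]

where the constant term contributes no logarithm and is dropped in the near-horizon approximation.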
The above form has two parts: one associated with the region inside the outer radius \(r_{+}\) and the other with the region outside it. Here we consider the \(\ln(r-r_{+})\) part of the solution, which describes the scalar field outside the horizon radius \(r_{+}\), as this is the only piece relevant for obtaining the Hawking temperature. Hence, the complete solution of the massless scalar field in the vicinity of the horizon radius \(r_{+}\) is given by (ignoring the second term in eq.(26))2
Footnote 2: A complete analysis keeping both the terms in eq.(26) is given in an Appendix.
\[\Psi(t,r)=e^{-\frac{i\varepsilon t}{\hbar}}\phi(r) \tag{27}\]
where \(\phi(r)\) has the form given by
\[\phi(r)=\exp\left[\mp\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{r_{+}-r _{-}}\right)\ln(r-r_{+})\right]. \tag{28}\]
The 'minus' sign in the wavefunction in eq.(28) represents a radially infalling massless scalar field that gets completely absorbed by the event horizon of the black hole at \(r=r_{+}\). In our case, we also consider a radially outgoing solution in order to allow interference between the infalling and outgoing parts of the wavefunction. Therefore, the wavefunction in the vicinity of the horizon radius \(r_{+}\) can be written as
\[\Phi(r)=e^{-\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}\ln(r-r_{+})}{r_{+}-r_{-}}\right)}+\mathcal{R}e^{\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}\ln(r-r_{+})}{r_{+}-r_{-}}\right)}. \tag{29}\]
The first term in eq.(29) denotes an incoming wave whereas the outgoing wave is described by the second term. In the case of interference between the incoming and the outgoing waves, the reflection coefficient \(\mathcal{R}\) must have a non-zero value. If the black hole horizon acts as a perfect absorber then the reflection coefficient \(\mathcal{R}\) must be equal to zero. From the unitarity condition, we know that \(|\mathcal{R}|\leq 1\). The wave function in eq.(29) is singular at the point \(r=r_{+}\). In order to tackle this issue, we take recourse to analytic continuation by treating \(r-r_{+}=\ell\) as a complex number. Then \(\ell\) can be expressed as
\[\ell=r-r_{+}=\alpha e^{i\varphi} \tag{30}\]
where \(|\ell|=\alpha\) is a real number and \(\varphi\) denotes the argument of the complex number \(\ell\). Using eq.(30), we can recast eq.(29) as
\[\Phi(r)=e^{-\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{r_{+}-r_{-}}\right)\ln\ell}+\mathcal{R}e^{\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{r_{+}-r_{-}}\right)\ln\ell}. \tag{31}\]
In order to proceed further, we now use the analytic continuation method [14]. In our analysis we consider the massless scalar field very near the outer horizon \(r_{+}\) of the quantum corrected black hole; therefore, we can take \(0<|\ell|\ll 1\). We now rotate the complex number \(\ell\) in the complex plane by an angle \(2\pi\) (clockwise rotation). Under this clockwise rotation, the complex number \(\ell\) is given by the following relation
\[\ell_{2\pi}=\alpha e^{i(\varphi-2\pi)}. \tag{32}\]
Using the rotated complex number from the above equation, the modified wave function \(\Phi_{2\pi}(r)\) is given by
\[\begin{split}\Phi_{2\pi}(r)&=e^{-\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{r_{+}-r_{-}}\right)\ln\ell_{2\pi}}+\mathcal{R}e^{\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{r_{+}-r_{-}}\right)\ln\ell_{2\pi}}\\ &=\xi e^{-\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{r_{+}-r_{-}}\right)\ln\ell}+\frac{\mathcal{R}}{\xi}e^{\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{r_{+}-r_{-}}\right)\ln\ell}\end{split} \tag{33}\]
where \(\xi\) is given by
\[\xi=\exp\left[-\frac{4\pi\varepsilon GM}{\hbar}\left(\frac{r_{+}}{r_{+}-r_{-}} \right)\right]. \tag{34}\]
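The second line of eq.(33) follows from \(\ln\ell_{2\pi}=\ln\left(\alpha e^{i(\varphi-2\pi)}\right)=\ln\ell-2\pi i\), a step we make explicit here:

\[\exp\left[\mp\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{r_{+}-r_{-}}\right)\left(\ln\ell-2\pi i\right)\right]=\xi^{\pm 1}\exp\left[\mp\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{r_{+}-r_{-}}\right)\ln\ell\right]\]

with \(\xi\) as in eq.(34).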
The same real differential equation satisfied by \(\Phi(r)\) is also satisfied by the analytically continued function \(\Phi_{2\pi}(r)\). This implies that one of the coefficients must have an absolute value equal to unity. In the case of \(\Phi(r)\) in eq.(31), the first coefficient has absolute value one. For the rotated wavefunction \(\Phi_{2\pi}(r)\), the first coefficient \(\xi=\exp\left[-\frac{4\pi\varepsilon GM}{\hbar}\left(\frac{r_{+}}{r_{+}-r_{-}}\right)\right]<1\); therefore the second coefficient must have an absolute value equal to one. Hence, we can write
\[\begin{split}&\frac{|\mathcal{R}|}{\xi}=1\\ \implies&|\mathcal{R}|=\xi=\exp\left[-\frac{4\pi \varepsilon GM}{\hbar}\left(\frac{r_{+}}{r_{+}-r_{-}}\right)\right]\.\end{split} \tag{35}\]
Therefore, we observe that the value of the reflection coefficient \(\mathcal{R}\) is non-zero. Hence, the probability of reflection from the event horizon is given by
\[\mathcal{P}=|\mathcal{R}|^{2}=\exp\left[-\frac{8\pi\varepsilon GM}{\hbar} \left(\frac{r_{+}}{r_{+}-r_{-}}\right)\right]. \tag{36}\]
Now using the principle of detailed balance we can write
\[\mathcal{P}=\exp\left[-\frac{\varepsilon}{k_{B}T}\right]=\exp\left[-\frac{8 \pi\varepsilon GM}{\hbar c^{3}}\left(\frac{r_{+}}{r_{+}-r_{-}}\right)\right] \tag{37}\]
where \(k_{B}\) denotes the Boltzmann constant and \(T\) denotes the Hawking temperature of the black hole. From eq.(37), we can obtain the Hawking temperature for this quantum corrected black hole to be
\[\begin{split} T&=\frac{\hbar c^{3}}{8\pi k_{B}GM} \left(1-\frac{r_{-}}{r_{+}}\right)\\ &=\frac{\hbar c^{3}}{8\pi k_{B}GM}-\frac{\hbar c^{3}}{8\pi k_{B} GM}\left[\frac{1-\sqrt{1-\frac{\hbar\tilde{\omega}c}{GM^{2}}}}{1+\sqrt{1- \frac{\hbar\tilde{\omega}c}{GM^{2}}}}\right]\.\end{split} \tag{38}\]
In general, the quantum gravity correction term \(\tilde{\omega}\) is very small, hence we can simplify the above result as
\[T\cong\frac{\hbar c^{3}}{8\pi k_{B}GM}\left(1-\frac{\hbar\tilde{\omega}c}{4 GM^{2}}-\frac{\hbar^{2}\tilde{\omega}^{2}c^{2}}{8G^{2}M^{4}}+\mathcal{O}( \tilde{\omega}^{3})\right). \tag{39}\]
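Eq.(39) can be checked directly (an expansion we spell out here). Writing \(u\equiv\frac{\hbar\tilde{\omega}c}{GM^{2}}\ll 1\) and using eq.(23),

\[1-\frac{r_{-}}{r_{+}}=\frac{2\sqrt{1-u}}{1+\sqrt{1-u}}=1-\frac{u}{4}-\frac{u^{2}}{8}+\mathcal{O}(u^{3})\,\]

which reproduces the correction terms in eq.(39).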
From the form of eq.(39), we observe that the Hawking temperature picks up quantum gravity corrections only due to the underlying quantum gravitational nature of the background spacetime geometry. Note that we have so far considered a clockwise rotation only. In the case of a counter-clockwise rotation, we would have arrived at the following relation
\[|\mathcal{R}|=\exp\left[\frac{4\pi\varepsilon GM}{\hbar c^{3}}\left(\frac{r_{+ }}{r_{+}-r_{-}}\right)\right]. \tag{40}\]
Now from eq.(40), we observe that \(|\mathcal{R}|>1\) which is in contradiction with the unitarity condition (\(|\mathcal{R}|\leq 1\)). Hence, the counter-clockwise rotation is forbidden in the current analysis.
In the next section we will calculate the Hawking temperature for a _Garfinkle-Horowitz-Strominger_ (GHS) black hole for which \(f(r)\neq g(r)\).
## 4 Reflection from the horizon of a _Garfinkle-Horowitz-Strominger_ black hole
A generalized spacetime geometry arises in the case of low energy string theories. Among several low energy string theoretical actions, we consider the following action
\[\mathcal{G}_{\mathcal{A}}=\int d^{4}x\sqrt{-g}e^{-2\phi}(-R-4(\nabla\phi)^{2} +F^{2}) \tag{41}\]
where \(\phi\) is the dilaton field and \(F_{\mu\nu}\) is a Maxwell field associated with a \(U(1)\) subgroup of \(E_{8}\times E_{8}\) or \(\text{Spin}(32)/Z_{2}\). Among the family of solutions of the low energy action given in eq.(41), its charged black hole solution in \((1+1)\)-dimensions is given by [35, 36, 37, 38]
\[ds^{2}=-f(r)dt^{2}+g(r)^{-1}dr^{2} \tag{42}\]
where \(f(r)\) and \(g(r)\) are given by
\[f(r) =\left(1-\frac{2Me^{\phi_{0}}}{r}\right)\left(1-\frac{Q^{2}e^{3 \phi_{0}}}{Mr}\right)^{-1}\, \tag{43}\] \[g(r) =\left(1-\frac{2Me^{\phi_{0}}}{r}\right)\left(1-\frac{Q^{2}e^{3 \phi_{0}}}{Mr}\right) \tag{44}\]
where \(\phi_{0}\) is the asymptotic constant value of the dilaton field. This is the "_Garfinkle-Horowitz-Strominger_" (GHS) black hole in \((1+1)\)-dimensions [35]. The event horizon radius for a GHS black hole is given by
\[r_{+}=2Me^{\phi_{0}}. \tag{45}\]
Using the metric structure from eqs.(43,44), we can compute \(S_{0}\) from eq.(14) in the vicinity of the horizon radius \(r_{+}\) as
\[\begin{split} S_{0}&=\pm\varepsilon\int_{r_{+}}^{r }\frac{dr}{\sqrt{f(r)g(r)}}=\pm\varepsilon\int_{r_{+}}^{r}\frac{dr}{1-\frac{2 Me^{\phi_{0}}}{r}}\\ &\cong\pm 2M\varepsilon e^{\phi_{0}}\ln(r-2Me^{\phi_{0}})=\pm \varepsilon r_{+}\ln(r-r_{+})\.\end{split} \tag{46}\]
As before, we consider \(r-r_{+}=\ell_{1}=\alpha_{1}e^{i\varphi_{1}}\) with \(\alpha_{1}=|\ell_{1}|\) and \(\varphi_{1}=\arg(\ell_{1})\). Using this along with eq.(46), we can write the radial part of the wave function (considering reflection from the event horizon) as
\[\Phi^{1}(r)=e^{-\frac{i\varepsilon}{\hbar}r_{+}\ln\ell_{1}}+\mathcal{R}e^{\frac{i\varepsilon}{\hbar}r_{+}\ln\ell_{1}}. \tag{47}\]
Following the same analytical continuation method as before we do a clockwise rotation by an angle \(2\pi\) in the complex plane. Under this clockwise rotation the radial part of the wave function \(\Phi^{1}(r)\) transforms as follows
\[\begin{split}\Phi^{1}_{2\pi}(r)&=e^{-\frac{i\varepsilon}{\hbar}r_{+}\ln\left[\ell_{1}e^{-2\pi i}\right]}+\mathcal{R}e^{\frac{i\varepsilon}{\hbar}r_{+}\ln\left[\ell_{1}e^{-2\pi i}\right]}\\ &=\xi_{1}e^{-\frac{i\varepsilon}{\hbar}r_{+}\ln\ell_{1}}+\frac{\mathcal{R}}{\xi_{1}}e^{\frac{i\varepsilon}{\hbar}r_{+}\ln\ell_{1}}\end{split} \tag{48}\]
where \(\xi_{1}\) is given by
\[\xi_{1}=\exp\left(-\frac{2\pi\varepsilon}{\hbar}r_{+}\right). \tag{49}\]
We observe that the coefficient of the first term in eq.(48) has a value less than one; therefore, by the arguments used in the earlier section, we can conclude the following
\[|\mathcal{R}|=\xi_{1}=\exp\left[-\frac{2\pi\varepsilon}{\hbar}r_{+}\right]. \tag{50}\]
From eq.(50), we can calculate the probability of reflection from the horizon of the GHS black hole as follows
\[\mathcal{P}=|\mathcal{R}|^{2}=\exp\left[-\frac{8\pi\varepsilon GMe^{\phi_{0}}}{\hbar c^{3}}\right]. \tag{51}\]
In order to obtain the Hawking temperature \(T\), we need to use the principle of detailed balance as before. The final form of the Hawking temperature for a _Garfinkle-Horowitz-Strominger_ black hole is given by
\[T=\frac{\hbar c^{3}}{8\pi k_{B}GM}e^{-\phi_{0}}. \tag{52}\]
From eq.(52), we observe that for a GHS black hole the Hawking temperature has the usual Schwarzschild form multiplied by an exponential factor. For a very small asymptotic value of the dilaton field we can use \(e^{-\phi_{0}}\approx 1-\phi_{0}+\mathcal{O}(\phi_{0}^{2})\). Hence, the Hawking temperature takes the following form
\[T\cong\frac{\hbar c^{3}}{8\pi k_{B}GM}-\frac{\hbar c^{3}\phi_{0}}{8\pi k_{B}GM }+\mathcal{O}(\phi_{0}^{2}). \tag{53}\]
The Hawking temperature in eq.(53) consists of the base temperature term along with higher order correction terms in \(\phi_{0}\).
Before concluding our discussion, we would like to point out that the same result for the Hawking temperature of a quantum corrected black hole geometry can be obtained via the tunneling method [15], using the form of the action derived earlier in eq.(18). The solution of the covariant Klein-Gordon equation in eq.(19) can be decomposed into one ingoing solution and one outgoing solution as follows
\[\Psi_{in}=\exp\left[-\frac{i\varepsilon t}{\hbar}-\frac{i\varepsilon}{\hbar}\int\frac{dr}{\sqrt{f(r)g(r)}}\right] \tag{54}\] \[\Psi_{out}=\exp\left[-\frac{i\varepsilon t}{\hbar}+\frac{i\varepsilon}{\hbar}\int\frac{dr}{\sqrt{f(r)g(r)}}\right]. \tag{55}\]
Here we are considering the case of tunneling across the event horizon of the black hole. When the particle crosses the horizon, we observe an interchange of signs between the space and time coordinates, indicating an imaginary part of the time coordinate for crossing the horizon. Two patches across the horizon are generally connected by an imaginary time [39]. Hence, using the ingoing and outgoing wave functions in eqs.(54,55), we can express the ingoing and outgoing probabilities of the particle as [15]
\[\mathscr{P}_{in}= |\Psi_{in}|^{2}=\exp\left(\frac{2\varepsilon}{\hbar}\Im\left[t \right]+\frac{2\varepsilon}{\hbar}\Im\left[\int_{0}^{r}\frac{dr}{\sqrt{f(r)g( r)}}\right]\right)\, \tag{56}\] \[\mathscr{P}_{out}= |\Psi_{out}|^{2}=\exp\left(\frac{2\varepsilon}{\hbar}\Im\left[t \right]-\frac{2\varepsilon}{\hbar}\Im\left[\int_{0}^{r}\frac{dr}{\sqrt{f(r)g( r)}}\right]\right) \tag{57}\]
where \(\Im[t]\) denotes the imaginary part of time. Using the principle of detailed balance [15], we can write the following
\[\frac{\mathscr{P}_{out}}{\mathscr{P}_{in}}=\exp\left[-\frac{4 \varepsilon}{\hbar}\Im\left[\int_{0}^{r}\frac{dr}{\sqrt{f(r)g(r)}}\right] \right]=\exp\left[-\frac{\varepsilon}{T}\right]. \tag{58}\]
From eq.(58), we can extract the form of the Hawking temperature to be
\[T=\frac{\hbar}{4}\left(\Im\left[\int_{0}^{r}\frac{dr}{\sqrt{f(r)g(r)}}\right] \right)^{-1}. \tag{59}\]
The above equation gives the Hawking temperature for a large class of spherically symmetric black holes from the viewpoint of tunneling across the horizon. Substituting the form of the quantum corrected black hole geometry in eq.(59), we recover eq.(38) thereby agreeing with the interference and reflection method. It is very important to notice that if the lapse function does not possess an explicit quantum gravity correction then the Hawking temperature also has no such corrections.
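As a consistency check (our own computation, not part of the original derivation), note that for the quantum corrected geometry \(f(r)=g(r)=\frac{(r-r_{+})(r-r_{-})}{r^{2}+\tilde{\omega}G}\) the integrand in eq.(59) has a simple pole at \(r=r_{+}\), and deforming the contour around it contributes \(i\pi\) times the residue, so that

\[\Im\left[\int_{0}^{r}\frac{dr}{f(r)}\right]=\pi\,\frac{r_{+}^{2}+\tilde{\omega}G}{r_{+}-r_{-}}=\frac{2\pi GMr_{+}}{r_{+}-r_{-}}\,\]

and eq.(59) then yields \(T=\frac{\hbar}{8\pi GM}\left(1-\frac{r_{-}}{r_{+}}\right)\), which is eq.(38) up to the restored factors of \(c\) and \(k_{B}\).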
## 5 Conclusion
In this paper, we have used the method of _reflection from the horizon_ to investigate the Hawking temperature of a quantum corrected black hole. First, we calculated the Hawking temperature of the quantum corrected black hole using the consideration that there is a finite probability of a part of the scalar wave function being reflected from the event horizon of the black hole. The computed temperature formula for the quantum corrected black hole has a structure similar to that of the Hawking temperature of a classical (Schwarzschild) black hole along with higher order quantum gravity corrections. We have also computed the Hawking temperature for a GHS black hole. For the next part of our analysis, we considered tunneling through the event horizon of a black hole, and obtained the same result for the Hawking temperature of the quantum corrected black hole as derived earlier using the method of _reflection from the horizon_. These findings showcase the fact that quantum corrections arise in the Hawking temperature of the black hole only due to underlying quantum gravitational effects in the spacetime geometry. They also indicate clearly that there should be no quantum gravity corrections in the Hawking temperature of the black hole if there are no such quantum gravitational effects in the geometry of the spacetime.
## Appendix: Hawking temperature with the complete solution in eq.(26)
In section (3), we calculated the form of the Hawking temperature using only the part of the solution outside the event horizon of the black hole in eq.(26). In this Appendix, we proceed with the same calculation keeping both terms. Following the discussion after eq.(27), we can write the form of the radial part of the wavefunction in the vicinity of the event horizon, keeping both terms in eq.(26), as follows
\[\begin{split}\Phi(r)&=e^{-\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{(r_{+}-r_{-})}\ln(r-r_{+})-\frac{r_{-}}{(r_{+}-r_{-})}\ln(r-r_{-})\right)}\\ &+\mathcal{R}e^{\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{(r_{+}-r_{-})}\ln(r-r_{+})-\frac{r_{-}}{(r_{+}-r_{-})}\ln(r-r_{-})\right)}\.\end{split} \tag{60}\]
Again defining \(r-r_{+}=\ell=\alpha e^{i\varphi}\), we can recast the above relation in the following form
\[\begin{split}\Phi(r)&=e^{-\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{(r_{+}-r_{-})}\ln\ell-\frac{r_{-}}{(r_{+}-r_{-})}\ln(\ell+(r_{+}-r_{-}))\right)}\\ &+\mathcal{R}e^{\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{(r_{+}-r_{-})}\ln\ell-\frac{r_{-}}{(r_{+}-r_{-})}\ln(\ell+(r_{+}-r_{-}))\right)}\.\end{split} \tag{61}\]
Using the same analytic continuation procedure [14] and rotating the complex number \(\ell\) by an angle \(2\pi\) in the clockwise direction on the complex plane, we can recast the rotated form of the radial part of the wavefunction as
\[\begin{split}\Phi_{2\pi}(r)&=e^{-\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{(r_{+}-r_{-})}\ln\ell_{2\pi}-\frac{r_{-}}{(r_{+}-r_{-})}\ln(\ell_{2\pi}+(r_{+}-r_{-}))\right)}\\ &+\mathcal{R}e^{\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{(r_{+}-r_{-})}\ln\ell_{2\pi}-\frac{r_{-}}{(r_{+}-r_{-})}\ln(\ell_{2\pi}+(r_{+}-r_{-}))\right)}\end{split} \tag{62}\]
where the form of \(\ell_{2\pi}\) is given by eq.(32). Since \(\ln(\ell_{2\pi}+(r_{+}-r_{-}))=\ln(\ell e^{-2\pi i}+(r_{+}-r_{-}))=\ln(\ell+(r_{+}-r_{-}))\), eq.(62) takes the form
\[\begin{split}\Phi_{2\pi}(r)&=\xi e^{-\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{(r_{+}-r_{-})}\ln\ell-\frac{r_{-}}{(r_{+}-r_{-})}\ln(\ell+(r_{+}-r_{-}))\right)}\\ &+\frac{\mathcal{R}}{\xi}e^{\frac{2i\varepsilon GM}{\hbar}\left(\frac{r_{+}}{(r_{+}-r_{-})}\ln\ell-\frac{r_{-}}{(r_{+}-r_{-})}\ln(\ell+(r_{+}-r_{-}))\right)}\end{split} \tag{63}\]
where the analytical form of \(\xi\) is given in eq.(34). By continuing the analysis in section (3) after eq.(34), the same form of the Hawking temperature \(T\) given in eq.(38) is obtained.
|
2301.05128 | Abortion Misinformation on TikTok: Rampant Content, Lax Moderation, and
Vivid User Experiences | The scientific effort devoted to health misinformation mostly focuses on the
implications of misleading vaccines and communicable disease claims with
respect to public health. However, the proliferation of abortion misinformation
following the Supreme Court's decision to overturn Roe v. Wade banning legal
abortion in the US highlighted a gap in scientific attention to individual
health-related misinformation. To address this gap, we conducted a study with
60 TikTok users to uncover their experiences with abortion misinformation and
the way they conceptualize, assess, and respond to misleading video content on
this platform. Our findings indicate that users mostly encounter short-term
videos suggesting herbal "at-home" remedies for pregnancy termination. While
many of the participants were cautious about scientifically debunked "abortion
alternatives," roughly 30% of the entire sample believed in their safety and
efficacy. Even an explicit debunking label attached to a misleading abortion
video about the harms of "at-home" did not help a third of the participants to
dismiss a video about self-administering abortion as misinformation. We discuss
the implications of our findings for future participation on TikTok and other
polarizing topics debated on social media. | Filipo Sharevski, Jennifer Vander Loop, Peter Jachim, Amy Devine, Emma Pieroni | 2023-01-12T16:31:48Z | http://arxiv.org/abs/2301.05128v1 | # Abortion Misinformation on TikTok: Rampant Content, Lax Moderation, and Vivid User Experiences
###### Abstract.
The scientific effort devoted to health misinformation mostly focuses on the implications of misleading vaccines and communicable disease claims with respect to public health. However, the proliferation of abortion misinformation following the Supreme Court's decision to overturn _Roe v. Wade_ banning legal abortion in the US highlighted a gap in scientific attention to individual health-related misinformation. To address this gap, we conducted a study with 60 TikTok users to uncover their experiences with abortion misinformation and the way they conceptualize, assess, and respond to misleading video content on this platform. Our findings indicate that users mostly encounter short-term videos suggesting herbal "at-home" remedies for pregnancy termination. While many of the participants were cautious about scientifically debunked "abortion alternatives," roughly 30% of the entire sample believed in their safety and efficacy. Even an explicit debunking label attached to a misleading abortion video about the harms of "at-home" did not help a third of the participants to dismiss a video about self-administering abortion as misinformation. We discuss the implications of our findings for future participation on TikTok and other polarizing topics debated on social media.
TikTok, misinformation, abortion, fact-check, debunking, social media
for abortion (Steintein et al., 2012). The inability to obtain a legal abortion turned people to search engines and social media to learn how to manage their reproductive decisions and perform safe abortions (Steintein et al., 2012). Unfortunately, not all information aligned with the National Library of Medicine's description of abortion and recommendations for safe practices (Bauer et al., 2013).
Many questionable practices including pills, oils, and herbs for inducing abortion flooded social media, both as claims and as advertisements in users' feeds (Steintein et al., 2012). Platforms used diverse strategies to mitigate this misinformation: YouTube added "context labels" to such abortion content (Stein
rumour that HIV/AIDS was a misfired American biological weapon in order to undermine the United States' credibility during the Cold War (Hummer et al., 2017). At the same time, the tobacco industry in the United States created a "disinformation playbook" to systematically distort and downplay the link between the consumption of tobacco and cancer (Krishnan et al., 2017). While the intent to mislead was clearly present in these campaigns, the volume and output of the rumors and alternative health narratives were limited to a small number of outlets and fabricated publications.
The Internet and social media changed the landscape by enabling an inordinately high volume and rapid output of health information of varying quality to reach the public (Hummer et al., 2017). Health-related rumors and alternative narratives collaterally grew and were amplified to a point where they yielded _uncontrollable_ consequences even for _known_ and treatable health issues. For example, a poorly designed study in the 1990s that falsely claimed that the measles, mumps, rubella (MMR) vaccine causes autism (Krishnan et al., 2017) caused such a regression in public immunization that it resulted in several measles outbreaks twenty years later (Krishnan et al., 2017). Rumours about the Ebola and Zika viruses also overshadowed evidence-based health information and resulted in higher vaccine hesitancy in fear of _undesirable_ health consequences and death (Krishnan et al., 2017; D'Amica et al., 2018). Vaccine hesitancy, on a global level, reached a climax during the COVID-19 pandemic with an unprecedented volume and output of COVID-19 related misinformation (Krishnan et al., 2017).
While the majority of misleading health information on social media focuses on vaccines and communicable diseases, rumors and alternative narratives also spread about cancer, heart disease, and other conditions (Hummer et al., 2017). For example, social media users are more likely to trust and share cancer-related rumours if the rumours are dreadful rather than wishful, and if one has had previous personal experience (Krishnan et al., 2017). The _uncontrollable_ consequences in these cases are not overall treatment hesitancy but the seeking of alternative and unproven treatments for diabetes (Krishnan et al., 2017), heart failure (Krishnan et al., 2017), hypertension (Krishnan et al., 2017) and psoriasis (Krishnan et al., 2017). Interestingly, in all of these cases of non-communicable health issues, the unsubstantiated claims were promulgated through videos as a particularly influential mode for conveying misleading health evidence (a mode also used in deceptive messages about anorexia and dietary disorders) (Hummer et al., 2017).
### Response to Health Misinformation
In the context of misleading health claims, misinformation is defined by its opposition to the consensus of what the medical community regards as accurate and evidence-based information (Hummer et al., 2017). Scholars, in response, have focused their anti-misinformation efforts on two main fronts: 1) examining the harms of the misinformation (Krishnan et al., 2017; D'Amica et al., 2018); and 2) misinformation prebunking and debunking (Krishnan et al., 2017; Krishnan et al., 2017). The harms of health misinformation are reflected in a dramatic increase in vaccine hesitancy (Krishnan et al., 2017), the pursuit of dangerous home therapies (e.g. cancer cleansing, weight loss, virus prevention) (D'Amica et al., 2018), as well as increased hostility toward health workers (Krishnan et al., 2017).
The goal of "prebunking" or forewarning is improving people's ability to spot and resist manipulation techniques commonly used in health misinformation (Krishnan et al., 2017). To this objective, people nowadays are "innoculated" against health misinformation by the use of "accuracy nudges" (Krishnan et al., 2017), social correction that occurs via peers (Hummer et al., 2017), or play browser-based games about health myths and facts (Beck et al., 2017). The "prebunking" was shown to be an effective strategy (Krishnan et al., 2017), though in time of social media virality the inoculation effect wanes for users with a conspiracy mentality about _unknown_ and _undesirable_ health consequences (e.g. the COVID-19 pandemic) (Hummer et al., 2017).
If this "innoculation" is rendered ineffective, "debunking" is the next step where verifiable corrections of the falsehoods from credible sources are presented in order to break the illusion of truth (Krishnan et al., 2017; Krishnan et al., 2017). Debunking, as in fact-checking of health misinformation, was shown to give mixed results depending on the perceived credibility and expertise of the sources in science-related contexts (Krishnan et al., 2017). The perception of credibility and expertise, for example, matters little to people with
strong conspiratorial ideation tendencies, who tend to mistrust any official source (Shi et al., 2018). As pressing health problems of general public interest are hard not to see in a political context as well, debunking of health misinformation was found to work either when it comes from sources that are perceived to share people's values and worldviews (Shi et al., 2018), or when people maintain a science-accepting attitude regardless of their political worldviews (Bahdan et al., 2019).
### Moderation of Health Misinformation
Inasmuch as prebunking and debunking help curb health misinformation online, they are nonetheless slow and difficult to scale to the pace, volume, and output of information sharing on social media (Shi et al., 2018). Platforms, in response, had to turn to automated means of moderating unsubstantiated content and questionable accounts to prevent an "outbreak" of misleading information, especially after the meteoric influx of COVID-19 rumors, conspiracies, and falsehoods (Shi et al., 2018). YouTube opted for soft moderation and decided to apply context labels to video searches for health information that link to credible sources recommended by the National Academy of Medicine (Nakamura et al., 2018). Twitter, up to early December 2022, also applied soft moderation in two forms: (i) _interstitial covers_, which obscure the misleading content and require users to click through to see the information; and (ii) _trustworthiness tags_, which appear under the content and do not interrupt the user or compel action (Shi et al., 2018; Shi et al., 2018). Meta, the parent company of Facebook and Instagram, did the same in conjunction with hard moderation, taking down prominent accounts that spread COVID-19 misinformation (e.g. Robert F. Kennedy Jr.'s account was blocked after he repeatedly undercut trust in the COVID-19 vaccines) (Kennedy Jr., 2019). TikTok followed suit and expanded its soft moderation labeling with trustworthiness tags to content pertaining to eating disorders, health challenges, and alternative medical treatment videos, next to misleading COVID-19 videos (Shi et al., 2018).
The response to platform moderation has been, at best, mixed. The interstitial covers provided an adequate "accuracy nudge" for users to distance themselves from COVID-19 misinformation posts, but users largely ignored trustworthiness tags (Shi et al., 2018; Shi et al., 2018). Further, numerous studies reveal that trustworthiness tags "backfire," i.e. make users believe health misinformation more, not less (Kennedy Jr., 2019; Shi et al., 2018; Shi et al., 2018). In the context of the COVID-19 pandemic, the tags triggered a "belief echo," manifested as skepticism of adequate mass COVID-19 immunization (Shi et al., 2018). A possible reason for such an unexpected reception of the trustworthiness tags was the asymmetrical nature of soft moderation: the mere exposure to health misinformation often generates a strong and automatic affective response, while the tag itself may not generate a response of an equal and opposite magnitude (Shi et al., 2018). This is because the trustworthiness tags often lack meaning, have ambiguous wording, or ask users to find health information themselves (e.g. learn more about COVID-19), which is cognitively demanding and time consuming (Shi et al., 2018).
### Health Misinformation on TikTok
TikTok, a social media platform for short-form videos, has rapidly grown in popularity in the last few years (Shi et al., 2018). A central feature of TikTok is the 'For You' page, a feed of algorithmically curated videos for the user based on the user's past browsing, viewing, and interactions with videos (Shi et al., 2018). Users can also search for videos based on hashtags and, in some cases, sounds. Roughly 75% of the global users on the platform are age 34 or younger, and every fifth person in the United States visits the platform on a daily basis (Shi et al., 2018).
TikTok's affordances for the viral spread of curated short-form videos and its demographic structure (Shi et al., 2018) made the platform particularly interesting for healthcare workers and science communicators creating educational content, both based on their speciality and as more general advice (Shi et al., 2018; Shi et al., 2018). A review of 331 videos with authoritative COVID-19 information (Shi et al., 2018) showed that anti-stigma/anti-rumor, disease knowledge, encouragement, personal precautions, recognition, and societal crisis
management drive platform engagement. Another review of 199 videos with information about chronic obstructive pulmonary disease showed that most of them have a satisfactory scientific background (Kurur et al., 2019). An analysis of obstetrician-gynaecologists' (OBGYNs) videos and the associated hashtags and commentary (Kur et al., 2019) revealed that these health "educators" not only convey authoritative sexual health information (Kur et al., 2019), but also use it to creatively debunk the related misinformation and misleading treatments. This practice of authoritative health communication as a form of misinformation diffraction also motivated proposals for teaching abortion using TikTok (Kur et al., 2019).
Other work on misleading health content on TikTok is scarce and overwhelmingly focuses on COVID-19 health misinformation (Sasan et al., 2019; Wang et al., 2019). Basch et al. (Basch et al., 2019) analyzed a sample of 72 videos containing the hashtag #covidvaccine and found that slightly more than half of them discouraged getting the vaccine by showing purportedly adverse vaccine reactions. Baumel et al. (Baumel et al., 2019) analyzed the 100 "most liked" videos under each of the #Pfizer and #Moderna hashtags and found that 44.2% and 28.8% of the comments conveyed misleading negative sentiment against these two COVID-19 vaccines, respectively. Baumel et al. (Baumel et al., 2019) also analyzed TikTok commentary related to masks' effectiveness in combating COVID-19 and found that 45.3% of commentary using the #MasksDontWork hashtag contained misinformation. Analyzing a sample of 1000 videos, Shang et al. (Shang et al., 2019) found that around 22.6% of the videos contained misleading COVID-19 content, but these were shared as much as ones with verified COVID-19 information.
Outside of the COVID-19 theme, O'Sullivan et al. (O'Sullivan et al., 2019) analyzed 27 TikTok videos containing pediatric urology claims and found that only 22.2% contained information that can also be found in official guidelines provided by the European Association of Urology (EAU). Xu et al. (Xu et al., 2019) reviewed 65 TikTok videos with the hashtag #prostatecancer and found that at least 48% of them contained explicit prostate cancer misinformation. A study by Zheng et al. (Zheng et al., 2019) found that the top 100 videos with the #acne hashtag had seriously misleading information about diagnosis and treatments.
The only study so far analyzing the way misinformation is moderated with warning labels on TikTok focused on COVID-19 content (Li et al., 2019). Ling et al. (Li et al., 2019) collected 41,000 videos that include 26 COVID-19-related hashtags in their description. Through a qualitative analysis, they found that TikTok likely moderates videos based on hashtags included in the description without an in-depth analysis of the content. Ling et al. learned that this moderation strategy led to a large false positive rate: about a quarter of the videos with a misinformation warning label did not contain content related to COVID-19. The study also found a 7.7% false negative rate, where videos with actual COVID-19 misinformation did not include warning labels.
## 3. Abortion Misinformation Context
Prior studies have shown that 70.1% of women obtain information regarding abortion from the Internet (Li et al., 2019). Abortion misinformation online thus takes many forms, and users generally have difficulties discerning inaccuracies in the related alternative narratives (Zheng et al., 2019). Bessett et al. (Bessett et al., 2019) presented 586 participants with five vignettes of abortion misinformation - safety, breast cancer, infertility, mental health risk, and legality of abortion - and found that only 4% of participants were able to correctly identify all five vignettes as misinformation while 73% pointed to two or fewer vignettes as inaccurate.
Common abortion misinformation topics that have been studied are the increased risk of breast cancer, future infertility, depression/anxiety, and post-traumatic stress (Zheng et al., 2019). This misinformation is spread through multiple sources, including state-mandated _"Women's Right to Know"_ documentation that providers must supply before a woman can consent to having an abortion (Besch et al., 2019), despite official guidance from the National Academies of Sciences, Engineering, and Medicine (Zheng et al., 2019). The Guttmacher Institute found that two states inaccurately include a link between abortion and an increased risk of breast cancer, 19 states link abortion to future infertility, and eight states link abortion to
negative emotional and psychological responses (Kern and Reader, 2017). Kerr and Reader (Kern and Reader, 2017) reported that abortion misinformation specifically related to an "abortion reversal pill" increased on Facebook from 20 interactions on June 23 to 3,500 interactions on June 24, 2022, the day after the Supreme Court decision to overturn _Roe v. Wade_. Godoy (2017) also reported that, following the Supreme Court ruling, Spanish-language abortion misinformation was deliberately designed to galvanize voters in Latino communities across the US.
Abortifacient herbs - purportedly providing the ability to induce a spontaneous miscarriage - form the majority of post-_Roe v. Wade_ misinformation (Kern and Reader, 2017). The toxicity of abortifacient herbs has been widely studied, as shown in Table 1, but there is little literature and few studies related to the topic of "herbal abortions" (Kern and Reader, 2017). Most existing studies were done in countries where abortion was not legal until recently. Abortion did not become legal in Uruguay until 2012 (Kern and Reader, 2017), for example, and a 2003 study found that the Montevideo Poison Centre had 86 cases of ingestion of herbal infusions with abortive intent from 1986 to 1999 (Kern and Reader, 2017). In the United States, misinformation surrounding "herbal abortions" in viral videos on TikTok increased dramatically after the legal right to abortion was overturned (Kern and Reader, 2017). The consequences of these viral misinformation videos have already brought several people to emergency rooms seeking critical lifesaving treatment, making active prebunking and debunking by qualified health professionals imperative (Kern and Reader, 2017).
## 4. Misinformation and alternative abortion narratives on TikTok
### Research Questions
The volume and output of abortion misinformation naturally prompted health experts, on their own, to dispel the inaccuracies related to herbal abortions, abortion pills, and abortion side-effects directly on social media (Kern and Reader, 2017). This effort is by all means needed, but is likely not sufficient to prevent an "outbreak" of unsafe decisions about people's reproductive health in the long run. An intuitive response, then, would be a systematic prebunking and debunking of abortion misinformation in coordination with moderation of related content on social media. Based on past experiences
| Common Name | Side Effects |
| --- | --- |
| **Black Cohosh** | Hepatotoxicity (Kern and Reader, 2017) |
| **Blue Cohosh** | Nicotinic toxicity: tachycardia, hypertension, headaches, abdominal pain, vomiting, muscle weakness and fasciculations, seizures, and coma (Kern and Reader, 2017) |
| **Eastern Daisy Fleabane** | Insufficient evidence on the safety and effectiveness as an abortifacient agent (Kern and Reader, 2017) |
| **Mugwort** | Vomiting, hypertension, confusion, respiratory distress, coma, and seizures (Kern and Reader, 2017) |
| **Parsley** | Abdominal pain, vomiting, genital hemorrhage, anemia, jaundice (Kern and Reader, 2017), internal bleeding, convulsions, and death (Kern and Reader, 2017) |
| **Pennyroyal** | Gastrointestinal upset, fainting, intestinal bleeding, seizures, hepatomegaly or injury, multiple organ failure, coma, cardiac arrest, and death (Kern and Reader, 2017; Kern and Reader, 2017) |
| **Rue** | Vomiting, liver damage, anemia, tremors, respiratory distress, multiple organ failure, and death (Kern and Reader, 2017) |

Table 1. Herbal Abortifacients and Side Effects
with health misinformation, however, such a response leaves little room for nuanced explorations of how people _engage on their own_ with abortion misinformation in the first place (Kumar et al., 2018).
As TikTok has been identified as a "hotbed of abortion misinformation" (TikTok, 2018), such an exploration would be beneficial to the ongoing response of removing and moderating misleading content on TikTok (Kumar et al., 2018), as it will provide knowledge on how users conceptualize, encounter, assess, and respond to abortion falsehoods. So far, such knowledge is scarce and only provides glimpses of how users conceptualize misinformation encountered on traditional social media platforms (Kumar et al., 2018). To address this knowledge gap, we set out to conduct a study that aimed to answer the following research questions:
1. **RQ1:**_Concept_: How do social media users conceptualize misinformation on TikTok (definition, origins, targets, and purpose)?
2. **RQ2:**_Encounters_: What encounters with abortion misinformation have users had so far on TikTok, and how did they deal with it?
3. **RQ3:**_Response_: What strategies do users employ in assessing and responding to various abortion misinformation content on TikTok?
### Dataset
As a preliminary step, we collected a dataset of abortion misinformation on TikTok covering the immediate period after the overturn of _Roe vs Wade_ up to the end of November 2022, as shown in Table 2. We leveraged the unofficial TikTok-API python library (Kumar et al., 2018) to scrape 8,226 videos, which we collected using a snowball sampling strategy, starting with scraping three initial hashtags, specifically #TikTokTaughtMe, #Healthcare, and #Abortion. Due to the limitations of the API, each search returned no more than 300 videos. To continue collecting hashtags, we searched for each of the hashtags associated with the three seed hashtags above, effectively performing a snowball sampling of TikTok's base of abortion-related short-form videos.
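A minimal sketch of this snowball strategy is given below; the `fetch_videos_for_hashtag` helper is hypothetical, standing in for the unofficial TikTok-API calls (whose exact method names vary across library versions), and the cap mirrors the roughly 300-video limit per search noted above.

```python
from collections import deque

def snowball_crawl(seed_hashtags, fetch_videos_for_hashtag, max_videos=8226):
    """Breadth-first snowball sampling over co-occurring hashtags.

    `fetch_videos_for_hashtag` is assumed to return an iterable of dicts,
    each carrying a 'hashtags' list (the real API caps each search at ~300).
    """
    seen_tags = set(seed_hashtags)
    queue = deque(seed_hashtags)
    videos = []
    while queue and len(videos) < max_videos:
        tag = queue.popleft()
        for video in fetch_videos_for_hashtag(tag):
            videos.append(video)
            # Enqueue every co-occurring hashtag we have not crawled yet.
            for co_tag in video.get("hashtags", []):
                if co_tag not in seen_tags:
                    seen_tags.add(co_tag)
                    queue.append(co_tag)
    return videos, seen_tags

# videos, tags = snowball_crawl(
#     ["TikTokTaughtMe", "Healthcare", "Abortion"], fetch_videos_for_hashtag)
```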
From here, we vectorized the hashtags using Scikit-Learn's CountVectorizer (Kumar et al., 2018) to create a dense boolean array of 1,754 tokens - character unigrams, bigrams, and trigrams - that appeared in 0.01-99% of hashtag samples. We then identified the hashtags closest to a given input hashtag using the Minkowski distance, or \(||x||_{p}\), where \(x\) is the difference between a searched vector and the saved vectors from our hashtag dataset, and \(p\) is a scalar that we selected. To identify \(p\), we reviewed kernel density estimate plots of the distances to several searched hashtags and identified the most expected bimodal distribution, with a smaller left distribution of relevant hashtags and a larger right distribution of less relevant hashtags. We settled on an ideal \(p\) of 2, which is \(||x||_{2}\), or Euclidean distance.
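A minimal sketch of this nearest-hashtag search follows, with a toy hashtag list standing in for the 17,606 unique hashtags from Table 2; the `closest_hashtags` helper is our illustrative name, and the vectorizer thresholds mirror the 0.01-99% document frequencies described above.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

hashtags = ["#abortion", "#abotion", "#anortion", "#abortionishealthcare", "#herbs"]

# Dense boolean presence/absence matrix of character uni-, bi-, and trigrams.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(1, 3),
                             min_df=0.0001, max_df=0.99, binary=True)
X = vectorizer.fit_transform(hashtags).toarray().astype(float)

def closest_hashtags(query, k=3, p=2.0):
    """Rank stored hashtags by Minkowski distance ||x||_p to the query."""
    q = vectorizer.transform([query]).toarray().astype(float)
    dists = np.linalg.norm(X - q, ord=p, axis=1)  # p = 2 is Euclidean distance
    return [(hashtags[i], float(dists[i])) for i in np.argsort(dists)[:k]]

print(closest_hashtags("#ab0rtion"))  # surfaces #abortion-like perturbations
```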
Using these hashtag representations, we were easily able to identify perturbations of hashtags that might otherwise be moderated by TikTok (Kumar et al., 2018). For example, searching the representations for hashtags like #selfharm highlighted perturbed spellings of that hashtag, and #abortion revealed #abotion and #anortion, as well as longer hashtags like #abortionishealthcare and perturbed variants of it. A
| Attribute | Value |
| --- | --- |
| Total Number of Posts | 8,226 |
| Number of Hashtags | 77,880 |
| Unique Hashtags | 17,606 |

Table 2. TikTok Abortion Hashtag Dataset
search with regular terms and hashtags like "#abortifacient" indeed does not present any videos tagged as such, but following our dataset analysis above, we discovered that a small change in the spelling - #abOrtifacient, for example - unveils a lot of abortion videos that promote abortifacient solutions for miscarriage.
Many of these videos were not necessarily tagged with the exact search hashtag and may even be tagged with the original "#abortifacient." One could argue that an ordinary user might not know what hashtags exist, but from the video posting functionality, a list of suggested hashtag completions provides additional variations and the number of videos with each variation. As such, even an incomplete, suggestive hashtag search on TikTok brings up seemingly obscured tags for abortion misinformation, as exemplified in Figure 1. Using these built-in features, we quickly identified dozens of videos that described methods for "at-home" abortions as candidates for the misleading claims we wanted to test in our study.
### Sample
The analysis of the information in our dataset, given in section 7, helped us identify the main themes of abortion misinformation content on TikTok in the aftermath of the _Roe vs Wade_ decision. As we were interested in better understanding how actual users deal with this content, we obtained approval from our Institutional Review Board (IRB) to conduct an exploratory survey (the questionnaire is provided in the Appendix) with a sample of TikTok users ages 18 and above in the United States. We used Prolific for recruitment and, after we consolidated the responses we obtained through Qualtrics, we ended up with a sample of 60 participants in total. The responses were anonymous, and the survey allowed users to skip any question they were uncomfortable answering; completing the survey took
Figure 1. The images above demonstrate how, while a search for "#abortion #herbs" does not return any videos due to TikTok's guidelines for harmful content (Sundhi et al., 2018), a search for "#abotion #herbs" not only returns videos with similar typos but also returns videos with the original "#abortion #herbs" spelled correctly. Additionally, when posting a video, TikTok suggests additional hashtags, any of which can be searched to find additional hashtags, like #abOrtionishelathcare.
around 25 minutes. Participants were offered a compensation rate of $5 each. The demographic structure of our sample is given in Table 3.
### Method and Analysis
Participants completed an open-ended qualitative survey through Prolific that presented a list of questions and a predetermined set of TikTok videos we selected from our dataset. We singled out seven videos in total that contained abortion misinformation already debunked by the time of our study [103]. We used the input on general abortion misinformation from Table 1, information from authoritative verifiable sources [69], and verbatim misinformation terms from two fact-checking articles [23; 109] as selection criteria for videos promoting the use of herbal abortifacients. We also chose to focus only on "at-home abortion remedies" as explicit health misinformation [101] and not alternative abortion narratives involving "religion" or "political contextualization," to avoid bias and expressive responding [10].
We wanted to have as many varying modalities, formats, and creators in our selection as possible; therefore, we selected two videos that contained only text and five videos featuring the creator of the content. Six of the selected videos were created by women and one was created by an individual who identifies as transgender in their profile. The creators of the videos were ethnically diverse, consisting of individuals who identify in their profile or other videos as White, Black, North American Indigenous, and Native Hawaiian or Pacific Islander. We must note that TikTok, in response to the increased scrutiny about its lax handling of health misinformation [51], claims to regularly remove misleading abortion content, so there is a possibility that our dataset was considerably restricted for our particular selection.
Participants were asked to describe their experience with encountering misinformation on TikTok. Next, we asked participants to provide their opinions on where misinformation comes from, what purpose misinformation serves on social media, and who creates and benefits from it. Participants were then asked to further elaborate on how they determine that a certain social media post is misinformation, and what tactics they employ when dealing with misinformation.
In reporting the results, we utilized verbatim quotations of participants' answers as much as possible, emphasized in "_italics_" and with a reference to the participant as either **PXYZ#** or **[PXYZ#]**, where **P** denotes **participant**, **X** denotes the **number** of the participant in the sample (ordered by the time of participation), **Y** denotes their **gender** identity (**F** - female, **M** - male, **NC** -
\begin{table}
\begin{tabular}{|c c c c c c|} \hline \multicolumn{6}{|c|}{**Gender**} \\ \hline \multicolumn{2}{|c}{**Female**} & \multicolumn{2}{c}{**Male**} & \multicolumn{2}{c|}{**Non-cisgender**} \\ \multicolumn{2}{|c}{44 (73.33\%)} & \multicolumn{2}{c}{15 (25\%)} & \multicolumn{2}{c|}{1 (1.67\%)} \\ \hline \multicolumn{6}{|c|}{**Age**} \\ \hline
**[18-20]** & **[21-30]** & **[31-40]** & **[41-50]** & **[51-60]** & **[61+]** \\
5 (8.33\%) & 33 (55\%) & 12 (20\%) & 6 (10\%) & 4 (6.67\%) & 0 (0\%) \\ \hline \multicolumn{6}{|c|}{**Political leanings**} \\ \hline
\multicolumn{2}{|c}{**Left**} & **Moderate** & **Right** & \multicolumn{2}{c|}{**Apolitical**} \\
\multicolumn{2}{|c}{38 (63.33\%)} & 14 (23.33\%) & 5 (8.33\%) & \multicolumn{2}{c|}{3 (5\%)} \\ \hline \multicolumn{6}{|c|}{**Highest Level of Education Completed**} \\ \hline
\multicolumn{2}{|c}{**High school**} & \multicolumn{2}{c}{**College**} & \multicolumn{2}{c|}{**Graduate**} \\
\multicolumn{2}{|c}{12 (20\%)} & \multicolumn{2}{c}{43 (71.67\%)} & \multicolumn{2}{c|}{5 (8.33\%)} \\ \hline \end{tabular}
\end{table}
Table 3. Sample Demographic Distribution
non-cisgender), **Z** denotes their **political** identity (**L** - left-leaning, **M** - moderate, **R** - right-leaning; **A** - apolitical), and # denotes the upper bound of their **age bracket**. For example, **P16FL30** refers to **participant 16**, **female**, **left-leaning**, **age bracket [21-30]**.
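For readers who wish to process the quoted transcripts programmatically, the snippet below sketches one way to decompose such a label. It is only an illustration under our own naming assumptions (the regular expression, function, and field names are ours, not part of the study materials), and it deliberately tolerates labels whose political code falls outside the legend.

```python
import re

# Hypothetical parser for participant labels such as "P16FL30".
# The pattern mirrors the convention described above: "P" + participant
# number + gender (F/M/NC) + political identity (L/M/R/A) + age-bracket
# upper bound.
LABEL = re.compile(r"^P(?P<num>\d+)(?P<gender>NC|F|M)(?P<politics>[A-Z]?)(?P<age>\d+)$")

GENDER = {"F": "female", "M": "male", "NC": "non-cisgender"}
POLITICS = {"L": "left-leaning", "M": "moderate", "R": "right-leaning", "A": "apolitical"}

def parse_participant(label: str) -> dict:
    """Split a participant label into its encoded fields."""
    match = LABEL.match(label)
    if match is None:
        raise ValueError(f"not a participant label: {label!r}")
    politics = match.group("politics")
    return {
        "number": int(match.group("num")),
        "gender": GENDER[match.group("gender")],
        # Fall back to the raw letter if it is not in the legend.
        "politics": POLITICS.get(politics, politics or None),
        "age_bracket_upper": int(match.group("age")),
    }

print(parse_participant("P16FL30"))
# {'number': 16, 'gender': 'female', 'politics': 'left-leaning', 'age_bracket_upper': 30}
```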
## 5. Misinformation conceptualization on TikTok
### Definition
First, we asked our participants to define misinformation in their own words. Exactly half the sample provided a definition that did not include any intention in the production or dissemination of questionable content, along the lines of the _misinformation_ definitions outlined in (Krishnan et al., 2017). All of these participants conceptualized falsehoods through the _inherently fallacious information_ mental model of misinformation on social media described in (Saksh et al., 2017). For example, **P31FL30** defined it as "_untrue/unsubstantiated statements being presented as fact_," **P13FR30** as "_incorrect, skewed, or communicated incorrectly_," and **P27MM20** as simply "_false information_." In this half of the sample, 18 (60%) of the participants identified as left-leaning, 3 (10%) as right-leaning, 8 (26.67%) as moderate, and one (3.33%) as apolitical.
The other half of our sample expressed intentionality as an additional quality of misinformation, _de facto_ referring to _disinformation_ instead (Krishnan et al., 2017). Using the folk models of misinformation on social media (Saksh et al., 2017), two-thirds, 20 (66.67%), of these participants conceptualized misinformation as _out-of-context narratives_; for example, **P36FL40** stated that misinformation "_is intentionally either by using wrong information or leaving out context, misleading to the people reading it_." The next most popular folk model, held by 6 (20%), was _external propaganda_, and these participants pointed to the "_intentional spread of misleading information to stir an emotion or to further promote a system, product, or person_" **P5FL30**. The remaining 4 (13.33%) participants conceptualized misinformation as _political (counter)argumentation_, pointing to cases where "_a journalist or news source provides false information to persuade you in one political direction_" **P47MR30**. Here, the older the participants were, the more they saw an intention in the spread of misinformation. For example, **P50FL60** placed the intentionality where "_videos get edited and changed, and convincing memes with cherry picked facts are created as part of misinformation that has been used extensively in politics and the pandemic_."
### Origins
Three quarters, or 45, of the participants in our sample felt that misinformation on TikTok came directly from a _creator of the TikTok video_. In the view of **P20FL40**, misinformation on TikTok is brought by "_people who are trying to gain clout, or get numerous views_." **P19FL30** went further and reckoned that "_misinformation can come from the creators' own consumption of misinformation, or a creators' misinterpretation of information, or a creators' attempt to sell something or an idea to influence others/gain attention_." The creators of TikTok content, in the view of **P27MM20**, "_are people who doesn't [sic] care about misinformation but more about views and attention_".
The remaining 25% pointed to the _"other"_ side of a polarized debate or issue, i.e. "_people on both the left and right who want to increase views, as well as institutions and political groups with agendas to create misinformation_" **P50FL60**. **P2FL50**, seeing misinformation as an out-of-context narrative, felt that it "_comes from a variety of places such as Republicans, Russia or China_" and **P12FL60** seconded the impression of external interference "_directly from a bad-actor or a big-mouth source such as Fox News_." **P50FL60**, using the political (counter)argumentation model of misinformation, directly accused the GOP of "_catering misinformation to low information and low IQ people who will believe anything they get told because GOP knows they can't fool the science/college crowd_."
### Targets
Half of our sample felt that the targets of misinformation are "_vulnerable people who do not know how to research and form their own opinions_" [**P7FM30**], specifically "_Younger people, older people, or more easily-influenced crowds, which are the people who are not likely to fact check a claim_" [**P40FL30**]. **P50FL60** expanded this list to include people "_in areas that have low instances of college education and high poverty areas; who are lower income and very religious; who are already suffering themselves and see anyone who gets ahead as a threat to them; and who have very little access to help so they resent people_." The other half felt that "_anyone and everyone can be a target of misinformation_" [**P44FL30**]. **P5FL30** described the targets of misinformation on TikTok this way: "_All ages, races, sexualities, and backgrounds are targets of misinformation because the algorithm brings it to your 'For You' page_".
### Purpose
20 (33.33%) of our participants explicitly indicated that the purpose of misinformation on TikTok is profit. The profit was assigned either to "_politicians and large corporations who either make money off what evolves from misinformation campaigns or who benefit politically and financially from legislation enacted when bad actors are elected to government offices_" [**P12FL60**] or to content creators themselves, as "_they get paid from the views, and there's probably some devout followers to these people which give them a recurring income from just watching the videos every time they post_" [**P15ML30**]. Implicit gains, such as "_engagement boosts_" [**P1MR50**] that ultimately lead to profit per the TikTok participation model [50], were the purpose that 14 (23.33%) of our participants identified behind the spread of misinformation on TikTok. They identified "_creators and influencers looking to gain followers and views_" [**P1MR50**] and "_people who doesn't [sic] care about misinformation but more about views and attention_" [**P27MM20**].
Misinformation as political ammunition was the purpose identified by 13 (21.67%) of our participants. **P31FL30** indicated that the purpose of misinformation is "_political influence feeding into distrust of science and government_" and **P59FL30** felt the misinformation on TikTok is brought "_to divide people further and continue to build up the conservative party_." 5 (8.33%) of participants felt that misinformation was there to "stir the pot" on TikTok, i.e. a "_foreign agency targeting the US or groups within the US that want superiority_" [**P41FM40**]. According to **P38FL30**, "_trolls make up a portion of deliberate misinformation_," creating videos that, in the view of **P5FL30**, are meant to get "_a rise out of someone_." A subgroup of 8 (13.33%) participants indicated that "_no one_" [**P10MA40**] benefits from misinformation on TikTok, both in the short run and "_ultimately, in the long-run_" [**P58FM30**].
## 6. Abortion Misinformation Encounters on TikTok
### Encounters
Exactly half of the sample indicated they had seen abortion misinformation on TikTok prior to the study. The misleading content mostly consisted of "_videos like these claiming at home abortion remedies_" [**P25FL20**] but also included "_misinformation rooted in religion - churches show videos of full term pregnancies being ripped apart by limbs from wombs_" [**P50FL60**]. Participants also indicated they were seeing politically contextualized abortion narratives "_on both sides of the political spectrum_" [**P7FM30**] to either ban or allow "_birth control as well as contraceptives provided by the government_" [**P24FR20**]. Participants further indicated that they see "_people who are Pro Life on TikTok that spread all kinds of rumors and lies about abortion all the time_" [**P5FL30**], "_mostly to cause fear_" [**P49FI30**].
### Response
About half of the participants indicated that abortion misinformation invoked negative emotions in them. Participants stated that the "_videos in all just made me sad_" [**P6FA30**], that they were "_very disturbed by this abortion misinformation_" [**P13FR30**], and "_disappointed that people are making content like this_" [**P15ML30**]. Some of them said that their "_first response was shock that someone would even think this_" [**P24FR20**], and that it is "_worrying that this type of misinformation is being shared because it can be dangerous_" [**P39FL30**]. Participants in our sample felt "_anger, resentment_" [**P42FL30**] and "_disgust that people will believe anything they see and try it_" [**P56FM50**]. The other half indicated they were mostly "_intrigued and wanted to know if anything in these herbal videos was true or not_" [**P41FM40**].
In response to abortion misinformation content on TikTok, our participants said they "_did not engage because it only further spreads the misinformation_" [**P8FL20**] or "_just ignored it_" [**P52MM40**]. Some participants indicated they took action on the video by doing "_research on abortion and take what I gather on TikTok with a grain of salt unless it is information spread by an actual health professional_" [**P26FL30**]. There were also participants that "_blocked the creators that spread the misinformation_" [**P20FL40**], "_reported these videos for spreading false information_" [**P49FI30**], or "_Liked comments pointing out the abortion falsehoods_" [**P31FL30**].
## 7. Response to abortion misinformation on TikTok
### Post #1
The first post we presented to participants was labeled by the creator with the hashtags #roevwade, #abortion, #herbs, #knowyourherbs, #herbalist, #womensrights, #fightbackwithherbs, #herbalism, #homesteadinglife, #michigan, #crazyplantlady, and #farmlife. This post discusses the use of _Eastern Daisy Fleabane_ root [109]. The use of fleabane as an abortifacient herbal tea was found misleading as it can "have unpredictable effects" and there is no evidence that this root can induce a miscarriage, as shown in Table 1. The screenshot of the post as it appeared in the standard TikTok app is shown in Figure 2.
#### 7.1.1. Assessment
We broke down the results of the participants' evaluation into two groups based on their baseline mental model of misinformation on TikTok that we outlined in section 5 above. The assessment results of the first TikTok video in our study are given in Table 4. Out of the 30 participants who thought misinformation is disseminated on TikTok without intent, 14 were randomly selected to assess the first post with abortion misinformation. Three of them thought the video was indeed misinformation, stating that "_this is likely misinformation, as a claim such as this likely has very little evidence to substantiate it_" [**P46MM30**]. A surprising 50% of the participants in this group thought the video was not misinformation, feeling that "_it is true because she sound [sic] like she knows what she is talking about_" [**P8FL20**]. Four participants in this group were unsure if this video was misinformation, worried that "_they state some historical context not verified in any way_" [**P42FL30**].
Out of the other 30 participants who saw misinformation being spread with intent on TikTok, 12 were randomly shown the first post (the random selection was done by the Qualtrics survey software we used, leading to slightly unbalanced groups). Four of them confirmed the video contains falsehoods, with quite a verbose justification: "_This is misinformation. It is referencing clearing your liver which immediately points to something more like a Multi-Level Marketing (MLM) product; She's using the same terminology that essential oil salespeople use; These people can never name what toxins, etc you are eliminating because that is not actually happening_" [**P50FL60**]. Three participants thought otherwise, believing this post was not misinformation because "_the content creator seems
informed_" **[P10MA40]**. Five participants were unsure, because they had "_no idea whether the claim in the video is based in truth or not_" **[P32NC50]**.
#### 7.1.2. Response
Participants were also asked to describe what action they would take for each post, with their actions given in Table 5. Of the participants that thought misinformation is disseminated on TikTok without intent, six (42.86%) said that they would "_scroll past without interacting with the post_" **[P42FL30]**. Participants in this group were equally likely to "_verify said information to see if it is accurate_" **[P20FL40]** and "_would like the post_" **[P8FL20]**. Only two (14.29%) participants in this group said they would "_unfollow this person_" **[P56FM50]** or "_report this video for dangerous activities_" **[P24FR20]**. Of the participants who saw misinformation being spread with intent on TikTok, five (41.67%) said they "_would just scroll past it_" **[P10MA40]** and five (41.67%) said they would "_look at the comments and then conduct my own personal research_" **[P58FM30]**. There were also two (16.67%) participants that said they "_would block it_" **[P4MM50]** or "_would report this_" **[P50FL60]**, and no participants from this group said they would like the post.
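Throughout Tables 4-17, the percentages are simple within-group shares of the participants who viewed each post. As a minimal illustration (the variable names are ours, not from the study), the counts for the "no intent" group on Post #1 reproduce the figures in Table 4:

```python
# Minimal sketch: within-group percentages as reported in Tables 4-17,
# here for the "no intent" group that assessed Post #1 (14 viewers).
counts = {"Yes": 3, "No": 7, "Unsure": 4}
total = sum(counts.values())  # 14

for label, n in counts.items():
    print(f"{label}: {n} ({100 * n / total:.2f}%)")
# Yes: 3 (21.43%)
# No: 7 (50.00%)
# Unsure: 4 (28.57%)
```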
\begin{table}
\begin{tabular}{|c c c|} \hline \multicolumn{3}{|c|}{**Misinformation (no intent) [viewed: 14 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\
3 (21.43\%) & 7 (50\%) & 4 (28.57\%) \\ \hline \multicolumn{3}{|c|}{**Disinformation (intent) [viewed: 12 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\
4 (33.33\%) & 3 (25\%) & 5 (41.67\%) \\ \hline \end{tabular}
\end{table}
Table 4. Is Post #1 Misinformation?
Figure 2. TikTok Post #1
### Post #2
The second post that we presented to the participants was labeled by the creator with the hashtags #roevwade, #women, #health, and #holistic. This post discusses the use of an abortion tea containing multiple herbs, including rue, which has dangerous side effects noted in Table 1 and health advisories warn that it "can lead to death for both the mother and baby" [67]. The screenshot of the post as it appeared in the standard TikTok app is shown in Figure 3.
#### 7.2.1. Assessment
The participants who thought that misinformation is disseminated on TikTok without intent were almost evenly split regarding this post, as shown in Table 6. Six (46.15%) said they "_don't believe this is misinformation_" [**P33FL30**] and five (38.46%) said they "_do think this post is misinformation_" [**P13FR30**]. The remaining two (15.38%) participants said they "_cannot confirm if this post is misinformation_" [**P17FR30**]. Participants who saw falsehoods as disinformation on TikTok did not see this post as inaccurate, as only two (15.38%) of them thought the post is "_misinformation because it is suggesting that an herb blend is a safe and effective way to self administer
Figure 3. TikTok Post #2
\begin{table}
\begin{tabular}{|c c c c c|} \hline \multicolumn{5}{|c|}{**Misinformation (no intent) [viewed: 14 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
6 (42.86\%) & 3 (21.43\%) & 1 (7.14\%) & 1 (7.14\%) & 3 (21.43\%) \\ \hline \multicolumn{5}{|c|}{**Disinformation (intent) [viewed: 12 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
5 (41.67\%) & 5 (41.67\%) & 1 (8.33\%) & 1 (8.33\%) & 0 (0\%) \\ \hline \end{tabular}
\end{table}
Table 5. What action would you take on Post #1?
an abortion_" **[P36FL40]**. Five participants (38.46%) said they "_don't think this is misinformation because the post provides full context on the information it was trying to provide_" **[P40FL30]**. Six participants, or (46.15%) were unsure because they "_don't have enough knowledge to know if it's misinformation_" **[P59FL30]**.
#### 7.2.2. Response
The 13 participants who saw no intent behind misinformation on TikTok indicated they would perform a wide variety of actions for this post, as shown in Table 7. Four (30.77%) of them said they would "_potentially do some research into the other herbs that they are not familiar with_" **[P17FR30]**, three (23.08%) said they "_would move past it_" **[P8FL20]**, two (15.38%) said they "_would most likely block this account_" **[P13FR30]**, and two (15.38%) said they "_would probably like this post_" **[P31FL30]**. The two participants who said they would block this video indicated they felt very strongly about the content, as it is "_definitely misinformation because there is scientific evidence that home remedies for this almost never work_" **[P25FL20]** and "_it was extremely uncalled for in suggesting abortion tea; If this was in my TikTok feed I would [also] report this video_" **[P24FR20]**.
None of the participants who saw misinformation being spread with intent on TikTok said they would block or report this post. The most popular responses were that six (46.15%) said they would "_simply move past the video and not pay no mind to it_" **[P35FL30]** and five (38.46%) said they would fact-check the content. **P5FL30** explained that they would need "_a Gynnocologist [sic] [to] validate or debunk this information before I could trust it. This would include discussing the ingredients, linking relevant papers and studies, and reminding people to go see their doctor for personal medical information._" Two (15.38%) of the participants indicated they have "_done research on natural remedies for several health issues and if it was on my feed I may comment or like it_" **[P6FA30]**.
\begin{table}
\begin{tabular}{|c c c c c|} \hline \multicolumn{5}{|c|}{**Misinformation (no intent) [viewed: 13 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
3 (23.08\%) & 4 (30.77\%) & 2 (15.38\%) & 2 (15.38\%) & 2 (15.38\%) \\ \hline \multicolumn{5}{|c|}{**Disinformation (intent) [viewed: 13 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
6 (46.15\%) & 5 (38.46\%) & 0 (0\%) & 0 (0\%) & 2 (15.38\%) \\ \hline \end{tabular}
\end{table}
Table 7. What action would you take on Post #2?
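\begin{table}
\begin{tabular}{|c c c|} \hline \multicolumn{3}{|c|}{**Misinformation (no intent) [viewed: 13 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\
5 (38.46\%) & 6 (46.15\%) & 2 (15.38\%) \\ \hline \multicolumn{3}{|c|}{**Disinformation (intent) [viewed: 13 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\
2 (15.38\%) & 5 (38.46\%) & 6 (46.15\%) \\ \hline \end{tabular}
\end{table}
Table 6. Is Post #2 Misinformation?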
### Post #3
The third post that we presented to the participants appears to be a _ScienceDirect_ article indicating which herbs are abortifacients, but is actually a Google search result snippet with a caption stating "_learn your herbs_". This TikTok video was not labeled with any hashtags by the creator. Although the full _ScienceDirect_ article purportedly referenced in this video provides warnings about the toxicity risks of the herbs [87], the screenshot also has directions on the dosage for pregnancy termination centrally positioned in the overall text. The screenshot, shown in Figure 4, includes the pennyroyal and mugwort herbs as abortifacients.
#### 7.3.1. Assessment
Multiple participants in both groups observed that the TikTok post appeared to be from ScienceDirect. A few noticed it was a Google search result or that the "_source looks questionable_" [**P49FI30**] and reflected that in their responses. In the misinformation group, as shown in Table 8, six (40%) of the participants said they "_think this post maybe fake because the whole information is not given just the negative information that they want you to see_" [**P11ML40**]. Five (33.33%) leaned towards it not being misinformation, reasoning that "_this post is presenting information from ScienceDirect, which I consider to be a reputable source of science-based and factual information_" [**P44FL30**]. Four (26.67%) participants were unsure because "_This post may contain some degree of accuracy given that the source is somewhat credible, but herbs are often not studied enough for these claims to be made with a high degree of accuracy_" [**P46MM30**].
Most of the participants in the disinformation group were unsure if this post was misinformation. 10 (62.5%) of them said they "_honestly cannot tell if this is misinformation; I see that the website credited is science based, but without going to the website, I am really unsure_" [**P41FM40**]. The next most common response was that "_this tries to present as being medically accurate, but it's just a screenshot of Google search results, so I would not trust it on its face_" [**P12FL60**]. Only **P15ML30**
Figure 4. TikTok Post #3
said he did not "_know if it's misinformation; I'd assume it isn't because I'm not [sure] how someone could lie about a google search coming up_."
#### 7.3.2. Response
The participants in both groups, as indicated in Table 9, were mostly inclined to ignore this post, saying they "_wouldn't interact with such a post_" **[P3ML50]**, with **P29FL60** noting that the post "_was a standard google search so it may be true but again I would not respond and I would swipe on by._" The next most common response of the participants in the misinformation group was to "_report this creator_" **[P16FL30]**. The remaining two (13.33%) participants in this group said they would "_likely look at the comments to see what others have said_" **[P19FL30]**. Six (37.5%) of the participants in the disinformation group said they "_would probably do research on the specifics if it was something I desired to learn more about_" **[P40FL30]**. Only one participant in this group said they would block the post "_because it does not state the risks of using these_" **[P34MM30]**.
### Post #4
The fourth post presented to the participants was labeled by the creator with the hashtags #fyp, #prochoice, #roevwade, and #herbodyherchoice. The text overlay in the video also includes pennyroyal and mugwort under the heading "_unfortunately it's come down to this_". The screenshot of the post as it appeared in the standard TikTok app is shown in Figure 5.
#### 7.4.1. Assessment
As shown in Table 10, six (37.5%) participants in the misinformation group indicated that they "_do think that this post is misinformation_" **[P13FR30]**, five (31.25%) said it is "_not misinformation because it didn't advise a particular viewpoint or way of thinking_" **[P54FL30]**, and five (31.25%) said "_It's not clear that the post is misinformation because it's just a list of herbs, supplements, and foods_" **[P49FI30]**. The disinformation group of participants felt more strongly that this post was misinformation, with nine (69.23%) indicating "_this one is completely lacking context or supporting information, so, yes, I would qualify it as misinformation_" **[P12FL60]**. Three (23.08%) were "_not sure it's misinformation but it may cause unsafe things to occur if not posted with caution_" **[P35FL30]**. Only **P39FL30** said she "_doesn't really think it's misinformation_".
\begin{table}
\begin{tabular}{|c c c|} \hline \multicolumn{3}{|c|}{**Misinformation (no intent) [viewed: 15 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\
6 (40\%) & 5 (33.33\%) & 4 (26.67\%) \\ \hline \multicolumn{3}{|c|}{**Disinformation (intent) [viewed: 16 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\
5 (31.25\%) & 1 (6.25\%) & 10 (62.5\%) \\ \hline \end{tabular}
\end{table}
Table 8. Is Post #3 Misinformation?
\begin{table}
\begin{tabular}{|c c c c c|} \hline \multicolumn{5}{|c|}{**Misinformation (no intent) [viewed: 15 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
9 (60\%) & 2 (13.33\%) & 0 (0\%) & 4 (26.67\%) & 0 (0\%) \\ \hline \multicolumn{5}{|c|}{**Disinformation (intent) [viewed: 16 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
9 (56.25\%) & 6 (37.5\%) & 1 (6.25\%) & 0 (0\%) & 0 (0\%) \\ \hline \end{tabular}
\end{table}
Table 9. What action would you take on Post #3?
#### 7.4.2. Response
When asked to describe the action they would take on this post, 8 (50%) participants in the misinformation group said they "_would just ignore the video and move on_" [**P52MM40**], as shown in Table 11. Three (18.75%) participants said they "_would probably read through the comments and either search for similar videos on TikTok or Google it_" [**P33FL30**], another three (18.75%) said they would "_report this creator because their information is dangerous_" [**P16FL30**], and the remaining two (12.5%) said they would "_most likely block this user_" [**P13FR30**]. Almost all of the participants in the disinformation group would ignore the fourth post: 11 (84.62%) said they "_would probably scroll past this without interacting because even if it is truth it isn't helpful or informative_" [**P48FL30**]. The remaining two participants said they "_would try to fact check as best as I could_" [**P6FA30**] and "_report this post_" [**P12FL60**]. No participants in this group said they would block this post, and no participants in either group said they would like the post.
\begin{table}
\begin{tabular}{|c c c|} \hline \multicolumn{3}{|c|}{**Misinformation (no intent) [viewed: 16 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\
6 (37.5\%) & 5 (31.25\%) & 5 (31.25\%) \\ \hline \multicolumn{3}{|c|}{**Disinformation (intent) [viewed: 13 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\
9 (69.23\%) & 1 (7.69\%) & 3 (23.08\%) \\ \hline \end{tabular}
\end{table}
Table 10. Is Post #4 Misinformation?
Figure 5. TikTok Post #4
### Post #5
The fifth post that we presented was labeled with the hashtags #themoreyouknow, #parsley, #pessaryinsertion, #pessary, #fertilityherbs, #fertilityeducation, #plantsheal, #herbalist, #apothecarycabinet, #hippocraticoath, #ayurvedic, and #hippocratesfatherofmedicine. This creator has a series of posts that contain potential "abortion inducers" and emmenagogues (herbs which purportedly stimulate menstruation). The post explains how to insert parsley into the cervix or make it into a tea. In 2018, a woman in Argentina died from attempting to induce a miscarriage while utilizing this method, which "stimulates blood flow in the uterus and can lead to massive internal bleeding and convulsions" [21]. The screenshot of the post as it appeared in the standard TikTok app is shown in Figure 6.
#### 7.5.1. Assessment
As shown in Table 12, most participants from both groups felt this post was misinformation. Twenty-two participants in total were randomly selected to assess this video. The groups were evenly distributed. Six (54.55%) participants in the misinformation group stated
\begin{table}
\begin{tabular}{|c c c c c|} \hline \multicolumn{5}{|c|}{**Misinformation (no intent) [viewed: 16 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
8 (50\%) & 3 (18.75\%) & 2 (12.5\%) & 3 (18.75\%) & 0 (0\%) \\ \hline \multicolumn{5}{|c|}{**Disinformation (intent) [viewed: 13 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
11 (84.62\%) & 1 (7.69\%) & 0 (0\%) & 1 (7.69\%) & 0 (0.00\%) \\ \hline \end{tabular}
\end{table}
Table 11. What action would you take on Post #4?
they "_feel this post is misinformation because it does not relay the effects of placing an unsterile object inside your body, which is what the video is promoting/suggesting_" [**P17FR30**] and that "_this post has no credible citations to support the claims it is making_" [**P14FL30**]. Three (27.27%) of the participants who thought misinformation is disseminated on TikTok without intent felt it was not misinformation because "_It's always the consumers [sic] responsibility to do their own research and make their own choice(s)_" [**P23FL30**]. Two (18.18%) participants in this group said they "_have no idea_" [**P21FA40**]. Eight (72.73%) participants in the disinformation group thought "_it is misinformation because the account is not stating what the benefits of parsley even are, nor where they gathered this information_" [**P7FM30**]. Three (27.27%) said they "_suspect this is misinformation but do not know for sure_" [**P36FL40**]. No participants in the disinformation group thought the post wasn't misinformation.
#### 7.5.2. Response
The misinformation group participants, as shown in Table 13, primarily said they "_would ignore this post_" [**P14FL30**]. Two (18.18%) participants said they would "_report the video as unsafe due to the major consequences that doing what the video says to do could ensue on someone's health_" [**P17FR30**], one (9.09%) said they "_would like to learn more and can always compare information on google_" [**P11ML40**], and one (9.09%) said they "_would most likely block this account_" [**P13FR30**]. Participants in the disinformation group were mostly split between ignoring and fact-checking the post, saying they "_wouldn't respond and just keep scrolling_" [**P26FL30**] or they "_would look up the benefits of parsley to see if the post has an validation_" [**P6FA30**]. Two (18.18%) participants said they would "_typically report this post_" [**P7FM30**] and **P38FL30** said she would "_block this account_." No participants in either group said they would like this video.
### Post #6
The sixth post that we presented to participants was labeled with the hashtags #greenscreen, #womb, #roevwade, #pregnancyrelease, #abortion, #herbal, #safety, and #withlove. This post contains information on which herbs to use to perform a "_pregnancy release_". The herbs in the video include
\begin{table}
\begin{tabular}{|c c c c c|} \hline \multicolumn{5}{|c|}{**Misinformation (no intent) [viewed: 11 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
7 (63.64\%) & 1 (9.09\%) & 1 (9.09\%) & 2 (18.18\%) & 0 (0\%) \\ \hline \multicolumn{5}{|c|}{**Disinformation (intent) [viewed: 11 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
4 (36.36\%) & 4 (36.36\%) & 1 (9.09\%) & 2 (18.18\%) & 0 (0\%) \\ \hline \end{tabular}
\end{table}
Table 13. What action would you take on Post #5?
\begin{table}
\begin{tabular}{|c c c|} \hline \multicolumn{3}{|c|}{**Misinformation (no intent) [viewed: 11 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\
6 (54.55\%) & 3 (27.27\%) & 2 (18.18\%) \\ \hline \multicolumn{3}{|c|}{**Disinformation (intent) [viewed: 11 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\
8 (72.73\%) & 0 (0\%) & 3 (27.27\%) \\ \hline \end{tabular}
\end{table}
Table 12. Is Post #5 Misinformation?
rue, pennyroyal, and mugwort, which were also included in prior videos. The source cited in the video is indicated by the creator to be an article on herbal abortion from _we.riseup.net_. The screenshot of the post as it appeared in the standard TikTok app is shown in Figure 7.
#### 7.6.1. Assessment
As shown in Table 14, six (42.86%) of the misinformation group participants indicated that this post was misinformation. Participant **P19FL30** explained that "_the content creator in this video does tell viewers to do 'proper research' on the herbs she is promoting before using them_; _However, she notes that if 'something goes wrong' and medical attention is needed that the viewer does not need to tell their medical provider that they have been taking these herbs - assuming that the creator is not a medical professional, I would consider this to be misinformation_." Five (35.71%) of the participants said they weren't sure if this post is misinformation because "_there is an article supporting the statements made by the speaker and the speaker seems to have genuine intent; However the shared article is not in the form of a scientific verified and peer reviewed study_" [**P42FL30**]. The remaining three participants in this group said they "_don't believe this post is misinformation because she provides supporting facts that you can double check and they will confirm her point_" [**P54FL30**].
Six (35.29%) of the participants in the disinformation group felt "_this is not misinformation because she is well spoken, shows where she is getting the information from, states what it is for and what it does and also mentions to seek medical help_" [**P51FL40**]. Six (35.29%) were unsure because they "_can't confirm if this is misinformation or not because I am not familiar with the scientific data that either backs up or refutes this info_" [**P1MR50**], and five (29.41%) stated "_Yes, this post is misinformation; To start - the post screenshots a website: veriseup.net; That website is not a reliable medical source of information. I would scroll past_" [**P41FM40**].
#### 7.6.2. Response
Table 15 indicates that more than half of the participants from both groups said they "_would scroll past_" [**P41FM40**] or "_would simply ignore this post_" [**P1MR50**]. Of the participants in the misinformation group, four (28.57%) said they would "_encourage viewers to follow up with
Figure 7. TikTok Post #6
doing their own research and a notice that every woman's body still responds differently to methods_" **[P54FL30]**, two (14.29%) said they "_would simply block the user and move on_" **[P52MM40]**, and one (7.14%) said they "_would report this post_" **[P25FL20]**. The participants in the disinformation group did not say they would block this post. Three (17.65%) of them indicated they would "_read the full article this video links, and find other videos to make a better judgement on the trueness of this post_" **[P5FL30]** and two (11.76%) said "_there are natural remedies that are legitimate but telling people they can take an herb for contraception is dangerous; I would report this one_" **[P50FL60]**. No participants in either group said they would like this post.
### Post #7
We also selected a post that was soft moderated by TikTok [51] and contained a trustworthiness tag stating "_Participating in this activity could result in you or others getting hurt_." Post #7 was labeled by the creator with the hashtags #greenscreen and #roevwade. As this is the only explicitly moderated post, we presented it to all 60 participants. The post, shown in Figure 8, has overlay text that states "_they may be able to ban abortion but they can't ban these_" and has slides of images that include pennyroyal, juniper berries, vitamin C, mugwort, aspirin, and a wire hanger.
#### 7.7.1. Assessment
As shown in Table 16, 14 (46.67%) of the 30 participants in the misinformation group said the post was misinformation "_considering this person is unqualified to be spreading this information_" **[P16FL30]**. Eleven (36.67%) said the post was not misinformation because they "_don't agree with using some of those products for achieving the goal that the video is promoting, but all of those products shown, used in the correct way/dosage can result in abortion_" **[P17FR30]**. The remaining five (16.67%) participants indicated they would "_have to research before deciding if it was misinformation or not_" **[P44FL30]**.
In the disinformation group, 13 (43.33%) of participants indicated that "_this is misinformation as there is no evidence this is safe and effective means of self administering an abortion_" **[P36FL40]**. Nine (30%) of the participants said "_It is not helpful or informative; And the hanger is such a danger suggestion to put out there; I wouldn't consider this misinformation but it isn't helpful information_"
\begin{table}
\begin{tabular}{|c c c|} \hline \multicolumn{3}{|c|}{**Misinformation (no intent) [viewed: 14 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\
6 (42.86\%) & 3 (21.43\%) & 5 (35.71\%) \\ \hline \multicolumn{3}{|c|}{**Disinformation (intent) [viewed: 17 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\
5 (29.41\%) & 6 (35.29\%) & 6 (35.29\%) \\ \hline \end{tabular}
\end{table}
Table 14. Is Post #6 Misinformation?
\begin{table}
\begin{tabular}{|c c c c c|} \hline \multicolumn{5}{|c|}{**Misinformation (no intent) [viewed: 14 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
7 (50\%) & 4 (28.57\%) & 2 (14.29\%) & 1 (7.14\%) & 0 (0\%) \\ \hline \multicolumn{5}{|c|}{**Disinformation (intent) [viewed: 17 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
12 (70.59\%) & 3 (17.65\%) & 0 (0\%) & 2 (11.76\%) & 0 (0\%) \\ \hline \end{tabular}
\end{table}
Table 15. What action would you take on Post #6?
**[P48FL30]** and 8 (26.67%) stated "_I don't have the scientific data to comment on whether or not this is misinformation_" **[P1MR50]**.
#### 7.7.2. Response
As indicated in Table 17, 13 (43.33%) of the misinformation group and 18 (60%) of the disinformation group said they "_wouldn't do anything with the post_" **[P6FA30]** and they "_would not respond and just keep scrolling_" **[P26FL30]**. Eleven participants of the misinformation group (36.67%) and 8 of the disinformation group (26.67%) said they "_would even consider reporting it as harmful because some people are viewing it and are not aware of how to properly use the products_" **[P17FR30]** and "_because it's dangerous_" **[P16FL30]**. Three (10%) participants from each group said they would "_have to do more research_" **[P2FL50]** and they "_would read the comments, look at similar videos, or Google it_" **[P33FL30]**. Two (6.67%) of the participants from the misinformation group said they would like the post as they "_agree with this post and have heard these methods before_" **[P8FL20]**. One participant from each group said they would block the post because "_it's not humorous joking around about women feeling they are being pushed to use alternative and possibly
\begin{table}
\begin{tabular}{|c c c|} \hline \multicolumn{3}{|c|}{**Misinformation (no intent) [viewed: 30 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\ 14 (46.67\%) & 11 (36.67\%) & 5 (16.67\%) \\ \hline \multicolumn{3}{|c|}{**Disinformation (intent) [viewed: 30 participants]**} \\ \hline
**Yes** & **No** & **Unsure** \\ 13 (43.33\%) & 9 (30\%) & 8 (26.67\%) \\ \hline \end{tabular}
\end{table}
Table 16. Is Post #7 Misinformation?
Figure 8. TikTok Post #7
dangerous medicine_" [**P60ML40**] and "_it is very disturbing and should not be allowed to be showed to young women in particular_" [**P13FR30**].
## 8. Discussion
Our first research question aimed to uncover how social media users conceptualize misinformation on TikTok, where it originates, who its targets are, and what its purpose is. Misinformation, even within such a small sample as ours, invokes nuanced interpretations, as people do not always stick to the popular "fake news" association [58]. According to the participants in our sample, misinformation on TikTok might involve falsehoods only, but falsehoods could also be interspersed with biased interpretations, facts taken out of context, or emotion-provoking narratives [96]. The amalgamation of factual and inaccurate content was mostly a craft assigned to individual content creators, often for personal gain, but the "usual suspects" - political parties and foreign countries unfriendly to the United States - were not spared in disseminating misinformation on TikTok.
This finding suggests that while TikTok's affordances are fit for selling products or participating in challenges [50], they do not prevent political and foreign actors from adapting their agenda setting and information operations. In the past, misleading narratives and content migrated from fringe and alt-communities to the mainstream platforms [124], and the evidence from our study suggests that TikTok could be a future candidate destination. Our participants' view that many vulnerable people and easily-influenced crowds would be unlikely to step out from the "For You" page to check a claim adds to this impression, confirming previous studies that identified such users as the most sought-after targets of misinformation [19; 77]. The TikTok participation model adds an additional incentive to engage with misinformation as it provides immediate profit and long-term gains, according both to the participants in our sample and to studies exploring political and conspirative influence on TikTok [14; 65].
Our second research question narrowed our exploration to misleading abortion narratives. Given that we conducted our study after _Roe vs Wade_ was overturned, health misinformation regarding herbal abortifacients dominated the TikTok 'For You' pages of our participants. While some of them were simply intrigued about these "at-home remedies," there were participants that were disturbed, worried, angered, and disgusted that dangerously misleading content like this finds its way on the platform. This emotion-provoking response, though concentrated on a topic of limited polarization, is worrisome because this is precisely the response that foreign actors sought to incite in past polarizing debates on social media [90; 125]. These debates always involved politicization of the discourse, so the evidence that our participants also saw politically contextualized abortion narratives further suggests that TikTok might be the next health/political misinformation battleground [36], despite the impression of mostly apolitical participation on the platform so far [1].
\begin{table}
\begin{tabular}{|c c c c|} \hline \multicolumn{4}{|c|}{**Misinformation (no intent) [viewed: 30 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
13 (43.33\%) & 3 (10\%) & 1 (3.33\%) & 11 (36.67\%) & 2 (6.67\%) \\ \hline \multicolumn{4}{|c|}{**Disinformation (intent) [viewed: 30 participants]**} \\ \hline
**Ignore** & **Fact-check** & **Block** & **Report** & **Like** \\
18 (60\%) & 3 (10\%) & 1 (3.33\%) & 8 (26.67\%) & 0 (0\%) \\ \hline \end{tabular}
\end{table}
Table 17. What action would you take on Post #7?
But to substantiate such conjectures, one needs evidence on how actual users engage with explicitly health-related abortion misinformation in the first place. Uncovering this evidence was central to our study, and we attempted to gain as nuanced an insight as possible into how users actually deal with various abortion misinformation content on TikTok. Our findings indicate that a significant number of TikTok users do not see videos promoting herbal abortifacients as abortion misinformation, despite the scientific evidence to the contrary [9, 37, 41, 74, 89, 103]. Regardless of whether the video featured the creator themselves, contained explicit tags to the herbal abortifacients listed in Table 1, or was simply a textual post promoting these at-home remedies, many participants were unconvinced the posts were misinformation. The majority of them conceptualized misinformation as spreading falsehoods _without_ intent; those that saw intent behind the spread of falsehoods on TikTok, in general, were far more skeptical of the posts.
A deeper look into the responses of the participants who did not see misinformation in the selected posts reveals that it was sufficient for the creators to appear "like they know what they are talking about," "provide context," "seems [sic] informed," and are "well spoken" to make the misinformation believable. There was no particular demographic trend among these participants, as the posts were equally convincing to all gender, political, and age groups. The group of participants that called out the misinformation indicated both analytical and heuristic cues that could be helpful for the wider audience on TikTok to employ in dealing with misleading abortion content. For example, videos that "look like MLM promotions", "omit any side-effects" of a proposed treatment, "do not include credible citations", or whose "creator is unqualified for making any health-related claims" should be dismissed as abortion misinformation. All of these strategies have been proven to work against health misinformation in the past [80], so our findings strengthen the case for continuing the "inoculation" against false and misleading abortion narratives [58].
This is especially important in situations where many users, like the ones in our study, remain unsure whether these misleading videos are abortion misinformation or not. Our results suggest that these users lack knowledge on the safety and the effectiveness of herbal abortifacients, which in turn precludes them from decisively dismissing the misinformation. Educational interventions on TikTok regarding abortions have already been proposed [30], but the question is whether the algorithmic curation of the 'For You' page for TikTok users "interested" in alternatives includes them too. According to our findings, the undecided participants could easily be "nudged" with accurate scientific information, which is a strategy that worked for other types of misleading content [78].
It is reassuring to see a fact-checking trend even among a small sample like ours. Perhaps the affordances of TikTok encourage the majority of users to simply ignore many misleading posts, as many of the participants in our study did, and to avoid critically discerning the content [79]. But recalling that familiarity, i.e. repeated exposure to such videos, makes the content look truthful [80], this fact-checking trend might need to be augmented with better debunking and soft moderation efforts suggested to work for other health-related misinformation [95, 106]. Such a need is further corroborated by our findings suggesting that even in the presence of the current TikTok soft moderation labels on post #7, at least 30% of the entire sample still believed that the content is _not_ misinformation.
Equally important for consideration is that our results suggest a number of users are unafraid to block or report abortion misinformation. Prior evidence on flagging and reporting misleading content suggests that social media users might not opt for such actions so as to preserve interpersonal relationships and avoid social arguments, especially for issues that do not matter much to them [33]. However, reporting misinformation becomes a proactive endeavor when health-related misinformation is in question [15]. Our results confirm that this effect also takes place for abortion-related misinformation and occurs across all demographics, making a further case for an actionable framework of misinformation sharing and correction sharing on TikTok [123].
### Ethical Considerations
The purpose of our study was not to generalize to a population, but rather to explore the phenomenon of individuals dealing with abortion misinformation from a regular user perspective. While we added the participants' self-reported age, gender, and political leanings for more detailed reporting, we avoided providing definitive numbers accompanying the assessments and response strategies for each of the seven intervention posts with these demographics. Instead, we supported our findings with descriptive quotations from participants that convey the way ordinary users deal with misleading videos on TikTok, in the hope that the results can help to elevate the study of abortion misinformation on social media as a whole.
We acknowledge that there might be a potential risk of repeated exposure to abortion misinformation, i.e. an "implied truth effect" [80], as each participant saw four of the short-form videos. To mitigate this risk we explicitly pointed to the debunked information for the related "at-home" remedies and their associated harms. There is also a risk of oversimplification of our results, where the participants' conceptualization, assessment, and response to abortion misinformation, expressed on their behalf, might not represent the entirety of the strategies used to deal with misleading videos. We, of course, agree that users do employ other ways and means to deal with misleading videos, and we welcome every work that brings them to the fore. This will facilitate scientific work on abortion misinformation beyond simple oppositional responses [101], a corollary we also want to avoid when contextualizing our results for future anti-misinformation interventions.
### Limitations
Our research was limited in its scope to U.S. TikTok users and the state of abortion misinformation regarding at-home remedies that existed on TikTok in the period immediately after the overturn of _Roe vs Wade_ by the Supreme Court of the US. Inasmuch as we attempted to have a balanced, diverse, and inclusive set of short-form videos regarding herbal abortifacients, there certainly is, and will be, other similar content which our participants might assess and respond to differently than they did in our study. While the content of these videos was scientifically debunked and constituted misinformation during our study, we acknowledge that this might change in light of new scientific evidence. We are also limited by the state of the content moderation policies on TikTok that, together with public policy changes and new Supreme Court rulings, change and impact the relevance of our results; therefore, we exercise caution in considering these results in the narrow post-_Roe vs Wade_ context.
A limitation also comes from the sampling method and the use of a survey provider [85], as other users and other samples might provide results that differ from the ones we obtained, since there is little insight into general sampling and sample-related differences when users are broadly queried about abortion misinformation. By asking users directly about how they interact with abortion misinformation and misleading video content on TikTok, we got a wide variety of insights from a broad range of perspectives. We did not measure the efficacy of users' assessment approaches and response strategies for explicitly politicized abortion misinformation content, nor did we ask how users dealt with other abortion misinformation on other social media platforms. Short-form videos are a relatively new way of persuasive communication appealing to younger users, but traditional social media text, memes, and visceral images [5] might provoke different responses for a wider population of users. Therefore, we are careful to avoid any predictive use of our findings because TikTok's affordances might change in the future.
## 9. Conclusion
Abortion misinformation undoubtedly shapes the way people make reproductive decisions, and TikTok, a very popular social media platform, allows misleading content regarding "at-home" abortion remedies to reach wide audiences. Users possess the critical ability to assess, discern, and reject misleading and scientifically debunked abortion claims, but a worrying number of them are not ready to dismiss these alternatives for self-induced terminations of unwanted pregnancies in a post-_Roe vs Wade_ America. Time will tell whether this proclivity for abortion misinformation will remain, but meanwhile we hope that TikTok, the scientific community, and health authorities take our results as actionable insight that can prevent harmful outcomes of abortion misinformation.
|
2308.09219 | Learning in Cooperative Multiagent Systems Using Cognitive and Machine
Models | Developing effective Multi-Agent Systems (MAS) is critical for many
applications requiring collaboration and coordination with humans. Despite the
rapid advance of Multi-Agent Deep Reinforcement Learning (MADRL) in cooperative
MAS, one major challenge is the simultaneous learning and interaction of
independent agents in dynamic environments in the presence of stochastic
rewards. State-of-the-art MADRL models struggle to perform well in Coordinated
Multi-agent Object Transportation Problems (CMOTPs), wherein agents must
coordinate with each other and learn from stochastic rewards. In contrast,
humans often learn rapidly to adapt to nonstationary environments that require
coordination among people. In this paper, motivated by the demonstrated ability
of cognitive models based on Instance-Based Learning Theory (IBLT) to capture
human decisions in many dynamic decision making tasks, we propose three
variants of Multi-Agent IBL models (MAIBL). The idea of these MAIBL algorithms
is to combine the cognitive mechanisms of IBLT and the techniques of MADRL
models to deal with coordination MAS in stochastic environments from the
perspective of independent learners. We demonstrate that the MAIBL models
exhibit faster learning and achieve better coordination in a dynamic CMOTP task
with various settings of stochastic rewards compared to current MADRL models.
We discuss the benefits of integrating cognitive insights into MADRL models. | Thuy Ngoc Nguyen, Duy Nhat Phan, Cleotilde Gonzalez | 2023-08-18T00:39:06Z | http://arxiv.org/abs/2308.09219v1 | # Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
###### Abstract.
Developing effective Multi-Agent Systems (MAS) is critical for many applications requiring collaboration and coordination with humans. Despite the rapid advance of Multi-Agent Deep Reinforcement Learning (MADRL) in cooperative MAS, one of the major challenges that remain is the simultaneous learning and interaction of independent agents in dynamic environments in the presence of stochastic rewards. State-of-the-art MADRL models struggle to perform well in Coordinated Multi-agent Object Transportation Problems (CMOTPs), wherein agents must coordinate with each other and learn from stochastic rewards. In contrast, humans often learn rapidly to adapt to nonstationary environments that require coordination among people. In this paper, motivated by the demonstrated ability of cognitive models based on Instance-Based Learning Theory (IBLT) to capture human decisions in many dynamic decision making tasks, we propose three variants of Multi-Agent IBL models (MAIBL). The idea of these MAIBL algorithms is to combine the cognitive mechanisms of IBLT and the techniques of MADRL models to deal with coordination MAS in stochastic environments, from the perspective of independent learners. We demonstrate that the MAIBL models exhibit faster learning and achieve better coordination in a dynamic CMOTP task with various settings of stochastic rewards compared to current MADRL models. We discuss the benefits of integrating cognitive insights into MADRL models.
Key words and phrases: coordination problems, instance-based learning theory, multi-agent deep reinforcement learning, multi-agent instance-based learning
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal: Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
+
Footnote †: journal Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
## 1. Introduction

In the MAS literature, learning coordination is a cooperative problem wherein multiple agents are encouraged to work together towards a common goal by receiving an equally-shared reward [Matignon et al., 2012]. Moreover, based on the information available to the agents, they can be independent or joint-action learners [Claus and Boutilier, 1998; Gronauer and Diepold, 2021]. Independent learners are essentially non-communicative agents; they have no knowledge of the rewards and actions of the other agents. In contrast, joint-action learners are aware of the existence of other agents and can observe others' actions. Many practical control applications feature a team of multiple agents that must coordinate independently to achieve a common goal without awareness of other members' actions [Agogino and Tumer, 2012; Verbeeck et al., 2007]. One example of this problem is a team of rescuers splitting up in a network of underground caves wherein information exchange is unavailable. In the current research, we are interested in the coordination problem from the perspective of independent learners, who try to learn and adapt their actions to the other teammates' behavior without communicating during learning.
There are a number of challenges in modeling independent learners in multiagent cooperative tasks. One major problem is _simultaneous learning_ within a shared dynamic environment. If each agent selects what appears to be an optimal action for itself as an individual, and all agents act this way, the result would be poor joint actions (i.e., miscoordination). For instance, if a group of drivers departs from one location to the same destination and the driving navigator provides all drivers with the same path, miscoordination and a traffic jam could result. The navigator's strategies are therefore subject to change over time, and the other agents also have to adapt to this change. Consequently, the presence of other learning and exploring agents makes the environment non-stationary and dynamic from the perspective of the independent agents [Tan, 1993]. A number of studies have been proposed to cope with the non-stationarity problem [Foerster et al., 2017, 2017], and yet, these works were inspired by centralized learning where agents can communicate freely [Foerster et al., 2018], rather than by non-communicative agents.
To address this emergent challenge with non-communicative agents, numerous methods have been proposed in the literature of multiagent reinforcement learning (MARL), including distributed Q-learning [Lauer and Riedmiller, 2000], hysteretic Q-learning [Matignon et al., 2007; Matignon et al., 2007], and lenient Q-learning [Panait et al., 2006; Wei and Luke, 2016]. In general, distributed and hysteretic learning operate under an optimistic learning assumption: an agent selects an action under the expectation that the other agents also choose the best matching actions accordingly. Under this assumption, the agent favors positive results when playing actions. Alternatively, the lenient method rests on the assumption that agents are more lenient initially, when exploration is still high, and become less lenient over time. Simply put, the more agents explore the environment, the less lenient they become. This idea is formulated in the model through a parameter (i.e., a temperature value) that controls the degree of leniency. In particular, the model associates the selected actions with a temperature value that gradually decays with the frequency of state-action pair visits. In another thread of research on how to achieve effective coordination between agents in MAS, [Hao and Leung, 2013; Hao et al., 2014] proposed a social learning framework that focuses on achieving socially optimal outcomes with reinforcement social learning agents. Recently, the integration of deep learning into these traditional Reinforcement Learning (RL) methods led to new branches of research, including multi-agent deep reinforcement learning (MADRL) approaches [Gupta et al., 2017; Lanctot et al., 2017; Omidshafiei et al., 2017; Palmer et al., 2018].
Despite the advances in MADRL, these algorithms still perform poorly in tasks in which the environment is not stationary due to the dynamics of coexisting agents and the presence of stochastic rewards. Indeed, the presence of stochastic rewards can further complicate cooperation among agents, since agents are not always able to distinguish the environment's stochasticity from another agent's exploration [Claus and Boutilier, 1998]. Previous research showed that independent learners often exhibit poor coordination performance in complex or stochastic environments [Claus and Boutilier, 1998; Gronauer and Diepold, 2021; Lauer and Riedmiller, 2000a]. This is perhaps due to ambiguity in the source of stochasticity, since it can emerge from many factors, including the possible outcomes or their likelihood. Although prior studies have presented different approaches to cope with stochasticity in MAS [Matignon et al., 2007; Omidshafiei et al., 2017; Palmer et al., 2018], there is significant room left for characterizing sources of stochastic rewards and addressing their effects on the performance of independent agents in the context of fully cooperative MAS.
It is well-known that humans can adapt rapidly to non-stationary environments with stochastic rewards and learn to collaborate and coordinate effectively, while algorithms cannot capture this human ability [Lake et al., 2017]. Yet, when humans confront stochastic rewards with small-probability outcomes (henceforth referred to as rare events), the decision problem can become complex not only for algorithms but for humans too [Hertwig et al., 2004]. Such situations involving high-impact but rare events are, in fact, very common in real life (e.g., disasters, economic crashes) [Taleb, 2007]. In decision making research, dynamic cognitive models inspired by the psychological and cognitive processes of human decision making have been able to capture and explain the human tendency to underweight rare events [Gonzalez and Dutt, 2011; Hertwig, 2015]. This state of affairs motivates the main idea of our paper: constructing algorithms that combine the strengths of MADRL models and cognitive science models to develop agents that can improve their learning and performance in non-stationary environments with stochastic rewards and rare events.
Cognitive modeling has been developed to understand and interpret human behavior by representing the cognitive steps by which a task is performed. In particular, Instance-Based Learning Theory (IBLT) was developed to provide a cognitively-plausible account of how humans make decisions from experience and under uncertainty, through interactions with dynamic environments [Gonzalez et al., 2003]. IBLT has provided an accurate representation of human choice and broad applicability in a wide number of decision making domains, from economic decision making to highly applied situations, including complex allocation of resources and cybersecurity, e.g. [Gonzalez, 2013; Gonzalez et al., 2003; Hertwig, 2015]. Also, recent work combining a cognitive model based on IBLT with the temporal difference (TD) mechanism of RL (IBL-TD) has shed light on how to exploit the respective strengths of cognitive IBL and RL models [Nguyen et al., 2023]. The promise of IBL-TD and the disadvantages of current MADRL models under stochastic rewards lead to two questions: how would IBL-TD perform in the context of cooperative MAS, and how would MAS that exploit cognitive models compare to state-of-the-art MADRL approaches on fully cooperative tasks with stochastic rewards?
To that end, our first contribution is to propose novel Multi-Agent IBL (MAIBL) models that combine the ideas of cognitive IBL models with concepts used in MADRL models, namely decreasing \(\epsilon\)-greedy, hysteretic, and lenient learning, to solve fully cooperative problems from the perspective of independent agents. Next, we characterize different properties of stochastic rewards, including problems with rare events, aiming to understand their effects on the behavior of independent agents in fully cooperative Coordinated Multi-Agent Object Transportation Problems (CMOTPs), which have been used to test current MADRL algorithms [Palmer et al., 2018]. Finally, we evaluate the performance of our proposed MAIBL models against state-of-the-art approaches in the MADRL literature, including the decreasing \(\epsilon\)-greedy, hysteretic, and lenient Deep Q-Network algorithms [Palmer et al., 2018], on four scenarios of the CMOTP with varying stochastic rewards. We demonstrate that MAIBL can significantly outperform MADRL with respect to different evaluation metrics across the chosen scenarios.
## 2 Multi-agent deep reinforcement learning
In general, a fully cooperative multi-agent problem is formulated as a Markov game, which is defined by a set of states \(\mathcal{S}\) describing the possible configurations of all agents, a set of actions \(\mathcal{A}_{1},...,\mathcal{A}_{m}\), and a set of observations \(\mathcal{O}_{1},...,\mathcal{O}_{m}\) for each agent. \(\mathbf{A}=\mathcal{A}_{1}\times...\times\mathcal{A}_{m}\) is the joint action set. To select actions, each agent \(i\) uses its policy \(\pi_{i}:\mathcal{O}_{i}\times\mathcal{A}_{i}\rightarrow\mathbb{R}\), which produces the next state according to the transition function \(\mathcal{T}:\mathcal{S}\times\mathbf{A}\rightarrow\mathcal{S}\). The agent \(i\) receives rewards based on a conditional probability \(P_{i}:\mathcal{S}\times\mathbf{A}\times\mathbb{R}\rightarrow[0,1]\) determining the probability of achieving a reward \(r\in\mathbb{R}\) if the joint action \(a\in\mathbf{A}\) has been executed in the current state \(s\in\mathcal{S}\), and receives private observations correlated with the states \(\mathcal{O}_{i}:\mathcal{S}\rightarrow\mathcal{O}_{i}\).
To date, a large body of research on MADRL focused on building agents that can quickly learn an optimal joint policy in cooperative multi-agent tasks. The most fundamental, as well as recent approaches to MADRL, are summarized in the following.
### Greedy-MADRL
Q-learning (Watkins and Dayan, 1992), one of the most popular single-agent RL algorithms, was among the first algorithms applied to multi-agent settings due to its simplicity and robustness. In the Q-learning algorithm (Watkins and Dayan, 1992), \(Q_{i}:\mathcal{O}_{i}\times\mathcal{A}_{i}\rightarrow\mathbb{R}\) is the Q function of the agent \(i\). The Q-value \(Q_{i}(o,a)\) of the observation-action pair \((o,a)\) can be updated by
\[Q_{i}(o,a)\longleftarrow Q_{i}(o,a)+\alpha\delta, \tag{1}\]
where \(\delta=r_{i}+\gamma\max_{a^{\prime}\in\mathcal{A}_{i}}Q_{i}(o^{\prime},a^{ \prime})-Q_{i}(o,a)\) is the Temporal Difference (TD) error with \(r_{i}\) being the reward, \(\gamma\) being a discount factor and \(o^{\prime}\) being the observation at the next state \(s^{\prime}\), and \(\alpha\) is a learning rate.
Double Deep Q-Network (DQN) (van Hasselt et al., 2016) approximates the \(Q\)-value function by minimizing the loss
\[\mathcal{L}_{i}(\theta)=\mathbb{E}_{o,a,r,o^{\prime}}\bigg{(}r+\gamma\bar{Q}_{ i}(o^{\prime},a^{\prime}|\bar{\theta})-Q_{i}(o,a|\theta)\bigg{)}^{2}, \tag{2}\]
where \(\bar{Q}_{i}\) is the target Q function of \(Q_{i}\), whose parameters \(\bar{\theta}\) are periodically updated with the most recent \(\theta\), which helps stabilize learning, and \(a^{\prime}\in\arg\max_{a^{\prime\prime}\in\mathcal{A}_{i}}Q_{i}(o^{\prime},a^ {\prime\prime}|\theta)\). The idea of the double DQN using a decreasing \(\epsilon\)-greedy exploration strategy (Greedy-MADRL) is that the agent \(i\) chooses an action \(a\) randomly from its set of actions \(\mathcal{A}_{i}\) with probability \(\epsilon\) (explore) that decreases after each episode and selects \(a\in\arg\max_{a^{\prime}\in\mathcal{A}_{i}}Q_{i}(o,a^{\prime}|\theta)\) with probability \(1-\epsilon\) (exploit).
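For concreteness, here is a minimal tabular Python sketch of the update in Equation (1) combined with the decreasing \(\epsilon\)-greedy rule; the deep version replaces the table with a network trained on the loss in Equation (2). The variable names and parameter values are illustrative assumptions, not the implementation used in the works cited above.

```
import random
from collections import defaultdict

# Illustrative values; the actual hyperparameters are set per experiment.
ALPHA, GAMMA = 0.5, 0.99

q = defaultdict(float)   # Q[(observation, action)] -> estimated value
epsilon = 1.0            # exploration probability, decayed once per episode

def select_action(obs, actions):
    """Decreasing epsilon-greedy: explore with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(obs, a)])

def q_update(obs, action, reward, next_obs, actions):
    """Tabular form of Equation (1): Q <- Q + alpha * delta."""
    delta = reward + GAMMA * max(q[(next_obs, a)] for a in actions) - q[(obs, action)]
    q[(obs, action)] += ALPHA * delta
```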
### Hysteretic-MADRL
This approach is an integration of the hysteretic idea into the double DQN (Matignon et al., 2012). Hysteretic Q-learning is an optimistic MARL algorithm originally introduced to address maximum-based learners' vulnerability to stochasticity by using two learning rates \(\alpha\) and \(\beta\), where \(\beta<\alpha\) (Matignon et al., 2007). In particular, the optimistic learning idea affects the way Q-values are updated. Given a TD error \(\delta\), a hysteretic Q-value update is performed as follows:
\[Q_{i}(o,a)\longleftarrow\begin{cases}Q_{i}(o,a)+\alpha\delta&\text{if } \delta>0\\ Q_{i}(o,a)+\beta\delta&\text{otherwise,}\end{cases} \tag{3}\]
where \(\beta\) is used to reduce the impact of negative Q-value updates while learning rate \(\alpha\) is used for positive updates. In (Palmer et al., 2018), the authors implemented a scheduled hysteretic DQN (Hysteretic-MADRL) that uses \(n\) pre-computed learning rates \(\beta_{1},...,\beta_{n}\) for the double DQN, where \(\beta_{n}\) approaches \(\alpha\), and \(\beta_{j}=d^{n-j}\beta_{n}\) with \(d\in(0,1]\).
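A one-function sketch of the hysteretic update in Equation (3), assuming the same tabular Q layout as the previous sketch; the two learning rates are illustrative values.

```
ALPHA, BETA = 0.5, 0.01   # beta < alpha: negative TD errors are damped

def hysteretic_update(q, obs, action, delta):
    """Equation (3): apply the full rate alpha to positive TD errors and
    the smaller rate beta to negative ones (optimistic learning)."""
    rate = ALPHA if delta > 0 else BETA
    q[(obs, action)] += rate * delta
```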
### Lenient-MADRL
In (Palmer et al., 2018), the authors proposed a lenient DQN (Lenient-MADRL) by incorporating lenient learning into the double DQN. Lenient learning was introduced in (Potter and Jong, 1994); it updates multiple agents' policies towards an optimal joint policy simultaneously by letting each agent adopt an optimistic disposition during the initial exploration phase. More precisely, lenient agents keep track of the temperature \(T(o,a)\) for each observation-action pair, which is initialized to a defined maximum temperature value, and compute lenient functions by
\[l_{i}(o,a)=1-e^{-K*T_{i}(o,a)}, \tag{4}\]
where \(K\) is a constant determining how the temperature affects the decay in leniency. The temperature \(T_{i}\) can simply be decayed by a discount factor \(\theta\in[0,1]\) such that \(T_{i}(o,a)\leftarrow\theta T_{i}(o,a)\). (Wei and Luke, 2016) used the average temperature of the agent's next state in the update of the current temperature:
\[T_{i}(o,a)\longleftarrow\beta\begin{cases}T_{i}(o,a)&\text{if $s^{\prime}$ is the terminal state}\\ (1-v)T_{i}(o,a)+v\bar{T}_{i}(o^{\prime})&\text{otherwise,}\end{cases} \tag{5}\]
where \(\bar{T}_{i}(o^{\prime})=\frac{1}{|\mathcal{A}_{i}|}\sum_{a^{\prime}\in \mathcal{A}_{i}}T_{i}(o^{\prime},a^{\prime})\). The Q-value is then updated by
\[Q_{i}(o,a)\longleftarrow\begin{cases}Q_{i}(o,a)+\alpha\delta&\text{if $\delta>0$ or $x>l_{i}(o,a)$}\\ Q_{i}(o,a)&\text{otherwise,}\end{cases} \tag{6}\]
where the random variable \(x\sim U(0,1)\) guarantees that a negative update \(\delta\) is performed with probability \(1-l_{i}(o,a)\). Recently, the idea of lenient Q-learning has been successfully applied to the double DQN (Palmer et al., 2018). In particular, (Palmer et al., 2018) proposed Lenient-MADRL, which minimizes the loss function (2) with samples satisfying the conditions in (6).
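Similarly, a tabular sketch of the lenient rule in Equations (4) and (6), assuming temperatures are stored per observation-action pair; the constant \(K\) and the learning rate are illustrative.

```
import math
import random

K = 1.0   # controls how the temperature affects the decay in leniency

def leniency(temperature):
    """Equation (4): a high temperature means high leniency toward negative updates."""
    return 1.0 - math.exp(-K * temperature)

def lenient_update(q, temp, obs, action, delta, alpha=0.5):
    """Equation (6): a negative TD error is applied only with probability
    1 - l(o, a); positive TD errors are always applied."""
    if delta > 0 or random.random() > leniency(temp[(obs, action)]):
        q[(obs, action)] += alpha * delta
```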
While these aforementioned DRL approaches have been successful in solving CMOTP tasks, it is still unclear to what extent they can perform in extended variations of stochastic reward environments, and in particular, in problems with rare events. In this work, we focus on improving the performance of independent agents in fully cooperative tasks under various settings of stochastic rewards. Our approach is to leverage cognitive models of human decision making and integrate MADRL concepts to help the agents enhance coordination with each other in the face of diverse situations of stochastic and rare rewards.
## 3 Multi-agent IBL Models
IBLT is a theory of decisions from experience, developed to explain human learning in dynamic decision environments (Gonzalez et al., 2003). IBLT provides a decision making algorithm and a set of cognitive mechanisms that can be used to implement computational models of human decision learning processes. The algorithm involves the recognition and retrieval of past experiences (i.e., instances) according to their similarity to the current decision situation, the generation of expected utility for the various decision alternatives, and a choice rule that generalizes from experience. An "instance" in IBLT is a memory unit that results from evaluating the potential alternatives. These are memory representations consisting of three elements: a situation (a set of attributes that give context to the decision, or observation \(o\)); a decision (the action taken corresponding to an alternative in state \(s\), or action \(a\)); and a utility (expected utility or experienced outcome \(x\) of the action taken in a state).
In particular, for the agent \(i\), an option \(k=(o,a)\) is defined by taking action \(a\) after observation \(o\). At time \(t\), assume that there are \(n_{k,t}\) different generated instances \((k,x_{j,k,t})\) for \(j=1,...,n_{k,t}\), corresponding to selecting \(k\) and achieving outcome \(x_{j,k,t}\). Each instance \(j\) in memory has an _Activation_ value, which represents how readily available that information is in memory, and is determined by similarity to past situations, recency, frequency, and noise (Anderson and Lebiere, 2014). Here we consider a simplified version of the Activation equation which only captures how recently and frequently instances are activated:
\[\Lambda_{j,k,t}=\ln\left(\sum\limits_{t^{\prime}\in T_{j,k,t}}(t-t^{\prime})^{ -d}\right)+\sigma\ln\frac{1-\xi_{j,k,t}}{\xi_{j,k,t}}, \tag{7}\]
where \(d\) and \(\sigma\) are the decay and noise parameters, respectively, and \(T_{j,k,t}\subset\{0,...,t-1\}\) is the set of the previous timestamps in which the instance \(j\) was observed. The rightmost term represents the Gaussian noise for capturing individual variation in activation, and \(\xi_{j,k,t}\) is a random number drawn from a uniform distribution \(U(0,1)\) at each timestep and for each instance and option.
Activation of an instance \(j\) is used to determine the probability of retrieval of an instance from memory. The probability of an instance \(j\) is defined by a soft-max function as follows
\[p_{j,k,t}=\frac{e^{\Lambda_{j,k,t}/\tau}}{\sum_{j^{\prime}=1}^{n_{k,t}}e^{ \Lambda_{j^{\prime},k,t}/\tau}}, \tag{8}\]
where \(\tau\) is the Boltzmann constant (i.e., the "temperature") in the Boltzmann distribution. For simplicity, \(\tau\) is often defined as a function of the same \(\sigma\) used in the activation equation \(\tau=\sigma\sqrt{2}\).
The expected utility of option \(k\) is calculated based on a mechanism called _Blending_(Lebiere, 1999) as specified in IBLT (Gonzalez et al., 2003), using the past experienced outcomes stored in each instance. Here we employ the Blending calculation for the agent \(i\) as defined for choice tasks (Gonzalez and Dutt, 2011; Lejarraga et al., 2012):
\[V_{i,k,t}=\sum_{j=1}^{n_{k,t}}p_{j,k,t}x_{j,k,t}. \tag{9}\]
The choice rule is to select the option that corresponds to the maximum blended value. In particular, at the \(l\)-th step of an episode, the agent \(i\) selects the option \((o_{i,l},a_{i,l})\) with
\[a_{i,l}\in\arg\max_{a\in\mathcal{A}_{i}}V_{i,(o_{i,l},a),t} \tag{10}\]
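To make the mechanism concrete, the following sketch puts Equations (7)-(10) together for one agent: instances map each (option, outcome) pair to its observation timestamps, activations weight the stored outcomes into a blended value, and the choice rule picks the action with the maximal blended value. The sketch omits the default utility used for options without instances, and the data layout is an assumption of this sketch, not the authors' implementation.

```
import math
import random
from collections import defaultdict

D, SIGMA = 0.5, 0.25          # decay d and noise sigma (the defaults used later)
TAU = SIGMA * math.sqrt(2)    # "temperature" in Equation (8)

# memory[option][outcome] -> list of timestamps at which the instance occurred
memory = defaultdict(lambda: defaultdict(list))

def activation(timestamps, t):
    """Equation (7): recency/frequency term plus logistic noise."""
    base = math.log(sum((t - tp) ** (-D) for tp in timestamps))
    xi = random.random()
    return base + SIGMA * math.log((1.0 - xi) / xi)

def blended_value(option, t):
    """Equations (8)-(9): softmax retrieval probabilities weight stored outcomes."""
    acts = {x: activation(ts, t) for x, ts in memory[option].items()}
    denom = sum(math.exp(a / TAU) for a in acts.values())
    return sum((math.exp(a / TAU) / denom) * x for x, a in acts.items())

def choose(obs, actions, t):
    """Equation (10): select the action with the maximal blended value."""
    return max(actions, key=lambda a: blended_value((obs, a), t))
```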
Our proposed Multi-Agent IBL (MAIBL) models are developed to deal with fully cooperative tasks that can be described as a Markov game (Shapley, 1953) where all agents receive the same rewards. Also, (Nguyen et al., 2023) proposed an IBL model (**IBL-TD**) that uses the TD-learning mechanism of RL models to estimate the outcome of an action as follows:
\[x_{i,l}\gets V_{i,(o_{i,l},a_{i,l}),t}+\alpha\delta_{i,l}, \tag{11}\]
where \(\alpha\) is a learning rate and \(\delta_{i,l}\) is a TD error defined by:

\[\delta_{i,l}=r_{i,l+1}+\gamma\max_{a\in\mathcal{A}_{i}}V_{i,(o_{i,l+1},a),t}-V_{i,(o_{i,l},a_{i,l}),t}. \tag{12}\]
We refer the reader to (Nguyen and Gonzalez, 2020, 2021, 2021) for demonstrations of how IBLT is applied in multi-state environments.
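A sketch of the IBL-TD outcome estimate in Equations (11) and (12), reusing `blended_value` from the previous sketch; the returned outcome is what would be stored in instance memory, and the parameter values are illustrative.

```
GAMMA, ALPHA = 0.99, 0.5   # discount factor and learning rate (illustrative)

def ibl_td_outcome(obs, action, reward, next_obs, actions, t):
    """Equations (11)-(12): a TD error on blended values (not Q-values)
    produces the outcome that is stored in instance memory."""
    v_next = max(blended_value((next_obs, a), t) for a in actions)
    v_cur = blended_value((obs, action), t)
    delta = reward + GAMMA * v_next - v_cur
    return v_cur + ALPHA * delta, delta
```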
The MAIBL process is described in Algorithm 1. We propose three MAIBL algorithms that build on IBL-TD: Greedy-MAIBL, which is enhanced with a decreasing \(\epsilon\)-greedy exploration strategy to deal with fully cooperative tasks in MAS; and Hysteretic-MAIBL and Lenient-MAIBL, which respectively integrate the hysteretic and lenient concepts from MADRL into the MAIBL models.
```
Input: default utility \(x_{0}\), a memory dictionary \(\mathcal{M}=\{\}\), global counter \(t=1\), step limit \(L\)
repeat  // Loop for each episode
    Initialize a step counter \(l=0\); each agent \(i\) observes \(o_{i,l}\) from state \(s_{l}\)
    while \(s_{l}\) is not terminal and \(l<L\) do
        Each agent \(i\) chooses an action \(a_{i,l}\in\arg\max_{a\in\mathcal{A}_{i}}V_{i,(o_{i,l},a),t}\), where \(V_{i,(o_{i,l},a),t}\) is computed by Equation (9)
        Take the joint action \([a_{1,l},...,a_{m,l}]\), move to state \(s_{l+1}\); each agent \(i\) observes \(o_{i,l+1}\) and gets outcome \(x_{i,l}\)
        Store timestamp \(t\) to instances \((o_{i,l},a_{i,l},x_{i,l})\) for \(i=1,...,m\)
        \(l\gets l+1\) and \(t\gets t+1\)
    end while
until task stopping condition
```
**Algorithm 1** MAIBL

```
Input: default utility \(x_{0}\), a memory dictionary \(\mathcal{M}=\{\}\), global counter \(t=1\), \(\epsilon\), \(\eta\in(0,1)\), \(\alpha\), and step limit \(L\in\mathbb{N}^{+}\)
repeat  // Loop for each episode
    Initialize episode step counter \(l=0\), \(\epsilon\leftarrow\eta\epsilon\); each agent \(i\) observes \(o_{i,l}\) at state \(s_{l}\)
    while \(s_{l}\) is not terminal and \(l<L\) do
        Each agent \(i\) chooses \(a_{i,l}\in\mathcal{A}_{i}\) randomly according to \(p(o_{i,l},a_{i,l})\) with probability \(\epsilon\), and \(a_{i,l}\in\arg\max_{a\in\mathcal{A}_{i}}V_{i,(o_{i,l},a),t}\) with probability \(1-\epsilon\), where \(p(o,a)=e^{V_{i,(o,a),t}/T}/\sum_{a^{\prime}\in\mathcal{A}_{i}}e^{V_{i,(o,a^{\prime}),t}/T}\)
        Take actions \([a_{1,l},...,a_{m,l}]\), move to state \(s_{l+1}\); each agent \(i\) observes \(o_{i,l+1}\) at state \(s_{l+1}\) and gets reward \(r_{i,l+1}\)
        Each agent \(i\) computes the TD error \(\delta_{i,l}\) by Equation (12) and estimates an outcome \(x_{i,l}\) by Equation (11)
        Each agent \(i\) stores timestamp \(t\) to instance \((o_{i,l},a_{i,l},x_{i,l})\)
        \(l\gets l+1\) and \(t\gets t+1\)
    end while
until task stopping condition
```
**Algorithm 2** Greedy-MAIBL
### Hysteretic-MAIBL
The Hysteretic-MAIBL model is built upon the Greedy-MAIBL algorithm by incorporating the optimistic learning assumption of hysteretic Q-learning. In the context of MARL, an intuitive way of interpreting this assumption is that an agent selects any action it finds suitable, with the expectation that the other agents also choose the best matching actions accordingly (Lauer and Riedmiller, 2000). Under this assumption, the agents prefer superior results when playing actions (i.e., a TD error greater than 0), and hence superior results are updated with a higher learning rate. More specifically, the Hysteretic-MAIBL algorithm uses two learning rates \(\alpha>\beta\) for the increase and decrease of outcomes, instead of only one as in Greedy-MAIBL. The Hysteretic-MAIBL algorithm for cooperative MAS is specified in Algorithm 3.
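Assuming the hysteretic rule mirrors Equation (3) applied to the outcome estimate of Equation (11), the change relative to the earlier IBL-TD sketch is a single conditional; the values of \(\alpha\) and \(\beta\) are the defaults reported later.

```
ALPHA, BETA = 0.5, 0.01   # beta < alpha, as in hysteretic Q-learning

def hysteretic_outcome(v_cur, delta):
    """Hysteretic variant of Equation (11): damp negative TD errors
    with the smaller rate beta (a sketch inferred from Equation (3))."""
    rate = ALPHA if delta > 0 else BETA
    return v_cur + rate * delta
```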
### Lenient-MAIBL
The Lenient-MAIBL approach incorporates the concept of lenient learning in MARL into Greedy-MAIBL. The idea of leniency here is that initially, none of the agents have a good understanding of their best joint actions; therefore, they must be lenient towards the seemingly foolish and arbitrary actions of their collaborators at the beginning (Wei and Luke, 2016). More specifically, leniency is affected by the frequency of visits to state-action pairs. Initially, each state-action pair has been visited infrequently, resulting in a high value of \(T(o,a)\); the higher the value of \(T(o,a)\), the more lenient the agent is (see Eq. (4)). That is, the agent ignores inferior results (i.e., those that yield a negative TD error). Once a state-action pair has been encountered frequently enough, results are updated regardless of the sign of the TD error. The whole procedure of the Lenient-MAIBL approach is depicted in Algorithm 4.
```
Input: default utility \(x_{0}\), a memory dictionary \(\mathcal{M}=\{\}\), global counter \(t=1\), \(\epsilon\), \(\eta\in(0,1)\), learning rates \(\alpha>\beta\), and step limit \(L\in\mathbb{N}^{+}\)
repeat  // Loop for each episode
    Initialize episode step counter \(l=0\), \(\epsilon\leftarrow\eta\epsilon\); each agent \(i\) observes \(o_{i,l}\) at state \(s_{l}\)
    while \(s_{l}\) is not terminal and \(l<L\) do
        Each agent \(i\) chooses \(a_{i,l}\in\mathcal{A}_{i}\) randomly according to \(p(o_{i,l},a_{i,l})\) with probability \(\epsilon\), and \(a_{i,l}\in\arg\max_{a\in\mathcal{A}_{i}}V_{i,(o_{i,l},a),t}\) with probability \(1-\epsilon\), where \(p(o,a)=e^{V_{i,(o,a),t}/T}/\sum_{a^{\prime}\in\mathcal{A}_{i}}e^{V_{i,(o,a^{\prime}),t}/T}\)
        Take actions \([a_{1,l},...,a_{m,l}]\), move to state \(s_{l+1}\); each agent \(i\) observes \(o_{i,l+1}\) at state \(s_{l+1}\) and gets reward \(r_{i,l+1}\)
        Each agent \(i\) computes the TD error \(\delta_{i,l}\) by Equation (12) and estimates an outcome \(x_{i,l}\leftarrow V_{i,(o_{i,l},a_{i,l}),t}+\alpha\delta_{i,l}\) if \(\delta_{i,l}>0\), and \(x_{i,l}\leftarrow V_{i,(o_{i,l},a_{i,l}),t}+\beta\delta_{i,l}\) otherwise
        Each agent \(i\) stores timestamp \(t\) to instance \((o_{i,l},a_{i,l},x_{i,l})\)
        \(l\gets l+1\) and \(t\gets t+1\)
    end while
until task stopping condition
```
**Algorithm 3** Hysteretic-MAIBL
## 4. Experiments
To make our focus concrete, we specifically consider one of the prominent examples of fully cooperative games, the Coordinated Multi-agent Object Transportation Problem (CMOTP) (Busoniu et al., 2010; Palmer et al., 2018; Tuci et al., 2006), in the presence of _stochastic_ rewards. In such environments, we examine how well IBL-based models can learn and adapt to the other teammates' behavior to accomplish the task without communicating during the learning process. We compare our proposed models with the three state-of-the-art algorithms in CMOTPs (see Section 2): the decreasing \(\epsilon\)-greedy double Deep Q-Network algorithm (Greedy-MADRL) (van Hasselt et al., 2016), the Scheduled Hysteretic Deep Q-Network algorithm (Hysteretic-MADRL), and the Lenient Deep Q-Network (Lenient-MADRL) (Palmer et al., 2018).
### Coordinated Multi-Agent Object Transportation Problems
The CMOTP is an abstraction of a generic task involving two agents' coordinated transportation of an item. It has been used as an illustrative demonstration of a number of MARL algorithms (Busoniu et al., 2010; Palmer et al., 2018).
In particular, the CMOTP is simulated in a gridworld, that is, a two-dimensional discrete grid with \(16\times 16\) cells, as illustrated in Fig. 1. The idea of the task is that the agents (represented by the letter A) have to navigate and transport a target item (G) to one of the drop zones (yellow areas) while avoiding obstacles (represented by black cells). In other words, the agents share a common interest in delivering the item (G) to a drop zone. Thereby, the agents must coordinate to get an equally shared reward; otherwise, they fail and get a zero reward.
To complete the tasks, the agents must exit the room individually to locate and collect item G. Pickup is only possible when the two agents stand on item G's left- and right-hand sides in the grid (Fig. 1 a). Once the two agents have grasped either side of the item, they can move it. The task is fully cooperative, as the item can only be transported when both agents successfully grab the item by always being side-by-side and deciding to move in the same direction. The agents choose to _stay in place, move left, right, up_, or _down_, and can move only one cell at a time. Agents can only move to an empty cell, and if both try to move to the same cell, neither moves.
Agents only receive a positive reward after placing the item inside the dropzone (illustrated in Fig. 1 b). In case there are multiple drop zones, the agents' goal is to drop the item into the drop zone yielding the highest expected reward. To encourage agents to complete the task as quickly as possible, agents are penalized for walking into an obstacle (\(-0.05\)) and deciding to stand still (\(-0.01\)).
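A minimal sketch of the per-step reward signal described above; the function signature and event flags are hypothetical, and the delivery reward depends on the drop zone and scenario (see Table 1 below).

```
def step_reward(delivered, hit_obstacle, stood_still, drop_zone_reward=0.0):
    """CMOTP reward as described above: positive only on delivery, with
    small penalties for walking into obstacles or standing still."""
    if delivered:
        return drop_zone_reward
    if hit_obstacle:
        return -0.05
    if stood_still:
        return -0.01
    return 0.0
```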
```
Input: default utility \(x_{0}\), a memory dictionary \(\mathcal{M}=\{\}\), global counter \(t=1\), \(\epsilon\), \(\eta\in(0,1)\), \(\alpha\), step limit \(L\in\mathbb{N}^{+}\), maximum temperature value \(T_{\text{max}}\), \(K\), \(\theta\), and \(\nu\)
repeat  // Loop for each episode
    Initialize episode step counter \(l=0\), \(\epsilon\leftarrow\eta\epsilon\); each agent \(i\) observes \(o_{i,l}\) at state \(s_{l}\)
    while \(s_{l}\) is not terminal and \(l<L\) do
        Each agent \(i\) chooses \(a_{i,l}\in\mathcal{A}_{i}\) randomly according to \(p(o_{i,l},a_{i,l})\) with probability \(\epsilon\), and \(a_{i,l}\in\arg\max_{a\in\mathcal{A}_{i}}V_{i,(o_{i,l},a),t}\) with probability \(1-\epsilon\), where \(p(o,a)=e^{V_{i,(o,a),t}/T}/\sum_{a^{\prime}\in\mathcal{A}_{i}}e^{V_{i,(o,a^{\prime}),t}/T}\)
        Take actions \([a_{1,l},...,a_{m,l}]\), move to state \(s_{l+1}\); each agent \(i\) observes \(o_{i,l+1}\) at state \(s_{l+1}\) and gets reward \(r_{i,l+1}\)
        Each agent \(i\) computes the TD error \(\delta_{i,l}\) by Equation (12)
        \(R\longleftarrow\) random real value between \(0\) and \(1\)
        Each agent \(i\) estimates an outcome \(x_{i,l}\) by
        \[x_{i,l}\longleftarrow\begin{cases}V_{i,(o_{i,l},a_{i,l}),t}+\alpha\delta_{i,l}&\text{if }\delta_{i,l}>0\text{ or }R>1-e^{-K\,T_{i,(o_{i,l},a_{i,l})}}\\ V_{i,(o_{i,l},a_{i,l}),t}&\text{otherwise}\end{cases}\tag{14}\]
        Store timestamp \(t\) to instances \((o_{i,l},a_{i,l},x_{i,l})\) for \(i=1,...,m\)
        Update the temperature
        \[T_{i,(o_{i,l},a_{i,l})}\longleftarrow\theta\begin{cases}(1-\nu)T_{i,(o_{i,l},a_{i,l})}+\nu\bar{T}_{i,o_{i,l+1}}&\text{if }s_{l+1}\text{ is not the terminal state}\\ T_{i,(o_{i,l},a_{i,l})}&\text{otherwise,}\end{cases}\tag{15}\]
        where \(\bar{T}_{i,o_{i,l+1}}=\frac{1}{|\mathcal{A}_{i}|}\sum_{a\in\mathcal{A}_{i}}T_{i,(o_{i,l+1},a)}\)
        \(l\gets l+1\) and \(t\gets t+1\)
    end while
until task stopping condition
```
**Algorithm 4** Lenient-MAIBL
### Experimental Design
_Stochastic Reward Scenarios._ We characterize four different stochastic reward scenarios inspired by the study of decisions from experience with rare events in risky choice (Hertwig et al., 2004). The scenarios are selected to represent a diverse variety of situations with stochastic rewards, in order to better understand how the agents not only coordinate to accomplish the task but also learn to deal with stochastic rewards. The characteristics of the scenarios are determined by whether the optimal option is deterministic (safe) or stochastic (risky) and by the probability of obtaining the high value of the stochastic option. These scenarios are summarized in Table 1.
| Scenario | High zone (DZ1) | Low zone (DZ2) | Expected reward (DZ1) | Expected reward (DZ2) |
| --- | --- | --- | --- | --- |
| 1 | \(\mathbb{P}(r_{1}=0.8)=1\) | \(\mathbb{P}(r_{2}=1)=0.6\) and \(\mathbb{P}(r_{2}=0.4)=0.4\) | 0.8 | 0.76 |
| 2 | \(\mathbb{P}(r_{1}=0.8)=1\) | \(\mathbb{P}(r_{2}=7)=0.1\) and \(\mathbb{P}(r_{2}=0.06)=0.9\) | 0.8 | 0.754 |
| 3 | \(\mathbb{P}(r_{1}=4)=0.8\) and \(\mathbb{P}(r_{1}=0)=0.2\) | \(\mathbb{P}(r_{2}=3)=1\) | 3.2 | 3 |
| 4 | \(\mathbb{P}(r_{1}=32)=0.1\) and \(\mathbb{P}(r_{1}=0)=0.9\) | \(\mathbb{P}(r_{2}=3)=1\) | 3.2 | 3 |

Table 1. Stochastic scenarios.
* **Scenario 1**: DZ1 is a deterministic zone always giving a reward of 0.8, whereas DZ2 is a stochastic zone returning a reward of 1 on 60% of occasions and 0.4 on the other 40%. Therefore, the optimal joint policy is for the agents to deliver the item to the deterministic DZ1, yielding a reward of 0.8, as opposed to an average reward of 0.76 for DZ2.
* **Scenario 2**: DZ1 is a deterministic zone giving a reward of 0.8, while DZ2 is a stochastic zone that returns a high reward of 7 with a low probability of 0.1 and 0.06 with the remaining probability of 0.9. The optimal joint policy is the deterministic DZ1, yielding a reward of 0.8, as opposed to DZ2, yielding an average reward of 0.754.
* **Scenario 3**: DZ1 is a stochastic zone giving a reward of 4 on 80% of occasions and 0 otherwise, whereas DZ2 returns a reward of 3. The optimal joint policy is the stochastic DZ1, yielding an expected reward of 3.2, as opposed to DZ2, which only returns a reward of 3.
* **Scenario 4**: DZ1 is stochastic, giving a reward of 32 with a low probability of 0.1 and 0 otherwise, and DZ2 is a deterministic zone giving a reward of 3. The optimal joint policy is the stochastic DZ1, yielding an expected reward of 3.2, as opposed to DZ2, which only has a reward of 3.
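For concreteness, a sketch that samples delivery rewards according to the four scenarios above; the (value, probability) encoding is a convention of this sketch.

```
import random

# scenario -> zone -> list of (reward value, probability), from Table 1
REWARDS = {
    1: {"DZ1": [(0.8, 1.0)], "DZ2": [(1.0, 0.6), (0.4, 0.4)]},
    2: {"DZ1": [(0.8, 1.0)], "DZ2": [(7.0, 0.1), (0.06, 0.9)]},
    3: {"DZ1": [(4.0, 0.8), (0.0, 0.2)], "DZ2": [(3.0, 1.0)]},
    4: {"DZ1": [(32.0, 0.1), (0.0, 0.9)], "DZ2": [(3.0, 1.0)]},
}

def sample_reward(scenario, zone):
    """Draw one delivery reward for the given scenario and drop zone."""
    values, probs = zip(*REWARDS[scenario][zone])
    return random.choices(values, weights=probs, k=1)[0]
```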
Figure 1: Coordinated Multi-Agent Object Transportation Problem.
### Measures
For each of the models considered in the experiment, we measured their behavior and performance with respect to the following metrics:
(1) _Average Proportion of Maximization-PMax_: the average proportion of episodes in which the agents delivered the item to the optimal zone, that is,
\[PMax=\frac{1}{\#run}\sum_{i=1}^{\#run}\frac{\#episode^{i}_{o}}{\#episode}, \tag{16}\]
where \(\#run\) is the number of runs, and \(\#episode^{i}_{o}\) and \(\#episode\) are respectively the number of episodes of the \(i\)-th run in which the agents delivered the item to the optimal zone (called _optimal_ episodes) and the total number of episodes. This metric essentially captures the effectiveness of the agents as a team in delivering the item to the optimal zone.
(2) _Average Coordination Rate-PCoordinate_: the average proportion of steps in which the agents successfully move together (i.e., they both move in the same direction) out of the total number of steps after they stick together with the item from the pickup point, multiplied by PMax, namely
\[PCoordinate=\frac{1}{\#run}\sum_{i=1}^{\#run}\frac{1}{\#episode}\sum_{e=1}^{ \#episode^{i}_{o}}\frac{\#step^{i,e}_{m}}{\#step^{i,e}_{s}}, \tag{17}\]
where \(\#step^{i,e}_{m}\) is the number of steps in which the agents successfully move the item, and \(\#step^{i,e}_{s}\) is the total number of steps after they stick together with the item from the pickup point in the optimal episode \(e\). This metric represents how well the agents coordinate with each other; that is, how many times they reach a consensus on selecting their movement direction throughout the process of dropping the item into the optimal zone.
(3) _Average Discounted Reward-Efficiency_: the discounted reward is defined by \(\gamma^{l}r\), in which a positive reward \(r\) is discounted by a discount factor \(\gamma\) raised to the power of the number of steps taken \(l\), multiplied by PMax, namely

\[Efficiency=\frac{1}{\#run}\sum_{i=1}^{\#run}\frac{1}{\#episode}\sum_{e=1}^{\#episode^{i}_{o}}\frac{\gamma^{\#step^{i,e}}r^{i,e}}{R}, \tag{18}\]
where \(\#step^{i,e}\) and \(r^{i,e}\) are respectively the total number of steps taken by the two agents in a team and the collective reward at episode \(e\) of the run \(i\), and \(R\) is the high expected reward. This metric captures the efficiency of agents as a team in delivering the item to the optimal zone. Indeed, the metric considers not only the rewards obtained (i.e. how effective the agents are) but also how many steps are taken to get the reward (i.e. how quickly the agents learn to successfully accomplish the task).
(4) _Number of Steps-Step_: the average total number of steps taken by the two agents in a team, namely
\[Step=\frac{1}{\#run}\sum_{i=1}^{\#run}\frac{1}{\#episode^{i}_{o}}\sum_{e=1}^{\#episode^{i}_{o}}\#step^{i,e}. \tag{19}\]
This metric evaluates the total number of steps taken by the agents to successfully drop the item into the optimal zone. In particular, it counts the steps when the agents are successful in moving in the same direction as well as when they are not.
(5) _Maximum Pickup Steps-MStep_: the average maximum number of steps taken by both agents to locate and pick up the item, namely
\[MStep=\frac{1}{\#run}\sum_{i=1}^{\#run}\frac{1}{\#episode^{i}_{o}}\sum_{e=1}^{\#episode^{i}_{o}}\max(\#step^{i,e}_{1},\#step^{i,e}_{2}), \tag{20}\]
where \(\#step^{i,e}_{1}\) and \(\#step^{i,e}_{2}\) are respectively the numbers of steps of the run \(i\) taken by the two agents to pick up the item at optimal episode \(e\). This metric examines the maximum number of steps that one could take to find the item.
(6) _Difference Pickup Step-DStep_: the average difference between the two agents in the number of steps taken to pick up the item, namely
\[DStep=\frac{1}{\#run}\sum_{i=1}^{\#run}\frac{1}{\#episode^{i}_{o}}\sum_{e=1}^{\#episode^{i}_{o}}|\#step^{i,e}_{1}-\#step^{i,e}_{2}|. \tag{21}\]
It is worth noting that in this measure, we only consider the episodes of the \(i\)-th run in which the agents successfully delivered the item to the optimal zone (\(\#episode^{i}_{o}\), the _optimal_ episodes). This metric relates to the functional delay metric (Hoffman, 2019): after locating the item, an agent incurs a delay while waiting for its teammate. The higher the value, the longer one agent waits for the other to reach the pickup location.
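As a concrete reading of Equations (16) and (19), a sketch that computes PMax and Step from per-episode logs; the log format (one dict per episode with 'optimal' and 'steps' keys) is a hypothetical convention of this sketch.

```
def pmax(runs):
    """Equation (16): fraction of optimal episodes per run, averaged over runs."""
    return sum(sum(ep["optimal"] for ep in run) / len(run) for run in runs) / len(runs)

def step_metric(runs):
    """Equation (19): mean steps over each run's optimal episodes, averaged
    over runs; a run with no optimal episodes contributes 0 via the guard."""
    per_run = [
        sum(ep["steps"] for ep in run if ep["optimal"])
        / max(1, sum(ep["optimal"] for ep in run))
        for run in runs
    ]
    return sum(per_run) / len(per_run)
```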
### Model Parameters
We ran experiments using default parameter values for each model, that is, values commonly used in the literature. For the IBL component and the decreasing \(\epsilon\)-greedy strategy of the three IBL-based models, we used decay \(d=0.5\), noise \(\sigma=0.25\), default utility \(x_{0}=0.1\), initial epsilon \(\epsilon=1\), decreasing factor \(\eta=0.999\), \(T=0.8\), discount factor \(\gamma=0.99\), and learning rate \(\alpha=0.5\). Hysteretic-MAIBL needs an additional learning rate \(\beta=0.01\), while Lenient-MAIBL requires four more parameters: \(T_{\max}=2\), \(K=1\), \(\theta=0.995\), and \(\nu=0.1\). Importantly, none of the parameters in our models were optimized, whereas for those in the comparative Hysteretic-MADRL and Lenient-MADRL algorithms, including hyper-parameters, we used the same values as suggested in previous work (Palmer et al., 2018). We do not report the parameter values of these models here; please see Table 1 in (Palmer et al., 2018) for more details.
We conducted 30 runs of 1000 episodes per run. An episode terminates when a 5000-step limit is reached or when the agents successfully place the item inside the drop zone.
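A sketch of the outer experiment loop matching this setup, producing logs in the format assumed by the metric sketch above; `make_env`, `make_agents`, and the environment API are hypothetical.

```
RUNS, EPISODES, STEP_LIMIT = 30, 1000, 5000  # setup described above

def run_experiment(make_env, make_agents):
    """Outer loop: an episode ends on delivery or when the step limit is hit."""
    logs = []
    for _ in range(RUNS):
        env, agents = make_env(), make_agents()
        run_log = []
        for _ in range(EPISODES):
            observations, done, steps = env.reset(), False, 0
            while not done and steps < STEP_LIMIT:
                actions = [ag.act(ob) for ag, ob in zip(agents, observations)]
                observations, reward, done = env.step(actions)  # hypothetical API
                steps += 1
            # 'optimal' marks delivery to the optimal zone (hypothetical flag)
            run_log.append({"steps": steps, "optimal": env.delivered_optimal()})
        logs.append(run_log)
    return logs
```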
## 5. Results
### Overall performance of MAIBL and MADRL models
Table 2 reports the aggregate performance metrics averaged over all episodes, for each of the four CMOTP scenarios. These results clearly indicate that the MAIBL models perform better than the MADRL models in all scenarios.
Overall, we observe that when the highest expected reward is associated with the deterministic zone, as in scenarios 1 and 2, the Greedy-MAIBL agents are the best performers, followed by the Lenient models, with regard to all the metrics. We also see that all the models perform much better in scenario 2 compared to scenario 1. This can be explained by the fact that when the stochastic zone is unlikely to return the high reward (i.e., the probability of the high value in the stochastic zone is low), it is easier for the agents to decide to select the deterministic zone with the higher expected value. By contrast, there is more tension in choosing between the optimal zone and the stochastic zone when the high reward in the stochastic zone is more likely to happen (i.e., scenario 1). As a result, the performance of all the models is lower there.
Interestingly, in scenarios 3 and 4, wherein the stochastic zone yields the highest expected value, Hysteretic-MAIBL turns out to achieve the best performance in terms of PMax, coordination rate, and efficiency (discounted reward). That said, we notice that the Greedy-MAIBL model is still more effective than Hysteretic-MAIBL in terms of the agents coordinating with each other to pick up the item. Compared to scenario 3, it is clearly more difficult for the agents in scenario 4 to select the highest expected reward zone, as the high value of this zone occurs rarely.
These results show that the characteristics of the stochastic rewards did impact the behavior and robustness of the models, suggesting that the strengths and shortcomings of each model depend on the scenario. That is, the plain Greedy-MAIBL can complete the task more successfully in settings wherein the highest expected value belongs to the deterministic zone. However, when the highest expected value is associated with the stochastic zone, incorporating the hysteretic mechanism into Greedy-MAIBL, i.e., Hysteretic-MAIBL, becomes more effective.
### Models' Effectiveness in Learning Optimal Delivery
Figure 2 shows the performance of each model with respect to the effectiveness calculated by the optimal policy rate for 1000 episodes in each of the four scenarios. First, we observe that in the first two scenarios (Scenarios 1 and 2), wherein the high expected reward (optimal zone) is associated with the deterministic zone, Greedy-MAIBL agents not only outperform the other models but also learn faster. Additionally, we notice that the distinction between Greedy-MAIBL and the other models is clearer in Scenario 1 than in Scenario 2. In scenario 2, all models except for Hysteretic-MAIBL
\begin{table}
\begin{tabular}{c|l|l c c|c c c} \hline \hline \multirow{2}{*}{Scenario} & \multirow{2}{*}{Metric} & \multicolumn{3}{c|}{MAIBL} & \multicolumn{3}{c}{MADRL} \\ \cline{3-8} & & Greedy & Hysteretic & Lenient & Greedy & Hysteretic & Lenient \\ \hline \multirow{3}{*}{1} & PMax & **0.801** (0.099) & 0.163 (0.219) & 0.294 (0.135) & 0.210 (0.127) & 0.099 (0.031) & 0.332 (0.112) \\ & Efficiency & **0.195** (0.026) & 0.006 (0.009) & 0.037 (0.014) & 0.002 (0.002) & 0.002 (0.001) & 0.070 (0.019) \\ & PCoordinate & **0.306** (0.009) & 0.039 (0.005) & 0.087 (0.008) & 0.046 (0.002) & 0.022 (0.001) & 0.107 (0.011) \\
1, 0.6/0.4, 0.4 & Step & **322.4** (49.9) & 1578.3 (555.2) & 659.4 (175.1) & 1823.1 (417.1) & 1514.1 (235.3) & 603.7 (176.2) \\ & MStep & **121.6** (34.7) & 514.2 (155.8) & 194.2 (44.8) & 809.2 (120.9) & 622.4 (104.8) & 282.1 (84.1) \\ & DStep & **30.5** (2.8) & 130.2 (47.1) & 44.1 (9.7) & 196.2 (22.9) & 144.6 (17.6) & 115.3 (36.0) \\ \hline \multirow{3}{*}{2} & PMax & **0.936** (0.025) & 0.308 (0.345) & 0.840 (0.099) & 0.806 (0.009) & 0.815 (0.018) & 0.735 (0.047) \\ & Efficiency & **0.350** (0.027) & 0.093 (0.086) & 0.263 (0.099) & 0.169 (0.001) & 0.170 (0.001) & 0.156 (0.003) \\
0.8, 1 & PCoordinate & **0.350** (0.019) & 0.093 (0.119) & 0.263 (0.073) & 0.169 (0.001) & 0.170 (0.003) & 0.156 (0.008) \\
7, 0.1/0.06, 0.9 & Step & **371.4** (42.5) & 1649.2 (945.5) & 679.8 (300.4) & 1368.7 (77.5) & 1221.8 (58.5) & 1091.4 (95.3) \\ & MStep & **122.1** (25.9) & 500.8 (262.5) & 252.2 (190.3) & 631.6 (43.6) & 440.9 (33.1) & 439.1 (37.5) \\ & DStep & **33.3** (1.3) & 125.5 (63.8) & 35.7 (2.3) & 180.1 (11.0) & 156.6 (13.1) & 149.2 (18.6) \\ \hline \multirow{3}{*}{3} & PMax & 0.475 (0.375) & **0.642** (0.328) & 0.565 (0.407) & 0.166 (0.028) & 0.240 (0.021) & 0.237 (0.016) \\ & Efficiency & 0.134 (0.114) & **0.245** (0.125) & 0.224 (0.163) & 0.001 (0.001) & 0.004 (0.001) & 0.004 (0.002) \\
4, 0.8/0.0, 0.2 & PCoordinate & 0.224 (0.177) & **0.354** (0.177) & 0.326 (0.235) & 0.036 (0.006) & 0.052 (0.005) & 0.051 (0.004) \\
3, 1 & Step & 872.1 (886.5) & **696.1** (920.6) & 1078.4 (1241.2) & 1503.1 (174.9) & 1058.7 (81.7) & 1091.4 (117.2) \\ & MStep & 229.1 (167.2) & **203.8** (191.8) & 257.1 (264.7) & 637.3 (75.3) & 408.9 (43.1) & 414.7 (55.7) \\ & DStep & **55.2** (40.1) & 63.4 (57.5) & 71.2 (63.4) & 180.1 (17.9) & 140.2 (14.8) & 144.4 (21.3) \\ \hline \multirow{3}{*}{4} & PMax & 0.016 (0.014) & **0.294** (0.394) & 0.040 (0.059) & 0.041 (0.008) & 0.021 (0.006) & 0.021 (0.004) \\ & Efficiency & 0.000 (0.000) & **0.179** (0.247) & 0.011 (0.032) & 0.000 (0.000) & 0.000 (0.000) & 0.000 (0.000) \\
32, 0.1/0, 0.9 & PCoordinate & 0.004 (0.003) & **0.263** (0.360) & 0.019 (0.035) & 0.008 (0.002) & 0.004 (0.001) & 0.004 (0.001) \\
3, 1 & Step & 2793.8 (394.8) & **2106.3** (1500.8) & 2533.8 (1009.8) & 2195.6 (203.6) & 2847.4 (288.1) & 2674.0 (239.1) \\ & MStep & **534.7** (184.8) & 535.7 (414.6) & 603.8 (315.8) & 672.6 (117.3) & 784.4 (195.9) & 801.7 (149.9) \\ & DStep & **121.2** (29.7) & 157.0 (122.7) & 160.2 (106.9) & 190.7 (27.3) & 216.1 (47.4) & 221.6 (77.1) \\ \hline \hline \end{tabular}
\end{table}
Table 2. Performance of the agents reported in the form of the mean value (standard deviation) with respect to the different metrics for each of the four CMOTP scenarios. Bold values indicate the best results.
perform comparably to the Greedy-MAIBL model. Again, the explanation for this observation is that scenario 2 poses a much easier decision-making problem than scenario 1. The very low and common reward (0.06 with probability 0.9) of the risky option in Scenario 2 makes the discrimination between the deterministic and stochastic zones much easier for most models.
We also see that Hysteretic-MAIBL learns faster and better in scenarios 3 and 4. These are the scenarios in which the highest expected reward is in the stochastic zone. In scenario 3, where a high probability corresponds to the higher outcome (i.e., a high frequency of the high outcome), all the MAIBL models do better than the MADRL models. This is in contrast to scenario 4, in which the agents are misled by the high frequency of the low outcome, resulting in a decline in the performance of all models except Hysteretic-MAIBL.
### Models' Efficiency
Fig. 3 illustrates the behavior of the models in terms of efficiency, captured by the average discounted reward over 1000 episodes. We can see that after 200 episodes, the Greedy-MAIBL model not only exhibits its ability to accomplish the task successfully but also learns to do so with fewer steps. This pattern holds true in the first two scenarios. Interestingly, while the distinction between Lenient-MAIBL and Lenient-MADRL in terms of PMax in scenarios 1 and 2 is negligible, it becomes distinct in light of average efficiency. That is, Lenient-MADRL is more efficient than Lenient-MAIBL in scenario 1. This is not the case in scenario 2, however, where Lenient-MAIBL is the second most efficient model.
Figure 2. Average proportion of maximization (PMax) over time for different agents
In scenario 3, Hysteretic-MAIBL and Lenient-MAIBL clearly demonstrate an increasing trend in average discounted reward over time, followed by Greedy-MAIBL. In scenario 4, by contrast, the learning curve of Hysteretic-MAIBL is the most efficient, yet it produces a large variance in the results.
### Models' Coordination Ability
Fig. 4 further demonstrates the models' coordination ability: the average proportion of steps in which the agents successfully move together, for each model across 1000 episodes. In agreement with our previous observations, Greedy-MAIBL agents have the highest coordination rates of all the agents in scenarios 1 and 2. The results also show that it is easier for the agents to coordinate in scenario 2 compared to scenario 1. That is, in scenario 1, we observe that the coordination performance of Hysteretic-MADRL and Lenient-MAIBL agents only picks up after 600 episodes. In contrast, in scenario 2, it took them only about 200 episodes to show improvement in coordination.
In scenarios 3 and 4, Hysteretic-MAIBL agents coordinate best to accomplish the task. Furthermore, all the models perform extremely poorly in scenario 4, wherein the optimal option is stochastic, yet the probability of obtaining its high value is very low. The only model that is able to handle such a challenging condition is Hysteretic-MAIBL.
### Models' Functional Delay
We additionally examine the coordination ability of the models from the perspective of functional delay. In particular, this measure captures how long one agent must wait for the other agent to come and pick up the object. Simply put, the delay experienced by an agent is incurred by their teammate.
Figure 3. Average discounted reward (Efficiency)
Figure 4. Average coordination rate (PCoordinate)
Figure 5. Average difference in pickup steps (DStep)
Fig. 5 shows the average difference in the number of steps taken between the two agents to pick up the item (\(DStep\)). Notably, we only calculated this measure in the episodes in which the agents accomplish the task by delivering the item to the optimal zone. A lower value of \(DStep\) translates into better collaboration, as it indicates an efficient use of team members' time (steps) and a sense that their activities are smooth.
The \(DStep\) of the Greedy-, Lenient-, and Hysteretic-MAIBL models shows a decreasing trend after 200 episodes, irrespective of scenario. This trend can be explained by the fact that picking up the item is only a subtask of the CMOTP, and it is not directly influenced by the stochastic rewards of the different scenarios. Moreover, the \(DStep\) of the MADRL models is higher than that of the MAIBL models, indicating that when MADRL agents collaborate to collect the item, the delay in the transport of the item is longer than for the MAIBL models. Additionally, the results suggest that the MADRL agents fail to converge to optimal actions within 1000 episodes. In particular, the Lenient-MADRL agents show the largest disparity in the number of steps between the agents, which can be attributed to the leniency characteristics of the agents that enable them to tolerate miscoordination while exploration is high.
## 6. Conclusions
Many practical real-world applications require coordination in multi-agent systems to accomplish a common goal in the absence of explicit communication. Coordination in MAS becomes particularly complicated in the presence of reward stochasticity and rare events, since miscoordination may arise when independent learners have difficulty differentiating between a teammate's exploratory behavior and the stochasticity of the environment. As a result, current state-of-the-art MADRL models converge to sub-optimal solutions in non-stationary environments, due to these dynamics of coexisting agents and stochastic rewards.
This research proposes and demonstrates a solution to this problem in current MADRL. Our solution is inspired by the human ability to adapt quickly to non-stationary environments, by the benefits of cognitive modeling approaches that have been demonstrated to capture this human behavior, and by the efficiency of RL computational concepts, such as the "temporal difference" adjustments that can be combined with cognitive approaches (Nguyen et al., 2023). Building on such concepts, we proposed three novel models to study cooperation and coordination behavior in MAS in the presence of stochastic rewards. The models, called Greedy-MAIBL, Hysteretic-MAIBL and Lenient-MAIBL, combine the cognitive principles of Instance-Based Learning Theory (Gonzalez et al., 2003; Nguyen et al., 2022) and RL techniques to address coordination of MAS with stochastic rewards. In particular, the Greedy-MAIBL model enhances the natural exploration process of IBL with the decreasing \(\epsilon\)-greedy Boltzmann exploration strategy, motivated by the fact that cooperative multi-agent tasks typically require the agents to explore the environment extensively. Hysteretic-MAIBL and Lenient-MAIBL are methods that integrate the optimistic learning and leniency ideas from RL into the Greedy-MAIBL model.
We demonstrate the merits of combining cognitive IBL and RL approaches in fully-cooperative multi-agent problems that exhibit the challenging characteristics of stochastic rewards. In particular, a simulation experiment demonstrates the different sources of stochasticity that challenge MADRL models and the benefits of using MAIBL models in a Coordinated Multi-agent Object Transportation Problem. The results demonstrate these benefits on metrics including efficiency and coordination.
Our findings reveal that our proposed approaches, which are a combination of cognitive IBL and RL concepts, outperform the three state-of-the-art Deep Reinforcement Learning (DRL) algorithms in all the scenarios of stochastic rewards. These results can be attributed to the benefits of leveraging the cognitive memory retrieval process of IBL models when applied to multi-agent problems. Indeed, the results emphasize the importance of how MAIBL models characterize cognitive frequency and recency information in the presence of stochastic rewards in MAS. Although Lenient MADRL models have been advanced by incorporating frequency information to determine how lenient an agent is supposed to be regarding others' actions, our experimental results show that they do not characterize frequency as effectively as MAIBL models do. In MAIBL models, such frequency and recency characteristics are well captured and represented due to the declarative knowledge offered by a well-known cognitive architecture, ACT-R (Anderson and Lebiere, 2014), which derives from well-validated human memory retrieval processes. Additionally, it is clear that the MAIBL models also inherit the advantages of RL concepts, namely the decreasing \(\epsilon\)-greedy exploration strategy and the optimistic learning of the hysteretic mechanism. Thus, it is this combination of the cognitive concepts of IBL models and the computational advantages of Deep RL mechanisms that makes the MAIBL models advantageous over MADRL in stochastic situations.
These results can inform the selection of models that are more appropriate in specific stochastic settings and help identify the characteristics of a task given an unknown reward scheme. More concretely, the results suggest that the simple Greedy-MAIBL model, the one with the fewest parameters, is able to surpass the other, more sophisticated models in scenarios where the highest expected reward is associated with the _deterministic_ option, regardless of the probability that the stochastic alternative returns its high value. Our findings also indicate that Hysteretic- and Lenient-based models are sensitive to the choice of parameters and require a longer process to be able to accomplish the task. Given that hyper-parameter tuning is one of the challenging and crucial steps in the successful application of Deep RL algorithms, this work demonstrates the great benefit of using a simple Greedy-MAIBL model in settings in which the highest expected reward is associated with the deterministic alternative.
We also learned that when the stochastic option yields the higher expected reward, incorporating the hysteretic mechanism into Greedy-MAIBL is beneficial, such that Hysteretic-MAIBL outperforms the other models in these cases. The advantages of the Hysteretic-MAIBL model are gained from the optimistic learning idea characterized in the hysteretic model and from the frequency and recency biases inherited from the Greedy-MAIBL model. The results suggest that in scenarios where the stochastic alternative yields the higher expected reward, it is important for a model to integrate optimistic learning, frequency, and recency biases to effectively address fully cooperative MAS. Interestingly, the results further demonstrate that not all combinations of the IBL model and RL concepts are advantageous. Specifically, we observed that Lenient-MAIBL is not as effective as expected, suggesting that incorporating both methods of characterizing frequency in Lenient-MAIBL might not be beneficial.
Arguably, one of the main goals of AI is to generate agents that can collaborate with humans and augment people's capabilities. Due to human suboptimality, prior research in collaborative scenarios has shown that agents trained to play well with other AI agents perform much worse when paired with humans (Carroll et al., 2019). By incorporating the cognitive characteristics of humans' decision behavior, we expect that the MAIBL models will enhance human-AI collaboration; that is, they would learn to be more adaptive to human behavior. Therefore, our future research will evaluate the performance of the MAIBL models as teammates collaborating with human participants. We further intend to experiment with heterogeneous teams wherein we team MAIBL models with different types of MADRL models. In future work, we also plan to investigate the robustness of our proposed models in different settings of multi-agent tasks, such as sequentially coordinated delivery with multiple roles and expiration times. In such a problem, there are two roles of agents, and accomplishing the task requires the sequential collaboration of two sub-tasks. Moreover, progress in cognitive science suggests that computational models that accurately represent human behavior would be able to collaborate with humans more successfully than models that are focused on engineering rather than the cognitive aspects of learning (Lake et al., 2017). Given the demonstrated ability of IBL models to learn
quickly and account for human learning in a wide range of tasks (Nguyen and Gonzalez, 2021; Nguyen et al., 2022), our proposed models that are based on the combination of IBL and DRL models are expected to be an effective human partner in cooperative human-machine teaming tasks.
## Reproducibility
The code of the MAIBL models is implemented using the SpeedyIBL library (Nguyen et al., 2022). All the code for MAIBL models, simulation data, and all scripts used for the analyses presented in this manuscript are available at [https://github.com/DDM-Lab/greedy-hysteretic-lenient-maibl](https://github.com/DDM-Lab/greedy-hysteretic-lenient-maibl). The codes for the comparative models are available at [https://github.com/gjp1203/nui_in_madrl](https://github.com/gjp1203/nui_in_madrl).
## Acknowledgments
This research was partly sponsored by the Defense Advanced Research Projects Agency and was accomplished under Grant Number W911NF-20-1-0006 and by AFRL Award FA8650-20-F-6212 subaward number 1990692 to Cleotilde Gonzalez.
|
2310.14764 | Improved K-mer Based Prediction of Protein-Protein Interactions With
Chaos Game Representation, Deep Learning and Reduced Representation Bias | Protein-protein interactions drive many biological processes, including the
detection of phytopathogens by plants' R-Proteins and cell surface receptors.
Many machine learning studies have attempted to predict protein-protein
interactions but performance is highly dependent on training data; models have
been shown to accurately predict interactions when the proteins involved are
included in the training data, but achieve consistently poorer results when
applied to previously unseen proteins. In addition, models that are trained
using proteins that take part in multiple interactions can suffer from
representation bias, where predictions are driven not by learned biological
features but by learning of the structure of the interaction dataset.
We present a method for extracting unique pairs from an interaction dataset,
generating non-redundant paired data for unbiased machine learning. After
applying the method to datasets containing _Arabidopsis thaliana_ and pathogen
effector interactions, we developed a convolutional neural network model capable
of learning and predicting interactions from Chaos Game Representations of
proteins' coding genes. | Ruth Veevers, Dan MacLean | 2023-10-23T10:02:23Z | http://arxiv.org/abs/2310.14764v1 | Improved \(K\)-mer Based Prediction of Protein-Protein Interactions With Chaos Game Representation, Deep Learning and Reduced Representation Bias
###### Abstract
Protein-protein interactions drive many biological processes, including the detection of phytopathogens by plants' R-Proteins and cell surface receptors. Many machine learning studies have attempted to predict protein-protein interactions but performance is highly dependent on training data; models have been shown to accurately predict interactions when the proteins involved are included in the training data, but achieve consistently poorer results when applied to previously unseen proteins. In addition, models that are trained using proteins that take part in multiple interactions can suffer from representation bias, where predictions are driven not by learned biological features but by learning of the structure of the interaction dataset.
We present a method for extracting unique pairs from an interaction dataset, generating non-redundant paired data for unbiased machine learning. After applying the method to datasets containing _Arabidopsis thaliana_ and pathogen effector interactions, we developed a convolutional neural network model capable of learning and predicting interactions from Chaos Game Representations of proteins' coding genes.
## 1 Introduction
Phytopathogens are a major threat to global crop production. The fungal phytopathogen _Magnaporthe oryzae_ that causes cereal blast is responsible for around 30% of rice production loss and has now emerged as a pandemic problem on wheat (Nalley et al., 2016). The oomycete _Phytophthora infestans_ causes losses of around 6 billion USD to potato production annually (Haas et al., 2009). The bacterium _Ralstonia solanacearum_ has a wide host range and can cause losses of over 30% in potato, banana and groundnut (Yuliar, Nion, and Toyota, 2015). The incidence of crop disease is increasing as global climate change and agricultural practice expand the geographical range of pathogens and up the stakes in the evolutionary arms race.
Plant detection of pathogen infection occurs at two levels: at the cell surface, where pathogen molecules (PAMPs) are detected by a family of Receptor Like Kinase (RLK) cell surface receptors to trigger an immune response, and intracellularly, where pathogen effectors can interact with R-Proteins that trigger the immune response. Identifying interacting RLK/PAMP and Effector/R-Protein pairs is a key part of immunity research. On the plant side, RLKs and R-Proteins are well characterised and can be identified bioinformatically. On the pathogen side, PAMPs and effector proteins are the shock troops of infection, manipulating the host at the infection interface to the pathogen's advantage. Identifying and characterising a pathogen's effector content is a critical first step in understanding disease and developing resistance, but until recently effectors were notoriously difficult to characterise from sequence data. In most phyla they have only a few easily determined sequence characteristics (some in fungi are cysteine rich or have a MAX motif, some in oomycetes have the RXLR motif or WY fold) but in many cases no sequence identifiers are known (Franceschetti et al. 2017).
To understand infection processes, to provide genome-level understanding of the functions of this important class of genes and to develop future disease resisting crop varieties and agricultural management strategies there is a critical need to identify PAMP-RLK and effector-R gene complements and immune triggering pairings computationally from genome and protein sequence data.
Kristianingsih and MacLean (2021) recently developed class-leading deep learning based models for effectors that perform classification of effectors across species and phyla with an accuracy greater than 90%. This removed the need for older computational pipelines with in-built _a priori_ assumptions about what might constitute an effector sequence in the absence of sequence features known to group them (Sperschneider et al. 2015). A significant aspect of these models is that training set sizes were on the order of 100-200 input sequences, demonstrating that with appropriately parameterized and optimized architectures, deep learning can be successfully applied despite apparently low sample sizes.
### Prediction of PPIs
The utility of PPIs such as these has led to the development of several tools for the prediction of new ones in order to discover proteins that may interact with a given target protein. Molecular dynamics simulations can model the physics underlying proteins' behaviours to simulate how they may interact. However, these simulations come at a high computational cost, particularly when attempting to model an event as specific and potentially rare as these interactions, making the method unsuitable for large-scale searches.
There is considerably more primary sequence data than structural data known regarding proteins, and so methods for prediction of interactions that require only sequence data have been developed. Following the success of AlphaFold2(Jumper et al. 2021) in predicting protein structure from sequence, there have been attempts to apply the AlphaFold model to other problems. AlphaFold-Multimer(Evans et al. 2021) predicts complexes formed by multiple protein chains, and AlphaPulldown(Yu et al. 2022) builds this into means of searching for interactions in large databases.
Sequence data has been encoded for machine learning as \(k\)-mer counts or as one-hot encodings. Convolutional Neural Networks (CNNs) have previously been applied to the problem. Sequence data has been augmented using PSI-BLAST to derive input matrices for each protein containing probabilistic protein profiles or Position-Specific Scoring Matrices(Hashemifar et al. 2018) (Wang et al. 2019). Approaches including Hashemifar et al. (2018) use shared parameters in twinned(Bromley et al. 1993) architectures, training a single model to extract features from both components of the PPI. Ding and Kihara (2019) also incorporate physicochemical properties like side-chain charge and hydrophobicity.
Some methods of predicting from protein sequence have approached sequences using methods designed for handling language: creating doc2vec(Le and Mikolov 2014) encodings of a set of sequences to use as features(X. Yang et al. 2020), or building a protein language model through the use of transformers(Rao et al. 2021) (Rives et al. 2021). Qiu et al. (2020) incorporates both homology and language vector encoding for per-residue binding prediction.
S. Yang et al. (2019) apply existing PPI prediction methods to interactions between _Arabidopsis thaliana_ and pathogen proteins and find that the methods do not translate well. They supplement their random forest model by explicitly including information about the network of interactions.
### Representation bias
The paired input of PPI data introduces a potential source of bias. Each input sample consists of two proteins, so proteins can appear in multiple pairs. Park and Marcotte (2012) enumerate the ways that proteins can be combined into pairs: both proteins in a test pair might be part of the training set (in-network), or one might be previously unseen by the model, or they could both be unseen (out-of-network). Test sets that consisted of in-network proteins yielded better performance than those featuring out-of-network proteins. Hamp and Rost (2015) identify similar groups and concluded that the choice of test group dramatically affected the prediction results, while also identifying that including a given protein in multiple pairs in the training data lowers the accuracy on unseen test proteins compared to including it in a single training pair. Eid et al. (2021) examine the extent to which this has affected published machine learning studies, and demonstrate that models can achieve 0.99 AUC scores on existing benchmark datasets even when the
protein sequences have been masked such that biological features cannot contribute to prediction. They propose a comprehensive framework for identifying bias in PPI datasets.
As well as representation bias driven learning, the application of machine learning to biological sequences also requires consideration of the issue of data leakage from homology(Jones 2019). Pairs of sequences may correspond to related structures with very similar forms and functions despite differences in sequence(Rost 1999). The inclusion of homologous biomolecules appearing in both the testing and training data can result in overestimation of a model's accuracy and poor generalization to other data. While sequence identity cannot fully identify all homologous proteins, a common approach used by recent studies in the prediction of PPIs(X. Yang et al. 2020), binding residues(Qiu et al. 2020), tertiary structures(Singh et al. 2019), and other biomolecular properties is to filter by sequence identity using software like CD-Hit(Li and Godzik 2006).
However, machine learning models are widely understood to require abundant training data. The naive method of sequence filtering for PPI prediction - clustering to a given sequence identity threshold and then discarding interactions until there are no pairs of interactions that use a protein from the same cluster - potentially discards more information than necessary to a degree that depends on the random seed.
### Chaos Game Representation
The Chaos Game Representation (CGR)(Jeffrey 1990) of a nucleotide sequence is a representation of the composition of a sequence as a 2-dimensional image. The "Chaos Game" is a process in which a sequence drawn from an alphabet of \(n\) characters can be represented as points in an \(n\)-sided polygon. Certain values of \(n\) produce fractal patterns given enough points, but when \(n=4\), the square produced is filled uniformly. To encode a genetic sequence, the four corners of a CGR represent the four nucleotides and a point is added for each residue in the sequence. If the square is divided into quadrants, the number of points in each quadrant will be the number of times the nucleotide representing the nearest corner occurs in the sequence. Further division of the quadrants creates sub-quadrants, containing points corresponding to the occurrences of two-character-long subsequences (or 2-mers). This division and subdivision can be repeated to any value of \(k\). By setting a value for \(k\), CGRs can be represented as a 2-dimensional data structure in which each cell contains either booleans (to show presence or absence of a corresponding \(k\)-mer) or numbers (to show the number of times the \(k\)-mer occurs).
Figure 1: An example CGR attractor, built from the sequence of the Arabidopsis _CUL4_ gene (TAIR ID: AT5G46210.1) at a 4-mer resolution. This CGR incorporates frequency; the darkness of each pixel corresponds to a given 4-mer’s frequency in the sequence; each cell contains the number of times the 4-mer appears, normalised within each CGR image.
An example with frequency information included at 4-mer resolution is shown in Figure 1, which shows some visually identifiable indications of the composition of the sequence. The dark line along the diagonal shows an abundance of 4-mers that consist solely of "A" and "G" residues, whereas the lighter region along the bottom edge indicates that 4-mers of "C"s and "G"s in combination are more scarce.
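A minimal sketch of how a frequency CGR such as Figure 1 can be computed is given below, assuming a common corner assignment of A, C, G and T (corner assignments vary between implementations, so this layout is an assumption rather than the exact one used for our figures).

```python
import numpy as np

# Assumed corner assignment: each nucleotide maps to one corner of the square.
CORNER_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

def cgr_attractor(sequence, k=4):
    """Count each k-mer of `sequence` into its CGR cell, then normalise
    within the image as in Figure 1."""
    side = 2 ** k
    grid = np.zeros((side, side))
    seq = sequence.upper()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if any(base not in CORNER_BITS for base in kmer):
            continue  # skip k-mers containing ambiguity codes such as N
        x = y = 0
        for base in kmer:  # the first base selects the quadrant, later bases refine it
            bx, by = CORNER_BITS[base]
            x = (x << 1) | bx
            y = (y << 1) | by
        grid[y, x] += 1
    return grid / grid.max() if grid.max() > 0 else grid
```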
A variety of methods have been proposed to encode sequences of amino acids as CGRs; Dick and Green (2020) implement four of these approaches for a taxonomic classification task: reverse mapping to standardised nucleotide sequences(Deschavanne and Tuffery, 2008); two versions of representation via a 20-sided polygon with a point for each potential amino acid, with appearance of 9-mers represented either as a greyscale intensity or black-and-white binary; and a 20-flake frequency matrix chaos game representation(Lochel et al., 2020). These models all performed similarly, and only modestly. Jia et al. (2019) use chaos game representation to build inputs for a PPI classifier, converting their protein sequences into pseudo-nucleic sequences by replacing each amino acid with a single representative codon following Deschavanne and Tuffery (2008) and counting frequencies of points within the regions of the CGR attractors to pass as input into an ensemble of random forest classifiers.
The 2-dimensional arrangement of \(k\)-mers representing a sequence is reminiscent of single-channel images, and we have extended CGRs such that each array element contains an ordered 2-tuple of \(k\)-mer counts from 2 distinct sequences, analogous to a two-channel image. There has been extensive research into machine learning for the purposes of computer vision, resulting in powerful models capable of image processing tasks such as semantic segmentation and object recognition. When rendered as an image, a CGR displays recognisable visual patterns, suggesting that it would be amenable to use in deep-learning models effective at recognising patterns in images, such as those based on Convolutional Neural Networks (CNNs). In this work, we develop and train CNN architectures for the prediction of _Arabidopsis thaliana_ PPIs from two-channel CGR tensors that represent the pair of gene sequences that code for the proteins. We extend this further using transfer learning, focusing specifically on interactions between _Arabidopsis thaliana_ proteins and effectors. We also explore the ability of our models to identify features specific to taxonomic ranks, testing their performance by examining two metagenomics questions.
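Pairing two such attractors into the two-channel, image-like input described above amounts to stacking them along a trailing channel axis; a sketch, reusing the `cgr_attractor` function from the previous listing:

```python
import numpy as np

def paired_cgr(cds_a, cds_b, k=4):
    """Stack two genes' CGR attractors into a (2^k, 2^k, 2) tensor,
    analogous to a two-channel image."""
    return np.stack([cgr_attractor(cds_a, k), cgr_attractor(cds_b, k)], axis=-1)
```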
### Augmentations and Synthetic data
While the image-like format of the CGR attractors proved well-suited to CNN models, the field of image processing using neural networks has developed methods beyond architectural considerations to improve models' performance. On-the-fly augmentations, from simple transformations such as rotation and scaling to more complex blending of images(Xu et al., 2023), have assisted in the training of object detection and semantic segmentation neural networks. By adjusting the training data during supervised learning, the model is exposed to more variety in its training data with the intent that it is more readily able to generalise to unseen test data.
Additionally, various studies have demonstrated methods of generating synthetic images. Supervised machine learning for image processing tasks like object detection require ground truth data from which to learn, but the per-pixel labelling of the contents of an image can be an expensive or time-consuming bottleneck, particularly if the labelling task calls for expert knowledge. Synthetic images present an opportunity to quickly produce both an image and its ground truth labels and gives researchers control over the contents of their training data which has been used to augment real training data in some domains(Man and Chahl, 2022).
Cut-and-paste methods(Dwibedi, Misra, and Hebert, 2017) take small numbers of manually labelled image components and position them on a background to vary the composition of a collection of images. While computationally simple, this has recently been successfully applied to plant phenotyping (Albert et al., 2021). Even more control over an image's contents can be gained when the image is rendered using 3-dimensional modelling, though the process of creating and rendering the objects can still require significant time and expense. Researchers have used video game engines Unreal Engine(Epic Games, 2023) and Grand Theft Auto V(Rockstar Games, 2013) as well as bespoke engines to render environments(Saleh et al., 2018). Ward and Moghadam (2020) create 3D plants to render as top-down images along with segmentation masks for each leaf. Generative Adversarial Nets (GANs)(Goodfellow et al., 2014) create images using a pair of specialised and opposed neural networks, where one attempts to generate images that can pass as real and another attempts to identify whether an image is real. Each model improves as it trains which makes the other model's task harder, driving improvement further. Shin et al. (2018) leverage this process not just to generate more training data but to anonymise medical photographs. The GAN model can be extended to Conditional Generative Adversarial Nets (cGANS)(Mirza and Osindero, 2014) to generate images belonging to different classes, such as photographs of diseased and non-diseased plants(Abbas et al., 2021).
Training data for PPI models must be experimentally determined, which requires time, expense and expert knowledge. As such, exploiting existing data to augment training offers a potential advantage. However, despite their image-like properties, standard methods of image augmentation such as rotation, cropping and scaling would not work for CGR representations, as they would change the underlying sequences such that the labels could no longer reliably be used as ground truth. We explore two methods of augmenting or generating synthetic CGR data.
## 2 Results
### Graph-Reduction Algorithm for Non-redundant Datasets (GRAND) consistently retains more data than random discarding of duplicated samples
The issue of redundancy in biological sequence data caused by high sequence similarity is exacerbated in the case of PPI predictions, where each sample is composed of two sequences. The representation bias-driven learning that this can introduce may result in models that appear to perform well but which cannot generalise to new data. We enforce strict non-redundancy in the training and test datasets to remove this potential source of bias, to seek a model that can predict interactions from unseen, out-of-network sequences. However, a secondary priority is that since labelled, experimentally-derived interaction data is not trivial to obtain, we wish to retain as many samples as possible.
Figure 2 (a) shows the PPIs from ArabidopsisPPI represented as a network. Each node represents a cluster of one or more similar genes, from the output of CD-Hit. Each edge represents a known interaction between at least one pair of the proteins coded for by the genes contained within each cluster. A node's degree is defined as the number of other nodes to which it is connected by an edge. Many of the genes' products are included in multiple interactions; some nodes in the network have a degree of over one hundred. GRAND iteratively prunes this network until each remaining node has one single neighbour.
While any node has a degree greater than one, the following is repeated. Each node with degree one is taken in turn, and its sole neighbour is found. If the neighbour has other edges, they are removed. When no nodes with degree one remain, for each edge an "edge-sum" is calculated by summing together the degrees of the two nodes that it connects. The edge with the lowest edge-sum is selected for inclusion, and all other edges between the connected nodes and their other neighbours are dropped. Any nodes that are no longer connected by any edges are removed, and considered for inclusion as part of a negative pair. The dataset is then constructed from this network by choosing randomly one of the known interactions that each edge represents.
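A minimal sketch of this pruning loop using the networkx package is shown below; the function and variable names are illustrative, and edge cases (such as setting aside isolated nodes as candidates for negative pairs) are handled only loosely.

```python
import networkx as nx

def grand(interaction_graph):
    """Prune a cluster-level interaction graph until every remaining node
    shares an edge with exactly one other node."""
    G = interaction_graph.copy()
    G.remove_edges_from(nx.selfloop_edges(G))
    while any(deg > 1 for _, deg in G.degree()):
        # Pendant pass: a degree-one node's sole neighbour keeps only the
        # edge between them; the neighbour's other edges are removed.
        changed = True
        while changed:
            changed = False
            for node in [n for n, d in G.degree() if d == 1]:
                if G.degree(node) != 1:
                    continue  # degree may have changed earlier in this pass
                nbr = next(iter(G.neighbors(node)))
                extra = [(nbr, m) for m in list(G.neighbors(nbr)) if m != node]
                if extra:
                    G.remove_edges_from(extra)
                    changed = True
        G.remove_nodes_from(list(nx.isolates(G)))
        # Edge-sum pass: commit the edge whose endpoint degrees sum lowest,
        # dropping every other edge incident to its two endpoints.
        contested = [(u, v) for u, v in G.edges()
                     if G.degree(u) > 1 or G.degree(v) > 1]
        if not contested:
            break
        u, v = min(contested, key=lambda e: G.degree(e[0]) + G.degree(e[1]))
        G.remove_edges_from([(u, m) for m in list(G.neighbors(u)) if m != v])
        G.remove_edges_from([(v, m) for m in list(G.neighbors(v)) if m != u])
        # Nodes left without edges drop out of the network (and could be
        # considered for inclusion in negative pairs).
        G.remove_nodes_from(list(nx.isolates(G)))
    return G
```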
Figure 2 (b) shows the new network, wherein each node shares an edge with exactly one other node. This new network contains 1,106 pairs, whereas the network created by random discarding contained 867. This is a 27.6% increase in available non-redundant data.
The improvement holds for the other datasets to which we have applied GRAND. On the EffectorK host/pathogen effector interaction dataset, which contains 1,220 interactions between 769 CD-Hit clusters, 1,000 applications of the naive approach returns a mean of 131.62 interactions (standard deviation 5.26), where none of the attempts return more than 148 interactions. GRAND returns 183 interacting pairs, for a 23.6% increase over the maximum naive result and 39.0% increase over the mean. The larger HPIDB dataset contains 36,139 interactions between 13,099 CD-Hit clusters. 100 applications of the naive approach returned between 1,796 and 2,010 pairs, with a mean of 1,893.23 and a standard deviation of 34.29. GRAND finds 3,282 non-redundant pairs, which increases the non-redundant data by 63.3% over the maximum naive result and 73.4% over the mean.
GRAND offers a fast, standard process for creating non-redundant datasets that consistently contain more data than is achieved by a random approach, offering increases of between 23.6% and 73.4% more paired samples in the datasets to which we applied it.
### CNN models can learn PPI prediction from non-redundant datasets
Knowing that out-of-network PPI prediction is a difficult task, we anticipated that finding the right model architecture to learn from our ArabidopsisPPI dataset would be important. We conducted a wide random search of the model hyperparameter space in which we varied which layers we used, and also parameters controlling the tightness of fit including dropout rate, regularisation parameters and CGR resolution.
We selected the structure for our CNN architecture using a random grid search of a wide parameter space. The structures followed the same pattern: the channels were split and passed through a convolutional stage consisting of zero or more "convolutional blocks" which each contained one or more convolutional layers. As in Hashemifar et al. (2018), we used a single model for this convolutional stage allowing a single set of model weights to be trained on both interacting partners' sequences. As our CGR attractors are "image-like", we examined well-known and well-performing models from the field of image processing as they were implemented in the Keras Python package, such as ResNet(He et al., 2015), EfficientNet(Tan and Le, 2019) and VGG19(Simonyan and Zisserman, 2014). Such models proved unsuitable for immediate use as the dimensions of our CGRs were too small for the models' depth and convolutional size, but we took inspiration from the ResNet(He et al., 2015) architecture by incorporating potential "skip" layers within these convolutional blocks when they contained more than one layer; this structure has performed well for computer vision
tasks, addressing the "vanishing gradient" issue. Following this convolutional stage the outputs for each input channel are pooled and concatenated to create a single vector for each sample. A dense stage follows, wherein the vector is passed through zero or more dense layers. A final classification layer with sigmoid activation provides the model's output, and binary cross-entropy is used as the model's loss function. We searched the space by varying the numbers and depths of these layers, convolutional filter sizes, dropout rates and regularisation parameters, as well as the choice of \(k\) which dictates the size and resolution of the initial input. For each value of \(k\), the same 1,000 models were tested 5 times by training on 60% of the dataset, using a 20% split for validation purposes. The same 20% was withheld each time for final testing after the models trained and the final models were selected based on validation performance. Each model was trained for a maximum of 500 epochs, with early stopping triggered if the validation loss stopped improving, and a model checkpoint kept track of the version that achieved the best validation results.
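The listing below is a minimal Keras sketch of this pattern (a shared convolutional stage with a ResNet-style skip connection, pooling and concatenation of the two partners' encodings, a dense stage, and a sigmoid classification layer trained with binary cross-entropy); the layer sizes, dropout rate and patience value are placeholders, not the values selected by the search.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_twinned_cnn(k=4, filters=32, dense_units=64, dropout=0.3):
    side = 2 ** k
    # Shared convolutional stage: one set of weights encodes both partners.
    cgr_in = layers.Input(shape=(side, side, 1))
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(cgr_in)
    skip = x
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Add()([x, skip])  # ResNet-style skip within the block
    x = layers.GlobalAveragePooling2D()(x)
    encoder = Model(cgr_in, x, name="shared_encoder")

    # Split the two-channel CGR pair and encode each channel separately.
    pair_in = layers.Input(shape=(side, side, 2))
    a = layers.Lambda(lambda t: t[..., :1])(pair_in)
    b = layers.Lambda(lambda t: t[..., 1:])(pair_in)
    merged = layers.Concatenate()([encoder(a), encoder(b)])

    # Dense stage followed by a sigmoid classification layer.
    h = layers.Dense(dense_units, activation="relu")(merged)
    h = layers.Dropout(dropout)(h)
    out = layers.Dense(1, activation="sigmoid")(h)

    model = Model(pair_in, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Early stopping on validation loss, keeping the best weights seen.
CALLBACKS = [tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=20, restore_best_weights=True)]
```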
The validation accuracy varied from around 50%, at which point the model is indistinguishable from random, up to 70.2%. The number of models with poor validation performance may indicate the difficulty of generalisable learning in this task. We examined the relationships between various parameters and their results, such as the number of convolutional blocks as shown in Figure 3. We found that no models that skipped the convolutional stage were able to achieve above 65%, while the best results were obtained by models that included two or three convolutional blocks.
We also studied the effect of \(k\) on accuracy in our ResNet-inspired models. The results, as shown in Figure 4, show that the smaller, lower-resolution 4-mers yield better validation accuracy than higher values of \(k\).
The top 25 models from the 4,000 models considered in the architecture search were carried forward to testing with the remaining 20% of the data. While these models achieved mean validation accuracies of up to 70.2% (see Figure 4), the highest mean accuracy achieved on the holdout data was 65.0%; through the use of an ensemble of different models compiled with a meta-learner, this was increased to 66.7%.
### Alternatives to deep learning do not achieve comparable accuracy
To explore whether our deep learning approach was necessary, we applied two non-Neural Network machine learning methods, support vector classifiers and random forest classifiers, to the same ArabidopsisPPI dataset formatted as vectors of \(k\)-mer counts. In each case, we performed a grid search using training and validation splits to find the hyperparameters that achieved the highest validation accuracy, then applied those trained models to the 20% of samples that were held out for testing. Again, larger values of \(k\) yielded worse results; validation accuracies achieved using 5-mer, 6-mer and 7-mer counts appeared indistinguishable from random chance, ranging from 42.5% to 54.3%. The best results were achieved by the support vector classifier using a radial basis function kernel on vectors of 3-mers,
Figure 2: (a) The network of Arabidopsis thaliana PPIs from which the positive samples were selected. Nodes are clusters of similar genes, and edges represent a known interaction between one or more proteins. (b) The network after filtering with GRAND.
which reached a validation accuracy of 58.4%. However, this failed to generalise to the hold-out data as the test accuracy was 52.3%.
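A minimal scikit-learn sketch of these baselines is shown below; `X_train` would hold one \(k\)-mer count vector per pair (the two genes' counts concatenated) and `y_train` the interaction labels, and the grids shown are illustrative rather than the exact values we searched.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

svc_search = GridSearchCV(
    SVC(),
    param_grid={"kernel": ["rbf", "linear"],
                "C": [0.1, 1, 10],
                "gamma": ["scale", "auto"]},
    cv=5,
)
rf_search = GridSearchCV(
    RandomForestClassifier(),
    param_grid={"n_estimators": [100, 500], "max_depth": [None, 10, 30]},
    cv=5,
)
# svc_search.fit(X_train, y_train)
# test_accuracy = svc_search.best_estimator_.score(X_test, y_test)
```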
### Transfer learning allowed modest learning from a related but very small dataset
We applied our methods to our EffectorPPI dataset in order to focus specifically on the interactions between pathogen effectors and _Arabidopsis thaliana_ proteins.
To begin, we repeated the hyperparameter search using training and validation subsets taken from our EffectorPPI dataset. Some parameter sets appeared to perform well, with one model architecture achieving a mean validation accuracy of 73.9%. However, when the best of these models according to validation accuracy were applied to the holdout test data, they failed to achieve any test accuracy higher than 55.4%, close to the expected performance of a random classifier on the 50% positive, 50% negative data.
As this contains approximately one sixth of the number of samples in ArabidopsisPPI, we explored tactics for using what the models had previously learned from the comparatively abundant ArabidopsisPPI dataset. The first method used the results of the ArabidopsisPPI architecture search, taking the hyperparameter sets that achieved the ten highest validation accuracy scores when trained and validated using ArabidopsisPPI. We then initialised these models with random weights and trained, validated and tested them on the EffectorPPI data. The best mean holdout accuracy obtained by one of these randomly initialised models was 0.576.
In our second approach, we used the same ten architectures found using the ArabidopsisPPI parameter search, but instead of randomly initialising new instances of these models, we used the five model checkpoints that were produced as each model was repeatedly trained with ArabidopsisPPI. These were used as the starting point for training and validating on subsets of EffectorPPI, and the highest achieved mean holdout accuracy was 60.0%.
Figure 3: Mean validation accuracy from a ResNet random grid architecture search using a dataset of 4-mers, with increasing convolutional blocks
The final transfer learning approach we took used the same ten parameter sets, and again started with the model checkpoints trained on ArabidopsisPPI. We froze the weights in all but the final classification layer of each model, before retraining using the same subsets of EffectorPPI. The models trained in this manner achieved up to 0.597 test accuracy.
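A minimal Keras sketch of this weight-freezing approach follows; the checkpoint path and data variables are hypothetical placeholders.

```python
import tensorflow as tf

# Hypothetical checkpoint saved during the ArabidopsisPPI training runs.
model = tf.keras.models.load_model("checkpoints/arabidopsis_ppi_best.h5")

# Freeze everything except the final classification layer, then retrain
# on the EffectorPPI splits. Recompiling is required for the change in
# trainable weights to take effect.
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(effector_train_x, effector_train_y,
#           validation_data=(effector_val_x, effector_val_y))
```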
Across the top ten models by validation, the second approach (training on ArabidopsisPPI, then retraining all layers on EffectorPPI) yielded generally higher holdout results, outperforming the models trained from random initialisation for 6 of the 10 architectures, and outperforming the models where only the classification layer was retrained in 7 of the 10 cases. However, the performance ranking varied between architectures, and the single model with the most consistent good performance was trained using the weight-freezing approach; each of the holdout tests achieved an accuracy between 0.581 and 0.635.
### The architecture extends to other data and achieves state-of-the-art results on difficult benchmark cases
We wanted to compare the models we had found with models in the existing literature by using a benchmark PPI dataset. Wei et al. (2017) provides a set of positive PPI pairs and three alternative sets of negative PPI pairs generated using different strategies. We used the positive data along with the Negatome negative data for one test, and the positive data with the RecombinePairs negative data for another.
\begin{table}
\begin{tabular}{l r r} \hline \hline Model & Negatome & RecombinePairs \\ \hline Best published F1 score & 0.87 & 0.68 \\ Our F1 score (Ensemble) & 0.90 & 0.79 \\ \hline \hline \end{tabular}
\end{table}
Table 1: F1 scores on the benchmark datasets of Wei et al. (2017); our ensemble outperforms the best previously published results.
Figure 4: Mean validation accuracy from a random grid search of ResNet architectures across a range of k lengths
We found the best ensemble of 10 high-performing models according to the ArabidopsisPPI validation dataset and applied it to each dataset. Table 1 shows the F1 scores achieved by this ensemble on each set. Distinguishing between the positive data and Negatome negative data appears to be comparatively easy; the F1 scores published in Wei et al. (2017) reached 0.87, and our model performed similarly at 0.90. The RecombinePairs negative data provides a much more difficult challenge as the positive and negative data contain very similar samples. Some of the methods tested in Wei et al. (2017) obtained close to random performance, while they were able to achieve an F1 score of 0.68 with their highest-performing model. Our ensemble model achieved an F1 score of 0.79, which has not been surpassed as far as we could find among other works citing Wei et al. (2017). This suggests our model offers an improvement over previous methods of learning PPI prediction, particularly where the data is challenging to discriminate between.
### Strategies for augmentations and synthetic data remain unsolved
We have exploited the image-like arrangement of \(k\)-mers in CGR attractors to develop model architectures inspired by research in the computer vision field. Our ResNet-inspired twinned architecture models were able to learn from non-redundant data and make predictions on out-of-network PPIs with state-of-the-art accuracy. However, the non-redundant datasets produced by GRAND are still comparatively small by the standards of deep learning, where benchmark sets like ImageNet(Deng et al. 2009) contain millions of training images. Typical computer vision methods for augmenting training data manipulate images in each training cycle, to increase the variety of training data seen by a model. These methods are not suited to CGR attractors, as augmentations such as rotating, cropping, and recolouring CGR images would change the \(k\)-mer counts they represent, destroying the original biological sequence data. We explored two techniques to generate new data from samples in the ArabidopsisPPI training data. In the first method, synonymous substitution, we replace codons in the coding region of a gene sequence with other codons such that the produced protein still has the same sequence of amino acids. The second method uses generative deep learning to create CGR-like images.
#### 2.6.1 Synonymous substitution produces CGRs with extremely likely labels, but does not improve model performance
We generated 10 synthetic samples for each real interaction pair in our ArabidopsisPPI 4-mers dataset using the synonymous substitution method, and trained and validated the same parameter sets with the synthetic training data included. As Figure 5 shows, the highest validation accuracies were achieved without the use of synthetic data. Fitting a Lowess curve elucidates this further: when the model originally performed essentially randomly, the synthetic data was in some cases able to raise the validation accuracy, but as the original performance improves the synthetic data
Figure 5: Scatter plot comparing validation accuracy achieved by each parameter set when trained on data with and without synthetic data generated by synonymous substitutions included in the training data.
more often results in worse performance. Only 23.2% of models were more accurate with the addition of synthetic data, and the best validation accuracy achieved among these was only 0.61.
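A minimal sketch of the synonymous substitution step, using Biopython's standard codon table, is shown below; here every codon is replaced with a randomly chosen synonym, whereas the fraction of codons substituted per variant is a free choice.

```python
import random
from Bio.Data.CodonTable import standard_dna_table

# Group codons by the amino acid they encode (stop codons are excluded
# from the forward table and are therefore left unchanged).
SYNONYMS = {}
for codon, aa in standard_dna_table.forward_table.items():
    SYNONYMS.setdefault(aa, []).append(codon)

def synonymous_variant(cds, rng=random):
    """Replace each codon with a randomly chosen synonymous codon,
    preserving the encoded amino acid sequence."""
    out = []
    for i in range(0, len(cds) - len(cds) % 3, 3):
        codon = cds[i:i + 3].upper()
        aa = standard_dna_table.forward_table.get(codon)
        out.append(rng.choice(SYNONYMS[aa]) if aa else codon)
    return "".join(out)

# Ten synthetic variants per gene, as in the experiment above.
# variants = [synonymous_variant(cds) for _ in range(10)]
```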
#### 2.6.2 Conditional GAN models can produce CGRs that appear real, but reduce models' accuracy
Figure 6 shows real and synthetic CGR attractors. The real images (Figure 6(a) and Figure 6(b)) represent the two proteins interacting in a real PPI. The synthetic images (Figure 6(c) and Figure 6(d)) were generated using a cGAN model as examples of the true interacting class.
While the generated images appear similar to the real CGR attractors, they result in worse performance. We took the parameter sets that yielded the highest validation accuracy on the real ArabidopsisPPI dataset and retrained them with 2,000 true and 2,000 false synthetic samples included. Figure 7 plots the holdout accuracy, and shows that only three of the models performed marginally better with the synthetic data included.
### CGRs contain \(k\)-mer count features that indicate taxonomic ranks, introducing potential bias
We continued the experiments into our approach to CGR-based PPI prediction by attempting to repeat the GRAND and architecture search processes using data taken from the HPIDB database(Ammari et al., 2016). At 3,282 non-redundant pairs, this set is larger than ArabidopsisPPI (1,106 pairs) and EffectorPPI (183 pairs). It also includes more variety among hosts; while EffectorPPI is limited to interactions between Arabidopsis thaliana and pathogens, HPIDB includes 66 host species.
Initial results were promising, with validation accuracy reaching 78.45%. However, we questioned whether we could be confident that the model was learning patterns relating to the actual interactions, or whether it was biased by species information. The positive samples are experimentally validated interactions between hosts and pathogens from various species, whereas the negative data is randomly drawn from all hosts and all species. As our chosen CGR format represents the coding sequence that codes for a protein rather than the amino acid sequence, our data includes features that have been used in the field of metagenomics to characterise and distinguish genomes, such as GC-content, codon usage and di-, tri- and tetranucleotide frequencies. While it is possible that the model may learn subtleties of interaction
Figure 6: GAN outputs
that preclude, for example, an Arabidopsis R-protein interacting with an influenza effector, it may also have simply learned to reject the interaction because the host does not have the sequence characteristics of an animal.
To explore this hypothesis we masked our dataset, replacing each CGR with a one-hot encoding of the species from which the gene originated. We trained and tested a k-nearest neighbour classifier on the masked data and from the species alone, achieved a holdout test accuracy of 87.67%. This indicates that bias is present in the species distributions of the data, from which even a simple model can appear to learn to accurately predict interactions.
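A minimal sketch of this masking experiment is shown below; `species_pairs` (an array of host and pathogen species names per sample), `y` and the index arrays are hypothetical placeholders.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import OneHotEncoder

# Mask the sequences: each sample is reduced to its (host species,
# pathogen species) pair, one-hot encoded.
encoder = OneHotEncoder(handle_unknown="ignore")
X = encoder.fit_transform(species_pairs)

knn = KNeighborsClassifier()
knn.fit(X[train_idx], y[train_idx])
print(knn.score(X[test_idx], y[test_idx]))  # high accuracy from species alone
```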
In order to learn from this bias, our models would have to be able to know from which species a gene represented in a CGR had originated. We therefore wanted to see if CGR attractors from different species' genomes were noticeably different. We compared the sequences of two different genomes, building CGR attractors for all CDS sequences in the _Magnaporthe oryzae_ and _Oryza sativa_ genomes as shown in Figure 8. We compiled all CGR attractors into a single image for each of the Magnaporthe (Figure 8b) and rice (Figure 8a) genomes. There are some visible differences for individual cells such as strong representation of individual 4-mers such as "CTCC" in Oryza and "CGAG" in Magnaporthe. There also appears to be a more general pattern, in that the Oryza CGR is strongest at certain positions along the A-G diagonal and C-G horizontal lines, and Magnaporthe is generally darker at most other points below the A-G diagonal. Figure 8c shows this pattern via a heatmap of the differences between the two plots where the scaled pathogen combined CGR was subtracted from the scaled host combined CGR.
### CGR CNNs can learn to perform taxonomic classification
The HPIDB dataset bias and the differing features between genomes cast doubt on our intended goal of predicting the interactions in the HPIDB dataset. However, the ability to distinguish between taxonomic classes also has potential applications. We wanted to investigate whether CNNs trained with CGR attractors would work well for taxonomic classification tasks, as once built and trained, our resulting models could provide a relatively lightweight alternative to existing taxonomic classification models. We performed two experiments to this end. In the first, we classified
Figure 7: Scatter plot comparing holdout results from the best-performing models according to validation accuracy with and without the inclusion of GAN-generated synthetic images in the training data. A line marks where results would be equal.
gene CGRs according to their superkingdom to see how well the model could give a broad overview of an unknown gene's origin. In the second, we classified read CGRs according to whether they came from a specific host or pathogen organism, as this may be helpful as an early step in assembling sequences taken from sequencing an infected plant.
The best-performing model architectures found during the PPI prediction model search were not directly applicable to the classification problems because the tasks' inputs and outputs differ. We performed new parameter searches for each taxonomic classification task, in which each model takes a single CGR as input and outputs predictions of class membership.
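A minimal example of the kind of model covered by these searches is sketched below: a small CNN that takes one FCGR as input and outputs three superkingdom probabilities. The layer sizes and optimizer are illustrative assumptions, not the architecture found by the search.

```python
import tensorflow as tf

k = 4
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2 ** k, 2 ** k, 1)),    # one FCGR per gene
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),      # bacteria / eukaryota / viruses
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```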
Figure 9 shows the holdout test performance of a CNN trained on CGRs taken from our HPIDB dataset and labelled according to superkingdom. The mean class accuracy was 86.17%. The 757 bacteria samples in the test data were generally predicted well, with 94% correctly identified. There was some confusion between eukaryota and viruses; the model achieved class accuracies of 88% for the more abundant eukaryotic test samples and 77% for the viruses.
Read classifier results are shown in Figure 10. While _Oryza sativa_ reads were classified correctly in 94.3% of cases, _Magnaporthe oryzae_ reads were predicted correctly for only 60.4% of reads tested, giving a mean class accuracy of 77.37%. While the model did learn to discriminate between the two classes to a degree, the use of CGRs and a CNN did not seem to achieve more than is possible without them. We trained a linear logistic regression model on the same data, extracting the \(k\)-mer counts from the CGR attractors into feature vectors. It performed similarly, with a mean class accuracy of 76.90%, and was likewise better at classifying the Oryza reads.
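The linear baseline is straightforward to reproduce in outline. The sketch below shows the pipeline (k-mer count vectors fed into logistic regression) on random stand-in reads and labels, so its printed accuracy sits near chance; only the structure of the pipeline is meant to carry over.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

K = 4
KMER_INDEX = {"".join(p): j for j, p in enumerate(product("ACGT", repeat=K))}

def kmer_counts(seq: str) -> np.ndarray:
    """Flatten a read into its 256-dimensional k-mer count vector (k = 4)."""
    v = np.zeros(len(KMER_INDEX))
    for i in range(len(seq) - K + 1):
        j = KMER_INDEX.get(seq[i:i + K])
        if j is not None:                    # skip k-mers with ambiguous bases
            v[j] += 1
    return v

rng = np.random.default_rng(1)
reads = ["".join(rng.choice(list("ACGT"), size=150)) for _ in range(400)]
X = np.array([kmer_counts(r) for r in reads])
y = rng.integers(0, 2, size=400)             # 0 = Oryza read, 1 = Magnaporthe read

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"holdout accuracy: {clf.score(X_te, y_te):.2%}")   # ~50% on random labels
```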
## 3 Discussion
The task of PPI prediction is challenging, and machine learning approaches require careful consideration of the data that is used. Machine learning requires abundant data but including all known interactions introduces representation bias. However, the interactions in PPI datasets often consist of many interactions for the same proteins, so eliminating redundancy significantly reduces the amount of available data. We have developed an algorithm for eliminating redundant data while maintaining more samples than random discarding.
From a comprehensive architecture search we have developed a model that is able to learn PPI prediction from small, non-redundant datasets. When applied to the benchmark PPIPre datasets, we obtained results higher than previously achieved for both the easier Negatome version and the difficult RecombinePairs version of the data.
Smaller values of \(k\) result in better prediction accuracy despite the coarser-resolution information they contain. It may be that shorter sequences are sufficient to discriminate between classes, making longer sequences redundant. It may also be that, as CGRs double in height and width with each increment of \(k\), larger attractors contain far more features than there are samples in the dataset, resulting in more overfitting. Finally, higher values of \(k\) mean that there are more possible \(k\)-mers, so fewer cells in a CGR will contain data, and CNN models are known to perform badly on sparse data.
The arrangement of \(k\)-mer counts within CGRs has contributed to this performance, as subsets of the architecture search space that do not include convolutional layers do not reach the same maximum accuracy. Furthermore, machine learning methods without neural networks that take vectors of \(k\)-mer counts as input, without regard to their arrangement, do not perform as well as the CGR CNN models. Whether there are different arrangements within a CGR-like structure that would lead to higher performance remains an open question.
Figure 8: CGRs showing the entire genome for (a) a host (_Oryza sativa_) and (b) a pathogen (_Magnaporthe oryzae_) species, scaled to values between 0 and 1. (c) shows the difference between the two plots, where positive (red) values indicate that the 4-mer was more represented in the _Oryza sativa_ genome and negative (blue) values indicate that the 4-mer was more represented in the _Magnaporthe oryzae_ genome.
Figure 9: Confusion matrix from a CNN model which uses CGR inputs to predict superkingdom from CDS regions. Data was extracted from the HPIDB dataset and consists of 757 bacteria samples, 1,745 eukaryote samples and 124 virus samples.
Figure 10: Confusion matrix showing the results of a CNN model attempting to classify synthetic reads.
We explored a method of augmenting a CGR dataset with synthetic sequences generated by synonymous substitution, theorising that since the different CDS sequences would encode the same protein sequence, homology and likely identical behaviour would follow. While augmenting images to increase the variety of input often increases the accuracy of computer vision models, our experiments showed that this means of augmentation generally does not improve our model at its PPI prediction task. The synonymous substitutions appear to offer some improvement when the model performed particularly badly on the validation data without augmented training data; we consider that this may be the result of a slight reduction in the overfitting that caused the original poor performance. However, the improvements gained by adding the synthetic images were much weaker than those gained by choosing a more suitable architecture, and when performance was high without synonymous substitutions, their addition made validation accuracy worse. We also note that, depending on the intended task of the model, the substitutions may erase codon usage or tetranucleotide frequency biases that would otherwise have been useful features.
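A sketch of the augmentation itself is shown below, built on Biopython's standard codon table. Sampling synonyms uniformly is our assumption here, and codons outside the forward table (e.g., stop codons) are left unchanged.

```python
import random
from Bio.Data.CodonTable import standard_dna_table

# Invert the standard codon table: amino acid -> list of codons that encode it.
SYNONYMS: dict[str, list[str]] = {}
for codon, aa in standard_dna_table.forward_table.items():
    SYNONYMS.setdefault(aa, []).append(codon)

def synonymous_shuffle(cds: str, seed: int = 0) -> str:
    """Replace each codon with a random synonym; the protein is unchanged,
    while the CGR (built from the nucleotide sequence) changes."""
    rng = random.Random(seed)
    out = []
    for i in range(0, len(cds) - len(cds) % 3, 3):
        codon = cds[i:i + 3]
        aa = standard_dna_table.forward_table.get(codon)
        out.append(rng.choice(SYNONYMS[aa]) if aa else codon)  # keep stops/unknowns
    return "".join(out)

print(synonymous_shuffle("ATGGCTTTGTAA"))  # same protein, different CGR
```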
GANs did not provide even this small potential for improvement, and almost always worsened performance. The GAN model itself may require more data in order to produce synthetic images with sufficient accuracy and variety. It may also be the case that while the generated CGR attractors appear to share qualities with the real images to the human eye, they lack important distinguishing features needed for class distinction.
The question of suitable augmentations for deep learning of genetic data therefore remains open.
We have also shown that machine learning methods using \(k\)-mer counts to predict protein-protein interactions can introduce bias if positive and negative samples use different species pairings. \(k\)-mer counts can be sufficient to identify taxonomic classifications, and uneven species pairings between positive and negative data can be sufficient to separate the data. Our models for high-level taxonomic classification were able to learn features that discriminate between genomes with high accuracy, albeit not comparable with existing, heavier-weight methods. In addition, linear logistic regression using \(k\)-mer counts achieved similar performance, indicating that for these tasks the CGR arrangement and CNN architecture do not contribute to learning. Dick and Green (2020)'s comprehensive study uses several approaches to building CGRs from proteins, training models to identify the source organisms of protein sequences. While their predictions were better than random chance, they described these results as "modest". Our taxonomic experiments, which look at broader classes, are not directly comparable but generally perform well, achieving >90% accuracy in the prediction of some classes. Our CGRs, and the \(k\)-mers we use for logistic regression, are built with the benefit of true knowledge of the genetic coding sequence, which may underline the importance of features such as nucleotide pattern frequency and codon usage in taxonomic classification tasks, and the relative difficulty of identifying a source organism from protein sequence rather than from genetic sequence.
To COVID. Which made this all a lot more fiddly than it needed to be.
We would like to thank George Deeks and the Norwich Bioscience Institutes' Research Computing department for their assistance with the High Performance Computing cluster. We would also like to thank Volodymyr Chapman for compiling the ArabidopsisPPI data, Anthony Duncan for helpful discussions about metagenomics and Bethany Nichols for naming GRAND. RV was supported by BBSRC grant number BB/V509267/1. This work was supported by Wave 1 of The UKRI Strategic Priorities Fund under the EPSRC Grant EP/T001569/1, particularly the "AI for Science" theme within that grant & The Alan Turing Institute. DM was supported by The Gatsby Charitable Foundation through a core grant to The Sainsbury Laboratory.
|
2302.03009 | A novel Doppler backscattering (DBS) system to simultaneously monitor
radio frequency plasma fluctuations and low frequency turbulence | A novel quadrature Doppler Backscattering (DBS) system has been developed and
optimized for the E-band (60-90GHz) frequency range using either O-mode or
X-mode polarization in DIII-D plasmas. In general, DBS measures the amplitude
of density fluctuations and their velocity in the lab frame. The system can
simultaneously monitor both low-frequency turbulence (f < 10MHz) and
radiofrequency plasma density fluctuations over a selectable frequency range
(20-500 MHz). Detection of high-frequency fluctuations has been demonstrated
for low harmonics of the ion cyclotron frequency (e.g., 2fci~23MHz) and
externally driven high-frequency helicon waves (f = 476MHz) using an adjustable
frequency down conversion system. Importantly, this extends the application of
DBS to a high-frequency spectral domain while maintaining important turbulence
and flow measurement capabilities. This unique system has low phase noise, good
temporal resolution (sub-millisecond) and excellent wavenumber coverage
(k_{\theta} ~ 1-20cm^{-1} and k_r ~ 20-30cm^{-1}). As a demonstration,
localized internal DIII-D plasma measurements are presented from turbulence (f
<= 5MHz), Alfvenic waves (f~6.5MHz), ion cyclotron waves (f >= 20MHz) as well
as fluctuations around 476MHz driven by an external high-power 476 MHz helicon
wave antenna. In the future, helicon measurements will be used to validate
GENRAY and AORSA modeling tools for prediction of helicon wave propagation,
absorption and current drive location for the newly installed helicon current
drive system on DIII-D. | S. Chowdhury, N. A. Crocker, W. A. Peebles, T. L. Rhodes, L. Zeng, B. Van Compernolle, M. Brookman, R. I. Pinsker, C. Lau | 2023-02-06T18:48:46Z | http://arxiv.org/abs/2302.03009v1 | A novel Doppler backscattering (DBS) system to simultaneously monitor radio frequency plasma fluctuations and low frequency turbulence
###### Abstract
A novel quadrature Doppler Backscattering (DBS) system has been developed and optimized for the E-band (60-90GHz) frequency range using either O-mode or X-mode polarization in DIII-D plasmas. In general, DBS measures the amplitude of density fluctuations and their velocity in the lab frame. The system can simultaneously monitor both low-frequency turbulence (f \(<\) 10MHz) and radiofrequency plasma density fluctuations over a selectable frequency range (20-500 MHz). Detection of high-frequency fluctuations has been demonstrated for low harmonics of the ion cyclotron frequency (e.g., 2f\({}_{\rm ci}\)\(\sim\)23MHz) and externally driven high-frequency helicon waves (f = 476MHz) using an adjustable frequency down conversion system. Importantly, this extends the application of DBS to a high-frequency spectral domain while maintaining important turbulence and flow measurement capabilities. This unique system has low phase noise, good temporal resolution (sub-millisecond) and excellent wavenumber coverage (k\({}_{\theta}\)\(\sim\) 1-20cm\({}^{-1}\) and k\({}_{r}\)\(\sim\) 20-30cm\({}^{-1}\)). As a demonstration, localized internal DIII-D plasma measurements are presented from turbulence (f \(\leq\) 5MHz), Alfvenic waves (f \(\sim\) 6.5MHz), ion cyclotron waves (f \(\geq\) 20MHz) as well as fluctuations around 476MHz driven by an external high-power 476 MHz helicon wave antenna. In the future, helicon measurements will be used to validate GENRAY and AORSA modeling tools for prediction of helicon wave propagation, absorption and current drive location for the newly installed helicon current drive system on DIII-D.
## I Introduction
Doppler backscattering (DBS) [1, 2, 3, 4] is a non-invasive active diagnostic technique that allows the study of electron density fluctuations via backscattering of a millimeter-wave beam launched obliquely towards a cut-off layer. This technique has been used over the past three decades to study the transport role of intermediate scale turbulence as well as to determine the flow velocity of the turbulence (the \(E\times B\) velocity \(v_{E\times B}\) can often be extracted from this velocity). The probed wavenumber is controlled by steering the launched mm-wave beam using mirrors and quasi-optical components. When adjusted for normal incidence to the plasma cutoff, the technique reduces to well-known reflectometry. Radiation backscattered from perturbations of the index of refraction (typically near cutoff) carries information about \(\tilde{n}_{e}\) (and also \(\tilde{b}_{\parallel}\) for X-mode polarized millimeter waves) at the scattering location. The backscattering process is constrained by the Bragg condition \(\mathbf{k}_{\tilde{x}}=-2\mathbf{k}_{i}\) (where \(\tilde{x}\) refers to the probed fluctuation and \(i\) to the incident wave at the scattering location), so the DBS technique probes fluctuations with a particular wavenumber, \(k_{\tilde{x}}\), which depends on the wavenumber and incidence angle of the incident beam. From the Doppler frequency shift in the backscattered turbulence signal, one can determine the turbulence flow velocity, which can often be converted into a local \(E\times B\) velocity. Detection of radio frequency fluctuations can occur in two different ways. The DBS beam can directly backscatter from the radio frequency (RF) wave itself (which can have an associated \(\tilde{n}_{e}\), and also \(\tilde{b}_{\parallel}\), at the RF wave frequency) when the RF wave satisfies the Bragg condition. Alternatively, when the RF wave has a long wavelength, the probe beam can scatter from low frequency turbulence where the turbulence (or the mm-wave probe beam itself) is modulated by the RF wave.
DBS provides very good measurement resolution (sub-millisecond temporal and sub-cm range spatial) for both the edge and core plasma. This diagnostic has recently been established as a reliable and powerful tool for burning plasma research [5, 6]. Historically, DBS has been used to study
a wide range of plasma phenomena such as the low to high confinement mode (L-H) transition [7], geodesic acoustic modes (GAMs) [8], zonal flows [9], ion cyclotron emission (ICE) [10], and lower hybrid radio frequency waves from an external antenna [11]. The current paper reports on the design and initial performance results of a novel quadrature DBS system over a wide signal frequency range (f \(\sim\) 0-500 MHz). Section IV discusses and summarizes the important measurement results obtained using a system prototype during recent helicon current drive experiments on DIII-D.
## II System Design
The heart of the new DBS system is the millimeter wave circuit, shown in Fig. 1. The millimeter wave circuit is driven by a fixed frequency phase-locked dielectric resonator oscillator (PLDRO). PLDROs have very low phase noise (\(<\) -120dBc/Hz at 1MHz) and high long-term stability, as well as insensitivity to the background neutron radiation expected in the vicinity of DIII-D during hot plasma experiments. By design, the system can employ sources in the range 15 - 22.5GHz. For the plasma test results reported in Section III, two different PLDROs, at 15.75 and 18GHz, are utilized at different times, depending on the plasma profile parameters. As shown in Fig. 1, the output power from the chosen source is equally divided using a Wilkinson power splitter (Krytar, 6020265). The outputs pump two separate E-band active quadruplers (Eravant, SFA-603903420-12KF-E1). One provides the launch power, which is directed to a target (a reflector, for lab testing, or the DIII-D plasma). The other multiplier delivers the required local-oscillator (LO) power (16dBm) for the millimeter-wave quadrature mixer (Eravant, SFQ-60390315-1212SF-N1-M). A fixed waveguide attenuator (not shown in figure) regulates the LO power for optimum operation of the quadrature mixer. After the quadruplers, WR-12 waveguide is used for wave transmission. The launch power is directed to the target via a 3dB directional coupler (Eravant, SWD-0340H-12-SB), a rectangular-to-circular transition, a scalar horn and quasi-optical steering optics (not shown in the figure). The backscattered received power is mixed with the LO in the E-band quadrature mixer.
Figure 1: A circuit for millimeter wave probe beam production and DBS measurement.
The demodulated outputs provide the in-phase (I) and quadrature (Q) information that allows determination of the received electric field amplitude and phase variations over time.
The quadrature mixer outputs are processed by either of two different receiver circuits (Figs. 2 and 3), with grounds shunted to common to minimize signal contamination by local pickup of low frequency electromagnetic interference.
For initial DIII-D plasma tests, the design of the receiver circuit (Fig. 2) is optimized for relatively low frequency RF waves (e.g., energetic-ion driven ICE, typically observed at \(f\geq 20\)MHz in DIII-D[10]).
Figure 2: DBS receiving electronics for low frequency (LF) and high frequency (HF) fluctuations.
Figure 3: Receiving circuit with frequency down-conversion setup for high frequency fluctuation measurement in the helicon frequency range (f = 476 MHz). A 500 MHz crystal and RF mixer (positioned close to the mm-wave mixer) down-convert the detected high frequency fluctuation to a lower range (476 MHz to 24 MHz).
As shown in Fig. 2, the quadrature mixer output signals (both I and Q) are each separated into low frequency (LF) and high frequency (HF) bands via low pass (\(\leq\)13MHz) and high pass (\(\geq\)17MHz) filters (Mini-Circuits SLP-15+ and SHP-20+, respectively; both have 3dB cutoffs at \(\sim\)15MHz, so when joined together via a tee, as in Fig. 2, there is some overlap at \(\sim\)15MHz). This division into low and high frequency components allows the LF and HF signals to be amplified separately, which is useful because the HF fluctuations tend to have a much smaller amplitude than the LF turbulence. The amplifier gain for the HF signal is \(\sim\)40.5dB (Mini-Circuits ZKL-1R5+). A low noise 44.6dB gain amplifier (custom-made) is used for the LF signals. Various attenuators (not shown in Fig. 2) are utilized to prevent saturation of the LF signal amplifiers and digitizer, leading to a significantly lower net gain for the LF signals than the HF signals. The amplified LF and HF components of the I and Q signals are both filtered with a 50MHz low-pass anti-alias filter and recorded by a multichannel digitizer at 100MS/sec (an Alazartech ATS9416, featuring 14-bit resolution). The HF component is further high-pass (f \(\geq\) 750kHz) filtered at the digitizer input to eliminate cable pickup of low frequency electromagnetic interference (Mini-Circuits ZFHP-0R75-S+).
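As a rough digital analogue of this split (the real separation is done in analog hardware before digitization), the sketch below filters a synthetic quadrature signal containing a strong low-frequency "turbulence" line and a weak 23 MHz "ICE" line; the filter orders and toy amplitudes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100e6                                   # 100 MS/s digitizer rate
t = np.arange(200_000) / fs
rng = np.random.default_rng(0)
iq = (np.exp(2j * np.pi * 0.6e6 * t)                 # Doppler-shifted turbulence
      + 1e-3 * np.exp(2j * np.pi * 23e6 * t)         # weak 2nd-harmonic ICE line
      + 1e-4 * (rng.standard_normal(t.size)
                + 1j * rng.standard_normal(t.size))) # receiver noise

lp = butter(5, 13e6, btype="low", fs=fs, output="sos")
hp = butter(5, 17e6, btype="high", fs=fs, output="sos")

# Filter I and Q separately, as the analog hardware does.
lf = sosfiltfilt(lp, iq.real) + 1j * sosfiltfilt(lp, iq.imag)   # turbulence channel
hf = sosfiltfilt(hp, iq.real) + 1j * sosfiltfilt(hp, iq.imag)   # ICE channel
print(np.abs(lf).mean(), np.abs(hf).mean())   # ~1 vs ~1e-3, hence separate gains
```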
For measurement of the 476MHz helicon wave, the receiver circuit is modified as shown in Fig. 3 to down-convert the high frequency components of the I and Q signals using a 500MHz local oscillator (LO) before amplification, translating the detected helicon signal to 24MHz. Given the high power of the helicon wave injected into the plasma (\(\lesssim\) 1 MW), down-conversion is essential to reduce unwanted electromagnetic pick-up of stray 476 MHz radiation outside the tokamak. The down-converting RF mixers are positioned physically close to the millimeter-wave quadrature mixer (within a few inches, limited by the physical dimensions of the diplexers) to minimize pickup. Other advantages of down-conversion are that it allows for low-cost digitization of the signal and eliminates the significant cable loss that would occur at the helicon frequency. In addition, the use of different LO down-conversion frequencies allows measurement of RF waves over different frequency ranges. The processing of the low frequency components is unchanged from the non-down-converting receiver circuit in Fig. 2.
As shown in Fig. 3, diplexers (Mini-Circuits model ZDPLX-2150) are used to separate the signals from the quadrature mixer into low and high frequency components. The down-conversion is accomplished using radio frequency mixers (Mini-Circuits ZFM-1W-S). The LO is supplied by a 500MHz crystal oscillator (Crystek RFPRO33 500.000M). A 600MHz low pass filter (Mini-Circuits SLP-600) rejects any higher harmonic content in the crystal oscillator signal. Another difference from the receiver circuit of Fig. 2 is a second stage of amplification with \(\sim\)14.5dB gain (Mini-Circuits ZX60-P105LN+) for the high frequency channel, giving a total amplifier gain of \(\sim\)55dB for the down-converted signal.
Several measures are taken to prevent unwanted 476MHz pickup from getting into the down-converting RF mixer. High pass filters (KR-electronics Inc. KR-3463-SMA) with a sharp cut-off below 500MHz are placed directly at the LO ports of the RF mixers (\(\sim\)30dB rejection at 476MHz). The crystal oscillator is powered by a battery coupled via a 5MHz low pass filter (LPF) (Mini-Circuits SLP-5). Finally, bi-directional 36MHz low pass filters (Mini-Circuits SLP-36+) are directly connected to the IF outputs of the RF mixers to prevent helicon pickup from entering via the IF output ports.
The system is tested in the laboratory, and then later installed on the DIII-D tokamak and tested during plasma experiments, using both receiving circuits (Figs. 2 & 3). The system design and component arrangement are optimized for compactness and portability, to facilitate easy installation at DIII-D and easy relocation to different ports offering access to the plasma. The millimeter wave circuit and receiver circuit, along with power supply regulators, are installed on a portable \(12''\times 18''\) optical breadboard. Results of the plasma tests are reported in Section III.
As noted above, the down-converting receiver of Fig. 3 can be easily modified to target RF plasma waves at different frequencies. This capability is exploited for some of the test results reported in Section III targeting lower harmonics of ICE. To target these plasma waves, the circuit of Fig. 3 is modified by replacing the 500MHz crystal oscillator with a 50 MHz crystal oscillator (Crystek CHPRO 033 50.000), and the \(f<\) 600MHz low pass filter on the output of the crystal oscillator is replaced with a 50 \(<f<\) 71MHz bandpass filter (Mini-Circuits SBP-60) for harmonic rejection. For convenience, this modified version of the circuit in Fig. 3 will be referred to throughout this paper as the "modified down-converting circuit". Also, the assorted filters described above for 476MHz pickup rejection are removed. This includes, in particular, the \(f<\) 36MHz low pass filters on the RF mixer (down-converting) output ports, since they would have the side effect of restricting the range of sensitivity of the ICE measurement. No pickup rejection filters for ICE are added in their place. Since ICE is a low power plasma wave driven by an instability, there is little danger of contamination of the measurement by electromagnetic interference with the measurement circuit due to stray radiation outside the tokamak.
For the laboratory tests, a collimated millimeter-wave beam is launched using the scalar horn, as shown in Fig. 1, and an aspheric lens (f = 10"), towards a flat reflector approximately 1.5m from the lens, where the beam is retroreflected. The mirror is translated a distance of a few inches along the beam to introduce a path length variation in the retroreflected radiation and a change in phase relative to the local oscillator. The path length variation manifests as variation in the \(I\) and \(Q\) signals from the E-band quadrature mixer, which are related to the amplitude (\(A\)) and phase (\(\phi\)) of the retroreflected radiation by \(I=A\cos(\phi)\) and \(Q=A\sin(\phi)\). The amount of retroreflected power in this arrangement is much larger than the mixer can accept, so an absorber is attached to the surface of the reflector to reduce the retroreflected power. A clean circular phasor in the plot of \(I\) vs \(Q\) from the low frequency signal amplifiers is observed as the reflector is translated, as expected for normal operation. In addition, when using the PLDROs, phase noise is found to make a negligible contribution to the overall system noise, which is set primarily by amplifier and mixer noise.
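The bench-test bookkeeping is simple enough to state explicitly. The sketch below generates a synthetic phase ramp (as produced by the translating reflector) and recovers amplitude and unwrapped phase from I and Q; the noise level is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.linspace(0, 6 * np.pi, 2000)     # phase ramp from the mirror translation
A = 1.0
I = A * np.cos(phi) + 0.01 * rng.standard_normal(phi.size)
Q = A * np.sin(phi) + 0.01 * rng.standard_normal(phi.size)

amplitude = np.hypot(I, Q)                # |E| of the retroreflected field
phase = np.unwrap(np.arctan2(Q, I))       # continuous phase vs. mirror position
print(amplitude.mean(), (phase[-1] - phase[0]) / np.pi)  # ~A and ~6 (i.e. 6*pi rad)
```

Plotting I against Q for this data traces the clean circular phasor described above.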
For the DIII-D test results described in Section III, the system is installed near the midplane on the low magnetic field side of the DIII-D tokamak. Taking advantage of its compact, portable configuration, the prototype is coupled to the plasma for these tests by integrating the system into an existing plasma interface developed for a V-band cross-polarization scattering (CPS) system described in Refs. [20, 21, 22]. The transition and scalar horn shown in Fig. 1 are replaced by an E-band-to-V-band transition in order to couple into the V-band transmission system of the interface. The two PLDRO sources used for the results reported in Section III produce launch frequencies of 63 or 72GHz, which are within the passband of the V-band transmission system.
Figure 4: DBS installed at DIII-D near the midplane on the low field side, borrowing existing V-band cross-polarization optics [20, 22]. A waveguide transition from E \(\rightarrow\) V is used for the 63 or 72GHz microwave launch. A collimating aspheric lens (made of HDPE) and a combination of two plane mirrors couple the collimated beam to the in-vacuum quasi-optics. The in-vacuum quasi-optics consist mainly of a 3.5" diameter corrugated waveguide, a 45\({}^{\circ}\) plane mirror, a metallic lens and a steerable flat mirror for beam steering. A two-axis motorized system is attached to this mirror in vacuum to remotely steer it in both poloidal and toroidal angle to target different cut-off locations. The backscattered plasma signal is collected via the same scalar horn antenna and passed to the receiving electronics. A polarizer (not shown in the figure) is used to select either X- or O-mode polarization for the launched DBS beam. The left hand side shows a set of blue contour lines indicating magnetic flux surfaces bounded by a material plasma facing surface. The plasma is largely confined to the region of closed flux surfaces, extending slightly into the region beyond the last closed flux surface referred to as the scrape-off layer (SOL). The red line shows a typical millimeter wave beam trajectory for the DBS system.
The CPS interface couples radiation from a scalar horn to the plasma by imaging radiation from the horn via a series of quasi-optical elements into an overmoded 3.5" diameter (ID) corrugated waveguide which penetrates the vacuum vessel. Radiation from the waveguide is focused by a metallic dichroic lens [22] onto a flat steering mirror. A two-axis motorized system is attached to the steering mirror to control the beam launch angle both poloidally (-30\({}^{\circ}\) to 20\({}^{\circ}\)) and toroidally (-10\({}^{\circ}\) to 5\({}^{\circ}\)) in order to allow coverage of a wide range of possible wavenumbers at cutoff. The poloidal steering enables the DBS beam to cover a wavenumber range of k\({}_{\theta}\)\(\sim\) 1-20cm\({}^{-1}\) at cutoff, while the toroidal steering allows minimization of wavenumber mismatch for probing plasma waves of interest (e.g., minimization of \(k_{\parallel}\) for probing turbulence [23]). This matching requirement becomes more important as the probed wavenumber increases. Notably, the beam can also cover k\({}_{r}\)\(\sim\) 20-30cm\({}^{-1}\) in the edge (as in ref. [11]), since backscattering can occur anywhere along the DBS beam if the Bragg scattering conditions are met: \(\omega_{i}+\omega_{\tilde{x}}=\omega_{s}\) and \(\mathbf{k}_{i}+\mathbf{k}_{\tilde{x}}=\mathbf{k}_{s}\), where \(\omega\) and \(\mathbf{k}\) correspond to frequencies and wavenumber vectors, the subscripts \(i\) and \(s\) correspond to the incident and scattered millimeter waves, and \(\tilde{x}\) to the plasma fluctuation causing the scattering. Normally, \(\omega_{i}\approx\omega_{s}\gg\omega_{\tilde{x}}\) and \(\mathbf{k}_{s}\approx-\mathbf{k}_{i}\approx\frac{1}{2}\mathbf{k}_{\tilde{x}}\) are satisfied by a plasma wave (e.g., helicon or slow waves at 476MHz launched by the helicon antenna [24]). The steering angles are chosen and set prior to creation of the discharge in order to probe the desired wavenumber. The choice is made using millimeter wave ray tracing (GENRAY [25]) for a reference discharge that is taken to be a model for the upcoming discharge.
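For orientation, a back-of-envelope estimate of the probed wavenumber follows from the backscattering condition \(|k_{\tilde{x}}|=2|k_{i}|\) if one uses the vacuum wavenumber and the launch tilt. The quoted \(k_{\theta}\) values in this paper come from ray tracing, not from this simplification, so the sketch below is only a rough consistency check.

```python
import numpy as np

c = 2.998e8                              # speed of light, m/s
f_launch = 72e9                          # Hz, one of the two launch frequencies
k0 = 2 * np.pi * f_launch / c            # vacuum wavenumber, m^-1

for tilt_deg in (2, 5, 10, 18):
    # Backscatter Bragg condition with a vacuum-propagation approximation:
    k_theta = 2 * k0 * np.sin(np.radians(tilt_deg)) / 100   # converted to cm^-1
    print(f"tilt {tilt_deg:2d} deg -> k_theta ~ {k_theta:4.1f} cm^-1")
```

The resulting few-cm\({}^{-1}\) values for small tilts are consistent with the k\({}_{\theta}\)\(\sim\) 1-10cm\({}^{-1}\) range quoted for the plasma tests below.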
The CPS interface includes a capability to switch between X- and O-mode polarization, allowing millimeter waves to be launched into the plasma with either polarization and returning millimeter waves of the same polarization to be received. This capability enables variation in measurement location by choice of operating polarization. The switching is accomplished by routing launch power to either of two horn antennae that are roughly oriented for X- or O-mode polarization and the use of a wire-grid polarizer to fine-tune the polarization. The exact choice of polarization angle is made prior to creation of the discharge with the goal of minimizing coupling to the unwanted mode, considering the expected pitch of the edge magnetic field, which is typically determined from the equilibrium reconstruction of the plasma discharge.
## III System Performance and Initial Plasma Measurements
The DBS system is installed on the DIII-D tokamak for testing via plasma measurements. These initial tests are performed using plasmas with current and toroidal magnetic field of I\({}_{\rm p}\) = 1 - 2MA and B\({}_{\rm T}\) = 1.5 - 2T, respectively, with injected neutral beam power as high as \(\sim\)10MW. Measurements are obtained for both high-confinement (H-mode) plasmas, which tend to have broad density profiles, and low-confinement (L-mode) plasmas, which tend to have more centrally peaked density profiles. Measurements are obtained using both sources (i.e., for launched frequencies of 63 or 72GHz) and a broad range of poloidal (-18\({}^{\circ}\) to 0\({}^{\circ}\)) and toroidal (-7\({}^{\circ}\) to 0\({}^{\circ}\)) angles, allowing wavenumbers of k\({}_{\theta}\)\(\sim\) 1-10cm\({}^{-1}\) to be probed. The following results are selected to demonstrate the system performance at low frequency (f \(\leq\) 10MHz), intermediate frequency (f \(\geq\) 20MHz), and high frequency (i.e., the helicon system frequency range f \(\sim\) 476MHz).
Initial testing using the non-down-converting receiver circuit of Fig. 2 demonstrates the capability of the system to measure both turbulence and Alfven eigenmodes. Figure 5 shows O-mode measurements with a launch frequency of 63GHz in a neutral beam heated (\(\mathrm{P}_{\mathrm{NB}}\sim\) 8MW) H-mode plasma with central magnetic field \(\mathrm{B}\mathrm{T}=2\mathrm{T}\) and plasma current \(\mathrm{I}\mathrm{p}=1.3\mathrm{MA}\) (shot # 186656). The central electron density is \(\mathrm{n}_{\mathrm{e0}}\sim\) 6.25 x 10\({}^{13}\)cm\({}^{-3}\) for the time period shown (t \(=\) 2500 \(-\) 2700ms). The poloidal steering angle for these measurements is -10.2\({}^{\circ}\), resulting in a wave number of \(k_{\theta}=\) 2.3cm\({}^{-1}\) probed at cutoff at \(\rho\approx\) 0.7. (The quantity \(\rho\) is a radial label for the closed magnetic flux surfaces in the plasma - cf. Figure 4 - ranging from 0 to 1 between the magnetic axis and the last closed flux surface. For each surface, \(\rho\) corresponds to the square root of normalized toroidal flux enclosed by the flux surface.) The toroidal steering angle for these measurements is 0\({}^{\circ}\). Fig. 5 shows spectrograms of quadrature electric field fluctuations from the low-frequency output of the system with \(\mathrm{log}_{10}\) scale in intensity.
Figure 5: Two different frequency bands of the quadrature spectrum for the low frequency signal are shown in (a) and (b), for the receiver circuit of Fig. 2 using the 15.75GHz DRO as the low frequency source to launch a 63GHz mm-wave in O-mode polarization. (a) Low frequency turbulence in the band \(f=\) -2MHz to +2MHz and (b) a high frequency CAE/GAE mode at 6.5MHz in the band \(f=\) 5 to 8MHz. (c) Injected beam power for the two co-injected beams (P\({}_{\rm NB,30L}\) and P\({}_{\rm NB,330L}\)) and (d) D\({}_{\alpha}\) optical emission. Jumps in D\({}_{\alpha}\) correspond to ELMs. The turbulence spectrum in panel (a) shows modulation correlated with ELMs, while the CAE/GAE mode in panel (b) appears (after a \(\sim\)6ms delay) during periods when both beams are simultaneously injecting at full power.
The quadrature electric field is proportional to \(\tilde{n}\) for a scattering measurement. Fig. 5a shows turbulence in the range \(|f|\leq\) 2MHz and Fig. 5b shows high frequency Alfven eigenmodes, of either compressional [26] or global [27] polarization, at \(\sim\) 6.5MHz \(\sim\) 0.53\(f_{ci}\), where \(f_{ci}\) is the ion cyclotron frequency at the location of the millimeter wave cutoff. The turbulence in Fig. 5a shows a clear modulation of Doppler shift and intensity. This is correlated with the occurrence of edge localized modes (ELMs), which are indicated by bursts of D\({}_{\alpha}\) emission (Fig. 5d). The observed modulation of the turbulent spectrum is at least partly attributable to a change in measurement location, since the ELMs cause significant transient changes to the edge density profile. The short periods of Alfven eigenmode activity in Fig. 5b occur whenever both the 30L and 330L beams are simultaneously on, indicating the AEs are excited by the fast-ion population created by the combination of these beams, which both have the same tangency radius and input beam voltage (\(\sim\)80kV) and thus create similar fast-ion populations. This suggests that the combined beams cause the fast-ion density to rise above a stability threshold, whereas one beam by itself is insufficient to drive the CAE/GAE [27]. The modes disappear as soon as either of the beams (30L/330L) is turned off, whereas during periods where both beams are on, a \(\sim\)6ms delay is observed between the moment both beams become simultaneously on and the appearance of the mode. The reason behind this delay requires further analysis and measurement, which is beyond the scope of this paper. In addition, there is a difference of \(\sim\)100kHz between the magnitudes of the frequencies of the peaks seen on the negative (not shown) and positive (Fig. 5b) frequency sides of the spectrum for the observed mode. This difference is approximately twice the Doppler shift \(f_{Doppler}\sim\) 60kHz (t = 2550ms) in the turbulence spectrum. This indicates that the peaks in the high frequency spectrum are caused by modulation of the turbulence-scattered radiation (or of the turbulence itself) by long wavelength high frequency modes, as discussed in Ref. [10]. In principle, the peaks in the high frequency spectrum should occur at \(f_{Doppler}\pm f_{CAE/GAE}\), so the observed difference frequency should be \(2f_{Doppler}\sim\) 120kHz. In practice, the broadening of both the high and low frequency spectral peaks makes the determination of the peak frequencies imprecise. The turbulence-modulation interpretation of the high frequency spectrum is further supported by moments of transient broadening and shifting of the high frequency peak (e.g., at t \(\approx\) 2522ms and t \(\approx\) 2528ms) that correlate with the ELM-associated broadening and Doppler shift change of the turbulent spectrum.
Later in the same discharge (shot #186656, t = 3200 - 3250ms), beam-driven low-harmonic ICE is observed in the spectrogram of the high-frequency output of the DBS system (Fig. 6). Neutral beam heating power is increased to P\({}_{\rm NB}\)\(\sim\) 10MW at t = 3000ms. Also, the central density is increased to n\({}_{\rm e0}\)\(\sim\) 6.5 x 10\({}^{13}\)cm\({}^{-3}\), which moves the cutoff out to \(\rho\approx\) 0.75, where the system probes \(k_{\theta}\sim\) 2.5cm\({}^{-1}\). Fig. 6(a) shows beam-driven ICE at harmonics of the edge ion cyclotron frequency (\(f=\) 23MHz \(\sim 2f_{ci}\), and \(f=\) 34.5MHz \(\sim 3f_{ci}\)) appearing at t = 3225ms, shortly after the neutral beam injection power increases. Modulation of the ICE is observed to correlate with ELMs (similar to observations reported in ref. [10]). Short bursts of broadband emission covering the entire measured frequency range are also observed at the ELM crashes. (The short period in which D\({}_{\alpha}\) emission sharply rises is referred to as a "crash" because just inside the last-closed flux surface of the plasma, electron density and temperature drop abruptly during this period.) Note that there are significant differences in the evolution of intensity of the peaks at the 2\({}^{\rm nd}\) and 3\({}^{\rm rd}\) harmonic ICE radiation in Fig. 6a, consistent with the interpretation of these spectral features as being associated with plasma waves (as opposed to harmonic artifacts produced by systematic nonlinear effects in the electronics).
Figure 6: (a) Quadrature spectrum of the high frequency signal using the same receiver circuit as shown in Fig. 2, with the 15.75GHz DRO as the low frequency source to launch a 63GHz mm-wave in X-mode polarization. (b) Total injected beam power (P\({}_{\rm NB}\)) and D\({}_{\alpha}\) optical emission. Jumps in D\({}_{\alpha}\) correspond to ELMs. ELM-correlated broadband bursts and ELM-related modulation of ICE can be seen in the spectrum in panel (a).
For later testing in DIII-D, the down-converting receiver circuit of Fig. 3 is used for measurement of higher frequency fluctuations. To begin with, the receiver circuit of Fig. 3, modified for ICE as described in Section II, is tested using ICE as the target wave. Measurements of edge density fluctuations are obtained for a beam heated (\(\sim\)7MW) H-mode discharge, again using a 63GHz launch frequency but in X-mode polarization. Fig. 7(a) shows a clear signature of 3\({}^{\rm rd}\) harmonic ICE at 3\(f_{ci}\sim\) 34.5MHz in the density fluctuation spectrogram measured near the plasma edge (\(\rho\approx\) 0.95). Note that the actual recorded frequency is \(\sim\)15.5MHz. As discussed in Section II, this potentially corresponds to a plasma wave at either \(\sim\)34.5MHz or \(\sim\)65.5MHz, considering the 50MHz frequency shift. The measurement is interpreted to correspond to a plasma wave at \(\sim\)34.5MHz, and the figure frequency axis is adjusted accordingly, since ICE is expected to be stronger at lower harmonics. A frequency of \(\sim\)34.5MHz would correspond to the 3\({}^{\rm rd}\) ICE harmonic at the plasma edge, \(\rho\sim\) 1, whereas \(\sim\)65.5MHz would match the 5\({}^{\rm th}\) ICE harmonic deep in the plasma core, at \(\rho\sim\) 0.4. This interpretation is also consistent with other DBS measurements of ICE presented here and in ref. [10], which show frequencies matching harmonics in the edge, not the core. At t = 1890ms, the discharge begins to exhibit ELMs, as can be seen from the D\({}_{\alpha}\) trace in Fig. 7b, and the 3\({}^{\rm rd}\) harmonic disappears nearly completely from the spectrum. This is potentially because of fast-ion transport caused by ELMs [10]. These results confirm the down-converting receiver circuit's sensitivity to ICE harmonics and motivate its use for measuring waves at frequencies higher than ICE.
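The frequency bookkeeping behind this ambiguity is just the mixer identity \(f_{RF}=f_{LO}\pm f_{IF}\); a trivial sketch:

```python
def rf_candidates(f_if_mhz: float, f_lo_mhz: float) -> tuple[float, float]:
    """Both RF frequencies that a mixer maps onto the same IF: f_RF = f_LO -/+ f_IF."""
    return (f_lo_mhz - f_if_mhz, f_lo_mhz + f_if_mhz)

print(rf_candidates(15.5, 50.0))    # (34.5, 65.5): edge 3rd harmonic vs. core 5th
print(rf_candidates(24.0, 500.0))   # (476.0, 524.0): helicon wave at 476 MHz
```

Physics, not electronics, breaks the tie: edge ICE harmonics are expected to dominate, selecting the 34.5MHz interpretation.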
After completing tests of the ICE-modified down-converting receiver, the receiver is reconfigured for helicon measurement as in Fig. 3 for testing during experiments with high power helicon injection. The DBS with the helicon down-converting receiver circuit is tested during high power (\(\sim\)450kW) helicon injection in both low-confinement (L-mode) and high-confinement (H-mode) discharges. A 72GHz millimeter-wave beam is launched in X-mode to probe helicon fluctuations near the outer midplane, in the edge plasma.
Figure 7: (a) Quadrature spectrum as a density contour (log\({}_{10}\)\(\tilde{n}\) scale) for the high frequency channel shows 3\({}^{\rm rd}\) harmonic edge ICE (\(\rho\sim\) 0.95). (b) A neutral beam of P\({}_{\rm NB}\sim\) 7MW is injected and ELMs appear at \(\sim\)1890ms. The DBS mm-circuit uses frequency down-conversion (as described in Fig. 3) with a 50MHz crystal to test the ICE sensitivity. The ICE harmonic disappears after t \(\sim\) 1890ms due to ELM effects on fast ion transport.
Figure 8: DBS observes broadband helicon fluctuations during high power (\(\sim\)450kW) helicon injection in an L-mode plasma. The system uses the down-conversion setup with a 500MHz crystal (as shown in Fig. 3) to down-convert the 476MHz helicon fluctuations to lower frequency (24MHz). Contour plots of the quadrature electric field spectrum in log\({}_{10}\) scale show (a) Doppler shifted (\(f_{\rm D}\sim\) 600kHz) turbulence and (b) high-frequency fluctuations around the helicon frequency, consisting of a sharp peak at the helicon frequency (476MHz), partially due to pickup, and broadband fluctuations within a band a few MHz wide around 476MHz. [Intensity scales of (a) and (b) differ by an arbitrary factor.] (c) Helicon power vs time. (d) Overlay of the positive and negative frequency sides of the high frequency quadrature spectrum (t = 2001ms).
Fig. 8 shows measurements for an L-mode plasma with \(n_{e}\sim 2\times 10^{13}cm^{-3}\) in which a short pulse of \(\sim\)450kW helicon power (Fig. 8c) is injected. The DBS system is steered to probe density fluctuations (\(\tilde{n}\)) with \(k_{\theta}\sim\) 3.6cm\({}^{-1}\) at the cutoff, which is at \(\rho=\) 0.46 (the toroidal steering angle is 0\({}^{\circ}\)). Quadrature spectra for both the low (Fig. 8a) and high frequency signals (Fig. 8b) are shown in Fig. 8. A Doppler shift of \(f_{D}\sim\) 600kHz of the turbulence spectrum can be observed in the LF quadrature spectrum (Fig. 8a).
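For scale, the standard DBS estimate \(v_{\perp}=2\pi f_{D}/k_{\theta}\) applied to these numbers gives the advection speed of the probed turbulence; attributing all of \(v_{\perp}\) to \(E\times B\) flow is the usual, but not guaranteed, assumption.

```python
import numpy as np

f_D = 600e3               # Hz, Doppler shift of the turbulence spectrum (Fig. 8a)
k_theta = 3.6 * 100       # probed wavenumber, cm^-1 converted to m^-1
v_perp = 2 * np.pi * f_D / k_theta
print(f"v_perp ~ {v_perp / 1e3:.1f} km/s")   # ~10.5 km/s at the cutoff
```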
Simultaneous measurements of helicon fluctuations in the HF channel are shown in Fig. 8(b). A contour plot of the HF fluctuation spectrum shows broadband fluctuations a few MHz wide (\(\Delta f\sim\) 2MHz) around the helicon injection frequency (\(f=\) 476MHz), with a relative amplitude \(\sim\)70dB lower than the simultaneously measured lower frequency turbulence (f \(\leq\) 2 MHz). A sharp peak is also seen in the spectrum at 476MHz, as a sharp yellow line in the spectrum contour plot in Fig. 8b, as well as a sharp peak in the time-slice spectra in Fig. 8d. (Note that the frequency scales are adjusted to account for the frequency shift of 500MHz caused by down-conversion. The measured frequency of this peak before adjusting for the down-conversion is actually 24MHz.)
The broadband fluctuations are due to measurement of high frequency density fluctuations (\(\tilde{n}\)) in the plasma, while the sharp peak may be partially due to stray radiation from the helicon antenna picked up by the DBS electronics. Measurements (not shown) are performed during a similar plasma in which the millimeter waves are blocked from entering the plasma. A similar sharp peak at 476MHz is observed, but there are no broadband fluctuations observed around the peak, in contrast with Fig. 8b. The broadening of the spectrum around the helicon frequency is probably due to the interaction of helicon waves with turbulence. One possible form of interaction is that turbulent density fluctuations in the plasma edge near the helicon antenna modulate the amplitude of the helicon wave coupled from the antenna to the plasma. The injected helicon wave is evanescent in the vicinity of the antenna where the density is low, and only propagates freely deeper inside the plasma where the density is higher. Amplitude modulation would manifest as a symmetric broadening of the spectrum. Full-wave helicon modeling [28] indicates that edge turbulence can significantly modify coupling and propagation in the plasma.
Comparison of the broadband fluctuations on the positive and negative sides of the high frequency quadrature spectrum (shown for, e.g., \(t=\) 2001ms in Fig. 8d) leads to the preliminary conclusion that the observed broadband fluctuations are the product of direct backscattering of the incident probe beam millimeter waves from plasma waves with frequencies near 476MHz. As discussed in ref. [10], several different physical processes could, in principle, cause backscattered millimeter waves to exhibit the broadband fluctuations seen in Fig. 8b. The first is that the incident millimeter waves could directly backscatter from the turbulence in the presence of the plasma waves. The plasma waves could then introduce a high frequency component into the spectrum of scattered millimeter waves by either modulating the turbulence (via the oscillatory ExB motion caused by the plasma wave E-field) or by modulating the index of refraction along the path of the probe beam and backscattered radiation. A second process is that the incident millimeter waves could directly backscatter from plasma waves with the observed frequencies if the plasma waves satisfy the Bragg scattering condition for wavenumber, \(\mathbf{k}_{\tilde{x}}=-2\mathbf{k}_{i}\), where \(\mathbf{k}_{\tilde{x}}\) and \(\mathbf{k}_{i}\) are the plasma wave and millimeter wave wavenumbers, respectively. (The plasma wave must have an associated density fluctuation or, if the millimeter wave is X-mode polarized, an associated fluctuation in the B field strength.) The comparison of spectra around \(|f|=\) 476MHz on the negative and positive sides of the quadrature spectrum can be used to help distinguish which process is responsible. For the 1\({}^{\rm st}\) process, scattering from turbulence, the broadband fluctuations should be observed to center around peaks at \(f_{Doppler}\pm\) 476MHz, where \(f_{Doppler}\) is the turbulent Doppler shift frequency at the scattering location. Most of the power scattered from turbulence comes from the probe beam at cutoff, and the simultaneously measured turbulence has a Doppler shift \(f_{Doppler}\sim\) 600kHz (Fig. 8a), so any broadband fluctuations produced by this process should center around 476.6MHz and -475.4MHz.
Figure 9: Broadband fluctuation amplitude around the helicon frequency is only observed at low \(k_{\theta}\), i.e., at small scattering angles (within \(\pm\) 5\({}^{\circ}\) of normal incidence), for an L-mode plasma.
In contrast, for the 2\({}^{\rm nd}\) process, direct backscattering from the plasma waves, the broadband fluctuations on the negative side should center around \(f=\) -476MHz, since the broadband fluctuations on the positive side of the spectrum center around \(f=\) 476MHz. From Fig. 8d it can be seen that the broadband fluctuations on the positive and negative sides are, in fact, centered around +476MHz and -476MHz, respectively, consistent with direct backscattering of the incident millimeter waves from plasma waves at frequencies close to 476MHz. In principle, backscattering can occur anywhere along the millimeter wave probe beam path. Further modeling is required to determine where along the path the Bragg scattering condition might be satisfied by waves launched by the helicon antenna.
A poloidal angle scan with the DBS system over a range of similar L-mode plasmas shows that the broadband fluctuations are observed when the DBS probes fluctuations with \(k_{\theta}\lesssim\) 4cm\({}^{-1}\). The cutoff location for this scan is \(\rho\sim\) 0.45. Fig. 9 shows the average spectral amplitude within the broadband range of frequencies (excluding the sharp peak at 476MHz) vs. probed \(k_{\theta}\). The poloidal angle is varied \(\pm 5^{\circ}\) around normal incidence (at a toroidal angle of 0\({}^{\circ}\)). This dependence of broadband power on \(k_{\theta}\) supports the interpretation that the broadband fluctuations are the result of millimeter wave scattering from plasma waves, as opposed to being the result of pickup. This also supports the interpretation that the broadband fluctuations are from direct backscattering of the probe beam from helicon plasma waves, suggesting in particular that the scattering takes place near the probe beam cutoff, since the probe beam scatters from waves with \(k_{\theta}\sim\) 1-4cm\({}^{-1}\) at the cutoff. Note that the broadband amplitude shows very little dependence on the amount of injected helicon power, which ranges from 350 to 450kW during this scan. This is consistent with the narrow range of launch powers and the expectation that the amplitude of helicon waves launched from the external antenna should scale as the square root of launched power.
Measurements during a neutral beam heated (P\({}_{\rm NB}\)\(\sim\) 2.5MW) H-mode discharge show that the broadband fluctuations change significantly during the period around an ELM. Fig. 10a shows quadrature spectra vs time (with 0.5ms temporal smoothing) for the high frequency signal during the time period around an ELM, during injection of 300kW of helicon power (Fig. 10b). The broadband fluctuations are seen to decrease during the \(\sim\) 5ms period before the crash and then reappear almost immediately after the crash. Figure 10c shows spectra for three time slices around the ELM crash, t = 2582, 2588 and 2591ms, to facilitate visualisation of the spectral evolution. The spectra at t = 2582ms and 2588ms, which are both before the crash, show the decrease in broadband fluctuations over time. The spectrum at t = 2591ms, immediately after the crash, shows the reduction of the broadband fluctuations to the noise level.
Figure 10: Helicon broadband fluctuations and their evolution in an ELMy H-mode plasma. A 72GHz DBS beam with X-mode polarization is launched at low-k to test helicon antenna power coupling (absorption) by the high-\(\beta\) beam heated (P\({}_{\rm NB}\approx\) 2.5MW) plasma. (a) The helicon broadband fluctuation amplitude changes with ELMs and disappears completely at an ELM event. (b) \(>\)300kW of helicon power in pulse mode was injected into the H-mode plasma; lower divertor D\({}_{\alpha}\) visible light emission shows ELM occurrence at the edge. (c) Partial helicon pickup and the change of the helicon broadband fluctuations during an ELM (t = 2582, 2588 and 2591ms) show a real plasma effect on the helicon wave.
Figure 11: Reflectometry measurement [29] for a 72GHz mm-wave launch in X-mode: (a) the DBS cut-off location (\(\rho_{cut-off}\) = 0.87 - 0.75 for t = 2588 - 2591ms) varies as the ELM changes the cut-off density; (b) edge density temporal evolution at radial location \(\rho\) = 0.87.
The variability of the broadband fluctuations is potentially caused by changes in the coupling of helicon power to the plasma, since the edge density evolves rapidly during ELM events. Another potentially significant factor is a change in measurement location as the edge density changes. Profile reflectometry measurements [29] are used to determine how the edge density evolves and the corresponding changes in the cutoff location for the DBS system. Fig. 11a shows that the cutoff location for a 72GHz DBS beam (X-mode launch) varies from \(\rho_{cut-off}\) = 0.87 before (t = 2588ms) to \(\rho_{cut-off}\) = 0.75 after (t = 2591ms) the ELM crash. Fig. 11b shows that the density at \(\rho\) = 0.87 changes by \(\sim\)24% between t = 2582 and t = 2591ms.
Notably, Fig. 10c also shows that the amplitude of the sharp peak at 476MHz changes substantially during the period around the ELM as well. The pattern of evolution is more complex than for the broadband fluctuations, with the 476MHz peak varying in a random fashion. A potential explanation for this variability is that the sharp peak is not entirely due to pickup, but rather partly due to a plasma wave measurement. This would then lead to interference playing a significant role in the ultimate observed signal level.
## IV Discussion and Summary
The principal motivation for development of the new DBS system is to measure the amplitude and spatial distribution of helicon wave power in the plasma (or any slow wave power unintentionally coupled into the plasma edge) during high power helicon injection. These measurements will be used to validate the GENRAY [25] ray tracing and AORSA [19, 28] full-wave models, which predict helicon (and slow wave) propagation, absorption and current drive. The full wave 3D modeling (AORSA) also predicts a complex 3D spatial structure for the helicon wave in the plasma, which makes DBS measurement of the helicon wave challenging. The prototype DBS system described here, and the tests performed with it, establish the feasibility of the concept. The system measures low amplitude broadband fluctuations at frequencies close to the launched helicon wave, at an amplitude \(\sim\)70dB lower than the simultaneously measured turbulence amplitude, even though the injected helicon wave is launched toroidally away from the DBS probed location. In particular, the plasma test data demonstrate the sensitivity of the prototype to the helicon fluctuations inside the plasma.
The initial plasma test data also offers a valuable opportunity for a preliminary validation effort.
For the helicon fluctuation measurements in Fig. 8b and d, the plasma is positioned close to the antenna to ensure strong coupling of the wave to the plasma, leading to strong toroidal localization of the launched wave. Figure 12 shows RF probe measurements of power in the helicon antenna vs. position in shot #187301 at t = 2001ms. Power fed into the antenna from the end at module #30 mostly couples to the plasma within the first few antenna modules, i.e., within \(\sim\) 1 - 2 parallel wavelengths. This strong localization of the helicon wave at the antenna must be taken into account to understand how power from the antenna spreads through the plasma and reaches the location of the DBS probe beam.
Figure 12: RF probe measurement of antenna module power at t = 2001ms for plasma shot #187301. Power launched by the antenna (from the 210\({}^{\circ}\) port) mostly couples from the first few antenna modules at the launch end. The solid black line shows the projected exponential power fall-off used to estimate the antenna power spectrum vs. N\({}_{\parallel}\).
The toroidal localization of the launch power distributes the helicon power at the antenna over a broad range of \(N_{\parallel}\) around \(N_{\parallel}\approx\) 3. A simple 1D model for the antenna wavefield is constructed to estimate the antenna power spectrum vs. \(N_{\parallel}\) (Fig. 13). The electric field vs. toroidal angle \(\phi\) is assumed to be given by \(E(\phi)=E_{0}\exp\left(\alpha R_{a}\phi\right),\ \phi\in[0,\Delta\phi_{a}]\), and \(E(\phi)=0\) otherwise, where \(R_{a}\) is the major radial position of the antenna and \(R_{a}\Delta\phi_{a}=1.5\) m is the antenna length. The complex value of \(\alpha\) is chosen considering the observed drop in wave power in Fig. 12 and a 90\({}^{\circ}\) phase-shift between modules. The drop in power is modeled by the solid black line in Fig. 12, which shows the power distribution vs. position along the antenna assuming a 20% power reduction per module. The modules have a spacing of 0.05 m in the toroidal direction, giving \(\alpha=[\ln(0.89)+\frac{i\pi}{2}]/(0.05\ \mathrm{m})\). The spectrum of wave power vs. toroidal index of refraction, \(N_{\phi}=\frac{ck_{\phi}}{\omega_{H}}\) (where \(\omega_{H}\) and \(c\) are the angular frequency of the launched wave and the speed of light), is given by \(\mathrm{pow}_{\phi}\big{(}N_{\phi}\big{)}=\left|\tilde{E}\left(\frac{\omega_{H}}{c}N_{\phi}\right)\right|^{2}\), where \(\tilde{E}\big{(}k_{\phi}\big{)}\) is the Fourier transform of \(E(\phi)\),
\[\tilde{E}\big{(}k_{\phi}\big{)}=\frac{1}{2\pi}\int E(\phi)\,\exp\bigl{(}-ik_{\phi}R_{a}\phi\bigr{)}\,d\phi=E_{0}\,\bigl{(}(\alpha-ik_{\phi})R_{a}2\pi\bigr{)}^{-1}\Bigl{(}\exp\bigl{(}(\alpha-ik_{\phi})R_{a}\Delta\phi_{a}\bigr{)}-1\Bigr{)}.\]
Since \(\mathrm{Re}(\alpha R_{a}\Delta\phi_{a})\approx-3.35\) is a large negative number, \(\mathrm{pow}_{\phi}\big{(}N_{\phi}\big{)}\propto\left|\frac{\alpha c}{\omega_{H}}-iN_{\phi}\right|^{-2}\). Assuming a field pitch of \(15^{\circ}\) (the design value for the antenna), the value of \(N_{\parallel}\) for a given value of \(N_{\phi}\) is given by \(N_{\parallel}=N_{\phi}\cos(15^{\circ})\) and the spectrum of power vs. \(N_{\parallel}\) is given by \(\mathrm{pow}_{\parallel}(N_{\parallel})=\mathrm{pow}_{\phi}(N_{\parallel}/\cos(15^{\circ}))\). Fig. 13 shows \(\mathrm{pow}_{\parallel}(N_{\parallel})\), using the exact expression, normalized to a maximum of 1. The modeled spectrum (Fig. 13) indicates that power is spread over a broad \(N_{\parallel}\) range around \(N_{\parallel}=3\).
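The model is easy to evaluate numerically. The sketch below uses the exact transform above with a per-module amplitude factor \(\sqrt{0.8}\approx 0.894\) (the \(\ln(0.89)\) quoted in the text corresponds to this 20% power drop per module) and reproduces both the broad peak near \(N_{\parallel}\approx 3\) and the \(\sim\)8% relative power at \(N_{\parallel}=2.3\) used below.

```python
import numpy as np

f_hel = 476e6                                  # helicon frequency, Hz
omega = 2 * np.pi * f_hel
c = 2.998e8
alpha = (np.log(np.sqrt(0.8)) + 0.5j * np.pi) / 0.05   # 20% power drop and 90 deg
                                                       # phase shift per 0.05 m module
L = 1.5                                        # antenna length R_a * dphi_a, m
pitch = np.radians(15)                         # design field pitch

N_par = np.linspace(0.5, 6, 2000)
N_phi = N_par / np.cos(pitch)
k_phi = omega * N_phi / c
E = (np.exp((alpha - 1j * k_phi) * L) - 1) / (alpha - 1j * k_phi)  # exact transform
power = np.abs(E) ** 2
power /= power.max()                           # normalize to a maximum of 1

i23 = np.argmin(np.abs(N_par - 2.3))
print(f"peak at N_par ~ {N_par[np.argmax(power)]:.2f}; "
      f"power at N_par = 2.3 ~ {power[i23]:.0%} of peak")
```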
The propagation path for different \(N_{\parallel}\) can be expected to vary significantly. To model which part of the spectrum can reach the DBS probe location, GENRAY ray tracing is used. For helicon waves, the cold plasma dispersion relation is used for the real part of the wavenumber, while a model developed by Chiu _et al._ [30] is used for absorption. (GENRAY ray tracing with the Appleton-Hartree dispersion relation is also used to model propagation of the DBS system millimeter wave probe beam as a synthetic diagnostic to interpret the DBS measurements.[2]) A search over multiple values of \(N_{\parallel}\) shows that a ray with \(N_{\parallel}=2.3\), which still carries \(\sim 8\%\) of the peak power at \(N_{\parallel}\approx 3\) (Fig. 13), can pass very close to the DBS probe location. Fig. 14 shows ray tracing for a helicon ray with index of refraction \(N_{\parallel}=2.3\) at the antenna. The DBS ray, which propagates at an approximately constant toroidal position of \(\phi=240^{\circ}\), is also shown in Fig. 14. The helicon ray is marked with a colored circle at every point where it passes \(\phi=240^{\circ}\). (Note that the helicon ray undergoes toroidal reflections as \(N_{\parallel}\) reverses sign.) Figure 14 shows that on its second pass, the helicon beam passes nearly (separation \(\sim 5\) cm) through the DBS beam at cutoff, close enough for a measurement, since the DBS beam has a finite width (\(2w_{0}\sim 10\) cm)[22] at the cutoff location. Of course, this type of 1D ray tracing model is insufficient to determine the complex spatial distribution of the injected helicon beam as well as its interaction location with the DBS beam. Further modeling, for instance with 3D full-wave codes such as AORSA[28], is necessary to get a clear picture of the helicon–millimeter-wave interaction.
In summary, the newly developed DBS system has demonstrated simultaneous measurement of both low frequency turbulence and high frequency fluctuations over a wide RF frequency range. The high frequency channel shows very good ICE sensitivity when using both the non-down-converting (Fig. 2) and down-converting (Fig. 3) receiver circuits. The system has also unambiguously detected high frequency (476 MHz) helicon broadband fluctuations during helicon current drive experiments using the down-converting receiver circuit. These broadband high frequency fluctuations are interpreted as resulting from interaction with turbulence near the launch helicon antenna. One likely mechanism for such interaction is modification of the helicon wave coupling through density perturbations modifying the load near the antenna.
Figure 14: GENRAY modeling showing helicon and mm-wave propagation throughout the torus. Poloidal view of ray tracing for shot #187301, t = 2001 ms: 72 GHz, X-mode DBS millimeter wave ray from \(\phi=240^{\circ}\) (blue) and 476 MHz (with \(N_{\parallel}=2.3\)) helicon ray launched from \(\phi=180^{\circ}\) (gray). The helicon ray is marked with a colored circle every time it passes \(\phi=240^{\circ}\), the location of the prototype DBS. The color of the helicon ray relates to the fraction of unabsorbed power in the ray. At launch, the helicon ray is black (fully unabsorbed), and it becomes lighter as it propagates. For clarity, the ray is terminated when \(\sim 80\%\) of the initial power is absorbed.
Figure 13: Modeled power spectrum (shot #187301, t = 2001 ms) for the helicon antenna vs. \(N_{\parallel}\). The squared Fourier transform of the projected antenna wave electric field (solid black line from Fig. 12) is shown. The vertical green line marks \(N_{\parallel}=2.3\), corresponding to a helicon ray that makes a close approach to the DBS ray.
Future upgrades to the system include frequency tunability across the entire E-band range (60-90 GHz). This will allow much improved spatial coverage, which can even extend past the plasma center to the high-field plasma region. In addition, the DBS launch location will be moved to a position significantly higher above the midplane than for the plasma tests here. The launch radiation will be quasi-optically coupled into one of the ECH overmoded corrugated waveguides via a switch to take advantage of the existing steering capability [31, 32]. This geometry will allow improved targeting of the helicon wave. This upgraded E-band DBS system will be able to experimentally investigate helicon wave propagation and, thereby, facilitate determination of wave absorption and the location of current drive. Finally, it will allow detailed comparison with a variety of code predictions for wave propagation (GENRAY [25], AORSA [28]).
###### Acknowledgements.
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, using the DIII-D National Fusion Facility, a DOE Office of Science user facility, under Award(s) DE-FC02-04ER54698. This work is also supported by U.S. DOE grants DE-SC0020649 and DE-SC0020337. The authors would like to thank Roman Lantsov, Larry Bradley and the DIII-D team for their technical support in installing the DBS setup.
## Author Declarations
### Conflict of Interest
The authors have no conflicts of interest to disclose.
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
### Disclaimer
This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
|
2310.03218 | Learning Energy-Based Prior Model with Diffusion-Amortized MCMC | Latent space Energy-Based Models (EBMs), also known as energy-based priors,
have drawn growing interests in the field of generative modeling due to its
flexibility in the formulation and strong modeling power of the latent space.
However, the common practice of learning latent space EBMs with non-convergent
short-run MCMC for prior and posterior sampling is hindering the model from
further progress; the degenerate MCMC sampling quality in practice often leads
to degraded generation quality and instability in training, especially with
highly multi-modal and/or high-dimensional target distributions. To remedy this
sampling issue, in this paper we introduce a simple but effective
diffusion-based amortization method for long-run MCMC sampling and develop a
novel learning algorithm for the latent space EBM based on it. We provide
theoretical evidence that the learned amortization of MCMC is a valid long-run
MCMC sampler. Experiments on several image modeling benchmark datasets
demonstrate the superior performance of our method compared with strong
counterparts | Peiyu Yu, Yaxuan Zhu, Sirui Xie, Xiaojian Ma, Ruiqi Gao, Song-Chun Zhu, Ying Nian Wu | 2023-10-05T00:23:34Z | http://arxiv.org/abs/2310.03218v1 | # Learning Energy-Based Prior Model with Diffusion-Amortized MCMC
###### Abstract
Latent space Energy-Based Models (EBMs), also known as energy-based priors, have drawn growing interest in the field of generative modeling due to their flexibility in the formulation and strong modeling power of the latent space. However, the common practice of learning latent space EBMs with non-convergent short-run MCMC for prior and posterior sampling is hindering the model from further progress; the degenerate MCMC sampling quality in practice often leads to degraded generation quality and instability in training, especially with highly multi-modal and/or high-dimensional target distributions. To remedy this sampling issue, in this paper we introduce a simple but effective diffusion-based amortization method for long-run MCMC sampling and develop a novel learning algorithm for the latent space EBM based on it. We provide theoretical evidence that the learned amortization of MCMC is a valid long-run MCMC sampler. Experiments on several image modeling benchmark datasets demonstrate the superior performance of our method compared with strong counterparts1.
Footnote 1: Code and data available at [https://github.com/yuPeiyu98/Diffusion-Amortized-MCMC](https://github.com/yuPeiyu98/Diffusion-Amortized-MCMC).
## 1 Introduction
Generative modeling of data distributions has achieved impressive progress with the fast development of deep generative models in recent years [1; 2; 3; 4; 5; 6; 7; 8; 9]. It provides a powerful framework that allows successful applications in synthesizing data of different modalities [10; 11; 12; 13; 14; 15], extracting semantically meaningful data representations [16; 17; 18], as well as other important domains of unsupervised or semi-supervised learning [19; 20; 21]. A fundamental and powerful branch of generative modeling is the Deep Latent Variable Model (DLVM). Typically, a DLVM assumes that the observation (_e.g._, a piece of text or an image) is generated by its corresponding low-dimensional latent variables via a top-down generator network [1; 2; 3]. The latent variables are often assumed to follow a non-informative prior distribution, such as a uniform or isotropic Gaussian distribution. While one can directly learn a deep top-down generator network to faithfully map the non-informative prior distribution to the data distribution, learning an informative prior model in the latent space could further improve the expressive power of the DLVM with significantly fewer parameters [22]. In this paper, we specifically consider learning an EBM in the latent space as an informative prior for the model.
Learning an energy-based prior can be challenging, as it typically requires computationally expensive Markov Chain Monte Carlo (MCMC) sampling to estimate learning gradients. The difficulty of MCMC-based sampling is non-negligible when the target distribution is highly multi-modal or high-dimensional. In these situations, MCMC sampling can take a long time to converge and performs poorly at traversing modes with limited iterations [23]. Consequently, training models with samples from non-convergent short-run MCMC [24], which is a common choice for learning latent space EBMs [22], often results in malformed energy landscapes [15; 24; 25] and biased estimation of the model parameter. One possible solution is to follow the variational learning scheme [1], which however requires non-trivial extra effort on model design to deal with problems like posterior collapse [26; 27; 28] and limited expressivity induced by model assumptions [1; 29; 30].
To remedy this sampling issue and further unleash the expressive power of the prior model, we propose to shift attention to an economical compromise between unrealistically expensive long-run MCMC and biased short-run MCMC: _we consider learning valid amortization of the potentially long-run MCMC for learning energy-based priors_. Specifically, inspired by the connection between MCMC sampling and denoising diffusion process [7; 8; 31], in this paper we propose a diffusion-based amortization method suitable for long-run MCMC sampling in learning latent space EBMs. The learning algorithm derived from it breaks the long-run chain into consecutive affordable short-run segments that can be iteratively distilled by a diffusion-based sampler. The core idea is simple and can be summarized by a one-liner (Fig. 1). We provide theoretical and empirical evidence that the resulting sampler approximates the long-run chain (see the proof-of-concept toy examples in Appendix E.1), and brings significant performance improvement for learning latent space EBMs on several tasks. We believe that this proposal is a notable attempt to address the learning issues of energy-based priors and is new to the best of our knowledge. We kindly refer to Section 5 for a comprehensive discussion of the related work.
**Contributions.** i) We propose a diffusion-based amortization method for MCMC sampling and develop a novel learning algorithm for the latent space EBM. ii) We provide a theoretical understanding that the learned amortization of MCMC is a valid long-run MCMC sampler. iii) Our experiments demonstrate empirically that the proposed method brings higher sampling quality, a better-learned model and stronger performance on several image modeling benchmark datasets.
## 2 Background
### Energy-Based Prior Model
We assume that for the observed sample \(\mathbf{x}\in\mathbb{R}^{D}\), there exists \(\mathbf{z}\in\mathbb{R}^{d}\) as its unobserved latent variable vector. The complete-data distribution is
\[p_{\mathbf{\theta}}(\mathbf{z},\mathbf{x}):=p_{\mathbf{\alpha}}(\mathbf{z})p_{\mathbf{\beta}}(\mathbf{x}| \mathbf{z}),\;p_{\mathbf{\alpha}}(\mathbf{z}):=\frac{1}{Z_{\mathbf{\alpha}}}\exp\left(f_{\bm {\alpha}}(\mathbf{z})\right)p_{0}(\mathbf{z}), \tag{1}\]
where \(p_{\mathbf{\alpha}}(\mathbf{z})\) is the prior model with parameters \(\mathbf{\alpha}\), \(p_{\mathbf{\beta}}(\mathbf{x}|\mathbf{z})\) is the top-down generation model with parameters \(\mathbf{\beta}\), and \(\mathbf{\theta}=(\mathbf{\alpha},\mathbf{\beta})\). The prior model \(p_{\mathbf{\alpha}}(\mathbf{z})\) can be formulated as an energy-based model, which we refer to as the Latent-space Energy-Based Model (LEBM) [22] throughout the paper. In this formulation, \(f_{\mathbf{\alpha}}(\mathbf{z})\) is parameterized by a neural network with scalar output, \(Z_{\mathbf{\alpha}}\) is the partition function, and \(p_{0}(\mathbf{z})\) is standard normal as a reference distribution. The prior model in Eq. (1) can be interpreted as an energy-based correction or exponential tilting of the original prior distribution \(p_{0}\). The generation model follows \(p_{\mathbf{\beta}}(\mathbf{x}|\mathbf{z})=\mathcal{N}(g_{\mathbf{\beta}}(\mathbf{z}),\sigma^{2} \mathbf{I}_{D})\), where \(g_{\mathbf{\beta}}\) is the generator network and \(\sigma^{2}\) takes a pre-specified value as in VAE [1]. This is equivalent to using \(l_{2}\) error for reconstruction.
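As a concrete reference for the formulation above, a minimal PyTorch sketch of the energy-based tilting in Eq. (1) could look as follows; the network width and depth here are illustrative choices of ours, not those of the paper:

```python
import torch
import torch.nn as nn

class LatentEBM(nn.Module):
    """Scalar energy f_alpha(z); p_alpha(z) = exp(f_alpha(z)) p_0(z) / Z_alpha."""
    def __init__(self, z_dim=100, hidden=200):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 1),
        )

    def unnorm_log_prob(self, z):
        # log p_alpha(z) up to the constant -log Z_alpha
        log_p0 = -0.5 * (z ** 2).sum(dim=-1)      # standard normal reference
        return self.f(z).squeeze(-1) + log_p0
```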
The parameters of LEBM and the generation model can be learned by Maximum Likelihood Estimation (MLE) [22]. To be specific, given the training data \(\mathbf{x}\), the gradients for updating \(\mathbf{\alpha},\mathbf{\beta}\) are,
\[\delta_{\mathbf{\alpha}}(\mathbf{x}):=\mathbb{E}_{p_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x})} \left[\nabla_{\mathbf{\alpha}}f_{\mathbf{\alpha}}(\mathbf{z})\right]-\mathbb{E}_{p_{\mathbf{ \alpha}}(\mathbf{z})}\left[\nabla_{\mathbf{\alpha}}f_{\mathbf{\alpha}}(\mathbf{z})\right],\; \delta_{\mathbf{\beta}}(\mathbf{x}):=\mathbb{E}_{p_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x})}\left[ \nabla_{\mathbf{\beta}}\log p_{\mathbf{\beta}}(\mathbf{x}|\mathbf{z})\right]. \tag{2}\]
In practice, one may use the Monte-Carlo average to estimate the expectations in Eq. (2). This involves sampling from the prior \(p_{\mathbf{\alpha}}(\mathbf{z})\) and the posterior \(p_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x})\) distribution using MCMC, specifically Langevin Dynamics (LD) [32], to estimate the expectations and hence the gradient. For a target
distribution \(\pi(\mathbf{z})\), the dynamics iterates
\[\mathbf{z}_{t+1}=\mathbf{z}_{t}+\frac{s^{2}}{2}\nabla_{\mathbf{z}_{t}}\log\pi(\mathbf{z}_{t})+s \mathbf{w}_{t},\;t=0,1,...,T-1,\;\mathbf{w}_{t}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d}), \tag{3}\]
where \(s\) is a small step size. One can draw \(\mathbf{z}_{0}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d})\) to initialize the chain. For a sufficiently small step size \(s\), the distribution of \(\mathbf{z}_{t}\) will converge to \(\pi\) as \(t\to\infty\)[32]. However, it is prohibitively expensive to run LD until convergence in most cases, so in practice we may resort to a limited number of LD iterations for sampling. This non-convergent short chain yields a moment-matching distribution close to the true \(\pi(\mathbf{z})\) but often biased, which was dubbed short-run LD [15; 23; 24; 25].
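A minimal implementation of the short-run LD sampler in Eq. (3) is straightforward; the sketch below (ours) uses autograd to obtain the score of an unnormalized target density:

```python
import torch

def langevin_sample(z, log_prob, n_steps=30, step_size=0.1):
    """T-step short-run Langevin dynamics, Eq. (3); `log_prob` returns the
    unnormalized log density of the target, so its gradient is the score."""
    z = z.detach().clone()
    for _ in range(n_steps):
        z.requires_grad_(True)
        score = torch.autograd.grad(log_prob(z).sum(), z)[0]
        z = (z + 0.5 * step_size ** 2 * score
               + step_size * torch.randn_like(z)).detach()
    return z
```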
### Denoising Diffusion Probabilistic Model
Closely related to EBMs are the Denoising Diffusion Probabilistic Models (DDPMs) [5; 7; 8]. As pointed out in [5; 8], the sampling procedure of DDPM with \(\mathbf{\epsilon}\)-prediction parametrization resembles LD; \(\mathbf{\epsilon}\) (predicted noise) plays a similar role to the gradient of the log density [8].
In the formulation proposed by Kingma et al. [33], the DDPM parameterized by \(\mathbf{\phi}\) is specified by a noise schedule built upon \(\lambda_{s}=\log[\alpha_{s}^{2}/\sigma_{s}^{2}]\), _i.e._, the log signal-to-noise ratio, that decreases monotonically with \(s\). \(\alpha_{s}\) and \(\sigma_{s}^{2}\) are strictly positive scalar-valued functions of \(s\). We use \(\mathbf{z}_{0}\) to denote training data in \(\mathbb{R}^{d}\). The forward-time diffusion process \(q(\mathbf{z}|\mathbf{z}_{0})\) is defined as:

\[q(\mathbf{z}_{s}|\mathbf{z}_{0})=\mathcal{N}(\mathbf{z}_{s};\alpha_{s}\mathbf{z}_{0},\sigma_{s}^{2}\mathbf{I}_{d}),\quad q(\mathbf{z}_{s^{\prime}}|\mathbf{z}_{s})=\mathcal{N}(\mathbf{z}_{s^{\prime}};(\alpha_{s^{\prime}}/\alpha_{s})\mathbf{z}_{s},\sigma_{s^{\prime}|s}^{2}\mathbf{I}_{d}), \tag{4}\]
where \(0\leq s<s^{\prime}\leq S\) and \(\sigma_{s^{\prime}|s}^{2}=(1-e^{\lambda_{s^{\prime}}-\lambda_{s}})\sigma_{s^{\prime}}^{2}\). Noticing that the forward process can be reverted as \(q(\mathbf{z}_{s}|\mathbf{z}_{s^{\prime}},\mathbf{z}_{0})=\mathcal{N}(\mathbf{z}_{s};\tilde{\mathbf{\mu}}_{s|s^{\prime}}(\mathbf{z}_{s^{\prime}},\mathbf{z}_{0}),\tilde{\sigma}_{s|s^{\prime}}^{2}\mathbf{I}_{d})\), an ancestral sampler \(q_{\mathbf{\phi}}(\mathbf{z}_{s}|\mathbf{z}_{s^{\prime}})\)[8] that starts at \(\mathbf{z}_{S}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d})\) can be derived accordingly [33]:

\[\tilde{\mathbf{\mu}}_{s|s^{\prime}}(\mathbf{z}_{s^{\prime}},\mathbf{z}_{0})=e^{\lambda_{s^{\prime}}-\lambda_{s}}(\alpha_{s}/\alpha_{s^{\prime}})\mathbf{z}_{s^{\prime}}+(1-e^{\lambda_{s^{\prime}}-\lambda_{s}})\alpha_{s}\mathbf{z}_{0},\quad\tilde{\sigma}_{s|s^{\prime}}^{2}=(1-e^{\lambda_{s^{\prime}}-\lambda_{s}})\sigma_{s}^{2}, \tag{5}\]
\[\mathbf{z}_{s}=\tilde{\mathbf{\mu}}_{s|s^{\prime}}(\mathbf{z}_{s^{\prime}},\hat{\mathbf{z}}_{0})+\sqrt{(\tilde{\sigma}_{s|s^{\prime}}^{2})^{1-\gamma}(\sigma_{s^{\prime}|s}^{2})^{\gamma}}\,\mathbf{\epsilon},\]
where \(\mathbf{\epsilon}\) is standard Gaussian noise, \(\hat{\mathbf{z}}_{0}\) is the prediction of \(\mathbf{z}_{0}\) by the DDPM \(\mathbf{\phi}\), and \(\gamma\) is a hyperparameter that controls the noise magnitude, following [34]. The goal of DDPM is to recover the distribution of \(\mathbf{z}_{0}\) from the given Gaussian noise distribution. It can be trained by optimizing \(\mathbb{E}_{\mathbf{\epsilon},\lambda}\left[\left\|\mathbf{\epsilon}_{\mathbf{\phi}}(\mathbf{z}_{\lambda})-\mathbf{\epsilon}\right\|_{2}^{2}\right],\) where \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d})\) and \(\lambda\) is drawn from a distribution of log signal-to-noise ratio \(p(\lambda)\) over uniformly sampled times \(s\in[0,S]\). This loss can be justified as a lower bound on the data log-likelihood [8; 33] or as a variant of denoising score matching [7; 35]. We will exploit in this paper the connection between DDPMs and LD sampling of EBMs, based upon which we achieve better sampling performance for LEBM compared with short-run LD.
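For reference, one ancestral sampling step of Eq. (5) can be written compactly as below (our sketch); the inputs are the schedule quantities at times \(s\) and \(s^{\prime}\), and `z0_hat` is the model's prediction of \(\mathbf{z}_{0}\):

```python
import math
import torch

def ancestral_step(z_sp, z0_hat, lam_s, lam_sp, alpha_s, alpha_sp,
                   sigma2_s, sigma2_sp, gamma=0.3):
    """One ancestral step z_{s'} -> z_s of Eq. (5); gamma interpolates between
    the lower-bound (reverse) and upper-bound (forward) variances."""
    r = math.exp(lam_sp - lam_s)                        # e^{lambda_s' - lambda_s}
    mu = r * (alpha_s / alpha_sp) * z_sp + (1.0 - r) * alpha_s * z0_hat
    var_lower = (1.0 - r) * sigma2_s                    # tilde sigma^2_{s|s'}
    var_upper = (1.0 - r) * sigma2_sp                   # sigma^2_{s'|s}
    var = var_lower ** (1.0 - gamma) * var_upper ** gamma
    return mu + math.sqrt(var) * torch.randn_like(z_sp)
```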
## 3 Method
In this section, we introduce the diffusion-based amortization method for long-run MCMC sampling in learning LEBM in Section 3.1. The learning algorithm of LEBM based on the proposed method and details about implementation are then presented in Section 3.2 and Section 3.3, respectively.
Figure 1: **Learning the DAMC sampler.** The training samples for updating the sampler to \(\mathbf{\phi}_{k+1}\) are obtained by \(T\)-step short-run LD, initialized with samples from the current learned sampler \(\mathbf{\phi}_{k}\). Best viewed in color.
### Amortizing MCMC with DDPM
**Amortized MCMC.** Using the same notation as in Section 2, we denote the starting distribution of LD as \(\pi_{0}(\mathbf{z})\), the distribution after the \(t\)-th iteration as \(\pi_{t}(\mathbf{z})\) and the target distribution as \(\pi(\mathbf{z})\). The trajectory of LD in Eq. (3) is typically specified by its transition kernel \(\mathcal{K}(\mathbf{z}|\mathbf{z}^{\prime})\). The process starts with drawing \(\mathbf{z}_{0}\) from \(\pi_{0}(\mathbf{z})\) and iteratively samples \(\mathbf{z}_{t}\) at the \(t\)-th iteration from the transition kernel conditioned on \(\mathbf{z}_{t-1}\), _i.e._, \(\pi_{t}(\mathbf{z})=\mathcal{K}\pi_{t-1}(\mathbf{z})\), where \(\mathcal{K}\pi_{t-1}(\mathbf{z}):=\int\mathcal{K}(\mathbf{z}|\mathbf{z}^{\prime})\pi_{t-1}(\mathbf{z}^{\prime})d\mathbf{z}^{\prime}\). Recursively, \(\pi_{t}=\mathcal{K}_{t}\pi_{0}\), where \(\mathcal{K}_{t}\) denotes the \(t\)-step transition kernel. LD can therefore be viewed as approximating a fixed point update in a non-parametric fashion, since the target distribution \(\pi\) is a stationary distribution, \(\pi(\mathbf{z}):=\int\mathcal{K}(\mathbf{z}|\mathbf{z}^{\prime})\pi(\mathbf{z}^{\prime})d\mathbf{z}^{\prime},\forall\mathbf{z}\). This has motivated several works on more general approximations of this update [36; 37; 38] with the help of neural samplers.
Inspired by [36], we propose to use the following framework for amortizing the LD in learning the LEBM. Formally, let \(\mathcal{Q}=\{q_{\mathbf{\phi}}\}\) be the set of amortized samplers parameterized by \(\mathbf{\phi}\). Given the transition kernel \(\mathcal{K}\), the goal is to find a sampler \(q_{\mathbf{\phi}*}\) to closely approximate the target distribution \(\pi\). This can be achieved by iteratively distill \(T\)-step transitions of LD into \(q_{\mathbf{\phi}}\):
\[q_{\mathbf{\phi}_{k}}\leftarrow\operatorname*{arg\,min}_{q_{\mathbf{\phi} }\in\mathcal{Q}}\mathcal{D}[q_{\mathbf{\phi}_{k-1},T}||q_{\mathbf{\phi}}],\;q_{\mathbf{ \phi}_{k-1},T}:=\mathcal{K}_{T}q_{\mathbf{\phi}_{k-1}},\;q_{\mathbf{\phi}_{0}}\approx \pi_{0},\;k=0,...,K-1. \tag{6}\]
where \(\mathcal{D}[\cdot||\cdot]\) is the Kullback-Leibler Divergence (KLD) measure between distributions. Concretely, Eq. (6) means that to recover the target distribution \(\pi\), instead of using long-run LD, we can repeat the following steps: i) employ a \(T\)-step short-run LD initialized with the current sampler \(q_{\mathbf{\phi}_{k-1}}\) to approximate \(\mathcal{K}_{T}q_{\mathbf{\phi}_{k-1}}\) as the target distribution of the current sampler, and ii) update the current sampler \(q_{\mathbf{\phi}_{k-1}}\) to \(q_{\mathbf{\phi}_{k}}\). The correct convergence of \(q_{\mathbf{\phi}}\) to \(\pi\) with Eq. (6) is supported by the standard theory of Markov chains [39], which suggests that the update in Eq. (6) is monotonically decreasing in terms of KLD, \(\mathcal{D}[q_{\mathbf{\phi}_{k}}||\pi]\leq\mathcal{D}[q_{\mathbf{\phi}_{k-1}}||\pi]\). We refer to Appendix A.1 for a detailed explanation and discussion of this statement. In practice, one can apply gradient-based methods to minimize \(\mathcal{D}[q_{\mathbf{\phi}_{k-1},T}||q_{\mathbf{\phi}}]\) and approximate Eq. (6) for the update from \(q_{\mathbf{\phi}_{k-1}}\) to \(q_{\mathbf{\phi}_{k}}\). The above formulation provides a generic and flexible framework for amortizing the potentially long MCMC.
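In pseudocode, the fixed-point iteration in Eq. (6) amounts to the following loop (our sketch, reusing `langevin_sample` from above; the `sample`/`fit` methods are hypothetical stand-ins for drawing from and updating \(q_{\mathbf{\phi}}\)):

```python
def amortize_mcmc(sampler, target_log_prob, n_rounds, T=30, M=6, batch=64):
    """Eq. (6): each round distills a T-step LD segment, initialized from the
    current sampler, back into the sampler."""
    for _ in range(n_rounds):
        z = sampler.sample(batch)                           # z ~ q_{phi_{k-1}}
        z = langevin_sample(z, target_log_prob, n_steps=T)  # z ~ K_T q_{phi_{k-1}}
        sampler.fit(z.detach(), m_steps=M)                  # update to q_{phi_k}
    return sampler
```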
**Diffusion-based amortization.** To avoid clutter, we simply write \(q_{\mathbf{\phi}_{k-1},T}\) as \(q_{T}\). We can see that
\[\operatorname*{arg\,min}_{q_{\mathbf{\phi}}}\mathcal{D}[q_{T}||q_{\mathbf{\phi}}]= \operatorname*{arg\,min}_{q_{\mathbf{\phi}}}-\mathcal{H}(q_{T})+\mathcal{H}(q_{T}, q_{\mathbf{\phi}})=\operatorname*{arg\,min}_{q_{\mathbf{\phi}}}-\mathbb{E}_{q_{T}} \left[\log q_{\mathbf{\phi}}\right], \tag{7}\]
where \(\mathcal{H}\) represents the entropy of distributions. The selection of the sampler \(q_{\mathbf{\phi}}\) is a matter of design. According to Eq. (7), we may expect the following properties from \(q_{\mathbf{\phi}}\): i) having an analytically tractable expression for the exact value or a lower bound of the log-likelihood, ii) being easy to draw samples from and iii) being capable of closely approximating the given distribution \(\{q_{T}\}\). In practice, iii) is important for the convergence of Eq. (6). If \(q_{\mathbf{\phi}}\) is far away from \(q_{T}\) in each iteration, then the non-increasing KLD property \(\mathcal{D}[q_{\mathbf{\phi}_{k}}||\pi]\leq\mathcal{D}[q_{\mathbf{\phi}_{k-1}}||\pi]\) might not hold, and the resulting amortized sampler would not converge to the true target distribution \(\pi(\mathbf{z})\).
For the choice of \(q_{\mathbf{\phi}}\), let us consider distilling the gradient field of \(q_{T}\) in each iteration, so that the resulting sampler is close to the \(q_{T}\) distribution. This naturally points to the DDPMs [8]. To be specific, learning a DDPM with \(\mathbf{\epsilon}\)-prediction parameterization is equivalent to fitting the finite-time marginal of a sampling chain resembling annealed Langevin dynamics [7; 8; 31]. Moreover, it also fulfills i) and ii) of the desired properties mentioned above. We can plug in the objective of DDPM (Section 2.2), which is a lower bound of \(\log q_{\mathbf{\phi}}\), to obtain the gradient-based update rule for \(q_{\mathbf{\phi}}\):
\[\mathbf{\phi}_{k-1}^{(i+1)}\leftarrow\mathbf{\phi}_{k-1}^{(i)}-\eta\nabla_{\mathbf{\phi}} \mathbb{E}_{\mathbf{\epsilon},\lambda}\left[\left\|\mathbf{\epsilon}(\mathbf{z}_{\lambda})- \mathbf{\epsilon}\right\|_{2}^{2}\right],\;\mathbf{\phi}_{k}^{(0)}\leftarrow\mathbf{\phi} _{k-1}^{(M)},\;i=0,1,...,M-1 \tag{8}\]
where \(\mathbf{\epsilon}\sim\mathcal{N}(0,\mathbf{I}_{d})\). \(\lambda\) is drawn from a distribution of log signal-to-noise ratio \(p(\lambda)\). \(\eta\) is the step size for the update, and \(M\) is the number of iterations needed in Eq. (8). In practice, we find that when amortizing the LD sampling chain, a light-weight DDPM \(q_{\mathbf{\phi}}\) updated with very small \(M\), _i.e._, a few steps of the Eq. (8) iteration, approximates Eq. (6) well. We provide a possible explanation using the Fisher information by scoping the asymptotic behavior of this update rule in Appendix A.2. We term the resulting sampler Diffusion-Amortized MCMC (DAMC).
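The inner update of Eq. (8) is a handful of standard denoising-score-matching steps on the current LD samples. The sketch below (ours) uses a simple cosine (variance-preserving) schedule for concreteness; the schedule and the `eps_net(z_t, t)` signature are our assumptions, not the paper's exact design:

```python
import torch

def ddpm_update(eps_net, opt, z0, M=6):
    """Few-step update of Eq. (8): fit the noise predictor to the current LD
    samples z0 under a simple cosine noise schedule."""
    for _ in range(M):
        t = torch.rand(z0.shape[0], device=z0.device)        # times in [0, 1]
        alpha = torch.cos(0.5 * torch.pi * t).unsqueeze(-1)  # signal coefficient
        sigma = torch.sin(0.5 * torch.pi * t).unsqueeze(-1)  # noise coefficient
        eps = torch.randn_like(z0)
        z_t = alpha * z0 + sigma * eps
        loss = ((eps_net(z_t, t) - eps) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
```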
### Approximate MLE with DAMC
In this section, we show how to integrate the DAMC sampler into the learning framework of LEBM and form a symbiosis between these models. Given a set of \(N\) training samples \(\{\mathbf{x}_{i}\}_{i=1}^{N}\) independently drawn from the unknown data distribution \(p_{\text{data}}(\mathbf{x})\), the model \(p_{\mathbf{\theta}}\) (Section 2.1) can be trained
by maximizing the log-likelihood over training samples \(\mathcal{L}(\mathbf{\theta})=\frac{1}{N}\sum_{i=1}^{N}\log p_{\mathbf{\theta}}\left(\mathbf{x}_ {i}\right)\). Doing so typically requires computing the gradients of \(\mathcal{L}(\mathbf{\theta})\), where for each \(\mathbf{x}_{i}\) the learning gradient satisfies:
\[\nabla_{\mathbf{\theta}}\log p_{\mathbf{\theta}}(\mathbf{x}_{i})=\mathbb{E}_{p_{\mathbf{\theta}}(\mathbf{z}_{i}|\mathbf{x}_{i})}\left[\nabla_{\mathbf{\theta}}\log p_{\mathbf{\theta}}(\mathbf{z}_{i},\mathbf{x}_{i})\right]=\Big{(}\underbrace{\mathbb{E}_{p_{\mathbf{\theta}}(\mathbf{z}_{i}|\mathbf{x}_{i})}\left[\nabla_{\mathbf{\alpha}}f_{\mathbf{\alpha}}(\mathbf{z}_{i})\right]-\mathbb{E}_{p_{\mathbf{\alpha}}(\mathbf{z}_{i})}\left[\nabla_{\mathbf{\alpha}}f_{\mathbf{\alpha}}(\mathbf{z}_{i})\right]}_{\delta_{\mathbf{\alpha}}(\mathbf{x}_{i})},\;\underbrace{\mathbb{E}_{p_{\mathbf{\theta}}(\mathbf{z}_{i}|\mathbf{x}_{i})}\left[\nabla_{\mathbf{\beta}}\log p_{\mathbf{\beta}}(\mathbf{x}_{i}|\mathbf{z}_{i})\right]}_{\delta_{\mathbf{\beta}}(\mathbf{x}_{i})}\Big{)}. \tag{9}\]
Intuitively, based on the discussion in Section 2.1 and Section 3.1, we can approximate the distributions in Eq. (9) by drawing samples from \([\mathbf{z}_{i}|\mathbf{x}_{i}]\sim\mathcal{K}_{T,\mathbf{z}_{i}|\mathbf{x}_{i}}q_{\mathbf{\phi}_{ k}}(\mathbf{z}_{i}|\mathbf{x}_{i})\), \(\mathbf{z}_{i}\sim\mathcal{K}_{T,\mathbf{z}_{i}}q_{\mathbf{\phi}_{k}}(\mathbf{z}_{i})\), to estimate the expectations and hence the learning gradient. Here we learn the DAMC samplers \(q_{\mathbf{\phi}_{k}}(\mathbf{z}_{i}|\mathbf{x}_{i})\) and \(q_{\mathbf{\phi}_{k}}(\mathbf{z}_{i})\) for the posterior and prior sampling chain, respectively. \(\mathbf{\phi}_{k}\) represents the current sampler as in Section 3.1; \(\mathcal{K}_{T,\mathbf{z}_{i}|\mathbf{x}_{i}}\) and \(\mathcal{K}_{T,\mathbf{z}_{i}}\) are the transition kernels for posterior and prior sampling chain. Equivalently, this means to better estimate the learning gradients we can i) first draw approximate posterior and prior MCMC samples from the current \(q_{\mathbf{\phi}_{k}}\) model, and ii) update the approximation of the prior \(p_{\mathbf{\alpha}}(\mathbf{z})\) and posterior \(p_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x})\) distributions with additional \(T\)-step LD initialized with \(q_{\mathbf{\phi}_{k}}\) samples. These updated samples are closer to \(p_{\mathbf{\theta}}(\mathbf{z}_{i}|\mathbf{x}_{i})\) and \(p_{\mathbf{\alpha}}(\mathbf{z}_{i})\) compared with short-run LD samples based on our discussion in Section 2.2. Consequently, the diffusion-amortized LD samples provide a generally better estimation of the learning gradients and lead to better performance, as we will show empirically in Section 4. After updating \(\mathbf{\theta}=(\mathbf{\alpha},\mathbf{\beta})\) based on these approximate samples with Eq. (9), we can update \(q_{\mathbf{\phi}_{k}}\) with Eq. (8) to distill the sampling chain into \(q_{\mathbf{\phi}_{k+1}}\). As shown in Fig. 1, we can see that the whole learning procedure iterates between the approximate MLE of \(p_{\mathbf{\theta}}\) and the amortization of MCMC with \(q_{\mathbf{\phi}}\). We refer to Appendix A.3 for an extended discussion of this procedure.
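In code, the Monte-Carlo estimate of Eq. (9) reduces to two surrogate losses (our sketch; `ebm` and `generator` follow the interfaces sketched in Section 2.1, and the decoder variance \(\sigma^{2}\) is an assumed value):

```python
import torch

def lebm_losses(ebm, generator, x, z_post, z_prior, sigma2=0.3 ** 2):
    """Monte-Carlo surrogate losses whose gradients match Eq. (9): minimizing
    `loss_alpha` performs ascent on E_post[f] - E_prior[f]; `loss_beta` is the
    Gaussian reconstruction term."""
    z_post, z_prior = z_post.detach(), z_prior.detach()
    loss_alpha = ebm.f(z_prior).mean() - ebm.f(z_post).mean()
    recon = generator(z_post)
    loss_beta = ((x - recon) ** 2).sum(dim=(1, 2, 3)).mean() / (2 * sigma2)
    return loss_alpha, loss_beta
```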
After learning the models, we can use either DAMC or LEBM for prior sampling. For DAMC, we may draw samples from \(q_{\mathbf{\phi}}(\mathbf{z}_{i})\) with Eq. (5). Prior sampling with LEBM still requires short-run LD initialized from \(\mathcal{N}(0,\mathbf{I}_{d})\). For posterior sampling, we may sample from \(\mathcal{K}_{T,\mathbf{z}_{i}|\mathbf{x}_{i}}q_{\mathbf{\phi}_{k}}(\mathbf{z}_{i}|\mathbf{x}_{i})\), _i.e._, first draw samples from DAMC and then run a few steps of LD to obtain posterior samples.
```
Require: initial parameters \((\mathbf{\alpha},\mathbf{\beta},\mathbf{\phi})\); learning rate \(\eta=(\eta_{\mathbf{\alpha}},\eta_{\mathbf{\beta}},\eta_{\mathbf{\phi}})\); observed examples \(\{\mathbf{x}^{(i)}\}_{i=1}^{N}\); prob. of uncond. training \(p_{\mathrm{uncond}}\) for the DAMC sampler.
Ensure: \(\left(\mathbf{\theta}^{(K)}=\{\mathbf{\alpha}^{(K)},\mathbf{\beta}^{(K)}\},\mathbf{\phi}^{(K)}\right)\).
1: for \(k=0:K-1\) do
2:   Sample a minibatch of data \(\{\mathbf{x}^{(i)}\}_{i=1}^{B}\);
3:   Draw DAMC samples: for each \(\mathbf{x}^{(i)}\), draw \(\mathbf{z}^{(i)}_{+}\) and \(\mathbf{z}^{(i)}_{-}\) from \(q_{\mathbf{\phi}_{k}}(\mathbf{z}|\mathbf{x}^{(i)})\);
4:   Prior LD update: for each \(\mathbf{x}^{(i)}\), update \(\mathbf{z}^{(i)}_{-}\) using Eq. (3), with \(\pi(\mathbf{z})=p_{\mathbf{\alpha}^{(k)}}(\mathbf{z})\);
5:   Posterior LD update: for each \(\mathbf{x}^{(i)}\), update \(\mathbf{z}^{(i)}_{+}\) using Eq. (3), with \(\pi(\mathbf{z})=p_{\mathbf{\theta}^{(k)}}(\mathbf{z}|\mathbf{x}^{(i)})\);
6:   Update \(\mathbf{\theta}^{(k)}\): update \(\mathbf{\alpha}^{(k)}\) and \(\mathbf{\beta}^{(k)}\) using Monte-Carlo estimates (_i.e._, Monte-Carlo averages) of Eq. (9) with \(\{\mathbf{z}^{(i)}_{+}\}_{i=1}^{B}\) and \(\{\mathbf{z}^{(i)}_{-}\}_{i=1}^{B}\);
7:   Update \(\mathbf{\phi}^{(k)}\): update \(\mathbf{\phi}^{(k)}\) using Eq. (8) with \(p_{\mathrm{uncond}}\) and \(\{\mathbf{z}^{(i)}_{+}\}_{i=1}^{B}\) as the target.
8: end for
```
**Algorithm 1****Learning algorithm of DAMC.**
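Putting the pieces together, a condensed PyTorch-style rendering of Algorithm 1 might look as follows; it reuses `langevin_sample`, `lebm_losses`, and `ddpm_update` from the sketches above, while `damc.sample_posterior`, the optimizers `opt_theta`/`opt_phi`, and the data `loader` are hypothetical stand-ins:

```python
sigma2 = 0.3 ** 2  # assumed decoder variance

def log_joint(z, x):
    # log p_alpha(z) + log p_beta(x|z), up to constants
    rec = ((x - generator(z)) ** 2).sum(dim=(1, 2, 3))
    return ebm.unnorm_log_prob(z) - rec / (2 * sigma2)

for x in loader:
    z_pos = damc.sample_posterior(x)          # initialize posterior chain (l.3)
    z_pri = damc.sample_posterior(x)          # initialize prior chain (l.3)
    z_pri = langevin_sample(z_pri, ebm.unnorm_log_prob, n_steps=60)       # l.4
    z_pos = langevin_sample(z_pos, lambda z: log_joint(z, x), n_steps=30) # l.5
    loss_a, loss_b = lebm_losses(ebm, generator, x, z_pos, z_pri)         # l.6
    opt_theta.zero_grad(); (loss_a + loss_b).backward(); opt_theta.step()
    ddpm_update(eps_net, opt_phi, z_pos.detach(), M=6)  # distill into q_phi, l.7
```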
### Implementation
In order to efficiently model both \(q_{\mathbf{\phi}_{k}}(\mathbf{z}_{i}|\mathbf{x}_{i})\) and \(q_{\mathbf{\phi}_{k}}(\mathbf{z}_{i})\), we follow the method of [40] to train a single neural network to parameterize both models, where \(q_{\mathbf{\phi}_{k}}(\mathbf{z}_{i}|\mathbf{x}_{i})\) can be viewed as a conditional DDPM with the embedding of \(\mathbf{x}_{i}\) produced by an encoder network as its condition, and \(q_{\mathbf{\phi}_{k}}(\mathbf{z}_{i})\) an unconditional one. For \(q_{\mathbf{\phi}_{k}}(\mathbf{z}_{i})\), we can simply input a null token \(\emptyset\) as its condition when predicting the noise \(\mathbf{\epsilon}\). We jointly train both models by randomly nullifying the inputs with the probability \(p_{\mathrm{uncond}}=0.2\). During training, we use samples from \(q_{\mathbf{\phi}_{k}}(\mathbf{z}_{i}|\mathbf{x}_{i})\) to initialize both prior and posterior updates. For the posterior and prior DAMC samplers, we set the number of diffusion steps to \(100\). The number of iterations in Eq. (8) is set to \(M=6\) throughout the experiments. The LD runs \(T=30\) and \(T=60\) iterations for posterior and prior updates during training with a step size of \(s=0.1\). For
test time sampling from \(\mathcal{K}_{T,\mathbf{z}_{i}|\mathbf{x}_{i}}q_{\phi_{k}}(\mathbf{z}_{i}|\mathbf{x}_{i})\), \(T=10\) for the additional LD. For a fair comparison, we use the same LEBM and generator as in [22; 41] for all the experiments. We summarize the learning algorithm in Algorithm 1. Please see Appendices B and C for network architecture and further training details, as well as the pytorch-style pseudocode of the algorithm.
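The joint conditional/unconditional parameterization described above can be implemented by randomly dropping the condition, in the spirit of classifier-free training; in the sketch below (ours), using a zero embedding to stand in for the null token \(\emptyset\) is our assumption:

```python
import torch

def drop_condition(x_emb, p_uncond=0.2, null_token=None):
    """Randomly replace the encoder embedding of x with a null token so that a
    single DDPM parameterizes both q_phi(z|x) and q_phi(z)."""
    if null_token is None:
        null_token = torch.zeros_like(x_emb)    # our choice of null embedding
    keep = (torch.rand(x_emb.shape[0], 1, device=x_emb.device) > p_uncond).float()
    return keep * x_emb + (1.0 - keep) * null_token
```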
## 4 Experiments
In this section, we are interested in the following questions: (i) How does the proposed method compare with its previous counterparts (_e.g_., purely MCMC-based or variational methods)? (ii) How is the scalability of this method? (iii) How are the time and parameter efficiencies? (iv) Does the proposed method provide a desirable latent space? To answer these questions, we present a series of experiments on benchmark datasets including MNIST [42], SVHN [43], CelebA64 [44], CIFAR-10 [45], CelebAMask-HQ [46], FFHQ [10] and LSUN-Tower [47]. As to be shown, the proposed method demonstrates consistently better performance in various experimental settings compared with previous methods. We refer to Appendix D for details about the experiments.
### Generation and Inference: Prior and Posterior Sampling
**Generation and reconstruction.** We evaluate the quality of the generated and reconstructed images to examine the sampling quality of DAMC. Specifically, we would like to check i) how well DAMC fits the seen data, ii) whether DAMC provides better MCMC samples for learning LEBM and iii) the generalizability of DAMC on unseen data. We check the goodness of fit of DAMC by evaluating the quality of the images generated with DAMC prior samples. If DAMC does provide better MCMC samples for learning LEBM, we would expect a better fit to the data and hence improved generation quality of LEBM. We evaluate the performance of posterior sampling given unseen testing images by examining the reconstruction error on testing data. We benchmark our model against a variety of previous methods in two groups. The first group covers competing methods that adopt the variational learning scheme, including VAE [1], as well as recent two-stage methods such as 2-stage VAE [48], RAE [49] and NCP-VAE [50], whose prior distributions are learned with posterior samples in a second stage after the generator is trained. The second group includes methods that adopt MCMC-based sampling. It includes Alternating Back-Propagation (ABP) [51], Short-Run Inference (SRI) from [24] and the vanilla learning method of LEBM [22], which relies on short-run LD for both posterior and prior sampling. We also compare our method with the recently proposed Adaptive CE [41]. It learns a series of LEBMs adaptively during training, while these LEBMs are sequentially updated by density ratio estimation instead of MLE. To make fair comparisons, we follow the same evaluation protocol as in [22; 41].
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{SVHN} & \multicolumn{2}{c}{CelebA} & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CelebA-HQ} \\ \cline{2-9} & MSE & FID & MSE & FID & MSE & FID & MSE & FID \\ \hline VAE [1] & 0.019 & 46.78 & 0.021 & 65.75 & 0.057 & 106.37 & 0.031 & 180.49 \\
2s-VAE [48] & 0.019 & 42.81 & 0.021 & 44.40 & 0.056 & 72.90 & - & - \\ RAE [49] & 0.014 & 40.02 & 0.018 & 40.95 & 0.027 & 74.16 & - & - \\ NCP-VAE [50] & 0.020 & 33.23 & 0.021 & 42.07 & 0.054 & 78.06 & - & - \\ \hline Adaptive CE*[41] & 0.004 & 26.19 & 0.009 & 35.38 & **0.008** & 65.01 & - & - \\ \hline ABP [51] & - & 49.71 & - & 51.50 & 0.018 & 90.30 & 0.025 & 160.21 \\ SRI [24] & 0.018 & 44.86 & 0.020 & 61.03 & - & - & - & - \\ SRI (L=5) [24] & 0.011 & 35.32 & 0.015 & 47.95 & - & - & - & - \\ LEBM [22] & 0.008 & 29.44 & 0.013 & 37.87 & 0.020 & 70.15 & 0.025 & 133.07 \\ \hline Ours-LEBM & **0.002** & 21.17 & **0.005** & 35.67 & 0.015 & 60.89 & **0.023** & 89.54 \\ Ours-DAMC & & **18.76** & & **30.83** & & **57.72** & & **85.88** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **MSE(\(\downarrow\)) and FID(\(\downarrow\)) obtained from models trained on different datasets**. The FID scores are computed based on 50k generated images and training images for the first three datasets and 5k images for the CelebA-HQ dataset. The MSEs are computed based on unseen testing images. We highlight our model results in gray color. The best and second-best performances are marked in bold numbers and underlines, respectively; tables henceforth follow this format. *[41] uses a prior model with 4x parameters compared with [22] and ours.
For generation, we report the FID scores [52] in Table 1. We observe that i) the DAMC sampler, denoted as Ours-DAMC, provides superior generation performance compared to baseline models, and ii) the LEBM learned with samples from DAMC, denoted as Ours-LEBM, demonstrates significant performance improvement compared with the LEBM trained with short-run LD, denoted as LEBM. These results confirm that DAMC is a reliable sampler and indeed partly addresses the learning issue of LEBM caused by short-run LD. We would like to point out that the improvement is clearer on the CelebAMask-HQ dataset, where the input data is of higher dimension (\(256\times 256\)) and contains richer details compared with other datasets. This illustrates the superiority of the DAMC sampler over short-run LD when the target distribution is potentially highly multi-modal. We show qualitative results of generated samples in Fig. 2, where we observe that our method can generate diverse, sharp and high-quality samples. For reconstruction, we compare our method with baseline methods in terms of MSE in Table 1. We observe that our method demonstrates reconstruction errors that are competitive with, if not better than, those of competing methods. Additional qualitative results of generation and reconstruction are presented in Appendices E.2 and E.3.
**GAN inversion.** We have examined the scalability of our method on the CelebAMask-HQ dataset. Next, we provide more results on high-dimensional and highly multi-modal data by performing GAN inversion [54] using the proposed method. Indeed, we may regard GAN inversion as an inference problem and a special case of posterior sampling. As a suitable testbed, the StyleGAN structure [10] is specifically considered as our generator in the experiments: [53] points out that to effectively infer the latent representation of a given image, the GAN inversion method needs to consider an extended latent space of StyleGAN, consisting of 14 different 512-dimensional latent vectors. We attempt to use the DAMC sampler for GAN inversion. We benchmark our method against i) learning an encoder that maps a given image to the latent space [16], which relates to the variational methods for posterior inference, ii) optimizing a random initial latent code by minimizing the reconstruction error and perceptual loss [53], which can be viewed as a variant of LD sampling, and iii) optimizing the latent code by minimizing both the objectives used in ii) and the energy score provided by LEBM. We use the pretrained weights provided by [18] for the experiments. Both the DAMC sampler and the encoder-based method are augmented with \(100\) post-processing optimization iterations. We refer to Appendix D for more experiment details. We test LEBM-based inversion with different optimization iterations. To be specific, 1x, 2x, and 4x represent 100, 200, and 400 iterations respectively. We can see in Table 2 that DAMC performs better than all the baseline methods on the unseen testing data, which supports the efficacy of our method in high-dimensional settings. We provide qualitative results in Fig. 3.
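Schematically, the inversion protocol described above amounts to initializing the extended latent code with the DAMC posterior sampler and refining it with 100 optimization steps; in this sketch (ours), `damc`, `stylegan`, `lpips`, and the learning rate are hypothetical stand-ins:

```python
import torch

w = damc.sample_posterior(x)            # (batch, 14, 512) extended latent code
w = w.detach().requires_grad_(True)
opt = torch.optim.Adam([w], lr=1e-2)    # learning rate is our assumption
for _ in range(100):                    # 100 post-processing iterations
    rec = stylegan(w)
    loss = ((rec - x) ** 2).mean() + lpips(rec, x).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```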
**Parameter efficiency and sampling time.** One potential disadvantage of our method is its parameter inefficiency from introducing an extra DDPM.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{FFHQ} & \multicolumn{2}{c}{LSUN-T} \\ \cline{2-5} & MSE & FID & MSE & FID \\ \hline Opt. [53] & 0.055 & 149.39 & 0.080 & 240.11 \\ Enc. [16] & 0.028 & 62.32 & 0.079 & 132.41 \\ \hline
[22] w/ 1x & 0.054 & 149.21 & 0.072 & 239.51 \\
[22] w/ 2x & 0.039 & 101.59 & 0.066 & 163.20 \\
[22] w/ 4x & 0.032 & 84.64 & 0.059 & 111.53 \\ \hline
**Ours** & **0.025** & **52.85** & **0.059** & **80.42** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **MSE(\(\downarrow\)) and FID(\(\downarrow\)) for GAN inversion on different datasets**. Opt. and Enc. denotes the optimization-based and encoder-based methods.
Figure 2: **Samples generated from the DAMC sampler and LEBM trained on SVHN, CelebA, CIFAR-10 and CelebA-HQ datasets. In each sub-figure, the first four rows are generated by the DAMC sampler. The last four rows are generated by LEBM trained with the DAMC sampler.**
Fortunately, our models are in the latent space, so the network is lightweight. To be specific, on the CIFAR-10 dataset the number of parameters in the DDPM is only around \(10\%\) (excluding the encoder) of those in the generator. The method has competitive time efficiency. With a batch size of \(64\), DAMC prior sampling takes \(0.3\) s, while \(100\) steps of short-run LD with LEBM takes \(0.2\) s. DAMC posterior sampling takes \(1.0\) s, while LEBM takes \(8.0\) s. Further discussion of the limitations can be found in Appendix G.1.
### Analysis of Latent Space
**Long-run Langevin transition.** In this section, we examine the energy landscape induced by the learned LEBM. We expect that a well-trained \(p_{\mathbf{\alpha}}(\mathbf{z})\), fueled by better prior and posterior samples from the DAMC sampler, would lead to an energy landscape with regular geometry. In Fig. 4, we visualize the transition of LD initialized from \(\mathcal{N}(0,\mathbf{I}_{d})\) towards \(p_{\mathbf{\alpha}}(\mathbf{z})\) on the model trained on the CelebA dataset. Additional visualizations of transitions on the SVHN and CIFAR-10 datasets can be found in Appendix E.4. The LD iterates for \(200\) and \(2500\) steps, both of which are longer than the LD within each training iteration (\(60\) steps). For the \(200\)-step set-up, we can see that the generation quality quickly improves by exploring the local modes (demonstrating different facial features, _e.g._, hairstyle, facial expression and lighting). For the \(2500\)-step long-run set-up, we can see that the LD produces consistently valid results without the oversaturation issue of long-run chain samples [23]. These observations provide empirical evidence that the LEBM is well-trained.
Figure 4: **Transition of Markov chains initialized from \(\mathcal{N}(0,\mathbf{I}_{d})\) towards \(p_{\mathbf{\alpha}}(\mathbf{z})\).** We present results by running LD for 200 and 2500 steps. In each sub-figure, the top panel displays the trajectory in the data space uniformly sampled along the chain. The bottom panel shows the energy score \(f_{\mathbf{\alpha}}(\mathbf{z})\) over the iterations.

Figure 3: **Qualitative results of StyleGAN inversion using the DAMC sampler.** In each sub-figure, the left panel contains samples from the FFHQ dataset, and the right panel contains samples from the LSUN-T dataset.

**Anomaly detection.** We further evaluate how the LEBM learned by our method could benefit the anomaly detection task. With properly learned models, the posterior \(p_{\mathbf{\theta},\mathbf{\phi}}(\mathbf{z}|\mathbf{x})\) could form a discriminative latent space that has separated probability densities for in-distribution (normal) and out-of-distribution (anomalous) data. Given the testing sample \(\mathbf{x}\), we use the un-normalized log joint density \(p_{\mathbf{\theta},\mathbf{\phi}}(\mathbf{z}|\mathbf{x})\propto p_{\mathbf{\theta},\mathbf{\phi}}(\mathbf{x},\mathbf{z})\approx p_{\mathbf{\beta}}(\mathbf{x}|\mathbf{z})p_{\mathbf{\alpha}}(\mathbf{z})\big{|}_{\mathbf{z}\sim\mathcal{K}_{T,\mathbf{z}|\mathbf{x}}q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{x})}\) as our decision function. This means that we draw samples from \(\mathcal{K}_{T,\mathbf{z}|\mathbf{x}}q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{x})\) and compare the corresponding reconstruction errors and energy scores. A higher value of the log joint density indicates a higher probability of the test sample being a normal sample. To make fair comparisons, we follow the experimental settings in [22; 41; 55; 56] and train our model on MNIST with one class held out as an anomalous class. We consider the baseline models that employ MCMC-based or variational inferential mechanisms. Table 3 shows the results of AUPRC scores averaged over the last 10 trials. We observe significant improvements of our method over the previous counterparts.
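The decision function described above is straightforward to compute from a posterior sample; a minimal sketch (ours, with an assumed decoder variance) follows:

```python
import torch

@torch.no_grad()
def anomaly_score(x, z, ebm, generator, sigma2=0.3 ** 2):
    """Unnormalized log p_beta(x|z) + log p_alpha(z) at a posterior sample
    z ~ K_T q_phi(z|x); higher values indicate in-distribution samples."""
    log_lik = -((x - generator(z)) ** 2).sum(dim=(1, 2, 3)) / (2 * sigma2)
    return log_lik + ebm.unnorm_log_prob(z)
```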
### Ablation Study
In this section, we conduct an ablation study on several variants of the proposed method. Specifically, we would like to know: i) What is the difference between the proposed method and directly training a DDPM in a fixed latent space? ii) What is the role of LEBM in this learning scheme? iii) Does DAMC effectively amortize the sampling chain? We use the CIFAR-10 dataset for the ablative experiments to empirically answer these questions. More ablation studies can be found in Appendix F.
**Non-amortized DDPM vs. DAMC.** We term directly training a DDPM in a fixed latent space the non-amortized DDPM. To analyze the difference between the non-amortized DDPM and DAMC, we first train a LEBM model with persistent long-run chain sampling [57] and use the trained model to obtain persistent samples for learning the non-amortized DDPM. In short, the non-amortized DDPM can be viewed as directly distilling the long-run MCMC sampling process, instead of progressively amortizing the chain. We present the FID and MSE of the non-amortized model (NALR) in Table 4. We observe that directly amortizing the long-run chain leads to degraded performance compared with the proposed method. The results are consistently worse for both posterior and prior sampling and the learned LEBMs, which verifies the effectiveness of the proposed iterative learning scheme.
**The contribution of LEBM.** One may argue that since we have DAMC as a powerful sampler, it might not be necessary to jointly learn an LEBM in the latent space. To demonstrate the necessity of this joint learning scheme, we train a variant of DAMC by replacing the LEBM with a Gaussian prior. The results are presented in Table 4. We observe that models trained with the non-informative Gaussian prior obtain significantly worse generation results. This suggests that the LEBM involved in the learning iteration provides positive feedback to the DAMC sampler. Therefore, we believe that it is crucial to jointly learn DAMC and LEBM.
**Vanilla sampling vs. DAMC.** We compare the vanilla sampling process of each model with DAMC. Vanilla sampling typically refers to short-run or long-run LD initialized with \(\mathcal{N}(0,\mathbf{I}_{d})\). We also provide results of learning LEBM using variational methods for comparison. We can see in Table 4 that sampling with DAMC yields significantly better scores for the listed models compared with vanilla sampling. The result is even better than that of the persistent chain sampler (V. of NALR-LEBM). This indicates that DAMC effectively amortizes the sampling chain. Comparing the DAMC sampler with the variational sampler also indicates that DAMC differs from a general variational approximation: it benefits from its connection with LD and shows better expressive power.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{VI-LEBM} & \multicolumn{2}{c}{SR-LEBM} & \multicolumn{2}{c}{DAMC-G} & \multicolumn{2}{c}{NALR-LEBM} & \multicolumn{2}{c}{DAMC-LEBM} \\ \cline{2-10} & V. & D. & V. & D. & V. & D. & V. & D. & V. & D. \\ \hline MSE & 0.054 & - & 0.020 & - & 0.018 & 0.015 & 0.028 & 0.016 & 0.021 & **0.015** \\ FID & 78.06 & - & 70.15 & - & 90.30 & 66.93 & 68.52 & 64.38 & 60.89 & **57.72** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablation study on CIFAR-10 dataset.** VI denotes learning LEBM using variational methods. SR denotes learning LEBM with short-run LD. DAMC-G replaces the LEBM in DAMC-LEBM with a standard Gaussian distribution. NALR denotes the non-amortized DDPM setting. For each set-up, we provide results using the vanilla sampling method, denoted as V, and the ones using the DAMC sampler, denoted as D.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{1}{c}{Heldout Digit} & 1 & 4 & 5 & 7 & 9 \\ \hline VAE [1] & 0.063 & 0.337 & 0.325 & 0.148 & 0.104 \\ ABP [51] & \(0.095\pm 0.03\) & \(0.138\pm 0.04\) & \(0.147\pm 0.03\) & \(0.138\pm 0.02\) & \(0.102\pm 0.03\) \\ MEG [55] & \(0.281\pm 0.04\) & \(0.401\pm 0.06\) & \(0.402\pm 0.06\) & \(0.290\pm 0.04\) & \(0.342\pm 0.03\) \\ BiGAN-\(\sigma\)[56] & \(0.287\pm 0.02\) & \(0.443\pm 0.03\) & \(0.514\pm 0.03\) & \(0.347\pm 0.02\) & \(0.307\pm 0.03\) \\ LEBM [22] & \(0.336\pm 0.01\) & \(0.630\pm 0.02\) & \(0.619\pm 0.01\) & \(0.463\pm 0.01\) & \(0.413\pm 0.01\) \\ Adaptive CE [41] & \(0.531\pm 0.02\) & \(0.729\pm 0.02\) & \(0.742\pm 0.01\) & \(0.620\pm 0.02\) & \(0.499\pm 0.01\) \\ \hline \hline
**Ours** & **0.684 \(\pm\) 0.02** & **0.911 \(\pm\) 0.01** & **0.939 \(\pm\) 0.02** & **0.801 \(\pm\) 0.01** & **0.705 \(\pm\) 0.01** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **AUPRC(\(\uparrow\)) scores for unsupervised anomaly detection on MNIST.** Numbers are taken from [41]. Results of our model are averaged over the last 10 trials to account for variance.
## 5 Related Work
**Energy-based prior model.** EBMs [23; 58; 59; 60; 24] play an important role in generative modeling. Pang et al. [22] propose to learn an EBM as a prior model in the latent space of DLVMs; it greatly improves the model expressivity over those with non-informative priors and brings strong performance on downstream tasks, _e.g._, image segmentation, text modeling, molecule generation, and trajectory prediction [61; 12; 15; 62]. However, learning EBMs or latent space EBMs requires MCMC sampling to estimate the learning gradients, which needs numerous iterations to converge when the target distributions are high-dimensional or highly multi-modal. The typical choice of sampling with non-convergent short-run MCMC [24] in practice can lead to poor generation quality, malformed energy landscapes [25; 15; 24], biased estimation of the model parameter and instability in training [25; 59; 60; 23]. In this work, we consider learning a valid amortization of the long-run MCMC for energy-based priors; the proposed model shows reliable sampling quality in practice.
**Denoising diffusion probabilistic model.** DDPMs [7; 31; 8], originating from [5], learn the generative process by recovering the observed data from a sequence of noise-perturbed versions of the data. The learning objective can be viewed as a variant of the denoising score matching objective [35]. As pointed out in [5; 8], the sampling procedure of DDPM with \(\epsilon\)-prediction parametrization resembles LD of an EBM; \(\epsilon\) (predicted noise) plays a similar role to the gradient of the log density [8]. To be specific, learning a DDPM with \(\epsilon\)-prediction parameterization is equivalent to fitting the finite-time marginal of a sampling chain resembling annealed Langevin dynamics [7; 31; 8]. Inspired by this connection, we propose to amortize the long-run MCMC in learning the energy-based prior by iteratively distilling the short-run chain segments with a DDPM-based sampler. We show empirically and theoretically that the learned sampler is valid for long-run chain sampling.
**Amortized MCMC.** The amortized MCMC technique is formally introduced by Li et al. [36], which incorporates feedback from MCMC back into the parameters of the amortizer distribution \(q_{\phi}\). It was concurrently and independently proposed by Xie et al. [37] as the MCMC teaching framework. Methods under this umbrella term [36; 37; 38; 63; 64; 65] generally learn the amortizer by minimizing the divergence (typically the KLD) between the improved distribution and its initialization, i.e., \(\mathcal{D}[\mathcal{K}_{T}q_{\phi_{k-1}}||q_{\phi}]\), where \(\mathcal{K}_{T}\) represents the \(T\)-step MCMC transition kernel and \(q_{\phi_{k-1}}\) represents the current amortizer. The diffusion-based amortization proposed in this work can be viewed as an instantiation of this framework, while our focus is on learning the energy-based prior. Compared with previous methods, our method i) specifically exploits the connection between EBMs and DDPMs and is suitable for amortizing the prior and posterior sampling MCMC of the energy-based prior, and ii) resides in the lower-dimensional latent space, which enables faster sampling and better convergence.
**More methods for learning EBMs.** Several techniques other than short-run MCMC have been proposed to learn EBMs. In the seminal work, Hinton [67] proposes to initialize Markov chains using real data and run several steps of MCMC to obtain samples from the model distribution. Tieleman [57] proposes to start Markov chains from past samples from the previous sampling iteration, known as Persistent Contrastive Divergence (PCD) or persistent chain sampling, to mimic the long-run sampling chain. Nijkamp et al. [23] provide comprehensive discussions about tuning choices for LD, such as the step size \(s\) and the number of sampling steps \(T\), to obtain stable long-run samples for persistent training. [59; 60] employ a hybrid of persistent chain sampling and short-run sampling by maintaining a buffer of previous samples; these methods draw from the buffer or initialize the short-run chain from a noise distribution with some pre-specified probability. Another branch of work, stemming from [68], considers discriminative contrastive estimation to avoid MCMC sampling. Gao et al. [69] use a normalizing flow [4] as the base distribution for contrastive estimation. Aneja et al. [50] propose to estimate the energy-based prior model based on the prior of a pre-trained VAE [70] by noise contrastive estimation. More recently, Xiao and Han [41] learn a sequence of EBMs in the latent space with adaptive multi-stage NCE to further improve the expressive power of the model.
## 6 Conclusion
In this paper, we propose the DAMC sampler and develop a novel learning algorithm for LEBM based on it. We provide theoretical and empirical evidence for the effectiveness of our method. We notice that our method can be applied to amortizing MCMC sampling of unnormalized continuous densities in general. It can also be applied to sampling posterior distributions of continuous latent variables in general latent variable models. We would like to explore these directions in future work.
## Acknowledgements
Y. N. Wu was supported by NSF DMS-2015577. We would like to thank the anonymous reviewers for their constructive comments.
|
2306.06486 | On the limit problem arising in the kinetic derivation of the
Cahn-Hilliard equation | The non-local degenerate Cahn-Hilliard equation is derived from the Vlasov
equation with long-range attraction. We study the local limit as the
delocalization parameter converges to 0. The difficulty arises from the
degeneracy which requires compactness estimates, but all necessary a priori
estimates can be obtained only on the nonlocal quantities yielding almost no
information on the limiting solution itself. We introduce a novel condition on
the nonlocal kernel which allows us to exploit the available nonlocal a priori
estimates. The condition, satisfied by most of the kernels appearing in the
applications, can be of independent interest. Our approach is flexible and
systems can be treated as well. | Charles Elbar, Benoît Perthame, Jakub Skrzeczkowski | 2023-06-10T16:48:02Z | http://arxiv.org/abs/2306.06486v1 | # On the limit problem arising in the kinetic derivation of the Cahn-Hilliard equation
###### Abstract.
The non-local degenerate Cahn-Hilliard equation is derived from the Vlasov equation with long range attraction. We study the local limit as the delocalization parameter converges to \(0\). The difficulty arises from the degeneracy which requires compactness estimates, but all necessary a priori estimates can be obtained only on the nonlocal quantities yielding almost no information on the limiting solution itself. We introduce a novel condition on the nonlocal kernel which allows us to exploit the available nonlocal a priori estimates. The condition, satisfied by most of the kernels appearing in the applications, can be of independent interest. Our approach is flexible and systems can be treated as well.
2020 Mathematics Subject Classification: 35B40, 35D30, 35K25, 35K55. Jakub Skrzeczkowski was supported by the National Science Center, Poland, through project no. 2019/35/N/ST1/03459.
Introduction
In this paper we consider the nonlocal degenerate Cahn-Hilliard equation
\[\partial_{t}\rho_{\varepsilon}-\Delta\rho_{\varepsilon}+\operatorname{div}(\rho_{\varepsilon}\,\nabla\Delta\rho_{\varepsilon}*\omega_{\varepsilon}*\omega_{\varepsilon})=0, \tag{1.1}\]
where \(\omega_{\varepsilon}(x)=\frac{1}{\varepsilon^{d}}\,\omega\left(\frac{x}{\varepsilon}\right)\) is a mollification kernel, and we study the local limit \(\varepsilon\to 0\), in which we expect to recover the degenerate Cahn-Hilliard equation
\[\partial_{t}\rho-\Delta\rho+\operatorname{div}(\rho\nabla\Delta\rho)=0. \tag{1.2}\]
In related localization limits, a pointwise condition on the kernel of the type
\[|\nabla\omega(x)|\leq C\,\omega(x) \tag{1.3}\]
is usually imposed.
Such inequalities make it possible to control difficult-to-estimate terms by better understood ones: for instance, \(\varepsilon\,\rho_{\varepsilon}*|\nabla\omega_{\varepsilon}|\) can be estimated pointwise by \(\rho_{\varepsilon}*\omega_{\varepsilon}\), due to the nonnegativity of \(\rho_{\varepsilon}\) and \(\omega_{\varepsilon}\). Nevertheless, (1.3) is fairly restrictive - for instance, it excludes compactly supported kernels. In our setting, the following assumption turns out to be successful:
**Assumption 1.1**.: We assume that \(\omega:\mathbb{R}^{d}\to[0,\infty)\) is a smooth function such that \(\int_{\mathbb{R}^{d}}\omega(x)\,\mathrm{d}x=1\) and \(\omega(x)=\omega(-x)\). Moreover, there exists an integrable kernel \(f:\mathbb{R}^{d}\to[0,\infty)\) such that for all \(x\in\mathbb{R}^{d}\)
\[(|x|+|x|^{2})\,|\nabla\omega(x)|\leq C\,\omega*f(x). \tag{1.4}\]
Furthermore, we assume that \(\omega\) has sufficient decay at \(+\infty\):
\[\lim_{R\to\infty}\sup_{|x|=R}|x|^{d}\,\omega(x)=0. \tag{1.5}\]
This covers the case of Gaussian \(\omega(x)=\frac{1}{(2\pi)^{d/2}}e^{-|x|^{2}/2}\) and any nonnegative, compactly supported kernel by choosing \(f=\omega\), see Lemmas 2.1 and 2.2. Moreover, we can cover the kernel \(\omega(x)=e^{-(1+|x|^{2})^{1/2}}\) by choosing \(f\) more carefully, see Lemma 2.3.
For the initial condition we suppose that \(\rho(t=0)=\rho^{0}\) where \(\rho^{0}\) satisfies
\[\rho^{0}\geq 0,\ \ \rho^{0}\in L^{1}(\mathbb{R}^{d})\cap H^{1}(\mathbb{R}^{d}), \ \ \rho^{0}|\log\rho^{0}|\in L^{1}(\mathbb{R}^{d}),\ \ |x|^{2}\rho^{0}\in L^{1}(\mathbb{R}^{d}). \tag{1.6}\]
Our main result reads:
**Theorem 1.2**.: _Let \(\{\rho_{\varepsilon}\}_{\varepsilon}\) be a sequence of solutions to (1.1) with initial condition \(\rho^{0}\) satisfying (1.6). Then, up to a subsequence not relabeled, \(\rho_{\varepsilon}\to\rho\) in \(L^{p}(0,T;L^{1}(\mathbb{R}^{d}))\) for all \(p\in[1,\infty)\) where \(\rho\) is a weak solution of the degenerate Cahn-Hilliard equation (1.2)._
Our methods are quite flexible and they allow to study the same question for systems of the type
\[\partial_{t}\rho_{\varepsilon}^{i}-\Delta\rho_{\varepsilon}^{i}+\operatorname{div}\left(\rho_{\varepsilon}^{i}\nabla\Delta\sum_{j=1}^{N}K_{\varepsilon}^{i,j}*\rho_{\varepsilon}^{j}\right)=0,\qquad i=1,...,N, \tag{1.7}\]
under some additional structural assumptions. This is discussed in Section 5 (see Theorem 5.1).
To conclude the introduction, let us mention that similar problems have been studied in the literature for the porous media equation. To the best of our knowledge, the first result of this type was obtained by Lions and Mas-Gallic [18] for the PDE
\[\partial_{t}\rho_{\varepsilon}=\operatorname{div}(\rho_{\varepsilon}\nabla \rho_{\varepsilon}*\omega_{\varepsilon}*\omega_{\varepsilon})\]
Then, the cases of cross-diffusion systems and of general nonlinear diffusion equations have been considered in [7] and [2, 17], respectively. These problems are motivated by numerical algorithms called particle methods. More precisely, consider \(N\) particles moving according to the system of ODEs
\[X_{i}^{\prime}(t)=-\frac{1}{N}\sum_{j\neq i}\nabla W(X_{i}(t)-X_{j}(t)).\]
Then, the empirical measure \(\mu^{N}(t)=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{i}(t)}\) solves in the sense of distributions
\[\partial_{t}\mu^{N}-\operatorname{div}(\mu^{N}\nabla\mu^{N}*W)=0\]
so in the limit \(N\to\infty\)
\[\partial_{t}\mu-\operatorname{div}(\mu\nabla\mu*W)=0.\]
If \(W\overset{*}{\rightharpoonup}\delta_{0}\), we recover the porous media equation. For numerical experiments based on this method we refer to [18].
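For illustration, this particle scheme takes only a few lines (a minimal one-dimensional sketch with a Gaussian bump playing the role of \(W\); the kernel width, step size and particle number are illustrative choices):

```python
import numpy as np

def grad_W(r, eps=0.3):
    # Derivative of the Gaussian bump W(r) = exp(-r^2/(2 eps^2)) / sqrt(2 pi eps^2).
    return -r / eps**2 * np.exp(-r**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)

def particle_method(X, dt=1e-3, steps=2000):
    # Explicit Euler for X_i' = -(1/N) sum_j grad_W(X_i - X_j); the j = i term vanishes.
    N = len(X)
    for _ in range(steps):
        diff = X[:, None] - X[None, :]              # pairwise differences X_i - X_j
        X = X - dt * grad_W(diff).sum(axis=1) / N   # the interaction spreads the particles out
    return X

X = particle_method(np.random.randn(200))           # empirical law approximates the PDE solution
```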
Let us also comment that (1.1) could be called the nonlocal Cahn-Hilliard equation but it should not be confused with the nonlocal effect in the following PDE
\[\partial_{t}\rho_{\varepsilon}=\operatorname{div}\big{(}\rho_{\varepsilon}\nabla( B_{\varepsilon}[\rho_{\varepsilon}]+F^{\prime}(\rho_{\varepsilon}))\big{)}, \tag{1.8}\]
where \(F\) is the potential and \(B_{\varepsilon}[u]\) is a nonlocal operator approximating \(-\Delta u\), i.e.
\[B_{\varepsilon}[u](x)=\frac{1}{\varepsilon^{2}}(u(x)-\omega_{\varepsilon}*u(x ))=\frac{1}{\varepsilon^{2}}\int_{\mathbb{T}^{d}}\omega_{\varepsilon}(y)(u(x) -u(x-y))\,\mathrm{d}y.\]
The equation was obtained by Giacomin and Lebowitz [15] as a nonlocal approximation of the degenerate Cahn-Hilliard equation
\[\partial_{t}\rho=\operatorname{div}\big{(}\rho\nabla(-\Delta\rho+F^{\prime}( \rho))\big{)}, \tag{1.9}\]
proposed in [3] to model the dynamics of phase separation in binary mixtures. The question of passing to the limit from (1.8) to (1.9) was addressed only recently in [14] for a single equation and in [6] for a system. This problem is fairly different from (1.1) as energy and entropy yield strong compactness of \(\{\rho_{\varepsilon}\}\) and \(\{\nabla\rho_{\varepsilon}\}\) rather than of their mollifications \(\{\rho_{\varepsilon}*\omega_{\varepsilon}\}\) and \(\{\nabla\rho_{\varepsilon}*\omega_{\varepsilon}\}\) as in the case of (1.1). We also remark that the same problem was studied in the context of the nondegenerate Cahn-Hilliard equation [9, 10, 11, 19]
\[\partial_{t}\rho_{\varepsilon}=\operatorname{div}\nabla\mu_{\varepsilon}, \qquad\mu_{\varepsilon}=B_{\varepsilon}[\rho_{\varepsilon}]+F^{\prime}(\rho_ {\varepsilon}).\]
Here, one obtains immediately an estimate on \(\{\nabla\mu_{\varepsilon}\}\) (by multiplying by \(\mu_{\varepsilon}\)) which greatly simplifies identification of the limits. Nevertheless, we point out that in [9, 10, 11, 19] the difficulty is rather the low regularity of the potential and the kernel.
The structure of the paper is as follows. In Section 2 we show that Assumption 1.1 is satisfied by a wide class of kernels. In Section 3 we gather the a priori estimates necessary for the proof of the main result, Theorem 1.2, which is proved in Section 4. In the last Section 5, we show how the result can be extended to systems. Finally,
Appendix A is dedicated to the proof of Theorem 1.2 in dimension 2 for a broader class of kernels.
## 2. Examples of kernels satisfying Assumption 1.1
Three particular classes of kernels are usually found in the literature and we show they satisfy Assumption 1.1. In fact, in all of those examples, we only need to verify condition (1.4).
**Lemma 2.1**.: _Let \(\omega:\mathbb{R}^{d}\to[0,\infty)\) be a smooth function such that \(\int_{\mathbb{R}^{d}}\omega(x)\,\mathrm{d}x=1\). Suppose that \(\omega\) is supported on the unit ball \(\{x\in\mathbb{R}^{d}:|x|\leq 1\}\) and \(\omega>0\) on the interior \(\{x\in\mathbb{R}^{d}:|x|<1\}\). Then, \(\omega\) satisfies (1.4) with \(f=\omega\)._
Proof.: To prove (1.4), we only need to consider \(|x|\leq 1\). By smoothness and compact support of \(\omega\), there exists a constant \(C\) such that \((|x|+|x|^{2})\,|\nabla\omega(x)|\leq C\), and it remains to prove that \(\inf_{|x|\leq 1}\omega\ast\omega(x)>0\). For any \(|x|\leq 1\), we see from the formula \(\omega\ast\omega(x)=\int_{|y|\leq 1}\omega(x-y)\,\omega(y)\,\mathrm{d}y\) that \(\omega\ast\omega(x)>0\). As any continuous function attains its infimum on a compact set, the conclusion follows.
**Lemma 2.2**.: _Let \(\omega(x)=\frac{1}{(2\pi)^{d/2}}e^{-|x|^{2}/2}\). Then, \(\omega\) satisfies (1.4) with \(f=\omega\)._
Proof.: For Gaussians, we know that
\[\omega\ast\omega(x)=\frac{1}{(2\pi)^{d}}e^{-|x|^{2}/4}\int_{\mathbb{R}^{d}}e^ {-\left|\frac{x}{\sqrt{2}}-\sqrt{2}y\right|^{2}/2}\,\mathrm{d}y=\frac{1}{(4\pi)^{d/2}}e^{-|x|^{2}/4}.\]
Therefore, since the function \((|x|^{2}+|x|^{3})\,e^{-|x|^{2}/4}\) is globally bounded, we find
\[(|x|+|x|^{2})\,|\nabla\omega(x)|\leq C\,(|x|^{2}+|x|^{3})\,e^{-|x|^{2}/2}\leq C \,e^{-|x|^{2}/4}=C\,\omega\ast\omega(x).\]
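As a quick numerical sanity check of Lemma 2.2 in dimension one, the supremum of the ratio between the two sides of (1.4) can be evaluated on a grid (a minimal sketch, using the closed form of \(\omega\ast\omega\) computed above):

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
omega = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard Gaussian kernel, d = 1
lhs = (np.abs(x) + x**2) * np.abs(-x * omega)   # (|x| + |x|^2) |omega'(x)|
conv = np.exp(-x**2 / 4) / np.sqrt(4 * np.pi)   # omega * omega in closed form
print("admissible constant C =", (lhs / conv).max())
```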
**Lemma 2.3**.: _Let \(\omega(x)=e^{-(1+|x|^{2})^{1/2}}\). Then, \(\omega\) satisfies (1.4) with \(f(x)=e^{-(1+|x|^{2}/3)^{1/2}}\)._
Proof.: We need to estimate the convolution \(\omega*f\) from below so we need to estimate the expression \(\sqrt{1+|x-y|^{2}}+\sqrt{1+|y|^{2}/3}\) from above. Using \(\sqrt{x}+\sqrt{y}\leq\sqrt{2(x+y)}\) and \(\sqrt{x+y}\leq\sqrt{x}+\sqrt{y}\), we have
\[\sqrt{1+|x-y|^{2}}+\sqrt{1+|y|^{2}/3} \leq\sqrt{2}\,\sqrt{2+|x|^{2}-2x\cdot y+\frac{4}{3}|y|^{2}}\] \[\leq\sqrt{2}\sqrt{\frac{1+|x|^{2}}{4}+\left|\frac{\sqrt{3}}{2}x- \frac{2}{\sqrt{3}}y\right|^{2}+\frac{7}{4}}\] \[\leq\frac{\sqrt{2}}{2}\sqrt{1+|x|^{2}}+\sqrt{2}\,\sqrt{\left| \frac{\sqrt{3}}{2}x-\frac{2}{\sqrt{3}}y\right|^{2}+\frac{7}{4}}.\]
Note that the integral \(\int_{\mathbb{R}^{d}}e^{-\sqrt{2}\sqrt{\left|\frac{\sqrt{3}}{2}x-\frac{2}{ \sqrt{3}}y\right|^{2}+\frac{7}{4}}}\,\mathrm{d}y\) is a constant independent of \(x\) (by a change of variables). Therefore,
\[e^{-\frac{\sqrt{2}}{2}\sqrt{1+|x|^{2}}}\leq C\,\omega*f(x).\]
We conclude by observing that the function \((|x|+|x|^{2})\,e^{-\left(1-\frac{\sqrt{2}}{2}\right)\sqrt{1+|x|^{2}}}\) is globally bounded
\[(|x|+|x|^{2})\,|\nabla\omega(x)|\leq C\,(|x|+|x|^{2})\,e^{-\sqrt{1 +|x|^{2}}}=\\ =C\,(|x|+|x|^{2})\,e^{-\left(1-\frac{\sqrt{2}}{2}\right)\sqrt{1+| x|^{2}}}e^{-\frac{\sqrt{2}}{2}\sqrt{1+|x|^{2}}}\leq C\,e^{-\frac{\sqrt{2}}{2} \sqrt{1+|x|^{2}}}\leq C\,\omega*f(x).\]
## 3. Uniform estimates for (1.1) and compactness
The first immediate estimate is the conservation of mass. Integrating the equation in space we obtain an \(L^{\infty}(0,T;L^{1}(\mathbb{R}^{d}))\) control on the solution. Moreover, the nonlocal equation (1.1) comes with an energy/entropy structure. Defining
\[E_{\varepsilon}[\rho]=\int_{\mathbb{R}^{d}}\frac{|\nabla\rho*\omega_{ \varepsilon}|^{2}}{2}\,\mathrm{d}x,\qquad\Phi[\rho]=\int_{\mathbb{R}^{d}}\rho \log(\rho)\,\mathrm{d}x, \tag{3.1}\]
we obtain the dissipation equalities:
\[\frac{dE_{\varepsilon}[\rho]}{dt}+\int_{\mathbb{R}^{d}}\left|\Delta \rho*\omega_{\varepsilon}\right|^{2}\mathrm{d}x+\int_{\Omega}\rho\left|\nabla \Delta\rho*\omega_{\varepsilon}*\omega_{\varepsilon}\right|^{2}\mathrm{d}x=0, \tag{3.2}\] \[\frac{d\Phi[\rho]}{dt}+\int_{\mathbb{R}^{d}}\frac{|\nabla\rho|^{2 }}{\rho}\,\mathrm{d}x+\int_{\mathbb{R}^{d}}\left|\Delta\rho*\omega_{ \varepsilon}\right|^{2}\mathrm{d}x=0. \tag{3.3}\]
Of course one has to be careful with the entropy equality, as \(\rho\log(\rho)\) can be negative when \(\rho\) is small and one needs to show that its negative part is integrable.
**Proposition 3.1**.: _Suppose the initial condition \(\rho^{0}\) satisfies (1.6). Then, there exists a unique nonnegative weak solution to (1.1) satisfying the following bounds, uniformly with respect to \(\varepsilon\):_
1. \(\{\rho_{\varepsilon}\}_{\varepsilon}\in L^{\infty}(0,T;L^{1}(\mathbb{R}^{d}) \cap L\log L(\mathbb{R}^{d}))\)_,_
2. \(\{\partial_{t}\rho_{\varepsilon}\}_{\varepsilon}\in L^{2}(0,T;H^{-k}(\mathbb{ R}^{d}))\) _for some_ \(k\)_,_
3. \(\{\sqrt{\rho_{\varepsilon}}\,\nabla\Delta\rho_{\varepsilon}*\omega_{ \varepsilon}*\omega_{\varepsilon}\}_{\varepsilon}\in L^{2}((0,T)\times \mathbb{R}^{d})\)_,_
4. \(\{\rho_{\varepsilon}*\omega_{\varepsilon}\}_{\varepsilon}\in L^{\infty}(0,T; H^{1}(\mathbb{R}^{d}))\cap L^{2}(0,T;H^{2}(\mathbb{R}^{d}))\)_,_
5. \(\{|x|^{2}\rho_{\varepsilon}\}_{\varepsilon}\in L^{\infty}(0,T;L^{1}(\mathbb{ R}^{d}))\)_,_
6. \(\{\nabla\sqrt{\rho_{\varepsilon}}\}_{\varepsilon}\in L^{2}(0,T;L^{2}(\mathbb{R}^{d}))\)_,_
7. \(\{\nabla\rho_{\varepsilon}\}_{\varepsilon}\in L^{2}(0,T;L^{1}(\mathbb{R}^{d}))\)_._
_Moreover, we can extract a subsequence such that_
\[\rho_{\varepsilon}\to\rho\,\,\,\text{strongly in $L^{p}(0,T;L^{1}( \mathbb{R}^{d}))$, $p<\infty$}, \tag{3.4}\] \[\rho_{\varepsilon}*\omega_{\varepsilon}\rightharpoonup\rho\,\,\, \text{weakly in $L^{\infty}(0,T;H^{1}(\mathbb{R}^{d}))\cap L^{2}(0,T;H^{2}(\mathbb{R}^{d}))$},\] (3.5) \[\rho_{\varepsilon}*\omega_{\varepsilon}\to\rho\,\,\,\text{a.e. and strongly in $L^{2}(0,T;H^{1}_{loc}(\mathbb{R}^{d}))$}. \tag{3.6}\]
Proof of Proposition 3.1.: The existence and uniqueness of solutions is a classical matter as (1.1) is an advection-diffusion equation with smooth advection (as \(\omega_{\varepsilon}\) is smooth). The \(L^{1}\) bound in (A) is a consequence of mass conservation. (C) follows directly from (3.2). Estimate (B) is a consequence of (A), (C), Equation (1.1) and splitting
\[\rho_{\varepsilon}\,\nabla\Delta\rho_{\varepsilon}*\omega_{\varepsilon}* \omega_{\varepsilon}=\sqrt{\rho_{\varepsilon}}\,\sqrt{\rho_{\varepsilon}}\, \nabla\Delta\rho_{\varepsilon}*\omega_{\varepsilon}*\omega_{\varepsilon}.\]
To prove (D), we first deduce from (3.2) bounds on \(\nabla\rho_{\varepsilon}*\omega_{\varepsilon}\) and \(\Delta\rho_{\varepsilon}*\omega_{\varepsilon}\). Then, the \(L^{\infty}(0,T;L^{2}(\mathbb{R}^{d}))\) bound on \(\{\rho_{\varepsilon}*\omega_{\varepsilon}\}_{\varepsilon}\) follows from the bounds on \(\{\rho_{\varepsilon}*\omega_{\varepsilon}\}_{\varepsilon}\) in \(L^{\infty}(0,T;L^{1}(\mathbb{R}^{d}))\), on \(\{\nabla\rho_{\varepsilon}*\omega_{\varepsilon}\}_{\varepsilon}\) in \(L^{\infty}(0,T;L^{2}(\mathbb{R}^{d}))\), and from the Gagliardo-Nirenberg inequality. The estimate in \(L^{2}(0,T;H^{2}(\mathbb{R}^{d}))\) is a consequence of the second-order regularizing property of the operator \(\Delta\) on the whole space.
To see (E), we multiply Equation (1.1) by \(|x|^{2}\) and obtain after integration by parts
\[\frac{d}{dt}\int_{\mathbb{R}^{d}}|x|^{2}\rho_{\varepsilon}\,\mathrm{d}x=2\int _{\mathbb{R}^{d}}\rho_{\varepsilon}\,\mathrm{d}x+2\int_{\mathbb{R}^{d}}\sqrt{ \rho_{\varepsilon}}\,x\cdot\sqrt{\rho_{\varepsilon}}\,\nabla\Delta\rho_{ \varepsilon}*\omega_{\varepsilon}*\omega_{\varepsilon}\,\mathrm{d}x.\]
Using (A), (C), the Cauchy-Schwarz inequality and the Gronwall lemma, we obtain (E).
It remains to prove the second part of estimate (A), namely the \(L\log L\) bound on \(\rho_{\varepsilon}\). A small difficulty is that the negative part of \(\rho_{\varepsilon}\log(\rho_{\varepsilon})\) might not be integrable on the whole space. Nevertheless, as in [17], one can prove that \(\rho_{\varepsilon}|\log\rho_{\varepsilon}|_{-}\) is uniformly bounded in \(L^{\infty}(0,T;L^{1}(\mathbb{R}^{d}))\) by splitting \(\mathbb{R}^{d}\) into \(\{x:\rho_{\varepsilon}\leq e^{-|x|^{2}}\}\) and \(\{x:\rho_{\varepsilon}>e^{-|x|^{2}}\}\) and applying the tail estimate (E). Hence, we can use (3.3) and deduce Estimate (F). Estimate (G) follows from (A) and (F) by writing \(\nabla\rho_{\varepsilon}=\frac{\nabla\rho_{\varepsilon}}{\sqrt{\rho_{ \varepsilon}}}\sqrt{\rho_{\varepsilon}}\).
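For the reader's convenience, let us record the splitting used for the \(L\log L\) bound above (a standard computation, relying on the elementary bound \(s|\log s|\leq C\sqrt{s}\) for \(0<s\leq 1\)): on \(\{\rho_{\varepsilon}\leq e^{-|x|^{2}}\}\) we have \(\rho_{\varepsilon}|\log\rho_{\varepsilon}|_{-}\leq C\sqrt{\rho_{\varepsilon}}\leq C\,e^{-|x|^{2}/2}\), while on \(\{\rho_{\varepsilon}>e^{-|x|^{2}}\}\cap\{\rho_{\varepsilon}<1\}\) we have \(|\log\rho_{\varepsilon}|<|x|^{2}\), so that
\[\int_{\mathbb{R}^{d}}\rho_{\varepsilon}|\log\rho_{\varepsilon}|_{-}\,\mathrm{d}x\leq C\int_{\mathbb{R}^{d}}e^{-|x|^{2}/2}\,\mathrm{d}x+\int_{\mathbb{R}^{d}}|x|^{2}\rho_{\varepsilon}\,\mathrm{d}x,\]
and the right-hand side is bounded uniformly in time thanks to (E).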
The convergences (3.4)-(3.6) are a consequence of the Lions-Aubin lemma and the Banach-Alaoglu theorem where convergence (3.4) has been upgraded from a local to a global one by the tail estimate (E).
## 4. The proof of the main result
We only need to study the term \(\int_{0}^{T}\int_{\mathbb{R}^{d}}\nabla\varphi\,\rho_{\varepsilon}\,\nabla \Delta(\rho_{\varepsilon}*\omega_{\varepsilon}*\omega_{\varepsilon})\,\mathrm{ d}x\,\mathrm{d}t\), for test functions \(\varphi\in C^{\infty}_{c}([0,T]\times\mathbb{R}^{d})\). Using the properties of the mollifiers
\[\int_{0}^{T}\int_{\mathbb{R}^{d}}\nabla\varphi\,\rho_{\varepsilon}\,\nabla \Delta(\rho_{\varepsilon}*\omega_{\varepsilon}*\omega_{\varepsilon})\,\mathrm{ d}x\,\mathrm{d}t=-\int_{0}^{T}\int_{\mathbb{R}^{d}}(\nabla\varphi\,\rho_{ \varepsilon})*\nabla\omega_{\varepsilon}\,\Delta(\rho_{\varepsilon}*\omega_{ \varepsilon})\,\mathrm{d}x\,\mathrm{d}t.\]
Thanks to the weak convergence (3.5), we only need to prove the following strong convergence result
\[(\nabla\varphi\,\rho_{\varepsilon})\ast\nabla\omega_{\varepsilon}\to\nabla\varphi \cdot\nabla\rho+\Delta\varphi\,\rho=\operatorname{div}(\nabla\varphi\,\rho) \text{ strongly in }L^{2}((0,T)\times\mathbb{R}^{d}). \tag{4.1}\]
We write \(\nabla\varphi(y)=\nabla\varphi(y)-\nabla\varphi(x)+\nabla\varphi(x)\) which results in two terms:
\[(\nabla\varphi\,\rho_{\varepsilon})\ast\nabla\omega_{\varepsilon }(x)=\int_{\mathbb{R}^{d}}\nabla\varphi(y)\,\rho_{\varepsilon}(y)\cdot\,\nabla \omega_{\varepsilon}(x-y)\,\mathrm{d}y=\nabla\varphi(x)\cdot\nabla\rho_{ \varepsilon}\ast\omega_{\varepsilon}+\\ +\int_{\mathbb{R}^{d}}(\nabla\varphi(y)-\nabla\varphi(x))\rho_{ \varepsilon}(y)\cdot\nabla\omega_{\varepsilon}(x-y)\,\mathrm{d}y=:I+J.\]
According to (3.6), the term \(I\) converges strongly in \(L^{2}((0,T)\times\mathbb{R}^{d})\) (note that \(\varphi\) is compactly supported). The rest of the proof is devoted to the analysis of the term \(J\).
By Taylor's expansion
\[\nabla\varphi(y)-\nabla\varphi(x)=D^{2}\varphi(x)\,(y-x)+R(x,y)\]
where the term \(R\) satisfies \(|R|\leq C\,|y-x|^{2}\). We split \(J=J_{1}+J_{2}\), where
\[J_{1}=\int_{\mathbb{R}^{d}}D^{2}\varphi(x)\,(y-x)\rho_{\varepsilon}(y)\nabla \omega_{\varepsilon}(x-y)\,\mathrm{d}y,\,J_{2}=\int_{\mathbb{R}^{d}}R(x,y)\, \rho_{\varepsilon}(y)\nabla\omega_{\varepsilon}(x-y)\,\mathrm{d}y.\]
Term \(J_{1}.\) We prove that \(\lim_{\varepsilon\to 0}J_{1}=\Delta\varphi\,\rho\) in \(L^{2}((0,T)\times\mathbb{R}^{d})\). Since \(\varphi\) is compactly supported, it is sufficient to prove that
\[\int_{\mathbb{R}^{d}}\rho_{\varepsilon}(y)(x_{i}-y_{i})\partial_{j}\omega_{ \varepsilon}(x-y)\,\mathrm{d}y\to-\rho(x)\,\delta_{i,j}\qquad\text{ in }L^{2}_{\mathrm{loc}}((0,T)\times\mathbb{R}^{d}). \tag{4.2}\]
Assertion (4.2) will be obtained by proving convergence in \(L^{1}((0,T)\times\mathbb{R}^{d})\) and uniform boundedness in \(L^{p}((0,T)\times\mathbb{R}^{d})\) for some \(p>2\). Concerning the convergence in \(L^{1}((0,T)\times\mathbb{R}^{d})\), we first change variables
\[\int_{\mathbb{R}^{d}}\rho_{\varepsilon}(y)(x_{i}-y_{i})\partial_{j}\omega_{ \varepsilon}(x-y)\,\mathrm{d}y=\int_{\mathbb{R}^{d}}\rho_{\varepsilon}(x- \varepsilon z)z_{i}\,\partial_{j}\omega(z)\,\mathrm{d}z\]
We can estimate in \(L^{1}((0,T)\times\mathbb{R}^{d})\) the difference
\[\int_{0}^{T}\int_{\mathbb{R}^{d}}\left|\int_{\mathbb{R}^{d}}\left( \rho_{\varepsilon}(x-\varepsilon z)-\rho_{\varepsilon}(x)\right)z_{i}\,\partial_ {j}\omega(z)\,\mathrm{d}z\right|\mathrm{d}x\,\mathrm{d}t\leq\\ \leq\int_{0}^{T}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\int_{0} ^{1}\varepsilon|\nabla\rho_{\varepsilon}(x-\varepsilon sz)|\,|z|^{2}\,|\nabla \omega(z)|\,\mathrm{d}s\,\mathrm{d}z\,\mathrm{d}x\,\mathrm{d}t\leq\varepsilon \|\nabla\rho_{\varepsilon}\|_{L^{1}_{t,x}}\,\||z|^{2}\nabla\omega\|_{L^{1}_{x}},\]
where integrability of \(|z|^{2}\nabla\omega(z)\) is a consequence of assumption (1.4):
\[\int_{\mathbb{R}^{d}}|z|^{2}\,|\nabla\omega(z)|\,\mathrm{d}z\leq C\,\int_{ \mathbb{R}^{d}}\omega\ast f(z)\,\mathrm{d}z=C\,\|\omega\|_{L^{1}}\,\|f\|_{L^{ 1}}\leq C.\]
Therefore, it is sufficient to study the term \(\rho_{\varepsilon}(x)\int_{\mathbb{R}^{d}}z_{i}\partial_{j}\omega(z)\,\mathrm{d}z\), which equals \(-\rho_{\varepsilon}(x)\,\delta_{i,j}\) because
\[\int_{\mathbb{R}^{d}}z_{i}\partial_{j}\omega(z)\,\mathrm{d}z=-\delta_{i,j}\int _{\mathbb{R}^{d}}\omega(z)\,\mathrm{d}z=-\delta_{i,j},\]
where the boundary term vanishes thanks to (1.5). The conclusion follows because \(\rho_{\varepsilon}\) is strongly convergent in \(L^{1}((0,T)\times\mathbb{R}^{d})\), cf. (3.4).
Concerning the uniform boundedness in \(L^{p}((0,T)\times\mathbb{R}^{d})\) with \(p>2\), by nonnegativity of \(\rho_{\varepsilon}\), definition of \(\omega_{\varepsilon}(x)=\frac{1}{\varepsilon^{d}}\omega\left(\frac{x}{ \varepsilon}\right)\) and assumption (1.4),
\[\begin{split}\left|\int_{\mathbb{R}^{d}}\rho_{\varepsilon}(y)(x_{ i}-y_{i})\partial_{j}\omega_{\varepsilon}(x-y)\,\mathrm{d}y\right|=\left|\int_{ \mathbb{R}^{d}}\rho_{\varepsilon}(y)\frac{(x_{i}-y_{i})}{\varepsilon^{d+1}} \,\partial_{j}\omega\left(\frac{x-y}{\varepsilon}\right)\mathrm{d}y\right| \leq\\ \leq C\,\int_{\mathbb{R}^{d}}\rho_{\varepsilon}(y)\,\frac{1}{ \varepsilon^{d}}\omega\ast f\left(\frac{x-y}{\varepsilon}\right)\mathrm{d}y. \end{split} \tag{4.3}\]
A change of variables shows that
\[\begin{split}\frac{1}{\varepsilon^{d}}\omega\ast f\left(\frac{x-y} {\varepsilon}\right)&=\frac{1}{\varepsilon^{d}}\int_{\mathbb{R} ^{d}}\omega(z)\,f\left(\frac{x-y}{\varepsilon}-z\right)\mathrm{d}z=\\ &=\frac{1}{\varepsilon^{2d}}\int_{\mathbb{R}^{d}}\omega\left( \frac{z}{\varepsilon}\right)\,f\left(\frac{x-y-z}{\varepsilon}\right)\mathrm{ d}z=\omega_{\varepsilon}\ast f_{\varepsilon}(x-y),\end{split} \tag{4.4}\]
where \(f_{\varepsilon}(x):=\frac{1}{\varepsilon^{d}}f\left(\frac{x}{\varepsilon}\right)\). Due to (4.3),
\[\left|\int_{\mathbb{R}^{d}}\rho_{\varepsilon}(y)(x_{i}-y_{i})\partial_{j}\omega _{\varepsilon}(x-y)\,\mathrm{d}y\right|\leq C\,\rho_{\varepsilon}\ast\omega_ {\varepsilon}\ast f_{\varepsilon}(x).\]
Note that by the Gagliardo-Nirenberg inequality and uniform bound in \(L^{\infty}(0,T;H^{1}(\mathbb{R}^{d}))\), \(\{\rho_{\varepsilon}*\omega_{\varepsilon}\}_{\varepsilon}\) is bounded in \(L^{\infty}(0,T;L^{\frac{2d}{d-2}}(\mathbb{R}^{d}))\) where \(\frac{2\,d}{d-2}>2\). The same is true for \(\{\rho_{\varepsilon}*\omega_{\varepsilon}*f_{\varepsilon}\}_{\varepsilon}\) by the Young convolutional inequality. The conclusion follows.
Term \(J_{2}\). We prove that \(\lim_{\varepsilon\to 0}J_{2}=0\). By \(|R|\leq C\,|y-x|^{2}\), it is sufficient to prove
\[\int_{\mathbb{R}^{d}}\rho_{\varepsilon}(y)\,|x-y|^{2}\,|\nabla\omega_{ \varepsilon}(x-y)|\,\mathrm{d}y\to 0\qquad\text{ in }L^{2}((0,T)\times\mathbb{R}^{d}).\]
Again, we want to use assumption (1.4). By definition of \(\omega_{\varepsilon}\):
\[\int_{\mathbb{R}^{d}}\rho_{\varepsilon}(y)\,|x-y|^{2}\,|\nabla \omega_{\varepsilon}(x-y)|\,\mathrm{d}y=\varepsilon\int_{\mathbb{R}^{d}}\rho_ {\varepsilon}(y)\,\frac{1}{\varepsilon^{d}}\,\left|\frac{x-y}{\varepsilon} \right|^{2}\,\left|\nabla\omega\left(\frac{x-y}{\varepsilon}\right)\right| \mathrm{d}y\leq\\ \leq\varepsilon\int_{\mathbb{R}^{d}}\rho_{\varepsilon}(y)\,\frac{ 1}{\varepsilon^{d}}\omega*f\left(\frac{x-y}{\varepsilon}\right)=\varepsilon \,\rho_{\varepsilon}*\omega_{\varepsilon}*f_{\varepsilon}(x),\]
where in the last line we applied (4.4). By the Young convolutional inequality, the term \(\rho_{\varepsilon}*\omega_{\varepsilon}*f_{\varepsilon}\) on the right-hand side is bounded in \(L^{2}((0,T)\times\mathbb{R}^{d})\), so the conclusion follows.
## 5. Extension to systems
Motivated by [17], we consider the system of \(N\) equations
\[\partial_{t}\rho_{\varepsilon}^{i}-\Delta\rho_{\varepsilon}^{i}+\operatorname{ div}\left(\rho_{\varepsilon}^{i}\nabla\Delta\sum_{j=1}^{N}K_{\varepsilon}^{i,j}* \rho_{\varepsilon}^{j}\right)=0, \tag{5.1}\]
where \(1\leq i\leq N\) and the kernels \(K_{\varepsilon}^{i,j}\) are of the form
\[K_{\varepsilon}^{i,j}=\sum_{k=1}^{N}\alpha^{i,k}\,\alpha^{j,k}\,\omega_{ \varepsilon}^{i}*\omega_{\varepsilon}^{j},\]
where the \(\omega^{i}\) are kernels satisfying Assumption 1.1. The coefficients \(\{\alpha^{i,k}\}\) form a matrix \(A\) and we assume it is invertible. Under these assumptions, for any set of functions \(\eta_{1}\),..., \(\eta_{N}\):
\[\sum_{i,j=1}^{N}\int_{\mathbb{R}^{d}}\eta^{i}\,K_{\varepsilon}^{i,j}*\eta^{j}= \int_{\mathbb{R}^{d}}\sum_{k=1}^{N}\left(\sum_{i=1}^{N}\alpha^{i,k}\,\eta^{i}* \omega_{\varepsilon}^{i}\right)^{2}\]
so that
\[\widetilde{C}\,\|A\|^{2}\int_{\mathbb{R}^{d}}\sum_{i=1}^{N} \left(\eta^{i}\ast\omega_{\varepsilon}^{i}\right)^{2}\geq\sum_{i,j=1}^{N}\int_{ \mathbb{R}^{d}}\eta^{i}\,K_{\varepsilon}^{i,j}\ast\eta^{j}\geq\frac{C}{\|A^{-1}\|^{2}}\int_{\mathbb{R}^{d}}\sum_{i=1}^{N}\left(\eta^{i}\ast\omega_{\varepsilon}^{i} \right)^{2}. \tag{5.2}\]
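Indeed, setting \(u(x)=\big{(}\eta^{1}\ast\omega_{\varepsilon}^{1}(x),\ldots,\eta^{N}\ast\omega_{\varepsilon}^{N}(x)\big{)}\), the middle quantity equals \(\int_{\mathbb{R}^{d}}|A^{\top}u|^{2}\), so (5.2) follows from the elementary pointwise bounds
\[\frac{|u|^{2}}{\|A^{-1}\|^{2}}\leq|A^{\top}u|^{2}\leq\|A\|^{2}\,|u|^{2},\]
where \(\|\cdot\|\) denotes the operator norm and the lower bound comes from \(|u|=|(A^{\top})^{-1}A^{\top}u|\leq\|A^{-1}\|\,|A^{\top}u|\).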
**Theorem 5.1**.: _Let \(\{\rho_{\varepsilon}^{i}\}_{\varepsilon}\) be a sequence of solutions to (5.1) with initial condition \(\rho^{0,i}\) satisfying (1.6). Then, for \(i=1,...,N\), and for a subsequence not relabeled, \(\rho_{\varepsilon}^{i}\to\rho^{i}\) in \(L^{p}(0,T;L^{1}(\mathbb{R}^{d}))\) for all \(p\in[1,\infty)\) where \(\rho^{i}\) is a weak solution of_
\[\partial_{t}\rho^{i}-\Delta\rho^{i}+\operatorname{div}\left(\rho^{i}\,\nabla \Delta\sum_{j=1}^{N}K^{i,j}\,\rho^{j}\right)=0,\qquad K^{i,j}=\sum_{k=1}^{N} \alpha^{i,k}\,\alpha^{j,k}.\]
We first extend the uniform bounds in Proposition 3.1 to the case of system (5.1).
**Proposition 5.2**.: _Suppose that for all \(i=1,...,N\), the initial conditions \(\rho^{0,i}\) satisfy (1.6). Then, the nonnegative solution to (5.1) satisfies the following bounds, uniformly with respect to \(\varepsilon\):_
1. \(\{\rho_{\varepsilon}^{i}\}_{\varepsilon}\in L^{\infty}(0,T;L^{1}(\mathbb{R}^{ d})\cap L\log L(\mathbb{R}^{d}))\)_,_
2. \(\{\partial_{t}\rho_{\varepsilon}^{i}\}_{\varepsilon}\in L^{2}(0,T;H^{-k}( \mathbb{R}^{d}))\) _for some_ \(k\)_,_
3. \(\{\sqrt{\rho_{\varepsilon}^{i}}\,\nabla\Delta\sum_{j=1}^{N}K_{\varepsilon}^{i,j }\ast\rho_{\varepsilon}^{j}\}_{\varepsilon}\in L^{2}((0,T)\times\mathbb{R}^{ d})\)_,_
4. \(\{\rho_{\varepsilon}^{i}\ast\omega_{\varepsilon}^{i}\}_{\varepsilon}\in L^{ \infty}(0,T;H^{1}(\mathbb{R}^{d}))\cap L^{2}(0,T;H^{2}(\mathbb{R}^{d}))\)_,_
5. \(\{|x|^{2}\rho_{\varepsilon}^{i}\}_{\varepsilon}\in L^{\infty}(0,T;L^{1}( \mathbb{R}^{d}))\)_,_
6. \(\{\nabla\sqrt{\rho_{\varepsilon}^{i}}\}_{\varepsilon}\in L^{2}(0,T;L^{2}( \mathbb{R}^{d}))\)_,_
7. \(\{\nabla\rho_{\varepsilon}^{i}\}_{\varepsilon}\in L^{2}(0,T;L^{1}(\mathbb{R}^ {d}))\)_,_
_Moreover, we can extract a subsequence such that for all \(i=1,...,N\)_
\[\rho_{\varepsilon}^{i}\to\rho^{i}\;\;\text{strongly in }L^{p}(0,T;L^{1}(\mathbb{R}^{d})),\,p<\infty \tag{5.3}\] \[\rho_{\varepsilon}^{i}\ast\omega_{\varepsilon}^{i}\to\rho^{i}\; \;\text{weakly in }L^{\infty}(0,T;H^{1}(\mathbb{R}^{d}))\cap L^{2}(0,T;H^{2}(\mathbb{R}^{d})),\] (5.4) \[\rho_{\varepsilon}^{i}\ast\omega_{\varepsilon}^{i}\to\rho^{i}\; \;\text{a.e. and strongly in }L^{2}(0,T;H^{1}_{loc}(\mathbb{R}^{d})). \tag{5.5}\]
Proof.: The proof is almost the same as the proof of Proposition 3.1. The only difficulty is to obtain the energy and entropy identities corresponding to (3.2) and (3.3), respectively.
Concerning the energy, we multiply (5.1) by \(\Delta\sum_{j=1}^{N}K_{\varepsilon}^{i,j}\ast\rho_{\varepsilon}^{j}\); integrating in space and summing over \(i=1,...,N\) yields
\[\frac{C}{2\,\|A^{-1}\|^{2}} \sum_{i=1}^{N}\int_{\mathbb{R}^{d}}|\nabla\rho_{\varepsilon}^{i} \ast\omega_{\varepsilon}^{i}(t,x)|^{2}\,\mathrm{d}x+\frac{C}{\|A^{-1}\|^{2}}\,\sum_{i =1}^{N}\int_{0}^{t}\int_{\mathbb{R}^{d}}|\Delta\rho_{\varepsilon}^{i}\ast \omega_{\varepsilon}^{i}|^{2}\,\mathrm{d}x\,\mathrm{d}s+\] \[+\sum_{i=1}^{N}\int_{0}^{t}\int_{\mathbb{R}^{d}}\rho_{ \varepsilon}^{i}\left|\nabla\Delta\sum_{j=1}^{N}K_{\varepsilon}^{i,j}\ast\rho _{\varepsilon}^{j}\right|^{2}\,\mathrm{d}x\,\mathrm{d}s\leq\frac{\widetilde{C} \,\|A\|^{2}}{2}\sum_{i=1}^{N}\int_{\mathbb{R}^{d}}|\nabla\rho^{i,0}|^{2}\, \mathrm{d}x.\]
This estimate implies (C) and (D). Estimate (B) follows from the PDE (5.1) and (C). Thanks to (C), we also deduce (E).
Concerning the entropy, we multiply (5.1) with \(\log\rho_{\varepsilon}^{i}\), integrate in space and sum up to obtain
\[\sum_{i=1}^{N}\partial_{t}\int_{\mathbb{R}^{d}}\rho_{\varepsilon}^{i}\left( \log\rho_{\varepsilon}^{i}-1\right)\mathrm{d}x+\sum_{i=1}^{N}\int_{\mathbb{R}^ {d}}\frac{|\nabla\rho_{\varepsilon}^{i}|^{2}}{\rho_{\varepsilon}^{i}}\, \mathrm{d}x+\sum_{i,j=1}^{N}\int_{\mathbb{R}^{d}}K_{\varepsilon}^{i,j}\ast\Delta\rho_{ \varepsilon}^{j}\,\Delta\rho_{\varepsilon}^{i}\,\mathrm{d}x=0.\]
Applying (5.2) with \(\eta^{i}=\Delta\rho_{\varepsilon}^{i}\), we deduce
\[\sum_{i=1}^{N}\partial_{t}\int_{\mathbb{R}^{d}}\rho_{\varepsilon}^{i}\left( \log\rho_{\varepsilon}^{i}-1\right)\mathrm{d}x+\sum_{i=1}^{N}\int_{\mathbb{R} ^{d}}\frac{|\nabla\rho_{\varepsilon}^{i}|^{2}}{\rho_{\varepsilon}^{i}}\, \mathrm{d}x+\frac{C}{\|A^{-1}\|^{2}}\,\sum_{i=1}^{N}\int_{\mathbb{R}^{d}}|\Delta\rho_{ \varepsilon}^{i}\ast\omega_{\varepsilon}^{i}|^{2}\,\mathrm{d}x\leq 0.\]
As in the proof of Proposition 3.1, one may check that \(\rho_{\varepsilon}^{i}\,|\log\rho_{\varepsilon}^{i}|_{-}\) is uniformly bounded in \(L^{\infty}(0,T;L^{1}(\mathbb{R}^{d}))\) which implies (A), (F) and (G). The convergences (5.3)-(5.5) easily follow from the estimates.
Proof of Theorem 5.1.: By linearity, we only need to explain how to pass to the limit in the term
\[\int_{0}^{T}\int_{\mathbb{R}^{d}}\nabla\varphi\,\rho_{\varepsilon}^{i}\, \nabla\Delta(\rho_{\varepsilon}^{j}\ast\omega_{\varepsilon}^{j}\ast\omega_{ \varepsilon}^{i})\,\mathrm{d}x\,\mathrm{d}t=-\int_{0}^{T}\int_{\mathbb{R}^{d} }(\nabla\varphi\,\rho_{\varepsilon}^{i})\ast\nabla\omega_{\varepsilon}^{i}\, \Delta(\rho_{\varepsilon}^{j}\ast\omega_{\varepsilon}^{j})\,\mathrm{d}x\, \mathrm{d}t.\]
However, in the proof of Theorem 1.2, we proved that
\[(\nabla\varphi\,\rho_{\varepsilon}^{i})*\nabla\omega_{\varepsilon}^{i}\to\nabla \varphi\cdot\nabla\rho^{i}+\Delta\varphi\,\rho^{i}\text{ strongly in }L^{2}((0,T)\times\mathbb{R}^{d}),\]
see (4.1). Thanks to the weak convergence \(\Delta(\rho_{\varepsilon}^{j}*\omega_{\varepsilon}^{j})\rightharpoonup\Delta \rho^{j}\) in (5.4), we conclude the proof.
## Appendix A Proof of the convergence for general kernels and \(d=2\)
In dimension \(d=2\) another proof of the main result uses weaker assumptions, namely
\[d=2,\qquad y\,\omega(y)\in L^{1}(\mathbb{R}^{d}),\qquad y\,\nabla\omega(y)\in L ^{2}(\mathbb{R}^{d}).\]
As in the main proof, we only need to study the term \(\int_{0}^{T}\int_{\mathbb{R}^{d}}\nabla\varphi\,\rho_{\varepsilon}\,\nabla \Delta(\rho_{\varepsilon}*\omega_{\varepsilon}*\omega_{\varepsilon})\, \mathrm{d}x\,\mathrm{d}t\), where \(\varphi\in C_{c}^{\infty}([0,T]\times\mathbb{R}^{d})\). Integrating by parts,
\[\int_{0}^{T}\int_{\mathbb{R}^{d}}\nabla\varphi\,\rho_{\varepsilon}\,\nabla \Delta(\rho_{\varepsilon}*\omega_{\varepsilon}*\omega_{\varepsilon})\, \mathrm{d}x\,\mathrm{d}t=-\int_{0}^{T}\int_{\mathbb{R}^{d}}\operatorname{div}( \nabla\varphi\rho_{\varepsilon})*\omega_{\varepsilon}\,\Delta(\rho_{ \varepsilon}*\omega_{\varepsilon})\,\mathrm{d}x\,\mathrm{d}t\]
According to the a priori estimate (D), we need to prove that \(\operatorname{div}(\nabla\varphi\rho_{\varepsilon})*\omega_{\varepsilon}\) converges strongly in \(L^{2}((0,T)\times\mathbb{R}^{d})\). We introduce the truncation operator
\[T_{M}(\rho)=\begin{cases}\rho&\text{ if }\rho\leq M,\\ M&\text{ if }\rho>M,\end{cases}\]
so that splitting \(\rho_{\varepsilon}=\rho_{\varepsilon}-T_{M}(\rho_{\varepsilon})+T_{M}(\rho_{ \varepsilon})\) we have
\[\operatorname{div}(\nabla\varphi\rho_{\varepsilon})*\omega_{\varepsilon} =\operatorname{div}(\nabla\varphi T_{M}(\rho_{\varepsilon}))* \omega_{\varepsilon}+\operatorname{div}(\nabla\varphi(\rho_{\varepsilon}-T_{M }(\rho_{\varepsilon})))*\omega_{\varepsilon}\] \[=(\Delta\varphi T_{M}(\rho_{\varepsilon}))*\omega_{\varepsilon} +(\nabla\varphi\cdot\nabla\rho_{\varepsilon}\mathds{1}_{\rho_{\varepsilon} \leq M})*\omega_{\varepsilon}+(\nabla\varphi(\rho_{\varepsilon}-T_{M}(\rho_{ \varepsilon})))*\nabla\omega_{\varepsilon}\] \[=:I_{1}+I_{2}+I_{3}.\]
The parameter \(M\) will be chosen later in terms of \(\varepsilon\) so that \(M\to\infty\) as \(\varepsilon\to 0\).
Term \(I_{1}\). We write
\[I_{1}(t,x) =\int_{\mathbb{R}^{d}}(\Delta\varphi(x-y)-\Delta\varphi(x))T_{M}( \rho_{\varepsilon})(x-y)\,\omega_{\varepsilon}(y)\,\mathrm{d}y+\Delta\varphi(T _{M}(\rho_{\varepsilon}))*\omega_{\varepsilon}\] \[=I_{1}^{A}+I_{1}^{B}.\]
As \(|\Delta\varphi(x-y)-\Delta\varphi(x)|\leq C\,|y|\), we can estimate
\[\|I_{1}^{A}\|_{L^{2}_{t,x}}\leq\sqrt{M}\,\|\sqrt{\rho_{\varepsilon}}\|_{L^{2}_ {t,x}}\,\||y|\,\omega_{\varepsilon}(y)\|_{L^{1}}\leq\varepsilon\,\sqrt{M}\,\| \sqrt{\rho_{\varepsilon}}\|_{L^{2}_{t,x}}\,\||y|\,\omega(y)\|_{L^{1}}\]
so that \(\|I_{1}^{A}\|_{L^{2}_{t,x}}\leq C\,\varepsilon\,\sqrt{M}\). Furthermore, note that the term \(I_{1}^{B}\) is compact in \(L^{2}((0,T)\times\mathbb{R}^{d})\) whenever \(M\to\infty\), \(\varepsilon\to 0\). To see this, first note that it is sufficient to establish local compactness as \(\varphi\) is compactly supported. The latter can be proved by the Vitali theorem: we have convergence in measure (even in \(L^{1}_{\mathrm{loc}}((0,T)\times\mathbb{R}^{d})\)) of \(T_{M}(\rho_{\varepsilon})*\omega_{\varepsilon}\) and uniform integrability thanks to the pointwise estimate
\[0\leq T_{M}(\rho_{\varepsilon})*\omega_{\varepsilon}\leq\rho_{\varepsilon}* \omega_{\varepsilon}\]
since \(\rho_{\varepsilon}*\omega_{\varepsilon}\) is compact in \(L^{2}_{\mathrm{loc}}((0,T)\times\mathbb{R}^{d})\). We conclude that
\[I_{1}^{B}\to\Delta\varphi\,\rho\,\text{ in }L^{2}((0,T)\times\mathbb{R}^{d}) \qquad\text{ when }\varepsilon\to 0,M\to\infty.\] (A.1)
Term \(I_{2}\). We have
\[I_{2}(t,x)= \int_{\mathbb{R}^{d}}(\nabla\varphi(x-y)-\nabla\varphi(x))\, \nabla\rho_{\varepsilon}(x-y)\mathds{1}_{\rho_{\varepsilon}(x-y)\leq M}\, \omega_{\varepsilon}(y)\,\mathrm{d}y\] \[+\nabla\varphi\cdot(\nabla\rho_{\varepsilon}\mathds{1}_{\rho_{ \varepsilon}\leq M})*\omega_{\varepsilon}=:I_{2}^{A}+I_{2}^{B}.\]
As \(|\nabla\varphi(x-y)-\nabla\varphi(x)|\leq C\,|y|\) and \(|\nabla\rho_{\varepsilon}(x-y)|\mathds{1}_{\rho_{\varepsilon}(x-y)\leq M} \leq\sqrt{M}\frac{|\nabla\rho_{\varepsilon}(x-y)|}{\sqrt{\rho_{\varepsilon}( x-y)}}\), we can estimate the term \(I_{2}^{A}\) as follows
\[\|I_{2}^{A}\|_{L^{2}_{t,x}}\leq\sqrt{M}\,\left\|\frac{\nabla\rho_{\varepsilon}}{ \sqrt{\rho_{\varepsilon}}}\right\|_{L^{2}_{t,x}}\,\||y|\,\omega_{\varepsilon}(y) \|_{L^{1}}\leq\varepsilon\,\sqrt{M}\,\left\|\frac{\nabla\rho_{\varepsilon}}{ \sqrt{\rho_{\varepsilon}}}\right\|_{L^{2}_{t,x}}\,\||y|\,\omega(y)\|_{L^{1}}\]
so that \(\|I_{2}^{A}\|_{L^{2}_{t,x}}\leq C\,\varepsilon\,\sqrt{M}\) according to estimate (F).
Term \(I_{3}\). We write
\[I_{3}(t,x)= \int_{\mathbb{R}^{d}}(\nabla\varphi(x-y)-\nabla\varphi(x))\,(\rho_{ \varepsilon}(x-y)-T_{M}(\rho_{\varepsilon}(x-y)))\nabla\omega_{\varepsilon}(y) \,\mathrm{d}y\] \[+\nabla\varphi\,(\rho_{\varepsilon}-T_{M}(\rho_{\varepsilon}))* \nabla\omega_{\varepsilon}=I_{3}^{A}+I_{3}^{B}.\]
We observe that \(|\nabla\varphi(x-y)-\nabla\varphi(x)|\leq C\,|y|\) and \(|\rho_{\varepsilon}-T_{M}(\rho_{\varepsilon})|\leq 2\,\rho_{\varepsilon}\, \mathds{1}_{\rho_{\varepsilon}\geq M}\) so the term \(I_{3}^{A}\) can be estimated as
\[\|I_{3}^{A}\|_{L^{2}_{t,x}}\leq C\,\|(\rho_{\varepsilon}\mathds{1}_{\rho_{ \varepsilon}\geq M})*(|y||\nabla\omega_{\varepsilon}(y)|)\|_{L^{2}_{t,x}}.\]
By the Gagliardo-Nirenberg-Sobolev inequality, we get that \(\{\rho_{\varepsilon}\}\) is uniformly bounded in \(L^{2}((0,T)\times\mathbb{R}^{d})\). Therefore,
\[\|I_{3}^{A}\|_{L^{2}_{t,x}}\leq C\,\|\rho_{\varepsilon}\mathds{1}_{\rho_{ \varepsilon}\geq M}\|_{L^{2}_{t}L^{1}_{x}}\,\||y|\,\nabla\omega_{\varepsilon}(y )\|_{L^{2}}\leq C\,\|\rho_{\varepsilon}\|_{L^{2}_{t,x}}\,\|\mathds{1}_{\rho _{\varepsilon}\geq M}\|_{L^{\infty}_{t}L^{2}_{x}}\,\||y|\,\nabla\omega_{ \varepsilon}(y)\|_{L^{2}}.\]
It remains to estimate \(\|\mathds{1}_{\rho_{\varepsilon}\geq M}\|_{L^{\infty}_{t}L^{2}_{x}}\) and \(\||y|\,\nabla\omega_{\varepsilon}(y)\|_{L^{2}}\). We have
\[\|\mathds{1}_{\rho_{\varepsilon}\geq M}\|_{L^{\infty}_{t}L^{2}_{x}}\leq\sup_{ t\in(0,T)}\left(\int_{\mathbb{R}^{d}}\frac{\rho_{\varepsilon}\log\rho_{ \varepsilon}}{M\log(M)}\,\mathrm{d}x\right)^{1/2}\leq\frac{C}{M^{1/2}\log^{1/ 2}M},\]
\[\||y|\,\nabla\omega_{\varepsilon}(y)\|_{L^{2}}^{2}\leq\int_{\mathbb{R}^{d}} \frac{1}{\varepsilon^{2d+2}}|y|^{2}\,\left|\nabla\omega\left(\frac{y}{ \varepsilon}\right)\right|^{2}\mathrm{d}y=\frac{1}{\varepsilon^{d}}\int_{ \mathbb{R}^{d}}|y|^{2}\,|\nabla\omega\left(y\right)|^{2}\,\mathrm{d}y\leq \frac{C}{\varepsilon^{2}},\]
where the first display uses the \(L\log L\) bound in (A) and the second uses \(d=2\). We conclude that
\[\|I_{3}^{A}\|_{L^{2}_{t,x}}\leq\frac{C}{\varepsilon\,M^{1/2}\log^{1/2}M}.\]
The conclusion. Note that the terms \(I_{2}^{B}\) and \(I_{3}^{B}\) combine to
\[I_{2}^{B}+I_{3}^{B}=\nabla\varphi\cdot\nabla(\rho_{\varepsilon}*\omega_{\varepsilon}),\]
which is compact in \(L^{2}((0,T)\times\mathbb{R}^{d})\) and converges to \(\nabla\varphi\cdot\nabla\rho\). Therefore,
\[\mathrm{div}(\nabla\varphi\rho_{\varepsilon})*\omega_{\varepsilon}=\Delta \varphi(T_{M}(\rho_{\varepsilon}))*\omega_{\varepsilon}+\nabla\varphi\nabla( \rho_{\varepsilon}*\omega_{\varepsilon})+R,\] (A.2)
where the first two terms are compact in \(L^{2}((0,T)\times\mathbb{R}^{d})\) (see also (A.1)) while
\[\|R\|_{L^{2}_{t,x}}\leq C\,\varepsilon\,\sqrt{M}+\frac{C}{\varepsilon\,\sqrt{ M}\,\log^{1/2}(M)}.\]
The conclusion follows by choosing \(M\) such that \(\varepsilon^{2}M\log^{1/2}M=1\).
**Remark A.1**.: In arbitrary dimension \(d\), if we knew that \(\{\rho_{\varepsilon}\}_{\varepsilon}\) is uniformly integrable in \(L^{2}((0,T)\times\mathbb{R}^{d})\), i.e.
\[\lim_{\varepsilon\to 0}\|\rho_{\varepsilon}\mathds{1}_{\rho_{\varepsilon}> \frac{1}{\varepsilon}}\|_{L^{2}_{t,x}}=0,\] (A.3)
we could conclude in an easier way. Indeed, assuming that \(y\,\nabla\omega(y)\in L^{1}(\mathbb{R}^{d})\), one can estimate
\[\|I^{A}_{3}\|_{L^{2}_{t,x}}\leq\|\psi\rho_{\varepsilon}\mathds{1}_{\rho_{ \varepsilon}\geq M}\|_{L^{2}_{t,x}}\,\||y|\nabla\omega_{\varepsilon}(y)\|_{L^ {1}}\leq C\,\|\psi\rho_{\varepsilon}\mathds{1}_{\rho_{\varepsilon}\geq M}\|_{ L^{2}_{t,x}}.\]
Choosing \(M=\frac{1}{\varepsilon}\), we conclude. The condition (A.3) can be relaxed to be satisfied locally when \(\omega\) is compactly supported. We stress that we do not have any a priori estimate implying (A.3).
|
2310.14002 | A note on compact homogeneous manifolds with Bismut parallel torsion | In this article, we investigate the class of Hermitian manifolds whose Bismut
connection has parallel torsion ({\rm BTP} for brevity). In particular, we
focus on the case where the manifold is (locally) homogeneous with respect to a
group of holomorphic isometries and we fully characterize the compact Chern
flat {\rm BTP} manifolds. Moreover we show that certain compact flag manifolds
are {\rm BTP} if and only if the metric is K\"ahler or induced by the
Cartan-Killing form and we then characterize {\rm BTP} invariant metrics on
compact semisimple Lie groups which are Hermitian w.r.t. a Samelson structure
and are projectable along the Tits fibration. We state a conjecture concerning
the question when the Bismut connection of a BTP compact Hermitian locally
homogeneous manifold has parallel curvature, giving examples and providing
evidence in some special cases. | Fabio Podestà, Fangyang Zheng | 2023-10-21T13:18:59Z | http://arxiv.org/abs/2310.14002v1 | # A note on compact homogeneous manifolds with Bismut parallel torsion
###### Abstract.
In this article, we investigate the class of Hermitian manifolds whose Bismut connection has parallel torsion (BTP for brevity). In particular, we focus on the case where the manifold is (locally) homogeneous with respect to a group of holomorphic isometries and we fully characterize the compact Chern flat BTP manifolds. Moreover we show that certain compact flag manifolds are BTP if and only if the metric is Kahler or induced by the Cartan-Killing form and we then characterize BTP invariant metrics on compact semisimple Lie groups which are Hermitian w.r.t. a Samelson structure and are projectable along the Tits fibration. We state a conjecture concerning the question when the Bismut connection of a BTP compact Hermitian locally homogeneous manifold has parallel curvature, giving examples and providing evidence in some special cases.
Key words and phrases: Bismut connection, Bismut parallel torsion, homogeneous space. 2010 Mathematics Subject Classification: 53C55, 53C25. Podesta was supported by GNSAGA of INdAM and by the project PRIN 2022APSHZ9 "Real and Complex Manifolds: Geometry and Holomorphic Dynamics". Zheng is partially supported by NSFC grants 12071050 and 12141101, a Chongqing grant cstc2021ycjh-bgzxm0139, and the 111 Project D21024. |
2306.00949 | Mean-field limit for stochastic control problems under state constraint | We study the convergence problem of mean-field control theory in the presence
of state constraints and non-degenerate idiosyncratic noise. Our main result is
the convergence of the value functions associated to stochastic control
problems for many interacting particles subject to symmetric, almost-sure
constraints toward the value function of a control problem of mean-field type,
set on the space of probability measures. The key step of the proof is to show
that admissible controls for the limit problem can be turned into admissible
controls for the $N$-particle problem up to a correction which vanishes as the
number of particles increases. The rest of the proof relies on compactness
methods. We also provide optimality conditions for the mean-field problem and
discuss the regularity of the optimal controls. Finally we present some
applications and connections with large deviations for weakly interacting
particle systems. | Samuel Daudin | 2023-06-01T17:46:54Z | http://arxiv.org/abs/2306.00949v1 | # Mean-field limit for stochastic control problems under state constraint
###### Abstract.
We study the convergence problem of mean-field control theory in the presence of state constraints and non-degenerate idiosyncratic noise. Our main result is the convergence of the value functions associated to stochastic control problems for many interacting particles subject to symmetric, almost-sure constraints toward the value function of a control problem of mean-field type, set on the space of probability measures. The key step of the proof is to show that admissible controls for the limit problem can be turned into admissible controls for the \(N\)-particle problem up to a correction which vanishes as the number of particles increases. The rest of the proof relies on compactness methods. We also provide optimality conditions for the mean-field problem and discuss the regularity of the optimal controls. Finally we present some applications and connections with large deviations for weakly interacting particle systems.
###### Contents
* 1 Assumptions and statement of the main results
* 1.1 Assumptions
* 1.2 The problem with almost-sure constraint
* 1.3 The limit problem
* 1.4 Main result
* 2 Properties of the mean-field problem
* 2.1 Optimality conditions and regularity of optimal controls
* 2.2 Stability with respect to the constraint
* 3 Mean field limit
* 3.1 From mean-field to almost-sure constraint
* 3.2 From almost-sure to mean-field constraint
* 4 Application to Large Deviations
* 5 Appendix
* 5.1 Optimality conditions
* 5.2 Concentration limit
## Introduction
The goal of this paper is to address the so-called convergence problem of mean-field control theory in the presence of state constraints and non-degenerate idiosyncratic noise. The pre-limit problem involves a large number of interacting particles subject to symmetric, almost-sure constraints while, in the limit, the constraint acts on the law of a typical particle. Constraints in law arise naturally in applications in economy and finance, as a way to control the risk associated with a given strategy, [32, 38, 47]. They have generated some recent attention in the stochastic control community [5, 19, 33, 34, 36, 53, 54] and the development of mean-field game and mean-field control
theory [10, 11, 13, 16, 17, 42, 43, 44, 46] provides new insights and techniques to address these problems. Notably, given the nature of the constraint, we are naturally led to consider control problems on the space of probability measures which require appropriate tools, [2, 3, 16]. Constraints in law also arise in continuous descriptions of controlled particle systems [14, 26, 48, 49, 55, 56]. While the convergence problem in mean-field control is well understood [8, 12, 15, 24, 29, 35, 40] the goal of this paper is to investigate the validity of this approximation in the presence of constraints.
More precisely we investigate the connection between the following two control problems. The first one involves a large number of interacting particles. Its value function is given by
\[\mathcal{V}^{N}\big{(}t_{0},\mathbf{x}_{0}^{N}\big{)}=\inf_{(\alpha_{t}^{i,N })_{1\leq i\leq N}}\mathbb{E}\left[\int_{t_{0}}^{T}\frac{1}{N}\sum_{i=1}^{N}L \big{(}X_{t}^{i,N},\alpha_{t}^{i,N}\big{)}dt+\int_{t_{0}}^{T}\mathcal{F}\big{(} \hat{\mu}_{t}^{N}\big{)}dt+\mathcal{G}\big{(}\hat{\mu}_{T}^{N}\big{)}\right]\] (NP)
where \(T>0\) is a finite horizon, \(t_{0}\in[0,T]\) is the initial time and \(\mathbf{x}_{0}^{N}=(x_{0}^{1,N},\ldots,x_{0}^{N,N})\in(\mathbb{R}^{d})^{N}\) denotes the initial position of the particles. The dynamics are given by the stochastic differential equations
\[\left\{\begin{array}{ll}dX_{t}^{i,N}=b\big{(}X_{t}^{i,N},\hat{\mu}_{t}^{N} \big{)}dt+\alpha_{t}^{i,N}dt+\sqrt{2}dB_{t}^{i,N}\\ \big{(}X_{t_{0}}^{1,N},\ldots,X_{t_{0}}^{N,N}\big{)}=\mathbf{x}_{0}^{N}\end{array} \right. \hat{\mu}_{t}^{N}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{t}^{i,N}} \tag{2}\]
and the infimum is taken over a suitable class of controls \(\alpha=\big{(}\alpha^{1,N},\ldots,\alpha^{N,N}\big{)}\). Importantly, the particles \((X^{1,N},\ldots,X^{N,N})\) are subject to the state constraint
\[\Psi^{N}\big{(}X_{t}^{1,N},\ldots,X_{t}^{N,N}\big{)}<0\text{ for all }t\in[0,T], \quad\mathbb{P}-\text{almost-surely},\]
for some symmetric map \(\Psi^{N}:(\mathbb{R}^{d})^{N}\to\mathbb{R}\). We will always assume that \(\Psi^{N}\) has the form
\[\Psi^{N}\big{(}x^{1,N},\ldots,x^{N,N}\big{)}=\Psi\Big{(}\frac{1}{N}\sum_{i=1 }^{N}\delta_{x^{i,N}}\Big{)},\]
for some functional \(\Psi\) defined over \(\mathcal{P}(\mathbb{R}^{d})\), the set of Borel probability measures over \(\mathbb{R}^{d}\). Therefore the constraint reads
\[\Psi\big{(}\hat{\mu}_{t}^{N}\big{)}<0\text{ for all }t\in[0,T],\quad\mathbb{P}- \text{almost-surely}.\]
Above, the cost function \(L:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) is supposed to be convex with quadratic growth in the second variable and smooth. The non-linear drift \(b:\mathbb{R}^{d}\times\mathcal{P}_{1}(\mathbb{R}^{d})\to\mathbb{R}^{d}\) is (at least) Lipschitz continuous and bounded. The mean-field costs \(\mathcal{F},\mathcal{G}:\mathcal{P}_{1}(\mathbb{R}^{d})\to\mathbb{R}\) as well as the constraint \(\Psi:\mathcal{P}_{1}(\mathbb{R}^{d})\to\mathbb{R}\) are (at least) Lipschitz continuous functions over the space \(\mathcal{P}_{1}(\mathbb{R}^{d})\) of Borel probability measures with finite first order moment (precise assumptions will be given in Section 1.1). For each \(N\geq 1\), the \((B^{i,N})_{1\leq i\leq N}\) are independent \(d\)-dimensional Brownian motions.
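To fix ideas, here is a minimal Euler-Maruyama sketch of the uncontrolled dynamics (2) monitoring the almost-sure constraint along a sampled path, with the illustrative choices \(b(x,\mu)=-x\) and \(\Psi(\mu)=\int|x|^{2}d\mu-R^{2}\) (all parameter values are assumptions made for the example):

```python
import numpy as np

def constraint_holds(N=500, T=1.0, dt=1e-3, R=3.0, seed=0):
    rng = np.random.default_rng(seed)
    X = 0.5 * rng.standard_normal((N, 2))      # initial particle positions in R^2
    for _ in range(int(T / dt)):
        if (X**2).sum(axis=1).mean() >= R**2:  # Psi(hat mu_t^N) >= 0: constraint violated
            return False
        X = X - X * dt + np.sqrt(2 * dt) * rng.standard_normal((N, 2))
    return True

print("constraint held along the sampled path:", constraint_holds())
```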
The second problem of interest in this paper is an optimal control problem for a non-linear Fokker-Planck equation. Its value function is defined for every \((t_{0},\mu_{0})\in[0,T]\times\mathcal{P}_{2}(\mathbb{R}^{d})\) such that \(\Psi(\mu_{0})\leq 0\) by
\[\mathcal{U}(t_{0},\mu_{0})=\inf_{(\mu,\alpha)}\int_{t_{0}}^{T}\int_{\mathbb{R}^ {d}}L\big{(}x,\alpha(t,x)\big{)}d\mu(t)(x)dt+\int_{t_{0}}^{T}\mathcal{F} \big{(}\mu(t)\big{)}dt+\mathcal{G}\big{(}\mu(T)\big{)}\] (mfp)
where the infimum is taken over the couples \((\mu,\alpha)\) with \(\mu\in\mathcal{C}([t_{0},T],\mathcal{P}_{2}(\mathbb{R}^{d})),\alpha\in L^{2} _{dt\otimes\mu(t)}([t_{0},T]\times\mathbb{R}^{d},\mathbb{R}^{d})\) satisfying the non-linear Fokker-Planck equation
\[\left\{\begin{array}{ll}\partial_{t}\mu+\text{div}\big{(}(\alpha(t,x)+b(x, \mu(t)))\mu\big{)}-\Delta\mu=0\quad\text{ in }(t_{0},T)\times\mathbb{R}^{d},\\ \mu(t_{0})=\mu_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d}),\end{array}\right. \tag{3}\]
subject to the state constraint
\[\Psi(\mu(t))\leq 0,\quad\forall t\in[t_{0},T].\]
The latter problem was analyzed by the author in [23]. It is proved there, under appropriate conditions on the data involving qualification conditions for the constraint, that optimal controls are bounded and Lipschitz continuous in space, at least for any initial position \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) such that \(\Psi(\mu_{0})<0\). Problem (NP) which defines \(\mathcal{V}^{N}\) is, however, very different in nature. Indeed the constraint has to be satisfied almost-surely while the noises driving the dynamics of the particles are non-degenerate. This type of constraint leads to new difficulties. Indeed, to dominate the effect of the diffusion, controls cannot remain bounded and the value function associated to this problem blows-up near the boundary.
Without constraint, the connection between Problem (mfp) and Problems (NP) is by now well understood. Under more general structure conditions, Lacker proved in [40] that the law of the empirical measures of weak solutions to the \(N\)-particle system converges to probability measures supported on the set of optimal solutions to the mean-field problem and therefore convergence of the value functions hold. Taking advantage of the regularizing effect of the diffusion and uniform in \(N\) Lipschitz and semi-concavity estimates for the value functions of the \(N\)-particles system, it was shown in [12] that convergence actually holds with a rate. In the same setting, Cardaliaguet and Souganidis later proved in [15] a propagation of chaos around "stable" solutions of the mean-field problem. We mention that, under convexity assumptions on the mean-field costs \(\mathcal{F}\) and \(\mathcal{G}\) it is shown in [13] that the value function associated to the mean-field control problem is a smooth (enough) function in the Wasserstein space. In this setting it is not difficult to prove that the convergence of the value functions holds with an optimal rate and we have quantitative propagation estimates for the optimal trajectories to the \(N\)-particles system toward the solution to the mean-field control problem. Finally, the recent contribution [24] by the author, together with Delarue and Jackson, provides optimal rates of convergence under appropriate regularity conditions on the data but without assuming that the value function \(\mathcal{U}\) is differentiable.
We also mention that recent progresses were made in order to characterize the value function for the mean-field problem, in the general situation where it is not expected to be smooth. Similarly to the finite dimensional case, we expect the value function to be the unique viscosity solution (in some sense) to the dynamic programming equation. Different approaches have been taken in [9, 18, 20, 21]. The most general result, so far, being [21], where the authors rely on the approximation of the mean-field control problem by control problems for finite numbers of interacting particles. An analog characterization of the value function in the presence of state constraints is still an open question.
Stochastic control problems with state constraints and non-degenerate diffusions were addressed in the seminal work [41] of Lasry and Lions. They showed that the blow-up behavior of the value function is directly related to the growth of the Hamiltonian and provided rates of divergence. This problem was later revisited by Leonori and Porretta in [45] where the authors also find the rate of divergence of the optimal controls. Another approach consists in requiring the volatility to degenerate at the boundary in order to find admissible controls with a finite cost, see for instance [4]. In this case, Dynamic Programming leads to constrained viscosity solutions in the sense of Soner [58] to the corresponding Hamilton Jacobi Bellman equation, see [37].
Without constraint, Problem (mfp) has been widely studied in the literature, mainly for its connection with potential mean-field games. Indeed optimality conditions for Problem (mfp) (without constraint) lead to the celebrated mean-field game system of PDEs introduced by Lasry and Lions in [44]. We refer to [6, 13, 44] for various properties of the pde system in the general case and to [23] in the presence of state constraints.
In the context of mean-field control, state constraints have been primarily studied at the level of the limit problem in order to derive versions of the Pontryagin maximum principle. This is achieved in [2, 3] for first order problem (namely without diffusion). In [34] the authors provide first and second order conditions for optimality for stochastic control problems with expectation constraints, which corresponds to Problem (mfp). While much effort was made to understand optimal control
problems over the space of probability measures under state constraints, less is known about the connection between the limit problem and the associated \(N\)-particle problem. This is a particularly challenging question when the dynamics of the particles are stochastic.
In this paper we prove the convergence of the value functions for the problems with almost-sure constraints toward the value function for the mean-field problem. Similarly to [12] we proceed in two steps. On the one hand we prove that
\[\mathcal{U}(t_{0},\mu_{0})\leq\liminf_{N\to+\infty}\mathcal{V}^{N}\big{(}t_{0 },x_{0}^{1,N},\ldots,x_{0}^{N,N}\big{)},\]
whenever \(\hat{\mu}_{0}^{N}\) converges to \(\mu_{0}\) in \(\mathcal{P}_{2}(\mathbb{R}^{d})\). This boils down to finding weak limit points of sequences of nearly optimal weak solutions to the \(N\)-particle problem. Once we know that \(\mathcal{V}^{N}\) is bounded independently from \(N\), this follows from the line of arguments of [40] for problems without constraint. On the other hand, proving that
\[\limsup_{N\to+\infty}\mathcal{V}^{N}\big{(}t_{0},x_{0}^{1,N},\ldots,x_{0}^{N, N}\big{)}\leq\mathcal{U}(t_{0},\mu_{0})\]
requires more care. Indeed an admissible control for the mean-field problem is, in general, not admissible for the particle system because of the almost-sure constraint.
Our strategy can be described as follows. Given an admissible control \(\alpha\) for the mean-field control problem, we consider the particle system starting from the initial position \(\big{(}X_{t_{0}}^{1,N},\ldots,X_{t_{0}}^{N,N}\big{)}=\big{(}x_{0}^{1,N},\ldots,x_{0} ^{N,N}\big{)}\),
\[X_{t}^{i,N}=x_{0}^{i,N}+\int_{t_{0}}^{t\wedge\tau_{N}}\alpha\big{(}s,X_{s}^{i, N}\big{)}ds+\int_{t_{0}}^{t\wedge\tau_{N}}b\big{(}X_{s}^{i,N},\hat{\mu}_{s}^{N} \big{)}ds+\int_{t\wedge\tau_{N}}^{t}\beta_{s}^{i,N}ds+\sqrt{2}\int_{t_{0}}^{t \wedge\tau_{N}}dB_{s}^{i,N}\]
where \(\tau_{N}:=\inf\big{\{}t\geq t_{0},\Psi(\hat{\mu}_{t}^{N})\geq-\frac{\delta}{2} \big{\}}\) and \(\beta_{t}^{i,N}\) is a feedback control designed so that, \(\mathbb{P}\)-almost-surely
\[\frac{1}{N}\sum_{i=1}^{N}\Big{|}X_{t}^{i,N}-X_{\tau_{N}}^{i,N}\Big{|}^{2}\leq r ^{2}\hskip 28.452756pt\forall t\geq\tau_{N},\]
where \(r\) is a small radius depending on \(\delta\) which guarantees that, \(\mathbb{P}\)-almost-surely,
\[\Psi(\hat{\mu}_{t}^{N})<0,\hskip 28.452756pt\forall t\in[t_{0},T].\]
If \(\alpha\) is bounded, Lipschitz continuous and taken so that the corresponding solution \(\mu\) to
\[\left\{\begin{array}{l}\partial_{t}\mu+\mathrm{div}\big{(}(\alpha(t,x)+b(x, \mu(t)))\mu\big{)}-\Delta\mu=0\hskip 14.226378pt\mbox{ in }(t_{0},T)\times\mathbb{R}^{d},\\ \mu(t_{0})=\mu_{0}\end{array}\right. \tag{4}\]
satisfies \(\Psi(\mu(t))\leq-\delta\), for all \(t\in[t_{0},T]\), for some \(\delta>0\), we expect a strong convergence of \(\hat{\mu}_{t}^{N}\) toward \(\mu(t)\) for \(t\in[t_{0},\tau_{N}]\) and therefore \(\tau_{N}\wedge T\) must converge to \(T\). The key step is to build \(\big{(}\beta_{t}^{i,N}\big{)}_{1\leq i\leq N}\) so that its contribution to the cost for the \(N\)-particle problem, vanishes as \(N\to+\infty\). We are able to do so only if \(\Psi(\mu_{0})<0\). We also need to prove that it is enough to approximate admissible candidates \((\alpha,\mu)\) such that \(\alpha\) is bounded, Lipschitz continuous with respect to the space variable and \(\Psi(\mu(t))\leq-\delta\) for all \(t\in[t_{0},T]\), for some \(\delta>0\).
Overall, our main result, Theorem 1.1, states that, under Assumption (1) (introduced in Section 1.1), we have
\[\lim_{N\to+\infty}\mathcal{V}^{N}\big{(}t_{0},x_{0}^{1,N},\ldots,x_{0}^{N,N}\big{)}=\mathcal{U}(t_{0},\mu_{0}),\]
whenever \(\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{0}^{i,N}}\) converges in \(\mathcal{P}_{2}(\mathbb{R}^{d})\) toward some \(\mu_{0}\) such that \(\Psi(\mu_{0})<0\). We also prove that accumulation points of weak solutions (introduced in Subsection 1.2) to the \(N\)-particle problems are supported on the set of solutions of the limiting problem and therefore we have a proper convergence result for the optimal solutions when the limit problem admits a unique solution.
**Connection with large deviations for weakly interacting diffusions.** Our result is closely related to the large deviation principle for weakly interacting particle systems, see [7, 25, 28]. Indeed, consider the (uncontrolled) particle system
\[\left\{\begin{array}{l}X_{t}^{i,N}=x^{i,N}+\int_{0}^{t}b\big{(}X_{s}^{i,N}, \hat{\mu}_{s}^{N}\big{)}ds+\sqrt{2}B_{t}^{i,N},\ \ t\in[0,T],\quad i\in\left\{1,\ldots,N\right\},\\ \hat{\mu}_{s}^{N}=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{s}^{i,N}},\end{array}\right.\]
where \((B_{t}^{1,N},\ldots,B_{t}^{N,N})\) are \(N\) independent \(d\)-dimensional standard Brownian motions supported on some probability space, and let \(v^{N}(t,\mathbf{x}^{N})\) be the probability that the particles initialized at \((0,\mathbf{x}^{N})\) stay strictly inside the constraint, at least up to time \(t\). That is
\[v^{N}(t,\mathbf{x}^{N}):=\mathbb{P}\big{(}\forall s\in[0,t],\Psi(\hat{\mu}_{s} ^{N})<0\big{)}.\]
Under appropriate assumptions on \(b\) and \(\Psi\), the map \((t,\mathbf{x}^{N})\mapsto-\frac{2}{N}\log v^{N}(T-t,\mathbf{x}^{N})\) solves the dynamic programming equation for Problem (NP) when \(\mathcal{F}=\mathcal{G}=0\) and \(L(x,q)=\frac{1}{2}|q|^{2}\). As a consequence we can deduce the exponential decay of \(v^{N}\) by looking at the limit of \(\mathcal{V}^{N}(t,\mathbf{x}^{N})\) when \(\frac{1}{N}\sum_{i=1}^{N}\delta_{x^{i,N}}\to\mu_{0}\) in \(\mathcal{P}_{2}(\mathbb{R}^{d})\). In Section 4 we discuss this rigorously. Notice that this method of obtaining estimates on the probability \(v^{N}\), by making a logarithmic transformation and studying the stochastic control problem corresponding to the resulting Hamilton-Jacobi-Bellman equation, is reminiscent of [30].
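The effect of the logarithmic transformation can be seen through the following formal computation (a sketch; the drift and the boundary condition are handled rigorously in the proof of Proposition 4.1 below). Setting \(w^{N}(t,\mathbf{x}^{N}):=-\frac{2}{N}\log v^{N}(T-t,\mathbf{x}^{N})\) for a positive solution \(v^{N}\) of the linear equation, we have

\[Dw^{N}=-\frac{2}{N}\frac{Dv^{N}}{v^{N}},\qquad\Delta w^{N}=-\frac{2}{N}\Big(\frac{\Delta v^{N}}{v^{N}}-\frac{|Dv^{N}|^{2}}{(v^{N})^{2}}\Big),\]

so that the quadratic term produced by the Laplacian is exactly absorbed by the Hamiltonian:

\[\frac{N}{2}|Dw^{N}|^{2}-\Delta w^{N}=\frac{2}{N}\frac{\Delta v^{N}}{v^{N}}.\]

In this way the linear equation satisfied by \(v^{N}\) turns into the quadratic Hamilton-Jacobi-Bellman equation satisfied by \(w^{N}\), which corresponds to \(L(x,q)=\frac{1}{2}|q|^{2}\).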
**Organization of the paper**. The rest of the paper is organized as follows. In Section 1 we give our working assumptions, the precise formulation of the problems considered and our main results. In Section 2 we give properties of the mean-field problem. In particular, we give optimality conditions in the form of a mean-field game system of pdes. We also discuss essential stability properties of the mean-field problem with respect to small perturbations of the constraint. In the next section, Section 3, we prove our main convergence result. In Section 4 we present some applications to the theory of large deviations for weakly interacting particles. Finally we postpone to Appendix 5.1 the proof of the optimality conditions of Section 2 and to Appendix 5.2 a useful lemma about the convergence of weakly interacting particle systems (without control).
**Notation**. The Wasserstein space of Borel probability measures over \(\mathbb{R}^{d}\) with finite moment of order \(r\geq 1\) is denoted by \(\mathcal{P}_{r}(\mathbb{R}^{d})\). It is endowed with the \(r\)-Wasserstein distance \(d_{r}\).
For \(n\geq 1\) we denote by \(E_{n}\) the subspace of \(\mathcal{C}^{n}(\mathbb{R}^{d})\) consisting of functions \(u\) such that
\[\|u\|_{n}:=\sup_{x\in\mathbb{R}^{d}}\frac{|u(x)|}{1+|x|}+\sum_{k=1}^{n}\sup_{x \in\mathbb{R}^{d}}\big{|}D^{k}u(x)\big{|}<+\infty.\]
Similarly we define \(E_{n+\alpha}\) for \(n\geq 1\) and \(\alpha\in(0,1)\) to be the subset of \(E_{n}\) consisting of functions \(u\) satisfying
\[\|u\|_{n+\alpha}:=\|u\|_{n}+\sup_{x\neq y}\frac{|D^{n}u(x)-D^{n}u(y)|}{|x-y|^{ \alpha}}<+\infty.\]
Finally we will use the heat kernel \(P_{t}\) associated to \(-\Delta\) defined, when it makes sense, by
\[P_{t}f(x):=\int_{\mathbb{R}^{d}}\frac{1}{(4\pi t)^{d/2}}e^{-\frac{|x-y|^{2}}{4 t}}f(y)dy.\]
## 1. Assumptions and statement of the main results
### Assumptions
We first give the assumptions satisfied by \(L\), \(\mathcal{F}\), \(\mathcal{G}\) and \(\Psi\). They involve an integer \(n\geq 3\). For \(U=\mathcal{F},\mathcal{G},\Psi\), the map \(U:\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\) satisfies
\[U\text{ is bounded from below, of class }\mathcal{C}^{1}\text{, and }\frac{\delta U}{\delta m}\text{ belongs to }\mathcal{C}(\mathcal{P}_{2}(\mathbb{R}^{d}),E_{n+\alpha}).\] (Ureg)
We recall that a map \(U:\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}^{m}\) is \(\mathcal{C}^{1}\) if there exists a jointly continuous map \(\frac{\delta U}{\delta m}:\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathbb{R}^{d} \to\mathbb{R}^{m}\) such that, for any bounded subset \(\mathcal{K}\subset\mathcal{P}_{2}(\mathbb{R}^{d})\), \(x\to\frac{\delta U}{\delta m}(m,x)\) has at most quadratic growth in \(x\) uniformly in \(m\in\mathcal{K}\) and such that, for all \(m,m^{\prime}\in\mathcal{P}_{2}(\mathbb{R}^{d})\),
\[U(m^{\prime})-U(m)=\int_{0}^{1}\int_{\mathbb{R}^{d}}\frac{\delta U}{\delta m }\big{(}(1-h)m+hm^{\prime},x\big{)}d(m^{\prime}-m)(x)dh.\]
The function \(\frac{\delta U}{\delta m}\) is defined up to an additive constant and we adopt the normalization convention \(\int_{\mathbb{R}^{d}}\frac{\delta U}{\delta m}(m,x)dm(x)=0.\) We refer to the monographs [13, 16] for a discussion about the notion(s) of derivatives in the space of probability measures.
The Lagrangian \(L\) verifies \(L(x,q)=\sup_{p\in\mathbb{R}^{d}}\left\{-p.q-H(x,p)\right\}\) for all \((x,q)\in\mathbb{R}^{d}\times\mathbb{R}^{d}\) where \(H\), the Hamiltonian, satisfies the following conditions.
\[\left\{\begin{array}{l}H\text{ belongs to }\mathcal{C}^{n}(\mathbb{R}^{d} \times\mathbb{R}^{d}).\\ H\text{ and its derivatives are bounded on sets of the form }\mathbb{R}^{d}\times B(0,R)\text{ for all }R>0. \end{array}\right.\] (AH)
\[\left|D_{x}H(x,p)\right|\leq C_{0}(1+|p|).\] (AH)
For some \(\mu>0\) and all \((x,p)\in\mathbb{R}^{d}\times\mathbb{R}^{d}\),
\[\begin{array}{c}\frac{1}{\mu}I_{d}\leq D_{pp}^{2}H(x,p)\leq\mu I_{d}.\end{array}\]
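A simple example satisfying all of the above conditions is the quadratic Hamiltonian used in Section 4:

\[H(x,p)=\frac{1}{2}|p|^{2},\qquad L(x,q)=\sup_{p\in\mathbb{R}^{d}}\Big\{-p.q-\frac{1}{2}|p|^{2}\Big\}=\frac{1}{2}|q|^{2},\]

where the supremum is attained at \(p=-q\); here \(D_{x}H=0\) and \(D_{pp}^{2}H=I_{d}\), so (AH) holds with \(\mu=1\).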
The non-linear drift \(b\) is assumed to satisfy
\[b:\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}^{d}\text{ is bounded, Lipschitz continuous and admits a linear derivative}\\ \frac{\delta b}{\delta m}\in\mathcal{C}_{b}\big{(}\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d}),E_{n}(\mathbb{R}^{d})\big{)}.\] (Ab)
For the constraint, we also assume that
\[\Psi\text{ is convex,}\] (APsiConv)
Finally we also assume that:
\[\text{There is at least one }\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\text{ such that }\Psi(\mu)<0.\] (APsiInside)
For convenience we put all of the above assumptions into
**Assumption 1**.: Assume that (Ureg) holds for \(\mathcal{F},\mathcal{G}\) and \(\Psi\), (AH) holds for \(H\), (Ab) holds for \(b\) and (APsiConv), (APsiInside) hold for \(\Psi\).
A typical example of functions satisfying the condition (Ureg) is the class of cylindrical functions of the form
\[U(m)=F\left(\int_{\mathbb{R}^{d}}f_{1}(x)dm(x),\ldots,\int_{\mathbb{R}^{d}}f_{ k}(x)dm(x)\right),\]
where \(F\) and the \(f_{i}\), \(1\leq i\leq k\) are smooth with bounded derivatives. Assumption (Ureg) also implies that \((m,x)\to D_{m}U(m,x)\) is uniformly bounded in \(\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathbb{R}^{d}\) and therefore, a simple application of Kantorovitch-Rubinstein duality for \(d_{1}\) proves that \(U\) is Lipschitz continuous with respect to this distance. Finally, \(\Psi(m):=\int_{\mathbb{R}^{d}}\psi(x)dm(x)\) satisfies Assumptions (APsiConv) and (APsiInside) as
soon as \(\psi\in E_{n}\) (not necessarily convex) and there is \(x_{0}\in\mathbb{R}^{d}\) such that \(\psi(x_{0})<0\). In this "linear" case we recover the control problem with expectation constraints, see [5, 19, 33, 34, 54].
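As a worked illustration of the linear derivative (a sketch, with \(k=1\); the general cylindrical case is identical), take \(U(m)=F\big(\int_{\mathbb{R}^{d}}f\,dm\big)\) and set \(m_{h}:=(1-h)m+hm^{\prime}\). Then

\[U(m^{\prime})-U(m)=\int_{0}^{1}F^{\prime}\Big(\int_{\mathbb{R}^{d}}f\,dm_{h}\Big)\int_{\mathbb{R}^{d}}f\,d(m^{\prime}-m)\,dh,\]

so that, after centering to enforce the normalization convention,

\[\frac{\delta U}{\delta m}(m,x)=F^{\prime}\Big(\int_{\mathbb{R}^{d}}f\,dm\Big)\Big(f(x)-\int_{\mathbb{R}^{d}}f\,dm\Big),\qquad D_{m}U(m,x)=F^{\prime}\Big(\int_{\mathbb{R}^{d}}f\,dm\Big)Df(x),\]

which is indeed uniformly bounded as soon as \(F^{\prime}\) and \(Df\) are bounded.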
### The problem with almost-sure constraint
**Strong formulation**. Throughout this section we fix some \(t_{0}\in[0,T]\) and \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) such that \(\Psi(\mu_{0})<0\). In its strong formulation, the \(N\)-particle control problem is described as follows. We fix a probability space \((\Omega,\mathcal{F},\mathbb{P})\) endowed with \(N\) independent standard Brownian motions \((B_{t}^{i,N})_{i=1,\ldots,N}\). We also fix some initial positions \(\mathbf{x}_{0}=\big{(}x_{0}^{1,N},\ldots,x_{0}^{N,N}\big{)}\in(\mathbb{R}^{d})^{N}\) such that \(\Psi\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{0}^{i,N}}\right)<0\) for all \(N\geq 1\).
The controller's problem is to minimize over controls \((\alpha_{t}^{i,N})_{i=1,\ldots,N}\in L^{2}([0,T]\times\Omega,(\mathbb{R}^{d}) ^{N})\) adapted to the filtration generated by the Brownian motions
\[J^{N}\big{(}t_{0},\mathbf{x}_{0};(\alpha_{t}^{i,N})_{1\leq i\leq N}\big{)}:= \mathbb{E}\left[\int_{t_{0}}^{T}\left(\frac{1}{N}\sum_{i=1}^{N}L\big{(}X_{t}^{ i,N},\alpha_{t}^{i,N}\big{)}+\mathcal{F}\big{(}\hat{\mu}_{t}^{N}\big{)}\right)dt+ \mathcal{G}\big{(}\hat{\mu}_{T}^{N}\big{)}\right]\]
under the dynamics
\[\left\{\begin{array}{l}X_{t}^{i,N}=x_{0}^{i,N}+\int_{t_{0}}^{t}b\big{(}X_{s }^{i,N},\hat{\mu}_{s}^{N}\big{)}ds+\int_{t_{0}}^{t}\alpha_{s}^{i,N}ds+\sqrt{2 }\big{(}B_{t}^{i,N}-B_{t_{0}}^{i,N}\big{)},\hskip 28.452756ptt\geq t_{0},\\ \hat{\mu}_{t}^{N}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{t}^{i,N}} \end{array}\right.\]
and the constraint
\[\Psi(\hat{\mu}_{t}^{N})<0,\hskip 21.681pt\text{for all }t\in[t_{0},T],\hskip 21.681pt \mathbb{P}-\text{almost-surely}. \tag{5}\]
We will also denote the constraint for the \(N\)-particle problem by
\[\Omega_{N}:=\left\{\mathbf{x}^{N}\in\big{(}\mathbb{R}^{d}\big{)}^{N},\Psi \Big{(}\frac{1}{N}\sum_{i=1}^{N}\delta_{x^{i,N}}\Big{)}<0\right\}\]
and, in this case, (5) reads
\[\big{(}X_{t}^{1,N},\ldots,X_{t}^{N,N}\big{)}\in\Omega_{N},\hskip 21.681pt \text{for all }t\in[t_{0},T],\hskip 21.681pt\mathbb{P}-\text{almost-surely}.\]
We denote by \(\mathcal{V}^{N}(t_{0},\mathbf{x}_{0})\) the value of the above problem. Under Assumption (1) we expect, by dynamic programming or a verification argument --see [41, 45]--, that \(\mathcal{V}^{N}\) satisfies the Hamilton-Jacobi-Bellman equation
\[\left\{\begin{array}{l}-\partial_{t}\mathcal{V}^{N}-\sum_{i=1}^{N}b^{i,N}( \mathbf{x}^{N}).D_{x^{i,N}}\mathcal{V}^{N}\\ \hskip 56.905512pt+\frac{1}{N}\sum_{i=1}^{N}H\big{(}x^{i,N},ND_{x^{i,N}} \mathcal{V}^{N}\big{)}-\sum_{i=1}^{N}\Delta_{x^{i,N}}\mathcal{V}^{N}= \mathcal{F}^{N}(\mathbf{x}^{N}),\hskip 14.226378pt\text{in }(0,T)\times\Omega_{N}\\ \mathcal{V}^{N}(t,\mathbf{x}^{N})=+\infty,\hskip 14.226378pt\text{in }[0,T] \times\partial\Omega_{N}\\ \mathcal{V}^{N}(T,\mathbf{x}^{N})=\mathcal{G}^{N}(\mathbf{x}^{N}) \hskip 14.226378pt\text{in }\Omega_{N},\end{array}\right. \tag{6}\]
where \(\mathcal{F}^{N}:\big{(}\mathbb{R}^{d}\big{)}^{N}\to\mathbb{R}\) and \(\mathcal{G}^{N}:\big{(}\mathbb{R}^{d}\big{)}^{N}\to\mathbb{R}\) are defined by
\[\mathcal{F}^{N}\big{(}\mathbf{x}^{N}\big{)}=\mathcal{F}\Big{(}\frac{1}{N}\sum_{ i=1}^{N}\delta_{x^{i,N}}\Big{)},\hskip 14.226378pt\mathcal{G}^{N}\big{(} \mathbf{x}^{N}\big{)}=\mathcal{G}\Big{(}\frac{1}{N}\sum_{i=1}^{N}\delta_{x^{i,N }}\Big{)},\hskip 21.681pt\mathbf{x}^{N}\in\big{(}\mathbb{R}^{d}\big{)}^{N}.\]
In the special case \(H(x,p)=1/2|p|^{2}\) and under additional assumptions on \(\Psi\) we actually prove that \(\mathcal{V}^{N}\) belongs to \(\mathcal{C}^{1,2}\big{(}[0,T)\times\Omega_{N}\big{)}\) and satisfies (6) -- see the comment after Proposition 4.1 in Section 4.
_Remark 1_.: Notice that it could very well happen that \(\Omega_{N}=\varnothing\) for small values of \(N\). However we neglect this detail since we always assume that there is some \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) such that \(\Psi(\mu_{0})<0\). By an approximation argument we find that \(\Omega_{N}\) is not empty for \(N\) large enough.
**Weak formulation.** Let us introduce some notation. We denote by \(\mathcal{C}^{d}:=\mathcal{C}([t_{0},T],\mathbb{R}^{d})\) the path space. The control space \(\mathcal{V}\) is defined as the set of non-negative measures \(q\) over \([t_{0},T]\times\mathbb{R}^{d}\) with the Lebesgue measure as time marginal and such that
\[\int_{[t_{0},T]\times\mathbb{R}^{d}}|a|^{2}dq(t,a)<+\infty.\]
We denote by \((X,\Lambda)\) the canonical process on \((\mathcal{C}^{d}\times\mathcal{V})\) and by \((X^{i,N},\Lambda^{i,N})_{1\leq i\leq N}\) the canonical process on \((\mathcal{C}^{d}\times\mathcal{V})^{N}\) and define the empirical measures
\[\hat{\nu}^{N}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{(X^{i,N},\Lambda^{i,N})},\hskip 28.452756pt\hat{\mu}_{t}^{N}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{X^{i,N}_{t}}=X_{t}\#\hat{\nu}^{N}.\]
We define \(\mathcal{R}^{N}\) as the set of probabilities \(P_{N}\in\mathcal{P}_{2}((\mathcal{C}^{d}\times\mathcal{V})^{N})\) under which \((X^{i,N}_{0})_{i=1,\ldots,N}=\mathbf{x}^{N}_{0}\), \(P_{N}\)-almost-surely,
\[\varphi\big{(}X^{1,N}_{t},\ldots,X^{N,N}_{t}\big{)}-\sum_{i=1}^{N}\int_{t_{0}} ^{t}\int_{\mathbb{R}^{d}}\mathcal{L}^{N}_{i}\varphi\big{(}\hat{\mu}_{s}^{N},X ^{1,N}_{s},\ldots,X^{N,N}_{s},a\big{)}d\Lambda^{i,N}_{s}(a)ds\]
is a martingale under \(P_{N}\), for all smooth, compactly supported \(\varphi\) with
\[\mathcal{L}^{N}_{i}\varphi(\mu,x_{1},\ldots,x_{N},a):=D_{x_{i}}\varphi(x_{1}, \ldots,x_{N}).a+D_{x_{i}}\varphi(x_{1},\ldots,x_{N}).b(x_{i},\mu)+\Delta_{x_{i }}\varphi(x_{1},\ldots,x_{N}).\]
The control rule \(P_{N}\) is also assumed to satisfy
\[P_{N}\Big{(}\Psi(\hat{\mu}_{t}^{N})<0,\quad\forall t\in[t_{0},T]\Big{)}=1.\]
The \(N\)-particle problem in its weak formulation is therefore to minimize over \(P_{N}\in\mathcal{R}^{N}\) the cost functional
\[\mathbb{E}^{P_{N}}\left[\int_{t_{0}}^{T}\left(\int_{\mathbb{R}^{d}}\frac{1}{N }\sum_{i=1}^{N}L(X^{i,N}_{t},a)d\Lambda^{i,N}_{t}(a)+\mathcal{F}(\hat{\mu}_{t }^{N})\right)dt+\mathcal{G}(\hat{\mu}_{T}^{N})\right]\]
where \(\mathbb{E}^{P_{N}}\) is the expectation under \(P_{N}\). We denote by \(\mathcal{V}^{N}_{w}(t_{0},\mathbf{x}_{0})\) the value of this new problem. The next result asserts that the value of the weak formulation is no greater than the value of the strong one.
**Lemma 1.1**.: _For all \((t_{0},\mathbf{x}^{N}_{0})\in[0,T]\times(\mathbb{R}^{d})^{N}\) such that \(\Psi\Big{(}\frac{1}{N}\sum_{i=1}^{N}\delta_{x^{i,N}_{0}}\Big{)}<0\) it holds_
\[\mathcal{V}^{N}_{w}(t_{0},\mathbf{x}^{N}_{0})\leq\mathcal{V}^{N}(t_{0}, \mathbf{x}^{N}_{0}).\]
Proof.: It suffices to consider an admissible control \(\big{(}\alpha^{i,N}_{t}\big{)}_{t_{0}\leq t\leq T},1\leq i\leq N\) for \(\mathcal{V}^{N}\big{(}t_{0},\mathbf{x}^{N}_{0}\big{)}\) and take
\[P_{N}=\mathcal{L}\big{(}(X^{i,N},\delta_{\alpha^{i,N}_{t}}\otimes dt)_{1\leq i \leq N}\big{)}\]
where \(\mathcal{L}\) denotes the law under \(\mathbb{P}\). This choice of \(P_{N}\) is admissible for the weak formulation and we have
\[\mathbb{E}^{P_{N}}\left[\int_{t_{0}}^{T}\left(\int_{\mathbb{R}^{d}}\frac{1}{N }\sum_{i=1}^{N}L\big{(}X^{i,N}_{t},a\big{)}d\Lambda^{i,N}_{t}(a)+\mathcal{F} \big{(}\hat{\mu}_{t}^{N}\big{)}\right)dt+\mathcal{G}\big{(}\hat{\mu}_{T}^{N} \big{)}\right]=J^{N}\big{(}t_{0},\mathbf{x}_{0};(\alpha^{i,N}_{t})_{1\leq i \leq N}\big{)}.\]
This implies that
\[\mathcal{V}_{w}^{N}(t_{0},\mathbf{x}_{0}) =\inf_{P_{N}\in\mathcal{R}^{N}}\mathbb{E}^{P_{N}}\left[\int_{t_{0}}^ {T}\left(\int_{\mathbb{R}^{d}}\frac{1}{N}\sum_{i=1}^{N}L\big{(}X_{t}^{i,N},a \big{)}d\Lambda_{t}^{i,N}(a)+\mathcal{F}\big{(}\hat{\mu}_{t}^{N}\big{)}\right) dt+\mathcal{G}\big{(}\hat{\mu}_{T}^{N}\big{)}\right]\] \[\leq\inf_{(\alpha_{t}^{i,N})_{1\leq i\leq N}}J^{N}\big{(}t_{0}, \mathbf{x}_{0}^{N};(\alpha_{t}^{i,N})_{1\leq i\leq N}\big{)}\] \[=\mathcal{V}^{N}\big{(}t_{0},\mathbf{x}_{0}^{N}\big{)},\]
and proves the lemma.
### The limit problem
For some \(t_{0}\in[0,T]\) and \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) such that \(\Psi(\mu_{0})<0\), the constrained problem is
\[\inf_{(\mu,\alpha)}J(t_{0},\mu_{0};(\mu,\alpha))\] (P)
with \(J(t_{0},\mu_{0};(\mu,\alpha))\) defined by
\[J(t_{0},\mu_{0};(\mu,\alpha)):=\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}L\big{(}x, \alpha(t,x)\big{)}d\mu(t)(x)dt+\int_{t_{0}}^{T}\mathcal{F}(\mu(t))dt+\mathcal{ G}(\mu(T)) \tag{7}\]
and the infimum is taken over couples \((\mu,\alpha)\in\mathcal{C}([t_{0},T],\mathcal{P}_{2}(\mathbb{R}^{d}))\times L _{dt\otimes\mu(t)}^{2}([t_{0},T]\times\mathbb{R}^{d},\mathbb{R}^{d})\) satisfying in the sense of distributions the Fokker-Planck equation
\[\left\{\begin{array}{l}\partial_{t}\mu+\mathrm{div}(\alpha(t,x)\mu)+\mathrm{ div}(b(x,\mu(t))\mu)-\Delta\mu=0\quad\text{in }(t_{0},T)\times\mathbb{R}^{d}\\ \mu(t_{0})=\mu_{0},\end{array}\right. \tag{8}\]
under the constraint that \(\Psi(\mu(t))\leq 0\) for all \(t\in[t_{0},T]\).
The reader may notice that, under appropriate conditions on \(\alpha\), the Fokker-Planck equation (8) describes the law of the solution to the McKean-Vlasov stochastic differential equation
\[dX_{t}=\alpha(t,X_{t})dt+b\big{(}X_{t},\mathcal{L}(X_{t})\big{)}dt+\sqrt{2}dB_ {t},\quad X_{t_{0}}\sim\mu_{0},\quad t\in[t_{0},T].\]
In this case, the cost (7) can be rewritten as
\[J(t_{0},\mu_{0};(\mu,\alpha))=\mathbb{E}\left[\int_{t_{0}}^{T}L\big{(}X_{t}, \alpha(t,X_{t})\big{)}dt+\int_{t_{0}}^{T}\mathcal{F}\big{(}\mathcal{L}(X_{t}) \big{)}dt+\mathcal{G}\big{(}\mathcal{L}(X_{T})\big{)}\right],\]
and we could have formulated Problem (P) in terms of optimal control of SDEs of McKean-Vlasov type. As proved in Lemma 1.2, the resulting value function would be equal to \(\mathcal{U}\). We feel that the formulation in terms of pdes is more convenient to address problems with constraints in law.
Of course, Problem (P) is equivalent to the following Problem
\[\inf_{(\mu,\beta)}J^{\prime}(t_{0},\mu_{0};(\mu,\beta))\] (P')
with \(J^{\prime}\big{(}t_{0},\mu_{0};(\mu,\beta)\big{)}\) defined by \(J^{\prime}\big{(}t_{0},\mu_{0};(\mu,\beta)\big{)}:=J\big{(}t_{0},\mu_{0};\big{(}\mu,\beta-b(x,\mu(t))\big{)}\big{)}\) and the infimum is taken over couples \((\mu,\beta)\in\mathcal{C}([t_{0},T],\mathcal{P}_{2}(\mathbb{R}^{d}))\times L_{dt\otimes\mu(t)}^{2}([t_{0},T]\times\mathbb{R}^{d},\mathbb{R}^{d})\) satisfying in the sense of distributions the (linear) Fokker-Planck equation
\[\left\{\begin{array}{l}\partial_{t}\mu+\mathrm{div}(\beta(t,x)\mu)-\Delta\mu =0\quad\text{in }(t_{0},T)\times\mathbb{R}^{d}\\ \mu(t_{0})=\mu_{0},\end{array}\right. \tag{9}\]
under the constraint that \(\Psi(\mu(t))\leq 0\) for all \(t\in[t_{0},T]\). In particular, \(\mathcal{U}(t_{0},\mu_{0})\) can be defined interchangeably as the infimum in either problem. Moreover, \((\mu,\alpha)\) is an optimal solution to Problem (P) if and only if \(\big{(}\mu,\alpha+b(x,\mu(t))\big{)}\) is a solution to Problem (P'). The advantage of looking at Problem (P') is that the Fokker-Planck equation reads as a convex constraint in \((\mu,\beta\mu)\).
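To make this convexity explicit, consider the quadratic case \(L(x,q)=\frac{1}{2}|q|^{2}\) (a sketch; the general case follows in the same way from the convexity of \(L\) in \(q\)). For \(a,a^{\prime}\in\mathbb{R}^{d}\), \(s,s^{\prime}>0\) and \(\lambda\in(0,1)\), the Cauchy-Schwarz inequality gives

\[\frac{|\lambda a+(1-\lambda)a^{\prime}|^{2}}{\lambda s+(1-\lambda)s^{\prime}}\leq\lambda\frac{|a|^{2}}{s}+(1-\lambda)\frac{|a^{\prime}|^{2}}{s^{\prime}},\]

that is, \((a,s)\mapsto|a|^{2}/s\) is jointly convex on \(\mathbb{R}^{d}\times(0,+\infty)\). Hence the running cost, written in the variables \((\mu,w)\) with \(w:=\beta\mu\), is convex along linear interpolations; this is precisely the convexity used in the proof of Lemma 2.1 below.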
**Martingale formulation.** We introduce the controlled martingale formulation. We let \((X,\Lambda)\) be the canonical process on \((\mathcal{C}^{d}\times\mathcal{V})\) and we look for probabilities \(m\) over \(\mathcal{C}^{d}\times\mathcal{V}\) such that \(X_{0}\) is distributed according to \(\mu_{0}\) under \(m\),
\[\varphi(X_{t})-\int_{t_{0}}^{t}\int_{\mathbb{R}^{d}}\mathcal{L}\varphi(X_{s} \#m,X_{s},a)d\Lambda_{s}(a)ds\]
is a martingale under \(m\) for all smooth compactly supported \(\varphi:\mathbb{R}^{d}\to\mathbb{R}\), with \(\mathcal{L}\varphi(\mu,x,a)=D\varphi(x).a+D\varphi(x).b(x,\mu)+\Delta\varphi(x)\). The measure \(m\) is also assumed to satisfy the constraint
\[\Psi(X_{t}\#m)\leq 0,\quad\forall t\in[t_{0},T].\]
We denote by \(\mathcal{R}\) the set of such measures and we look for \(m\in\mathcal{R}\) which minimizes the cost function
\[\Gamma(m):=\mathbb{E}^{m}\left[\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}L(X_{t},a )d\Lambda_{t}(a)dt\right]+\int_{t_{0}}^{T}\mathcal{F}(X_{t}\#m)dt+\mathcal{G }(X_{T}\#m).\]
We denote by \(\mathcal{U}_{w}(t_{0},\mu_{0})\) the resulting value function.
**Lemma 1.2**.: _For all \((t_{0},\mu_{0})\in[0,T]\times\mathcal{P}_{2}(\mathbb{R}^{d})\) such that \(\Psi(\mu_{0})<0\) it holds_
\[\mathcal{U}(t_{0},\mu_{0})=\mathcal{U}_{w}(t_{0},\mu_{0}).\]
Proof.: If we denote by \(\mathcal{U}_{s}(t_{0},\mu_{0})\) the value of the mean-field problem when defined in terms of controlled stochastic differential equations of McKean-Vlasov type over a fixed probability space, we have, following Theorem 2.4 in [40], \(\mathcal{U}_{w}(t_{0},\mu_{0})=\mathcal{U}_{s}(t_{0},\mu_{0})\) for all \((t_{0},\mu_{0})\in[0,T]\times\mathcal{P}_{2}(\mathbb{R}^{d})\) such that \(\Psi(\mu_{0})\leq 0\). Notably, the presence of the mean-field constraint does not change the argument leading to the aforementioned result. On the other hand, the optimality conditions of Proposition 2.1 provide a Lipschitz feedback control for \(\mathcal{U}(t_{0},\mu_{0})\) when \(\Psi(\mu_{0})<0\). We can use this control in the strong formulation to infer that \(\mathcal{U}_{s}(t_{0},\mu_{0})=\mathcal{U}(t_{0},\mu_{0})\). This leads to \(\mathcal{U}(t_{0},\mu_{0})=\mathcal{U}_{w}(t_{0},\mu_{0})\).
### Main result
Our main result can be stated as follows.
**Theorem 1.1**.: _Let Assumption (1) hold. Take \((t_{0},\mu_{0})\in[0,T]\times\mathcal{P}_{2}(\mathbb{R}^{d})\) such that \(\Psi(\mu_{0})<0\). Then_
\[\lim_{N\to+\infty}\mathcal{V}^{N}\big{(}t_{0},x_{0}^{1,N},\ldots,x_{0}^{N,N}\big{)}=\mathcal{U}(t_{0},\mu_{0}),\]
_whenever \(\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{0}^{i,N}}\to\mu_{0}\) in \(\mathcal{P}_{2}(\mathbb{R}^{d})\) as \(N\to+\infty\). Moreover if \(P_{N}\) is a sequence of \(\epsilon_{N}\)-optimal solutions to the weak \(N\)-particle problem, for some sequence \(\epsilon_{N}\to 0\), then the sequence \(\hat{\nu}^{N}\#P_{N}\) is relatively compact in \(\mathcal{P}_{p}(\mathcal{P}_{p}(\mathcal{C}^{d}\times\mathcal{V}))\) for every \(p\in(1,2)\). Every limit point is supported on the set of solutions to the mean-field problem in its controlled martingale formulation._
A direct consequence of our theorem is that we have a stronger convergence result if the limit problem has a unique solution. In particular, arguing as in Proposition 5.1 we can infer that
\[\lim_{N\to+\infty}\mathbb{E}^{P_{N}}\left[\sup_{t\in[t_{0},T]}d_{1}\big{(}\hat{ \mu}_{t}^{N},X_{t}\#m\big{)}\right]=0,\]
if \(m\) is the unique solution to the mean field problem in the martingale formulation. Moreover, in this case any solution \((\alpha,\mu)\) of Problem (P) must satisfy \(\mu(t)=X_{t}\#m\), \(\forall t\in[t_{0},T]\) and therefore we have
\[\lim_{N\to+\infty}\mathbb{E}\left[\sup_{t\in[t_{0},T]}d_{1}\big{(}\hat{\mu}_{t }^{N},\mu(t)\big{)}\right]=0.\]
The proof of Theorem 1.1 is given in Subsection 3.2.
## 2. Properties of the mean-field problem
### Optimality conditions and regularity of optimal controls
When there is no constraint, optimal controls for Problem (P) can be characterized by a coupled forward-backward pde system which was first investigated for its connection with mean-field games; see [6, 44] for the seminal work of Lasry and Lions and for a derivation of the optimality conditions. In the presence of state constraints, the optimality conditions involve an additional Lagrange multiplier \(\nu\in\mathcal{M}^{+}([0,T])\) which is only active when the optimal trajectory \((\mu(t))_{t\in[t_{0},T]}\) touches the constraint. The next result is a small extension of the main result of [23].
**Proposition 2.1**.: _Take \((t_{0},\mu_{0})\in[0,T]\times\mathcal{P}_{2}(\mathbb{R}^{d})\) such that \(\Psi(\mu_{0})<0\). Under Assumption (1), Problem (P) admits at least one solution and, for any solution \((\mu,\alpha)\) there exist \(u\in L^{\infty}([t_{0},T],E_{n})\) and \(\nu\in\mathcal{M}^{+}([t_{0},T])\) such that \(\alpha=-\partial_{p}H(x,Du)\) and \((u,\mu,\nu)\) satisfies the system of optimality conditions_
\[\left\{\begin{array}{ll}-\partial_{t}u(t,x)+H(x,Du(t,x))-Du(t,x).b(x,\mu(t) )-\Delta u(t,x)\\ \quad=\nu(t)\frac{\delta\Psi}{\delta m}(\mu(t),x)+\frac{\delta\mathcal{F}}{ \delta m}(\mu(t),x)+\int_{\mathbb{R}^{d}}Du(t,y).\frac{\delta b}{\delta m}(y, \mu(t),x)d\mu(t)(y)&\text{in }(t_{0},T)\times\mathbb{R}^{d},\\ \partial_{t}\mu-\operatorname{div}(\partial_{p}H(x,Du(t,x))\mu)+ \operatorname{div}(b(x,\mu(t))\mu)-\Delta\mu=0&\text{in }(t_{0},T)\times\mathbb{R}^{d},\\ \mu(t_{0})=\mu_{0},\hskip 28.452756ptu(T,x)=\frac{\delta\mathcal{G}}{ \delta m}(\mu(T),x)&\text{in }\mathbb{R}^{d},\end{array}\right. \tag{10}\]
_where the Fokker-Planck equation is understood in the sense of distributions and \(u\) solves the HJB equation in the sense of Definition 2.1 below. The Lagrange multiplier \(\nu\) satisfies the exclusion condition_
\[\Psi(\mu(t))=0,\hskip 56.905512pt\nu\text{-almost-everywhere in }[t_{0},T]. \tag{11}\]
_The optimal control \(\alpha\) belongs to \(BV_{loc}([t_{0},T]\times\mathbb{R}^{d},\mathbb{R}^{d})\bigcap L^{\infty}([t_{ 0},T],\mathcal{C}_{b}^{n-1}(\mathbb{R}^{d},\mathbb{R}^{d}))\). Finally the value of the optimal control problem is given by_
\[\mathcal{U}(t_{0},\mu_{0})=\int_{\mathbb{R}^{d}}u(t_{0},x)d\mu_{0}(x)+\int_{t_ {0}}^{T}\mathcal{F}(\mu(t))dt+\mathcal{G}(\mu(T)).\]
_Remark 2_.: Under additional regularity and qualification conditions on \(\Psi\) --Assumptions (APsiC2) and (APsiTrans) in Section 4--, it is shown in [23] that \(\nu=\nu_{1}+\eta\delta_{T}\) for some \(\nu_{1}\in L^{\infty}([0,T])\) and some \(\eta\geq 0\). As a consequence \(u\) belongs to \(\mathcal{C}([0,T],E_{n})\) and optimal controls are Lipschitz continuous in time and space. See Theorem 2.2 in [23].
In the above proposition we use the following notion of solution for the HJB equation, based on Duhamel's representation formula.
**Definition 2.1**.: Let \(\psi_{1},\varphi_{1}\in\mathcal{C}^{0}([t_{0},T],E_{n})\), \(\psi_{2}\in E_{n+\alpha}\) for some \(n\geq 2\) and \(\nu\in\mathcal{M}^{+}([0,T])\). We say that \(u\in L^{\infty}([t_{0},T],E_{n})\) is a solution to
\[\left\{\begin{array}{ll}-\partial_{t}u+H(x,Du)-\Delta u=\psi_{1}\nu+\varphi_ {1}&\text{in }[t_{0},T]\times\mathbb{R}^{d},\\ u(T,x)=\psi_{2}&\text{in }\mathbb{R}^{d},\end{array}\right. \tag{12}\]
if, for almost all \(t\in[t_{0},T]\),
\[u(t,x) =P_{T-t}\psi_{2}(x)+\int_{t_{0}}^{T}\mathds{1}_{(t,T]}P_{s-t}\psi _{1}(s,.)(x)d\nu(s)+\int_{t}^{T}P_{s-t}\varphi_{1}(s,.)(x)ds\] \[-\int_{t}^{T}P_{s-t}\left[H(.,Du(s,.))\right](x)ds,\hskip 28.452756pt \forall x\in\mathbb{R}^{d}, \tag{13}\]
where \((P_{t})_{t\geq 0}\) is the heat semi-group (associated to \(-\Delta\)).
This formulation is convenient to handle solutions which are not necessarily continuous in time (the optimal control can jump when the optimal trajectory touches the constraint) but are, for each time, regular in the space variable.
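As a consistency check (formal, assuming \(d\nu(s)=\dot{\nu}(s)ds\) and enough smoothness), differentiating (13) in time and using \(\partial_{t}P_{s-t}g=-\Delta P_{s-t}g\) together with \(P_{0}g=g\) yields

\[\partial_{t}u(t,x)=-\Delta u(t,x)-\big(\psi_{1}(t,x)\dot{\nu}(t)+\varphi_{1}(t,x)-H(x,Du(t,x))\big),\]

which is exactly (12). The mild formulation thus integrates the measure \(\nu\) directly, without requiring \(u\) to be continuous in time.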
The proof of Proposition 2.1 is given in [23] when \(b=0\) by a penalization procedure. We give here a more direct proof covering the case \(b\neq 0\), using a min/max argument similar to [22] Section 3.2. See Appendix 5.1.
### Stability with respect to the constraint
For \(\delta\geq 0\) small, we define \(\mathcal{U}^{\delta}(t_{0},\mu_{0})\) to be the value of the same problem associated to the constraint \(\Psi(\mu(t))\leq-\delta\) for all \(t\in[t_{0},T]\). In particular it holds that \(\mathcal{U}^{\delta_{1}}(t_{0},\mu_{0})\geq\mathcal{U}^{\delta_{2}}(t_{0},\mu _{0})\) whenever \(\delta_{1}\geq\delta_{2}\geq 0\). Using the convexity of the constraint, we can prove the following stability result.
**Lemma 2.1**.: _Let Assumption (1) hold and assume as well that \(\Psi(\mu_{0})<0\). Then it holds_
\[\lim_{\delta\to 0}\mathcal{U}^{\delta}(t_{0},\mu_{0})=\mathcal{U}(t_{0},\mu_{0 }).\]
Proof.: Assume on the contrary that
\[\lim_{\delta\to 0}\mathcal{U}^{\delta}(t_{0},\mu_{0})=\inf_{\delta>0} \mathcal{U}^{\delta}(t_{0},\mu_{0})=\mathcal{U}(t_{0},\mu_{0})+\gamma \tag{14}\]
for some \(\gamma>0\). For every \(\delta>0\) we denote by \((\mu^{\delta},\beta^{\delta})\) an optimal solution for \(\mathcal{U}^{\delta}(t_{0},\mu_{0})\) and by \((\tilde{\mu},\tilde{\beta})\) an optimal solution for \(\mathcal{U}(t_{0},\mu_{0})\), where both problems are understood in terms of Problem (P'). For \(\lambda\in(0,1)\) we let \(\mu^{\delta,\lambda}:=(1-\lambda)\tilde{\mu}+\lambda\mu^{\delta}\) and, noticing that \((1-\lambda)\tilde{\beta}\tilde{\mu}+\lambda\beta^{\delta}\mu^{\delta}\) is absolutely continuous with respect to \(\mu^{\delta,\lambda}\), we define a control \(\beta^{\delta,\lambda}\) such that \((1-\lambda)\tilde{\beta}\tilde{\mu}+\lambda\beta^{\delta}\mu^{\delta}=\beta^{\delta,\lambda}\mu^{\delta,\lambda}\), so that \((\mu^{\delta,\lambda})_{t\in[t_{0},T]}\) satisfies
\[\partial_{t}\mu^{\delta,\lambda}+\operatorname{div}(\beta^{\delta,\lambda}\mu^{\delta,\lambda})-\Delta\mu^{\delta,\lambda}=0\quad\text{ in }(t_{0},T)\times\mathbb{R}^{d},\quad\quad\mu^{\delta,\lambda}(t_{0})=\mu_{0}.\]
By convexity of \(\Psi\), we have, for all \(t\in[t_{0},T]\),
\[\Psi(\mu^{\delta,\lambda}(t))\leq\lambda\Psi(\mu^{\delta}(t))\leq-\delta\lambda.\]
As a consequence, \((\beta^{\delta,\lambda},\mu^{\delta,\lambda})\) is admissible for \(\mathcal{U}^{\delta\lambda}(t_{0},\mu_{0})\) and
\[J^{\prime}(t_{0},\mu_{0},(\beta^{\delta,\lambda},\mu^{\delta,\lambda}))\geq\mathcal{U}^{\delta\lambda}(t_{0},\mu_{0}). \tag{15}\]
However, for all \(\lambda\in(0,1)\) it holds, by convexity of \(\mathbb{R}^{d}\times\mathbb{R}^{+}\ni(a,b)\mapsto L(x,\frac{a}{b})b\) (set to be \(+\infty\) if \(b=0\)),
\[J^{\prime}(t_{0},\mu_{0},(\beta^{\delta,\lambda},\mu^{\delta,\lambda})) \leq(1-\lambda)\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}L\big{(}x,\tilde{\beta}(t,x)-b(x,\mu^{\delta,\lambda}(t))\big{)}d\tilde{\mu}(t)(x)dt\] \[+\lambda\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}L\big{(}x,\beta^{\delta}(t,x)-b(x,\mu^{\delta,\lambda}(t))\big{)}d\mu^{\delta}(t)(x)dt\] \[+\int_{t_{0}}^{T}\mathcal{F}(\mu^{\delta,\lambda}(t))dt+\mathcal{G}(\mu^{\delta,\lambda}(T)).\]
This shows in particular that \(\limsup_{\lambda\to 0^{+}}J^{\prime}(t_{0},\mu_{0},(\beta^{\delta,\lambda},\mu^{ \delta,\lambda}))\ \leq\ J^{\prime}(t_{0},\mu_{0},(\tilde{\beta},\tilde{\mu}))\ =\ \mathcal{U}(t_{0},\mu_{0})\). Together with (15) this contradicts (14).
_Remark 3_.: We easily conclude from the above lemma that the value of the limit problem does not change if we replace the closed constraint \(\{\Psi\leq 0\}\) by the open one \(\{\Psi<0\}\).
## 3. Mean field limit
The main result of this section is to prove Theorem 1.1, that is, the convergence of \(\mathcal{V}^{N}(t_{0},\mathbf{x}_{0})\) to \(\mathcal{U}(t_{0},\mu_{0})\) as \(\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{0}^{i,N}}\to\mu_{0}\) when \(N\to+\infty\).
### From mean-field to almost-sure constraint
In this section we prove the upper-bound inequality in Theorem 1.1.
**Proposition 3.1**.: _Let Assumption (1) hold. Assume further that \(\mu_{0}\) satisfies \(\Psi(\mu_{0})<0\). Then it holds that_
\[\limsup_{N\to+\infty}\mathcal{V}^{N}\big{(}t_{0},\mathbf{x}_{0}^{N}\big{)} \leq\mathcal{U}(t_{0},\mu_{0}),\]
_whenever \(\lim_{N\to+\infty}\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{0}^{i,N}}=\mu_{0}\) in \(\mathcal{P}_{2}(\mathbb{R}^{d})\)._
Proof.: To prove Proposition 3.1 we proceed as follows. First we fix \(\delta\in(0,-\Psi(\mu_{0})/2)\) small and we take \(\alpha:[t_{0},T]\times\mathbb{R}^{d}\to\mathbb{R}^{d}\) to be an optimal control for \(\mathcal{U}^{\delta}(t_{0},\mu_{0})\). Using Proposition 2.1, we know that \(\alpha\) is bounded and Lipschitz continuous in space. We let \(\mu\) be the corresponding trajectory, solution to
\[\left\{\begin{array}{l}\partial_{t}\mu+\operatorname{div}(\alpha\mu)+ \operatorname{div}(b\mu)-\Delta\mu=0\quad\text{ in }(t_{0},T)\times\mathbb{R}^{d},\\ \mu(t_{0})=\mu_{0}.\end{array}\right.\]
In particular, \(\Psi(\mu(t))\leq-\delta\) for all \(t\in[t_{0},T]\). For a set of initial positions \(\mathbf{x}_{0}^{N}=\big{(}x_{0}^{1,N},\ldots,x_{0}^{N,N}\big{)}\in\big{(}\mathbb{R}^{d}\big{)}^{N}\) such that \(\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{0}^{i,N}}\to\mu_{0}\) in \(\mathcal{P}_{2}(\mathbb{R}^{d})\), we let \(\big{(}X_{t}^{1,N},\ldots,X_{t}^{N,N}\big{)}_{t_{0}\leq t\leq T}\) be the solution to
\[\left\{\begin{array}{l}X_{t}^{i,N}=x_{0}^{i,N}+\int_{t_{0}}^{t\wedge\tau_{N}}\alpha\big{(}s,X_{s}^{i,N}\big{)}ds+\int_{t\wedge\tau_{N}}^{t}\beta_{s}^{i,N}ds+\int_{t_{0}}^{t}b\big{(}X_{s}^{i,N},\hat{\mu}_{s}^{N}\big{)}ds+\sqrt{2}\int_{t_{0}}^{t}dB_{s}^{i,N},\quad 1\leq i\leq N\\ \hat{\mu}_{s}^{N}=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{s}^{i,N}}\end{array}\right.\]
where
\[\tau_{N}:=\inf\bigl{\{}t\geq t_{0},\Psi\big{(}\hat{\mu}_{t}^{N}\big{)}\geq-\frac{\delta}{2}\bigr{\}},\]
with the convention \(\inf\{\varnothing\}=+\infty\), and \(\beta_{t}^{i,N}\) is the feedback control, defined for all \(t\geq\tau_{N}\wedge T\) by
\[\beta_{t}^{i,N}=\frac{4\big{(}X_{t}^{i,N}-X_{\tau_{N}\wedge T}^{i,N}\big{)}} {\sum_{i=1}^{N}|X_{t}^{i,N}-X_{\tau_{N}\wedge T}^{i,N}|^{2}-r^{2}N}-2\frac{d}{ r^{2}}\big{(}X_{t}^{i,N}-X_{\tau_{N}\wedge T}^{i,N}\big{)}-b\big{(}X_{t}^{i,N}, \hat{\mu}_{t}^{N}\big{)},\]
with \(r=\frac{\delta}{4C_{\Psi}}\) and \(C_{\Psi}\) a Lipschitz constant for \(\Psi\) with respect to \(d_{1}\). We also assume that \(N\) is large enough so that \(\Psi\Big{(}\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{0}^{i,N}}\Big{)}<-\delta\). We will need the following key lemma, which justifies the choice of \(\big{(}\beta^{i,N}\big{)}_{1\leq i\leq N}\).
**Lemma 3.1**.: \(\mathbb{P}\)_-almost-surely, it holds that,_
\[\frac{1}{N}\sum_{i=1}^{N}\left|X_{t}^{i,N}-X_{\tau_{N}\wedge T}^{i,N}\right|^ {2}\leq r^{2},\quad\quad\quad\forall t\geq\tau_{N}\wedge T.\]
_Moreover, the following estimate holds_
\[\mathbb{E}\left[\int_{\tau_{N}\wedge T}^{T}\frac{1}{N}\sum_{i=1}^ {N}\left|\beta_{t}^{i,N}\right|^{2}dt\right] \leq\frac{32d}{r^{2}N}\mathbb{E}\left[e^{T-T\wedge\tau_{N}} \right]+\frac{16d^{2}}{r^{2}}\mathbb{E}\left[T-T\wedge\tau_{N}\right]\] \[+2\|b\|_{\infty}^{2}\mathbb{E}\left[T-T\wedge\tau_{N}\right].\]
We continue with the proof of Proposition 3.1. We have chosen \(r\) and \(\beta^{N}\) in such a way that, \(\mathbb{P}\)-almost-surely, \(\Psi(\hat{\mu}_{t}^{N})\leq\dfrac{-\delta}{4}\) for all \(t\in[t_{0},T]\). Indeed, by definition of \(\tau_{N}\), \(\mathbb{P}\)-almost-surely, \(\Psi(\hat{\mu}_{t}^{N})\leq-\frac{\delta}{2}\) for all \(t\leq\tau_{N}\), and, by definition of \(r\) and Lemma 3.1, \(\mathbb{P}\)-almost-surely, it holds, whenever \(t\geq\tau_{N}\),
\[\big{|}\Psi(\hat{\mu}_{t}^{N})-\Psi(\hat{\mu}_{\tau_{N}}^{N})\big{|} \leq C_{\Psi}d_{1}\big{(}\hat{\mu}_{t}^{N},\hat{\mu}_{\tau_{N}}^{N} \big{)}\] \[\leq C_{\Psi}d_{2}\big{(}\hat{\mu}_{t}^{N},\hat{\mu}_{\tau_{N}}^{N} \big{)}\] \[\leq\frac{C_{\Psi}}{\sqrt{N}}\left(\sum_{i=1}^{N}|X_{t}^{i,N}-X_{ \tau_{N}}^{i,N}|^{2}\right)^{1/2}\] \[\leq C_{\Psi}\times r\leq\frac{\delta}{4}\]
and, as a consequence, since \(\Psi\big{(}\hat{\mu}_{\tau_{N}}^{N}\big{)}=-\delta/2\) on \(\{\tau_{N}<+\infty\}\), it holds that \(\Psi\big{(}\hat{\mu}_{t}^{N}\big{)}\leq-\delta/4\) for all \(t\in[t_{0},T]\). Therefore, we have an admissible control for \(\mathcal{V}^{N}\big{(}t_{0},\mathbf{x}_{0}^{N}\big{)}\). Now, by standard propagation of chaos estimates (see Proposition 5.1 in Appendix 5.2), it holds that
\[\lim_{N\to+\infty}\mathbb{E}\left[\sup_{t\in[t_{0},T\wedge\tau_{N}]}d_{1} \big{(}\mu(t),\hat{\mu}_{t}^{N}\big{)}\right]=0.\]
As a consequence, using that \(\Psi\big{(}\mu(t)\big{)}\leq-\delta\), for all \(t\in[t_{0},T]\) as well as the Lipschitz continuity of \(\Psi\) with respect to \(d_{1}\), we get
\[\mathbb{P}\left[\tau_{N}<T\right] =\mathbb{P}\left[\exists t<T,\Psi\big{(}\hat{\mu}_{t}^{N}\big{)} \geq\frac{-\delta}{2}\right]\] \[\leq\mathbb{P}\left[\exists t<\tau_{N},\Psi\big{(}\hat{\mu}_{t}^{N }\big{)}\geq\frac{-3\delta}{4}\right]\] \[\leq\mathbb{P}\left[\sup_{t\in[t_{0},T\wedge\tau_{N}]}d_{1}\big{(} \hat{\mu}_{t}^{N},\mu(t)\big{)}\geq\frac{\delta}{4C_{\Psi}}\right].\]
Using Markov's inequality we conclude that
\[\mathbb{P}\left[\tau_{N}<T\right]\leq\frac{4C_{\Psi}}{\delta}\mathbb{E}\left[ \sup_{t\in[t_{0},T\wedge\tau_{N}]}d_{1}\big{(}\hat{\mu}_{t}^{N},\mu(t)\big{)} \right].\]
Since \(\mathbb{E}\left[T-T\wedge\tau_{N}\right]\leq T\,\mathbb{P}\left[\tau_{N}<T\right]\) we conclude that \(\lim_{N\to+\infty}\mathbb{E}\left[T-T\wedge\tau_{N}\right]=0.\) Now we can use Lemma 3.1 and get \(\lim_{N\to+\infty}\mathbb{E}\left[\int_{\tau_{N}\wedge T}^{T}\frac{1}{N}\sum_{i=1}^{N}|\beta_{t}^{i,N}|^{2}dt\right]=0\), and we easily deduce
\[\lim_{N\to+\infty}\mathbb{E}\left[\sup_{t\in[t_{0},T]}d_{1}(\mu(t),\hat{\mu}_ {t}^{N})\right]=0.\]
As a consequence, since \(\alpha\) is bounded and Lipschitz continuous in the space variable and \(\mathcal{F}\) and \(\mathcal{G}\) are Lipschitz continuous with respect to \(d_{1}\),
\[\lim_{N\rightarrow+\infty}\mathbb{E}\Big{[}\int_{t_{0}}^{T\wedge\tau_{N}}\frac{1}{N}\sum_{i=1}^{N}L\Big{(}X_{t}^{i,N},\alpha\big{(}t,X_{t}^{i,N}\big{)}\Big{)}dt+\int_{t_{0}}^{T\wedge\tau_{N}}\mathcal{F}\big{(}\hat{\mu}_{t}^{N}\big{)}dt\] \[+\int_{T\wedge\tau_{N}}^{T}\frac{1}{N}\sum_{i=1}^{N}L\big{(}X_{t}^{i,N},\beta_{t}^{i,N}\big{)}dt+\int_{T\wedge\tau_{N}}^{T}\mathcal{F}\big{(}\hat{\mu}_{t}^{N}\big{)}dt+\mathcal{G}\big{(}\hat{\mu}_{T}^{N}\big{)}\Big{]}\] \[=\lim_{N\rightarrow+\infty}\mathbb{E}\Big{[}\int_{t_{0}}^{T\wedge\tau_{N}}\int_{\mathbb{R}^{d}}L\big{(}x,\alpha(t,x)\big{)}d\hat{\mu}_{t}^{N}(x)dt+\int_{t_{0}}^{T\wedge\tau_{N}}\mathcal{F}\big{(}\hat{\mu}_{t}^{N}\big{)}dt\] \[+\int_{T\wedge\tau_{N}}^{T}\frac{1}{N}\sum_{i=1}^{N}L\big{(}X_{t}^{i,N},\beta_{t}^{i,N}\big{)}dt+\int_{T\wedge\tau_{N}}^{T}\mathcal{F}\big{(}\hat{\mu}_{t}^{N}\big{)}dt+\mathcal{G}\big{(}\hat{\mu}_{T}^{N}\big{)}\Big{]}\] \[=\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}L\big{(}x,\alpha(t,x)\big{)}d\mu(t)(x)dt+\int_{t_{0}}^{T}\mathcal{F}\big{(}\mu(t)\big{)}dt+\mathcal{G}\big{(}\mu(T)\big{)}.\]
Finally, since \(\alpha\) is optimal for \(\mathcal{U}^{\delta}(t_{0},\mu_{0})\) we have that
\[\limsup_{N\rightarrow+\infty}\mathcal{V}^{N}\big{(}t_{0},\mathbf{x}_{0}^{N} \big{)}\leq\mathcal{U}^{\delta}(t_{0},\mu_{0}).\]
Yet, we have proved in Lemma 2.1 that \(\lim_{\delta\to 0}\mathcal{U}^{\delta}(t_{0},\mu_{0})=\mathcal{U}(t_{0},\mu_{0})\) and therefore,
\[\limsup_{N\rightarrow+\infty}\mathcal{V}^{N}\big{(}t_{0},\mathbf{x}_{0}^{N} \big{)}\leq\mathcal{U}(t_{0},\mu_{0}),\]
which concludes the proof of the proposition.
It remains to prove Lemma 3.1.
Proof of Lemma 3.1.: For \(\eta\geq 0\) small, we introduce the stopping time
\[\tau^{\eta}:=\inf\Bigl{\{}t\geq\tau_{N}\wedge T,\frac{1}{N}\sum_{i=1}^{N}|X_{t} ^{i,N}-X_{\tau_{N}\wedge T}^{i,N}|^{2}\geq r^{2}-\eta\Bigr{\}},\]
with the convention that \(\inf\left\{\varnothing\right\}=+\infty\).
For \(\eta>0\) and \(T^{\prime}>T\), we write \(\mathbf{B}_{t}=^{t}\Big{(}B_{t}^{1,N},\ldots,B_{t}^{N,N}\Big{)}\) and \(\mathbf{Y}_{t}=^{t}\Big{(}X_{t}^{1,N},\ldots,X_{t}^{N,N}\Big{)}\) and apply Ito's lemma to get
\[-e^{-\tau^{\eta}\wedge T^{\prime}}\log\Bigl{(}r^{2}-\frac{| \mathbf{Y}_{\tau^{\eta}\wedge T^{\prime}}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2} }{N}\Bigr{)}=-e^{-\tau_{N}\wedge T}\log(r^{2})+\int_{\tau_{N}\wedge T}^{\tau^{ \eta}\wedge T^{\prime}}e^{-t}\log\Bigl{(}r^{2}-\frac{|\mathbf{Y}_{t}-\mathbf{Y }_{\tau_{N}\wedge T}|^{2}}{N}\Bigr{)}dt\] \[+\int_{\tau_{N}\wedge T}^{\tau^{\eta}\wedge T^{\prime}}e^{-t} \left[\frac{4(\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T})}{|\mathbf{Y}_{t}- \mathbf{Y}_{\tau_{N}\wedge T}|^{2}-r^{2}N}.\frac{2(\mathbf{Y}_{t}-\mathbf{Y}_{ \tau_{N}\wedge T})}{Nr^{2}-|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2} }-\frac{2d}{r^{2}}\frac{2|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2}}{ Nr^{2}-|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2}}\right]dt\] \[+\int_{\tau_{N}\wedge T}^{\tau^{\eta}\wedge T^{\prime}}e^{-t} \left[\frac{2dN}{Nr^{2}-|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2}}+ \frac{4|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2}}{(Nr^{2}-|\mathbf{Y}_ {t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2})^{2}}\right]dt\] \[+\sqrt{2}\int_{\tau_{N}\wedge T}^{\tau^{\eta}\wedge T^{\prime}} \frac{2(\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T})}{Nr^{2}-|\mathbf{Y}_{t}- \mathbf{Y}_{\tau_{N}\wedge T}|^{2}}.d\mathbf{B}_{t}\] \[\leq-4\int_{\tau_{N}\wedge T}^{\tau^{\eta}\wedge T^{\prime}}e^{-t} \frac{|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2}}{(|\mathbf{Y}_{t}- \mathbf{Y}_{\tau_{N}\wedge T}|^{2}-r^{2}N)^{2}}dt+\int_{\tau_{N}\wedge T}^{\tau^ {\eta}\wedge T^{\prime}}e^{-t}\left[-\frac{4d}{r^{2}}\frac{|\mathbf{Y}_{t}- \mathbf{Y}_{\tau_{N}\wedge T}|^{2}-dN}{Nr^{2}-|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_ {N}\wedge T}|^{2}}\right]dt\] \[+\sqrt{2}\int_{\tau_{N}\wedge T}^{\tau^{\eta}\wedge T^{\prime}} \frac{2(\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T})}{Nr^{2}-|\mathbf{Y}_{t}- \mathbf{Y}_{\tau_{N}\wedge T}|^{2}}.d\mathbf{B}_{t}\]
However, an elementary analysis reveals that
\[-\frac{4d}{r^{2}}\frac{|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2}-dN}{Nr^ {2}-|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2}}\leq\frac{2d}{r^{2}}\]
whenever \(0\leq|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2}<Nr^{2}\). Therefore, we get, multiplying by \(e^{\tau_{N}\wedge T}\) and taking expectations,
\[-\mathbb{E} \left[e^{\tau_{N}\wedge T-\tau^{\eta}\wedge T^{\prime}}\log\! \left(r^{2}-\frac{|\mathbf{Y}_{\tau^{\eta}\wedge T^{\prime}}-\mathbf{Y}_{\tau_ {N}\wedge T}|^{2}}{N}\right)\right]\] \[+4\mathbb{E}\left[\int_{\tau_{N}\wedge T}^{\tau^{\eta}\wedge T^{ \prime}}e^{\tau_{N}\wedge T-t}\frac{|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T }|^{2}}{(|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2}-r^{2}N)^{2}}dt \right]\leq\frac{2d}{r^{2}}.\]
Letting \(T^{\prime}\to+\infty\), using the definition of \(\tau^{\eta}\) and the Lebesgue dominated convergence theorem leads to
\[-\log(\eta)\mathbb{E}\left[e^{\tau_{N}\wedge T-\tau^{\eta}}\mathds{1}_{\{\tau^{\eta}<+\infty\}}\right]+4\mathbb{E}\left[\int_{\tau_{N}\wedge T}^{\tau^{\eta}}e^{\tau_{N}\wedge T-t}\frac{|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2}}{(|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2}-r^{2}N)^{2}}dt\right]\leq\frac{2d}{r^{2}}. \tag{16}\]
Notice that both terms in the left-hand side of (16) are non-negative for \(\eta\leq 1\). Letting \(\eta\to 0\), we get, on the one hand that \(\tau^{0}=+\infty\), \(\mathbb{P}\)-almost surely and, on the other hand, we obtain
\[4\mathbb{E}\left[\int_{\tau_{N}\wedge T}^{+\infty}e^{\tau_{N}\wedge T-t}\frac {|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}\wedge T}|^{2}}{(|\mathbf{Y}_{t}-\mathbf{ Y}_{\tau_{N}\wedge T}|^{2}-r^{2}N)^{2}}dt\right]\leq\frac{2d}{r^{2}}.\]
It follows that,
\[\mathbb{E} \left[\int_{\tau_{N}\wedge T}^{T}\frac{1}{N}\sum_{i=1}^{N}|\beta_{t}^{i,N}|^{2}dt\right]\] \[\leq 2\mathbb{E}\left[\int_{\tau_{N}\wedge T}^{T}\frac{1}{N}\left|\frac{4(\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}})}{|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}}|^{2}-r^{2}N}-2\frac{d}{r^{2}}(\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}})\right|^{2}dt\right]+2\|b\|_{\infty}^{2}\mathbb{E}\left[T-T\wedge\tau_{N}\right]\] \[\leq 4\mathbb{E}\left[\int_{\tau_{N}\wedge T}^{T}\frac{1}{N}\left|\frac{4(\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}})}{|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}}|^{2}-r^{2}N}\right|^{2}dt\right]+4\mathbb{E}\left[\int_{\tau_{N}\wedge T}^{T}\frac{1}{N}\left|2\frac{d}{r^{2}}(\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}})\right|^{2}dt\right]\] \[+2\|b\|_{\infty}^{2}\mathbb{E}\left[T-T\wedge\tau_{N}\right]\] \[\leq\frac{32d}{r^{2}N}\mathbb{E}\left[e^{T-T\wedge\tau_{N}}\right]+\frac{16d^{2}}{r^{4}}\mathbb{E}\left[\int_{\tau_{N}\wedge T}^{T}\frac{1}{N}|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}}|^{2}dt\right]+2\|b\|_{\infty}^{2}\mathbb{E}\left[T-T\wedge\tau_{N}\right]\] \[\leq\frac{32d}{r^{2}N}\mathbb{E}\left[e^{T-T\wedge\tau_{N}}\right]+\frac{16d^{2}}{r^{2}}\mathbb{E}\left[T-T\wedge\tau_{N}\right]+2\|b\|_{\infty}^{2}\mathbb{E}\left[T-T\wedge\tau_{N}\right],\]
where we used, for the last inequality, the fact that, \(\mathbb{P}\)-almost-surely, for all \(t\geq\tau_{N}\),
\[|\mathbf{Y}_{t}-\mathbf{Y}_{\tau_{N}}|^{2}\leq Nr^{2}.\]
This concludes the proof of the lemma.
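The confinement mechanism of Lemma 3.1 can also be observed numerically. The following minimal sketch (illustration only, not part of the proofs: it assumes \(b\equiv 0\), takes the anchor points \(X_{\tau_{N}\wedge T}^{i,N}\) equal to the initial positions, and uses illustrative values of \(N\), \(d\), \(r\) and the time step) simulates an Euler-Maruyama discretization of the dynamics driven by \(\beta_{t}^{i,N}\) and tracks the ratio \(\frac{1}{N}\sum_{i=1}^{N}|X_{t}^{i,N}-Y^{i}|^{2}/r^{2}\), which indeed stays below \(1\).

```python
# Numerical sketch (not from the paper): the feedback control beta of Lemma 3.1
# keeps the empirical second moment (1/N) sum_i |X^i - Y^i|^2 below r^2.
# Assumptions: b = 0; N, d, r, dt, T are illustrative values.
import numpy as np

rng = np.random.default_rng(0)
N, d = 200, 2
r, dt, T = 0.5, 1e-4, 1.0

X = rng.normal(size=(N, d))   # positions at time tau_N (illustrative)
Y = X.copy()                  # anchor points X_{tau_N}

max_ratio = 0.0
for _ in range(int(T / dt)):
    Z = X - Y
    S = np.sum(Z ** 2)        # = sum_i |X^i - Y^i|^2, kept < N r^2
    # beta^{i,N} from the proof of Proposition 3.1, with b = 0:
    beta = 4.0 * Z / (S - r ** 2 * N) - (2.0 * d / r ** 2) * Z
    X = X + beta * dt + np.sqrt(2.0 * dt) * rng.normal(size=(N, d))
    max_ratio = max(max_ratio, np.sum((X - Y) ** 2) / (N * r ** 2))

print(f"max of (1/N) sum |X - Y|^2 / r^2 over the run: {max_ratio:.4f}")
```

The singular first term of \(\beta\) blows up as the empirical second moment approaches \(r^{2}\), which is what prevents the constraint from being violated after \(\tau_{N}\).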
### From almost-sure to mean-field constraint
To prove the second inequality we rely on compactness methods developed, in the context of large deviations, by Budhiraja, Dupuis and Fischer [8] and, in the context of mean-field control, by Lacker [40] and Djete, Possamai and Tan [27].
Recall that we introduced the weak formulation of the \(N\)-particle problem in Subsection 1.2 and the controlled-martingale formulation of the mean-field problem in Subsection 1.3.
Theorem 1.1 is equivalent to the following proposition.
**Proposition 3.2**.: _Let us fix \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) such that \(\Psi(\mu_{0})<0\) as well as some \(\mathbf{x}_{0}^{N}=(x_{0}^{1,N},\ldots,x_{0}^{N,N})\in\Omega_{N}\) such that \(\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{0}^{i,N}}\to\mu_{0}\) in \(\mathcal{P}_{2}(\mathbb{R}^{d})\). Take \(P_{N}\) a sequence of \(\epsilon_{N}\)-optimal solutions to the weak \(N\)-particle problem, for some sequence \(\epsilon_{N}\to 0\). Then the sequence \(\hat{\nu}^{N}\#P_{N}\) is relatively compact in \(\mathcal{P}_{p}(\mathcal{P}_{p}(\mathcal{C}^{d}\times\mathcal{V}))\) for every \(p\in(1,2)\). Every limit point is supported on the set of solutions to the relaxed mean-field problem and it holds that_
\[\mathcal{U}(t_{0},\mu_{0})=\lim_{N\to+\infty}\mathcal{V}^{N}(t_{0},\mathbf{x}_ {0}^{N}).\]
Proof.: We will closely follow the steps of [40] and therefore we only highlight the differences due to the constraint. In light of [39] Corollary B.2, to prove the pre-compactness of \(\hat{\nu}^{N}\#P_{N}\), it suffices to prove that the mean measures \(\frac{1}{N}\sum_{i=1}^{N}(X^{i,N},\Lambda^{i,N})\#P_{N}\) are tight and to prove that
\[\sup_{N}\mathbb{E}^{P_{N}}\frac{1}{N}\sum_{i=1}^{N}\left[\sup_{t\in[t_{0},T]}| X_{t}^{i,N}|^{2}+\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}|a|^{2}d\Lambda_{t}^{i,N}(a) dt\right]<+\infty. \tag{17}\]
The tightness of the mean measures actually follows from (17) thanks to the compactness result of Proposition 3.5 in [40]. By standard estimates, it is enough to prove
\[\sup_{N}\mathbb{E}^{P_{N}}\left[\frac{1}{N}\sum_{i=1}^{N}|x_{0}^{i,N}|^{2} \right]<+\infty\]
as well as
\[\sup_{N}\mathbb{E}^{P_{N}}\left[\int_{t_{0}}^{T}\frac{1}{N}\sum_{i=1}^{N}\int _{\mathbb{R}^{d}}|a|^{2}d\Lambda_{t}^{i,N}(a)dt\right]<+\infty\]
in order to get (17). The former follows from the convergence in \(\mathcal{P}_{2}(\mathbb{R}^{d})\) of \(\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{0}^{i,N}}\) toward \(\mu_{0}\). The latter follows from the coercivity of \(L\), the boundedness of \(\mathcal{F}\), \(\mathcal{G}\) and the fact that we took the \(P_{N}\) as \(\epsilon_{N}\)-optimal solutions for the \(N\)-particle problem, whose values are bounded independently of \(N\) (as can be deduced from Proposition 3.1). Now we take a limit point \(P\in\mathcal{P}_{p}(\mathcal{P}_{p}(\mathcal{C}^{d}\times\mathcal{V}))\) and prove that \(P\) is supported on the set of solutions to the relaxed mean-field problem. First we have that \(\hat{\mu}_{0}^{N}\#P_{N}\to\delta_{\mu_{0}}\) in \(\mathcal{P}_{p}(\mathcal{P}_{p}(\mathbb{R}^{d}))\). Following [40] Proposition 5.2 we have that \(P\) is supported on the set of measures solution to the martingale problem. It remains to show that the constraint is satisfied \(P\)-almost surely at the limit. By continuity of \(\Psi\), for all \(t\in[t_{0},T]\) it holds that
\[P\left(\left\{m\in\mathcal{P}_{p}(\mathcal{C}^{d}\times\mathcal{V}),\quad\Psi(X_{t}\#m)\leq 0\right\}\right)\geq\limsup_{N\to+\infty}P_{N}(\Psi(\hat{\mu}_{t}^{N})\leq 0)=1.\]

Since \(P\)-almost surely \(m\) satisfies the martingale problem, we have that \(P\)-almost surely \(t\to X_{t}\#m\) is continuous and therefore we have that

\[P\left(\left\{m\in\mathcal{P}_{p}(\mathcal{C}^{d}\times\mathcal{V}),\quad\Psi(X_{t}\#m)\leq 0\quad\forall t\in[t_{0},T]\right\}\right)=1.\]
This implies that \(P\) is supported on \(\mathcal{R}\), the set of admissible candidates for the mean-field problem, as defined in Subsection 1.3. The fact that \(P\) is supported on the set of optimal solutions of the mean-field problem follows from the lower semi-continuity of the cost functional, as proved in [40] Lemma 4.1, and from Proposition 3.1. Indeed they imply, together with Lemma 1.1, that
\[\int_{\mathcal{P}_{p}(\mathcal{C}^{d}\times\mathcal{V})}\Gamma(\nu) dP(\nu) \leq\liminf_{N\to+\infty}\mathbb{E}^{P_{N}}\left[\Gamma(\hat{\nu}^{N})\right]\] \[\leq\liminf_{N\to+\infty}\mathcal{V}^{N}(t_{0},\mathbf{x}_{0}^{N})\] \[\leq\limsup_{N\to+\infty}\mathcal{V}^{N}(t_{0},\mathbf{x}_{0}^{N })\leq\mathcal{U}(t_{0},\mu_{0}). \tag{18}\]
However, by Lemma 1.2 we have that \(\mathcal{U}(t_{0},\mu_{0})=\inf_{m\in\mathcal{R}}\Gamma(m)\). Therefore, recalling that \(P\) is supported on \(\mathcal{R}\), we can deduce from (18) that
\[\int_{\mathcal{R}}\Gamma(\nu)dP(\nu)\leq\inf_{\nu\in\mathcal{R}}\Gamma(\nu),\]
which, in turn, implies that \(P\) is supported on the set of optimal solutions for the limit problem and \(\int_{\mathcal{P}_{p}(\mathcal{C}^{d}\times\mathcal{V})}\Gamma(\nu)dP(\nu)= \mathcal{U}(t_{0},\mu_{0})\). Finally, getting back to (18), we deduce that
\[\lim_{N\to+\infty}\mathcal{V}^{N}\big{(}t_{0},\mathbf{x}_{0}^{N}\big{)}=\mathcal{U}(t_{0},\mu_{0}),\]
which concludes the proof of the proposition.
## 4. Application to Large Deviations
We are interested in the probability distribution of the first exit time from a given region of \(\mathcal{P}_{2}(\mathbb{R}^{d})\) for the empirical measure of a system of interacting particles. We assume that this region is described by \(\Psi:\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\) as follows
\[\Omega_{\infty}:=\left\{\mu\in\mathcal{P}_{2}(\mathbb{R}^{d}),\Psi(\mu)<0 \right\},\]
and, for \((t,\mathbf{x}^{N}=(x^{1,N},\ldots,x^{N,N}))\in[0,T]\times(\mathbb{R}^{d})^{N}\) we introduce the probability
\[v^{N}(t,\mathbf{x}^{N}):=\mathbb{P}\left[\forall s\in[0,t],\hat{\mu}_{s}^{N} \in\Omega_{\infty}\right], \tag{19}\]
where \((X_{t}^{1,N},\ldots,X_{t}^{N,N})\) is the solution to the system of SDEs
\[\left\{\begin{array}{l} X_{t}^{i,N}=x^{i,N}+\int_{0}^{t}b(X_{s}^{i,N}, \hat{\mu}_{s}^{N})ds+\sqrt{2}B_{t}^{i,N}\quad t\in[0,T],\quad i\in\left\{1, \ldots,N\right\},\\ \hat{\mu}_{s}^{N}=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{s}^{i,N}}, \end{array}\right.\]
with \((B_{t}^{1,N},\ldots,B_{t}^{N,N})\), \(N\) independent \(d\)-dimensional standard Brownian motions supported on some probability space \((\Omega,\mathcal{F},\mathbb{P})\).
The goal is to understand the asymptotic behavior of \(v^{N}\) when \(N\to+\infty\).
Throughout this section we take \(L(x,q)=\frac{1}{2}|q|^{2}\) for all \((x,q)\in\mathbb{R}^{d}\times\mathbb{R}^{d}\) as well as \(\mathcal{F}=\mathcal{G}=0\). We also make the following additional assumptions on the constraint.
**Assumption 2**.: The constraint \(\left\{\Psi\leq 0\right\}\) is bounded in \(\mathcal{P}_{1}(\mathbb{R}^{d})\). (APsibd)
As a consequence, the constraints \(\Omega_{N}:=\left\{(x_{1},\ldots,x_{N})\in\mathbb{R}^{dN},\Psi(\frac{1}{N}\sum _{i=1}^{N}\delta_{x_{i}})<0\right\}\) are bounded for all \(N\geq 1\). We also assume that
**Assumption 3**.: \[\left\{\begin{array}{rl}&\mbox{For all $x\in\mathbb{R}^{d}$, $m\mapsto\frac{\delta\Psi}{\delta m}(m,x)$ is $\mathcal{C}^{1}$ with $(x,y)\mapsto\frac{\delta^{2}\Psi}{\delta m^{2}}(m,x,y)$}\\ &\mbox{in $\mathcal{C}^{2}(\mathbb{R}^{d}\times\mathbb{R}^{d})$ for all $m\in\mathcal{P}_{2}(\mathbb{R}^{d})$ and $\frac{\delta^{2}\Psi}{\delta m^{2}}(m,x,y)$ and its derivatives being}\\ &\mbox{jointly continuous and bounded in $\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathbb{R}^{d} \times\mathbb{R}^{d}$.}\end{array}\right.\] (APsiC2)
and the transversality condition
**Assumption 4**.: \[\int_{\mathbb{R}^{d}}|D_{m}\Psi(m,x)|^{2}dm(x)\neq 0,\,\mbox{whenever $\Psi(m)=0$}.\] (APsiTrans)
The condition (APsiTrans) on the Wasserstein gradient of \(\Psi\) at the boundary ensures that the closure \(\overline{\Omega_{N}}\) of \(\Omega_{N}\) in \((\mathbb{R}^{d})^{N}\) is
\[\overline{\Omega_{N}}=\left\{(x^{1},\ldots,x^{N})\in(\mathbb{R}^{d})^{N}, \Psi(\frac{1}{N}\sum_{i=1}^{N}\delta_{x^{i}})\leq 0\right\}.\]
Similarly we have
\[\overline{\Omega_{\infty}}=\left\{\mu\in\mathcal{P}_{2}(\mathbb{R}^{d}),\Psi (\mu)\leq 0\right\}.\]
We will need further regularity for the constraint.
**Assumption 5**.: \[\frac{\delta^{2}\Psi}{\delta m^{2}}\mbox{ has a linear derivative, with bounded and jointly continuous first order}\] \[\mbox{ derivatives in the euclidean variables.}\] (APsiC3)
Under Assumption (1) as well as these additional assumptions, for all \(N\geq 1\), the constraint \(\Omega_{N}\) is open, bounded and \(\partial\Omega_{N}\) is a manifold of class \(\mathcal{C}^{3}\). These assumptions are satisfied for instance when \(\Psi(m)=\int_{\mathbb{R}^{d}}\left(\sqrt{|x-x_{0}|^{2}+\delta^{2}}-\delta \right)dm(x)-\kappa\) with \(x_{0}\in\mathbb{R}^{d}\), \(\delta>0\) and \(\kappa>0\).
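For this choice of \(\Psi\), the probability \(v^{N}\) from (19) can be approximated by a crude Monte Carlo simulation; the sketch below is an illustration only (it takes \(b\equiv 0\), \(d=1\), all particles starting at \(x_{0}\), and illustrative values of \(N\), \(t\), \(\delta\) and \(\kappa\)), and \(\frac{2}{N}\log v^{N}\) is then the quantity whose limit is identified in this section.

```python
# Monte Carlo sketch (illustration only) of v^N(t, x) from (19), using the
# example constraint Psi(m) = int (sqrt(|x - x0|^2 + delta^2) - delta) dm(x) - kappa.
# Assumptions: b = 0; parameter values are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(1)
N, d, t, dt = 50, 1, 0.5, 1e-3
x0 = np.zeros(d)
delta, kappa = 0.1, 1.0

def psi(X):
    """Psi evaluated at the empirical measure of the rows of X."""
    dist = np.sqrt(np.sum((X - x0) ** 2, axis=1) + delta ** 2) - delta
    return np.mean(dist) - kappa

n_runs, survived = 2000, 0
for _ in range(n_runs):
    X = np.tile(x0, (N, 1))   # all particles start at x0, so Psi(mu_0) < 0
    alive = True
    for _ in range(int(t / dt)):
        X = X + np.sqrt(2.0 * dt) * rng.normal(size=(N, d))
        if psi(X) >= 0.0:     # the empirical measure left {Psi < 0}
            alive = False
            break
    survived += alive

print(f"estimated v^N({t}, x0) ~ {survived / n_runs:.3f}")
```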
Thanks to the additional assumptions (APsibd), (APsiC2), (APsiTrans) and (APsiC3) we are precisely in the framework of [31] Section VI.6 (see also [52]) and we can conclude that \(v^{N}\), defined in (19), is \(\mathcal{C}^{1,2}\) in \((0,T]\times\Omega_{N}\) and satisfies
\[\left\{\begin{array}{ll}\partial_{t}v^{N}-\sum_{i=1}^{N}b^{i,N}( \mathbf{x}^{N}).D_{x^{i,N}}v^{N}-\Delta v^{N}=0,&\mbox{ in $(0,T)\times\Omega_{N}$}\\ v^{N}(t,\mathbf{x}^{N})=0,&\mbox{ in $[0,T]\times\partial\Omega_{N}$}\\ v^{N}(0,\mathbf{x}^{N})=1&\mbox{ in $\Omega_{N}$},\end{array}\right.\]
where \(b^{i,N}(\mathbf{x}^{N})=b(x^{i,N},\frac{1}{N}\sum_{j=1}^{N}\delta_{x^{j,N}})\). Moreover, \(v^{N}(t,\mathbf{x}^{N})>0\) for all \((t,\mathbf{x}^{N})\in(0,T]\times\Omega_{N}\). The connection with the control problems of the previous sections is given by the following proposition.
**Proposition 4.1**.: _For all \(\mathbf{x}^{N}_{0}\in\Omega_{N}\) and \(t_{0}\in[0,T]\) it holds_
\[\mathcal{V}^{N}(t_{0},\mathbf{x}^{N}_{0})=-\frac{2}{N}\log v^{N}(T-t_{0}, \mathbf{x}^{N}_{0}),\]
_where \(\mathcal{V}^{N}\) is the value function defined in Subsection 1.2 with \(\mathcal{F}=\mathcal{G}=0\) and \(L(x,q)=1/2|q|^{2}\)._
Proof.: For all \((t_{0},\mathbf{x}^{N}_{0})\in[0,T)\times\Omega_{N}\) we define \(w^{N}(t_{0},\mathbf{x}^{N}_{0}):=-\frac{2}{N}\log v^{N}(T-t_{0},\mathbf{x}^{N}_ {0})\). We are going to proceed by verification to show that \(w^{N}(t_{0},\mathbf{x}^{N}_{0})=\mathcal{V}^{N}(t_{0},\mathbf{x}^{N}_{0})\) in \([0,T)\times\Omega_{N}\). For
\((x_{0}^{1,N},\ldots,x_{0}^{N,N})\in\Omega_{N}\) and \(t_{0}\in[0,T)\), we define the following particle system where \(\mathbf{X}_{t}^{N}=(X_{t}^{1,N},\ldots,X_{t}^{N,N})\)
\[X_{t}^{i,N} :=x_{0}^{i,N}-\int_{t_{0}}^{t\wedge\tau}ND_{x^{i}}w^{N}(s,\mathbf{X}_{s}^{N})ds+\int_{t_{0}}^{t}b(X_{s}^{i,N},\hat{\mu}_{\mathbf{X}_{s}^{N}}^{N})ds+\sqrt{2}\int_{t_{0}}^{t\wedge\tau}dB_{s}^{i,N}\] \[=x_{0}^{i,N}+\int_{t_{0}}^{t\wedge\tau}2\frac{D_{x^{i}}v^{N}(T-s,\mathbf{X}_{s}^{N})}{v^{N}(T-s,\mathbf{X}_{s}^{N})}ds+\int_{t_{0}}^{t}b(X_{s}^{i,N},\hat{\mu}_{\mathbf{X}_{s}^{N}}^{N})ds+\sqrt{2}\int_{t_{0}}^{t\wedge\tau}dB_{s}^{i,N},\]
where \(\mathbf{B}_{t}^{N}:=^{t}(B_{t}^{1,N},\ldots,B_{t}^{N,N})\) and \(\tau\) is the first exit time from \(\Omega_{N}\):
\[\tau:=\inf\{t\geq t_{0},\mathbf{X}_{t}^{N}\notin\Omega_{N}\}.\]
For \(\eta\geq 0\) small, we introduce the stopping time
\[\tau^{\eta}:=\inf\{t\geq t_{0},v^{N}(T-t,\mathbf{X}_{t}^{N})\leq\eta\}.\]
Notice that, by definition of \(v^{N}\), it holds that \(\tau^{0}=\tau\). Applying Ito's formula to \(\log v^{N}(T-t,\mathbf{X}_{t}^{N})\) yields, for \(\eta>0\),
\[\log v^{N}(T-\tau^{\eta} \wedge T,\mathbf{X}_{\tau^{\eta}\wedge T}^{N})=\log v^{N}(T, \mathbf{x}_{0}^{N})\] \[+\int_{t_{0}}^{\tau^{\eta}\wedge T}\left[\frac{-\partial_{t}v^{N }}{v^{N}}+2\left|\frac{Dv^{N}}{v^{N}}\right|^{2}+\sum_{i=1}^{N}\frac{D_{x^{i}} v^{N}}{v^{N}}.b^{i,N}+\frac{\Delta v^{N}}{v^{N}}-\left|\frac{Dv^{N}}{v^{N}}\right|^{2} \right](T-t,\mathbf{X}_{t}^{N})dt\] \[+\int_{t_{0}}^{\tau^{\eta}\wedge T}\sqrt{2}\frac{Dv^{N}(T-t, \mathbf{X}_{t}^{N})}{v^{N}(T-t,\mathbf{X}_{t}^{N})}.d\mathbf{B}_{t}^{N}\] \[=\log v^{N}(T,\mathbf{x}_{0}^{N})+\int_{t_{0}}^{\tau^{\eta}\wedge T }\left|\frac{Dv^{N}(T-t,\mathbf{X}_{t}^{N})}{v^{N}(T-t,\mathbf{X}_{t}^{N})} \right|^{2}dt\] \[+\int_{t_{0}}^{\tau^{\eta}\wedge T}\sqrt{2}\frac{Dv^{N}(T-t, \mathbf{X}_{t}^{N})}{v^{N}(T-t,\mathbf{X}_{t}^{N})}.d\mathbf{B}_{t}^{N}.\]
Taking expectations and recalling the definition of \(\tau^{\eta}\) we get
\[\log(\eta)\mathbb{P}(\tau^{\eta}\leq T)+\mathbb{P}(\tau^{\eta}>T)\geq\log v^{N }(T,\mathbf{x}^{N}).\]
As a consequence,
\[\lim_{\eta\to 0}\mathbb{P}(\tau^{\eta}\leq T)=0\]
and the control \(-NDw^{N}(t,\mathbf{X}_{t}^{N})\) is admissible. Let us show that it is optimal. Recalling the equation satisfied by \(v^{N}\), it holds that
\[\left\{\begin{array}{ll}-\partial_{t}w^{N}-\sum_{i=1}^{N}b^{i,N}(\mathbf{x}^ {N})D_{x^{i,N}}w^{N}+\frac{N}{2}|Dw^{N}|^{2}-\Delta w^{N}=0,&\text{ in }(0,T)\times\Omega_{N}\\ w^{N}(t,\mathbf{x}^{N})=+\infty,&\text{ in }[0,T]\times\partial\Omega_{N}\\ w^{N}(T,\mathbf{x}^{N})=0&\text{ in }\Omega_{N}.\end{array}\right.\]
Let us take another admissible control \(\boldsymbol{\alpha}^{N}=(\alpha^{1,N},\ldots,\alpha^{N,N})\) with the associated solution \(\mathbf{Y}^{N}=(Y_{t}^{1,N},\ldots,Y_{t}^{N,N})\) to the SDE:
\[Y_{t}^{i,N}:=x_{0}^{i,N}+\int_{t_{0}}^{t}\alpha_{s}^{i,N}ds+\int_{t_{0}}^{t}b^ {i,N}(\mathbf{Y}_{s}^{N})ds+\sqrt{2}\int_{t_{0}}^{t}dB_{s}^{i,N}.\]
Since \(\boldsymbol{\alpha}^{N}\) is admissible, \(\mathbf{Y}_{t}^{N}\) belongs to \(\Omega_{N}\) for all \(t\in[t_{0},T]\) almost surely. We can apply Ito's lemma to \(w^{N}\) and get
\[\begin{split} 0=\mathbb{E}\left[w^{N}(T,\mathbf{Y}_{T}^{N})\right]& =w^{N}(t_{0},\mathbf{x}_{0}^{N})\\ &+\mathbb{E}\left[\int_{t_{0}}^{T}\left(\partial_{t}w^{N}(t,\mathbf{Y}_ {t}^{N})+\sum_{i=1}^{N}\left(\alpha_{t}^{i,N}+b^{i,N}(\mathbf{Y}_{t}^{N}) \right).D_{x^{i}}w^{N}(t,\mathbf{Y}_{t}^{N})+\Delta w^{N}(t,\mathbf{Y}_{t}^{N}) \right)dt\right]\\ &=w^{N}(t_{0},\mathbf{x}_{0}^{N})+\mathbb{E}\left[\int_{t_{0}}^{T }\left(\boldsymbol{\alpha}_{t}^{N}.Dw^{N}(t,\mathbf{Y}_{t}^{N})+\frac{N}{2}|Dw ^{N}(t,\mathbf{Y}_{t}^{N})|^{2}\right)dt\right]\\ &\geq w^{N}(t_{0},\mathbf{x}_{0}^{N})-\mathbb{E}\left[\int_{t_{0}}^{T }\frac{1}{2N}|\boldsymbol{\alpha}_{t}^{N}|^{2}dt\right]\end{split}\]
with equality if and only if \(\boldsymbol{\alpha}_{t}^{N}=-NDw^{N}(t,\mathbf{Y}_{t}^{N})\). This means that the control \(-NDw^{N}(t,\mathbf{Y}_{t}^{N})\) is optimal and that the optimal value is given by \(w^{N}(t_{0},\mathbf{x}_{0}^{N})\), which concludes the proof of the proposition.
Notice that a by-product of Proposition 4.1 is to characterize \(\mathcal{V}^{N}\) as the unique solution in \(\mathcal{C}^{1,2}([0,T)\times\Omega_{N})\) to the HJB equation
\[\left\{\begin{array}{ll}-\partial_{t}\mathcal{V}^{N}-\sum_{i=1}^{N}b^{i,N}( \mathbf{x}^{N}).D_{x^{i,N}}\mathcal{V}^{N}+\frac{N}{2}\sum_{i=1}^{N}|D_{x^{i,N }}\mathcal{V}^{N}|^{2}-\sum_{i=1}^{N}\Delta_{x^{i,N}}\mathcal{V}^{N}=0,&\text {in }(0,T)\times\Omega_{N}\\ \mathcal{V}^{N}(t,\mathbf{x}^{N})=+\infty,&\text{in }[0,T]\times\partial \Omega_{N}\\ \mathcal{V}^{N}(T,\mathbf{x}^{N})=0&\text{in }\Omega_{N}.\end{array}\right.\]
The same argument extends without difficulty when additional mean-field costs \(\mathcal{F}\) and \(\mathcal{G}\) satisfying Assumption (Ureg) are considered.
Combining Proposition 4.1 and Theorem 1.1 we obtain the following convergence.
**Corollary 4.1**.: _Let Assumption (1) as well as Assumptions (APisbd), (APsiC2), (APsiTrans) and (APsiC3) hold. Assume that \(\Psi(\mu_{0})<0\) and write \(\mathbf{x}_{0}^{N}=(x_{0}^{1,N},\ldots,x_{0}^{N,N})\). Then it holds_
\[\lim_{N\to+\infty}\frac{2}{N}\log v^{N}(T-t_{0},\mathbf{x}_{0}^{N})=-\mathcal{ U}(t_{0},\mu_{0}),\]
_whenever \(\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{0}^{i,N}}\to\mu_{0}\) in \(\mathcal{P}_{2}(\mathbb{R}^{d})\)._
This is a special case of the general result of Dawson and Gärtner [25]. Contrary to the general result of [25], we do not need to express the Large Deviation principle with \(\limsup\) and \(\liminf\). This is due to the fact that the constraint is "regular" with respect to the rate function, as can be seen from the stability result of Section 2.2. The optimality conditions of Theorem 2.1 give a new way to compute the limit \(-\mathcal{U}(t_{0},\mu_{0})\).
**Corollary 4.2**.: _Under the assumptions of Corollary (4.1) we have_
\[\lim_{N\to+\infty}\frac{2}{N}\log v^{N}\big{(}T-t_{0},\mathbf{x}_{0}^{N}\big{)} =-\int_{\mathbb{R}^{d}}u(t_{0},x)d\mu_{0}(x),\]
_for a solution \((u,\mu,\nu,\eta)\) of the optimality conditions_
\[\left\{\begin{array}{ll}-\partial_{t}u(t,x)+\frac{1}{2}|Du(t,x)|^{2}-Du(t,x).b\big{(}x,\mu(t)\big{)}-\Delta u(t,x)&\\ \quad=\nu(t)\frac{\delta\Psi}{\delta m}\big{(}\mu(t),x\big{)}+\int_{\mathbb{R }^{d}}Du(t,y).\frac{\delta b}{\delta m}\big{(}y,\mu(t),x\big{)}d\mu(t)(y)&\text {in }(t_{0},T)\times\mathbb{R}^{d},\\ \partial_{t}\mu-\operatorname{div}\bigl{(}Du(t,x)\mu\big{)}+\operatorname{ div}\bigl{(}b(x,\mu(t))\mu\bigr{)}-\Delta\mu=0&\text{in }(t_{0},T)\times\mathbb{R}^{d},\\ \mu(t_{0})=\mu_{0},\hskip 28.452756ptu(T,x)=\eta\frac{\delta\Psi}{\delta m}\big{(}\mu(T),x\big{)}&\text{in }\mathbb{R}^{d}.\end{array}\right. \tag{20}\]
_In the above optimality conditions \(\mu\) belongs to \(\mathcal{C}^{0}([t_{0},T],\mathcal{P}_{2}(\mathbb{R}^{d}))\), \(\nu\geq 0\) belongs to \(L^{\infty}([t_{0},T])\), \(\eta\) to \(\mathbb{R}^{+}\) and finally \(u\) belongs to \(\mathcal{C}([t_{0},T],E_{n})\) and \(Du\) is bounded and globally Lipschitz continuous. Moreover \(\nu\) and \(\eta\) satisfy the exclusion conditions_
\[\int_{t_{0}}^{T}\Psi(\mu(t))\nu(t)dt=0,\qquad\eta\Psi(\mu(T))=0.\]
Proof.: The optimality conditions are given by Proposition 2.1 with a Lagrange multiplier \(\nu^{\prime}\in\mathcal{M}^{+}([t_{0},T])\). Thanks to Assumptions (APsiC2) and (APsiTrans) we can apply Theorem 2.2 of [23] (which applies equally well when \(b\) satisfies Assumption (Ab)) to infer that the Lagrange multiplier \(\nu^{\prime}\) has the form \(\nu^{\prime}=\nu+\eta\delta_{T}\) for some \(\nu\in L^{\infty}([t_{0},T])\) and some \(\eta\in\mathbb{R}^{+}\). The fact that \(\eta\) appears in the terminal condition for \(u\) then follows from the representation formula (13). Finally the additional regularity of \(u\) and \(Du\) follows from the boundedness of \(\nu\) and (13) again, see [23] Theorem 1.1.
## 5. Appendix
### Optimality conditions
We will need some preliminary facts.
**Lemma 5.1**.: _There is an admissible couple \((\bar{\mu},\bar{\alpha})\) satisfying (8) and \(J(t_{0},\mu_{0},(\bar{\mu},\bar{\alpha}))<+\infty\) such that \(\Psi(\bar{\mu}(t))\leq-\eta\) for some \(\eta>0\) and all \(t\in[t_{0},T]\)._
Proof.: Following [23] Lemma 4.1, for every \(\epsilon>0\), we can build a solution \((\bar{\mu},\bar{\beta})\) to
\[\partial_{t}\mu+\operatorname{div}(\beta\mu)-\Delta\mu=0\qquad\text{in }(t_{0},T) \times\mathbb{R}^{d},\qquad\quad\mu(t_{0})=\mu_{0},\]
such that \(d_{2}(\bar{\mu}(t),\mu_{0})\leq\epsilon\) for all \(t\in[t_{0},T]\) and \(\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}|\bar{\beta}(t,x)|^{2}d\bar{\mu}(t)(x)dt<+\infty\). In particular, since \(\Psi(\mu_{0})<0\), we can take \(\epsilon\) small enough so that \(\Psi(\bar{\mu}(t))\leq\Psi(\mu_{0})/2\) for all \(t\in[t_{0},T]\). Thanks to the growth assumption on \(L\) and the boundedness of \(b\), we find that \(\big{(}\bar{\mu},\bar{\beta}(t,x)-b(x,\bar{\mu}(t))\big{)}\) satisfies the desired properties.
**Lemma 5.2**.: _There exists a solution \((\tilde{\mu},\tilde{\alpha})\) to Problem (P)._
Proof.: Take \((\mu_{n},\alpha_{n})_{n\geq 0}\) a minimizing sequence such that \(J\big{(}t_{0},\mu_{0};(\mu_{n},\alpha_{n})\big{)}\leq\inf_{(\mu,\alpha)}J \big{(}t_{0},\mu_{0},(\mu,\alpha)\big{)}+1\) for all \(n\in\mathbb{N}\). Using the previous lemma, the growth condition on \(L\), the boundedness of \(b\) and the fact that \(\mathcal{F}\) and \(\mathcal{G}\) are bounded from below, we find that
\[\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}|\alpha_{n}(t,x)|^{2}d\mu_{n}(t)(x)dt\leq C \tag{21}\]
for some \(C>0\) and all \(n\geq 0\). By a classical argument, see Proposition 2.1 in [23], this implies that
\[\sup_{t\in[t_{0},T]}\int_{\mathbb{R}^{d}}|x|^{2}d\mu_{n}(t)+\sup_{t\neq s\in[t _{0},T]}\frac{d_{2}(\mu_{n}(t),\mu_{n}(s))}{\sqrt{|t-s|}}\leq C\]
for some \(C>0\) independent from \(n\). From (21) we also deduce, by the Cauchy-Schwarz inequality, that
\[\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}|\alpha_{n}(t,x)|d\mu_{n}(t)( x)dt \leq\sqrt{T-t_{0}}\sqrt{\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}| \alpha_{n}(t,x)|^{2}d\mu_{n}(t)(x)dt}\] \[\leq\sqrt{TC}\]
and therefore \(\alpha_{n}\mu_{n}\) is bounded in \(\mathcal{M}([t_{0},T]\times\mathbb{R}^{d},\mathbb{R}^{d})\) uniformly in \(n\in\mathbb{N}\). Now we take \(\delta\in(0,1)\) and apply the Banach-Alaoglu and Ascoli theorems to deduce that, up to taking a subsequence, \((\mu_{n},\mu_{n}\alpha_{n})_{n\geq 0}\) converges in \(\mathcal{C}([t_{0},T],\mathcal{P}_{2-\delta}(\mathbb{R}^{d}))\times\mathcal{M }([t_{0},T]\times\mathbb{R}^{d},\mathbb{R}^{d})\) toward some \((\tilde{\mu},\tilde{\omega})\in\mathcal{C}\big{(}[t_{0},T],\mathcal{P}_{2}( \mathbb{R}^{d})\big{)}\times\mathcal{M}([t_{0},T]\times\mathbb{R}^{d},\mathbb{ R}^{d})\). Using Theorem 2.34 and Example 2.36 in [1] to handle
the term involving the Lagrangian \(L\), we conclude that \(\tilde{\omega}\) is absolutely continuous with respect to \(\tilde{\mu}(t)dt\) and, taking \(\tilde{\alpha}:=\frac{d\tilde{\omega}}{dt\otimes d\tilde{\mu}}\), that
\[J\big{(}t_{0},\mu_{0};(\tilde{\mu},\tilde{\alpha})\big{)}\leq\liminf_{n\to+ \infty}J\big{(}t_{0},\mu_{0},(\mu_{n},\alpha_{n})\big{)}.\]
We easily check that \((\tilde{\mu},\tilde{\alpha})\) satisfies the Fokker-Planck equation and the state constraint and is therefore a solution to Problem (P).
**Lemma 5.3**.: _If \((\tilde{\mu},\tilde{\alpha})\) is a solution to Problem (P), then \((\tilde{\mu},\tilde{\beta}):=(\tilde{\mu},\tilde{\alpha}+b(x,\tilde{\mu}(t)))\) is a solution to_
\[\inf_{(\mu,\beta)}J^{tl}\big{(}t_{0},\mu_{0};(\mu,\beta)\big{)} \tag{22}\]
_with \(J^{tl}\big{(}t_{0},\mu_{0};(\mu,\beta)\big{)}\) defined by_
\[J^{tl}\big{(}t_{0},\mu_{0};(\mu,\beta)\big{)} :=\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}L\Big{(}x,\beta(t,x)-b( \tilde{\mu}(t),x)\Big{)}d\mu(t)(x)dt\] \[-\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \partial_{q}L\big{(}y,\tilde{\alpha}(t,y)\big{)}.\frac{\delta b}{\delta m} \big{(}\tilde{\mu}(t),y,x\big{)}d\tilde{\mu}(t)(y)d\mu(t)(x)dt\] \[+\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}\frac{\delta\mathcal{F}}{ \delta m}\big{(}\tilde{\mu}(t),x\big{)}d\mu(t)(x)dt+\int_{\mathbb{R}^{d}} \frac{\delta\mathcal{G}}{\delta m}\big{(}\tilde{\mu}(T),x\big{)}d\mu(T)(x)\]
_where the infimum is taken over the couples \((\mu,\beta)\in\mathcal{C}([t_{0},T],\mathcal{P}_{2}(\mathbb{R}^{d}))\times L^ {2}_{dt\otimes\mu(t)}([t_{0},T]\times\mathbb{R}^{d},\mathbb{R}^{d})\) satisfying in the sense of distributions the Fokker-Planck equation_
\[\left\{\begin{array}{l}\partial_{t}\mu+\mathrm{div}\big{(}\beta(t,x)\mu \big{)}-\Delta\mu=0\quad\text{ in }(t_{0},T)\times\mathbb{R}^{d}\\ \mu(t_{0})=\mu_{0},\end{array}\right. \tag{23}\]
_under the constraint that \(\Psi(\mu(t))\leq 0\) for all \(t\in[t_{0},T]\)._
Proof.: We first recall that, since \((\tilde{\mu},\tilde{\alpha})\) is a solution to Problem (P), \((\tilde{\mu},\tilde{\beta})\) is a solution to Problem (P'). Now we take \((\mu,\beta)\) satisfying (23) and such that \(J^{\prime}\big{(}t_{0},\mu_{0},(\mu,\beta)\big{)}<+\infty\). We define \(\omega=\beta\mu\) and \(\tilde{\omega}=\tilde{\beta}\tilde{\mu}\). We take \(\lambda\in(0,1)\) and we let \((\omega_{\lambda},\mu_{\lambda})=(1-\lambda)(\tilde{\omega},\tilde{\mu})+ \lambda(\omega,\mu)\). In particular, \(\omega_{\lambda}\) is absolutely continuous with respect to \(\mu_{\lambda}(t)\otimes dt\) and we let \(\beta_{\lambda}=\frac{d\omega_{\lambda}}{dt\otimes\mu_{\lambda}(t)}\). By convexity of the constraint and linearity of the Fokker-Planck equation (23) in \((\mu,\beta\mu)\), the couple \((\mu_{\lambda},\beta_{\lambda})\) satisfies (23) as well as the state constraint. By minimality of \((\tilde{\mu},\tilde{\beta})\) it holds that
\[J^{\prime}\big{(}t_{0},\mu_{0},(\mu_{\lambda},\beta_{\lambda})\big{)}\geq J^{ \prime}\big{(}t_{0},\mu_{0},(\tilde{\mu},\tilde{\beta})\big{)}.\]
On the other hand, by convexity of \(\mathbb{R}^{d}\times\mathbb{R}^{+}\ni(\beta,m)\mapsto L\Big{(}x,\frac{\beta}{ m}\Big{)}m\) (set to be \(+\infty\) if \(m=0\)) for all \(x\in\mathbb{R}^{d}\), we have
\[J^{\prime}\big{(}t_{0},\mu_{0}, (\mu_{\lambda},\beta_{\lambda})\big{)}\leq\int_{t_{0}}^{T}\int_{ \mathbb{R}^{d}}L\Big{(}x,\tilde{\beta}(t,x)-b\big{(}\mu_{\lambda}(t),x\big{)} \Big{)}d\tilde{\mu}(t)(x)dt\] \[+\lambda\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}L\Big{(}x,\beta(t,x) -b\big{(}\mu_{\lambda}(t),x\big{)}\Big{)}d\mu(t)(x)dt\] \[-\lambda\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}L\Big{(}x,\tilde{ \beta}(t,x)-b\big{(}\mu_{\lambda}(t),x\big{)}\Big{)}d\mu(t)(x)dt\] \[+\int_{t_{0}}^{T}\mathcal{F}\big{(}\mu_{\lambda}(t)\big{)}dt+ \mathcal{G}\big{(}\mu_{\lambda}(T)\big{)}.\]
Combining the two inequalities we find that, for all \(\lambda\in(0,1)\),
\[\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}L\Big{(}x,\beta(t,x)-b\big{(} \mu_{\lambda}(t),x\big{)}\Big{)}d\mu(t)(x)dt-\int_{t_{0}}^{T}\int_{\mathbb{R}^{d }}L\Big{(}x,\tilde{\beta}(t,x)-b\big{(}\mu_{\lambda}(t),x\big{)}\Big{)}d\mu(t)( x)dt\] \[\geq\frac{1}{\lambda}\left[\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}} L\Big{(}x,\tilde{\beta}(t,x)-b\big{(}\tilde{\mu}(t),x\big{)}\Big{)}d\tilde{\mu}(t)( x)dt-\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}L\Big{(}x,\tilde{\beta}(t,x)-b \big{(}\mu_{\lambda}(t),x\big{)}\Big{)}d\tilde{\mu}(t)(x)dt\right]\] \[+\int_{t_{0}}^{T}\frac{1}{\lambda}\left[\mathcal{F}\big{(}\tilde{ \mu}(t)\big{)}-\mathcal{F}\big{(}\mu_{\lambda}(t)\big{)}\right]dt+\frac{1}{ \lambda}\left[\mathcal{G}\big{(}\tilde{\mu}(T)\big{)}-\mathcal{G}\big{(}\mu_{ \lambda}(T)\big{)}\right].\]
Letting \(\lambda\) tend to \(0\), using the differentiability of the mean-field costs and rearranging the terms gives
\[J^{tl}\big{(}t_{0},\mu_{0};(\tilde{\mu},\tilde{\beta})\big{)}\leq J^{tl}\big{(}t_{0},\mu_{0};(\mu,\beta)\big{)}\]
which concludes the proof of the Lemma.
The next step in deriving the optimality conditions relies on the following form of the Von Neumann min/max theorem, see [57] for the proof under additional compactness assumptions and the appendix of [51] for this version.
**Theorem 5.1**.: _(Von Neumann) Let \(\mathbb{A}\) and \(\mathbb{B}\) be convex sets of some vector spaces and suppose that \(\mathbb{B}\) is endowed with some Hausdorff topology. Let \(\mathcal{L}\) be a function satisfying :_
\[a\to\mathcal{L}(a,b)\text{ is concave in }\mathbb{A}\text{ for every }b\in\mathbb{B},\]
\[b\to\mathcal{L}(a,b)\text{ is convex in }\mathbb{B}\text{ for every }a\in\mathbb{A}.\]
_Suppose also that there exists \(a_{*}\in\mathbb{A}\) and \(C_{*}>\sup_{a\in\mathbb{A}}\inf_{b\in\mathbb{B}}\mathcal{L}(a,b)\) such that :_
\[\mathbb{B}_{*}:=\{b\in\mathbb{B},\mathcal{L}(a_{*},b)\leq C_{*}\}\text{ is not empty and compact in }\mathbb{B},\]
\[b\to\mathcal{L}(a,b)\text{ is lower semicontinuous in }\mathbb{B}_{*}\text{ for every }a\in\mathbb{A}.\]
_Then,_
\[\min_{b\in\mathbb{B}}\sup_{a\in\mathbb{A}}\mathcal{L}(a,b)=\sup_{a\in\mathbb{A }}\inf_{b\in\mathbb{B}}\mathcal{L}(a,b).\]
**Lemma 5.4**.: _If \((\tilde{\mu},\tilde{\alpha})\) is a solution to Problem (P), then there exists some \(\tilde{\nu}\in\mathcal{M}^{+}([t_{0},T])\) such that_
\[\Psi(\tilde{\mu}(t))=0\hskip 28.452756pt\tilde{\nu}\text{-a.e.} \tag{24}\]
_and \((\tilde{\mu},\tilde{\beta}):=\big{(}\tilde{\mu},\tilde{\alpha}+b(x,\tilde{\mu }(t))\big{)}\) is a solution to_
\[\inf_{(\mu,\beta)}J^{tl}\big{(}t_{0},\mu_{0},(\mu,\beta)\big{)}+\int_{t_{ 0}}^{T}\Psi\big{(}\mu(t)\big{)}d\tilde{\nu}(t) \tag{25}\]
_where \(J^{tl}\) was defined in Lemma 5.3 and the infimum is taken over the couples \((\mu,\beta)\) satisfying (23) but not necessarily the state constraint._
Proof.: We set up the min-max argument. We define \(\mathbb{A}\) as the subset of \(\mathcal{C}([t_{0},T],\mathcal{P}_{2}(\mathbb{R}^{d}))\times\mathcal{M}([t_{0},T]\times\mathbb{R}^{d},\mathbb{R}^{d})\) consisting of elements \((\mu,\omega)\) such that
\[\left\{\begin{array}{l}\omega\text{ is absolutely continuous w.r.t. }\mu(t)\otimes dt,\quad\frac{d\omega}{dt\otimes\mu(t)}\in L^{2}_{dt \otimes d\mu(t)}\big{(}[t_{0},T]\times\mathbb{R}^{d},\mathbb{R}^{d}\big{)}, \\ \partial_{t}\mu+\operatorname{div}(\omega)-\Delta\mu=0\quad\text{ in }(t_{0},T)\times \mathbb{R}^{d},\\ \mu(t_{0})=\mu_{0},\end{array}\right. \tag{26}\]
where the Fokker-Planck equation is understood in the sense of distributions. We also set \(\mathbb{B}=\mathcal{M}^{+}([t_{0},T])\). We define \(\mathcal{L}:\mathbb{A}\times\mathbb{B}\to\mathbb{R}\) by
\[\mathcal{L}\big{(}(\mu,\omega),\nu\big{)}=J^{tl}\big{(}t_{0},\mu_{0},( \mu,\omega)\big{)}+\int_{t_{0}}^{T}\Psi\big{(}\mu(t)\big{)}d\nu(t),\]
where we set, by abuse of notation, \(J^{tl}\big{(}t_{0},\mu_{0},(\mu,\omega)\big{)}=J^{tl}\big{(}t_{0},\mu_ {0},\big{(}\mu,\frac{d\omega}{dt\otimes d\mu(t)}\big{)}\big{)}\). It is clear that
\[\inf_{(\mu,\beta)}J^{tl}\big{(}t_{0},\mu_{0},(\mu,\beta)\big{)}=\inf_{( \mu,\omega)\in\mathbb{A}}\sup_{\nu\in\mathbb{B}}\mathcal{L}\big{(}(\mu,\omega),\nu\big{)} \tag{27}\]
where the first infimum is taken over the couples \((\mu,\beta)\) satisfying (23) as well as the state constraint. It is plain to check that, for every \(\nu\in\mathbb{B}\), \(\mathcal{L}(.,\nu)\) is convex over \(\mathbb{A}\) and that, for any \((\mu,\omega)\in\mathbb{A}\), \(\mathcal{L}\big{(}(\mu,\omega),.\big{)}\) is concave (linear in fact) and continuous over \(\mathbb{B}\). Moreover, arguing as in Lemma 5.1, we find \(\big{(}\bar{\mu},\bar{\omega}\big{)}\in\mathbb{A}\) such that \(J^{tl}\big{(}t_{0},\mu_{0},(\bar{\mu},\bar{\omega})\big{)}<+\infty\) and \(\Psi\big{(}\bar{\mu}(t)\big{)}\leq-\eta\) for some \(\eta>0\) and all \(t\in[t_{0},T]\). As a consequence, using the continuity of \(\nu\mapsto\mathcal{L}\big{(}(\bar{\mu},\bar{\omega}),\nu\big{)}\), we find that the set
\[\Big{\{}\nu\in\mathcal{M}^{+}([t_{0},T]),\quad\mathcal{L}\big{(}( \bar{\mu},\bar{\omega}),\nu\big{)}\!\geq\inf_{(\mu,\omega)\in\mathbb{A}}\sup_{ \nu\in\mathbb{B}}\mathcal{L}\big{(}(\mu,\omega),\nu\big{)}\Big{\}}\] \[\qquad\qquad\subset\Bigg{\{}\nu\in\mathcal{M}^{+}([t_{0},T]),\quad \nu([t_{0},T])\leq\frac{J^{tl}\big{(}t_{0},\mu_{0};(\bar{\mu},\bar{ \omega})\big{)}-\inf_{(\mu,\omega)\in\mathbb{A}}\sup_{\nu\in\mathbb{B}} \mathcal{L}\big{(}(\mu,\omega),\nu\big{)}}{\eta}\Bigg{\}}\]
is non-empty and compact in \(\mathcal{M}^{+}([t_{0},T])\). Moreover, for all \((\mu,\omega)\in\mathbb{A}\), \(\nu\mapsto\mathcal{L}\big{(}(\mu,\omega),\nu\big{)}\) is continuous over \(\mathbb{B}\). Applying the Von Neumann min-max theorem, we find that
\[\inf_{(\mu,\omega)\in\mathbb{A}}\sup_{\nu\in\mathbb{B}}\mathcal{L}\big{(}( \mu,\omega),\nu\big{)}=\max_{\nu\in\mathbb{B}}\inf_{(\mu,\omega)\in\mathbb{A}} \mathcal{L}\big{(}(\mu,\omega),\nu\big{)}. \tag{28}\]
Let \(\tilde{\nu}\) be a solution to the dual problem, i.e. \(\max_{\nu\in\mathbb{B}}\inf_{(\mu,\omega)\in\mathbb{A}}\mathcal{L}\big{(}(\mu, \omega),\nu\big{)}=\inf_{(\mu,\omega)\in\mathbb{A}}\mathcal{L}\big{(}(\mu, \omega),\tilde{\nu}\big{)}\). Combining (27) and (28) we deduce that \((\tilde{\mu},\tilde{\beta})\) is a solution to (25). It remains to prove (24). By (27) and (28) it holds that
\[J^{tl}\big{(}t_{0},\mu_{0};(\tilde{\mu},\tilde{\beta})\big{)}=\inf_{(\mu,\omega)\in\mathbb{A}}\mathcal{L}\big{(}(\mu,\omega),\tilde{\nu}\big{)}\leq J^ {tl}\big{(}t_{0},\mu_{0},(\tilde{\mu},\tilde{\beta})\big{)}+\int_{t_{0}} ^{T}\Psi\big{(}\tilde{\mu}(t)\big{)}d\tilde{\nu}(t).\]
This implies that \(\int_{t_{0}}^{T}\Psi\big{(}\tilde{\mu}(t)\big{)}d\tilde{\nu}(t)\geq 0\) but \(\Psi\big{(}\tilde{\mu}(t)\big{)}\leq 0\) for all \(t\in[t_{0},T]\) and therefore,
\[\Psi\big{(}\tilde{\mu}(t)\big{)}=0\qquad\quad\tilde{\nu}\text{-a.e. in }[t_{0},T],\]
which concludes the proof of the lemma.
We can finally prove the optimality conditions of Proposition 2.1.
Proof of Proposition 2.1.: Take \((\tilde{\mu},\tilde{\alpha})\) a solution to Problem (P). Using Lemma 5.4 and arguing as in Lemma 5.3, we find that \(\big{(}\tilde{\mu},\tilde{\beta}\big{)}:=\big{(}\tilde{\mu},\tilde{\alpha}+b(x, \tilde{\mu}(t))\big{)}\) is a solution to
\[\inf_{(\mu,\beta)}J^{tl}\big{(}t_{0},\mu_{0};(\mu,\beta)\big{)}+\int_{t_{ 0}}^{T}\int_{\mathbb{R}^{d}}\frac{\delta\Psi}{\delta m}\big{(}\tilde{\mu}(t),x \big{)}d\mu(t)(x)d\nu(t),\]
where the infimum is taken over the couples \((\mu,\beta)\) satisfying (23) but not necessarily the state constraint. This means that \((\tilde{\mu},\tilde{\alpha})\) is solution to
\[\inf_{(\mu,\alpha)}J^{l}(t_{0},\mu_{0},(\mu,\alpha))\]
where \(J^{l}\) is defined by
\[J^{l}\big{(}t_{0},\mu_{0};(\mu,\alpha)\big{)} :=\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}L\big{(}x,\alpha(t,x)\big{)} d\mu(t)(x)dt\] \[\quad-\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \partial_{q}L\big{(}y,\tilde{\alpha}(t,y)\big{)}\cdot\frac{\delta b}{\delta m} \big{(}\tilde{\mu}(t),y,x\big{)}d\tilde{\mu}(t)(y)d\mu(t)(x)dt\] \[\quad+\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}\frac{\delta\mathcal{ F}}{\delta m}\big{(}\tilde{\mu}(t),x\big{)}d\mu(t)(x)dt+\int_{\mathbb{R}^{d}} \frac{\delta\mathcal{G}}{\delta m}\big{(}\tilde{\mu}(T),x\big{)}d\mu(T)(x)\] \[\quad+\int_{t_{0}}^{T}\int_{\mathbb{R}^{d}}\frac{\delta\Psi}{ \delta m}\big{(}\tilde{\mu}(t),x\big{)}d\mu(t)(x)d\nu(t)\]
and the infimum is taken over the couples \((\mu,\alpha)\in\mathcal{C}([t_{0},T],\mathcal{P}_{2}(\mathbb{R}^{d}))\times L ^{2}_{dt\otimes\mu(t)}([t_{0},T]\times\mathbb{R}^{d},\mathbb{R}^{d})\) satisfying in the sense of distributions the Fokker-Planck equation
\[\left\{\begin{array}{l}\partial_{t}\mu+\mathrm{div}(\alpha\mu)+\mathrm{div} \big{(}b(\tilde{\mu}(t),x)\mu\big{)}-\Delta\mu=0\quad\text{in }(t_{0},T)\times \mathbb{R}^{d},\\ \mu(t_{0})=\mu_{0}.\end{array}\right. \tag{29}\]
We are now dealing with a standard control problem (except maybe for the presence of the measure \(\nu\)) for a linear Fokker-Planck equation and, importantly, without state constraint. Therefore, arguing by verification (see e.g. the proof of Theorem 2.3 in [23] for the detailed computation), we find that \(\tilde{\alpha}=-\partial_{p}H(x,Du)\) where \(u\in L^{\infty}([t_{0},T],E_{n})\) is a solution in the sense of Definition (2.1) to
\[\left\{\begin{array}{l}-\partial_{t}u-\Delta u+H(x,Du)-b\big{(} \tilde{\mu}(t),x\big{)}.Du=\nu(t)\frac{\delta\Psi}{\delta m}\big{(}\tilde{\mu }(t),x\big{)}\\ -\int_{\mathbb{R}^{d}}\partial_{q}L\big{(}y,\tilde{\alpha}(t,y)\big{)}.\frac{ \delta b}{\delta m}\big{(}\tilde{\mu}(t),y,x\big{)}d\tilde{\mu}(t)(y)+\frac{ \delta\mathcal{F}}{\delta m}\big{(}\tilde{\mu}(t),x\big{)}\quad\text{in }(t_{0},T) \times\mathbb{R}^{d},\\ u(T,x)=\frac{\delta\mathcal{G}}{\delta m}\big{(}\tilde{\mu}(T),x\big{)} \text{in }\mathbb{R}^{d}.\end{array}\right.\]
The existence of such a solution \(u\) is guaranteed by Theorem 5.1 in [23]. Collecting the equations satisfied by \(u\) and \(\mu\) as well as noticing that
\[\partial_{q}L\big{(}x,\tilde{\alpha}(t,x)\big{)}=\partial_{q}L\Big{(}x,- \partial_{p}H\big{(}x,Du(t,x)\big{)}\Big{)}=-Du(t,x)\]
gives the optimality conditions.
### Concentration limit
We consider \(\mathbf{x}_{0}=(x_{0}^{1},\ldots,x_{0}^{N})\in(\mathbb{R}^{d})^{N}\) such that \(\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{0}^{i}}\to\mu_{0}\) in \(\mathcal{P}_{2}(\mathbb{R}^{d})\) as \(N\to+\infty\). For \(b:\mathbb{R}^{d}\times\mathcal{P}_{1}(\mathbb{R}^{d})\to\mathbb{R}^ {d}\) bounded and uniformly Lipschitz continuous we consider the particle system
\[\left\{\begin{array}{l}dX_{t}^{i}=b(X_{t}^{i},\hat{\mu}_{t}^{N})dt+\sqrt{2} dB_{t}^{i}\quad 1\leq i\leq N,t\in[0,T]\\ \hat{\mu}_{t}^{N}=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{t}^{i}}\\ X_{0}^{i}=x_{0}^{i},\qquad\qquad\qquad\qquad\qquad 1\leq i\leq N\end{array}\right.\]
where \(B^{1},\ldots,B^{N}\) are independent Brownian motions.
We also consider \(\mu\in\mathcal{C}([0,T],\mathcal{P}_{2}(\mathbb{R}^{d}))\) solution to
\[\left\{\begin{array}{l}\partial_{t}\mu+\mathrm{div}(b(x,\mu(t))\mu)-\Delta \mu=0\quad\text{in }(0,T)\times\mathbb{R}^{d}\\ \mu(0)=\mu_{0}.\end{array}\right. \tag{30}\]
**Proposition 5.1**.: _In this setting, it holds_
\[\lim_{N\to+\infty}\mathbb{E}\left[\sup_{t\in[0,T]}d_{1}(\hat{\mu}_{t}^{N},\mu( t))\right]=0. \tag{31}\]
Proof.: By classical arguments, see Oelschlager [50], \((\mathcal{L}(\hat{\mu}_{\cdot}^{N}))_{N\in\mathbb{N}}\) is pre-compact in \(\mathcal{P}(\mathcal{C}([0,T],\mathcal{P}_{1}(\mathbb{R}^{d})))\). The limit points are supported on the set of solutions to the Fokker-Planck equation (30). This equation admits a unique solution in \(\mathcal{C}([0,T],\mathcal{P}_{1}(\mathbb{R}^{d}))\) starting from \(\mu_{0}\) since \(b\) is bounded and Lipschitz continuous. Therefore \((\mathcal{L}(\hat{\mu}_{\cdot}^{N}))_{N\in\mathbb{N}}\) converges to \(\delta_{\mu_{\cdot}}\) in \(\mathcal{P}(\mathcal{C}([0,T],\mathcal{P}_{1}(\mathbb{R}^{d})))\). The limit is deterministic and therefore \(\hat{\mu}_{\cdot}^{N}\) actually converges toward \(\mu_{\cdot}\) in probability. Since
\[\sup_{N\in\mathbb{N}}\mathbb{E}\left[\sup_{t\in[0,T]}\frac{1}{N}\sum_{i=1}^{N} |X_{t}^{i,N}|\right]<+\infty\]
we can upgrade the convergence in probability to convergence in \(L^{1}\) and deduce (31).
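As a purely numerical illustration of Proposition 5.1, the following minimal Euler-Maruyama sketch simulates the particle system above; the drift \(b\), the initial sampling and all parameter values are illustrative placeholders, not part of the analysis.

```python
import numpy as np

def drift(X):
    # Placeholder mean-field drift b(x, mu) = tanh(<mu, id> - x):
    # bounded and Lipschitz, as required in this section.
    return np.tanh(X.mean(axis=0) - X)

def simulate(N=2000, d=2, T=1.0, steps=200, seed=0):
    """Euler-Maruyama scheme for dX^i = b(X^i, mu_hat^N) dt + sqrt(2) dB^i."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    X = rng.normal(size=(N, d))  # x_0^i sampled i.i.d., so the empirical measure tends to mu_0
    for _ in range(steps):
        X = X + drift(X) * dt + np.sqrt(2.0 * dt) * rng.normal(size=(N, d))
    return X  # rows sample the empirical measure mu_hat_T^N

# As N grows, summary statistics of the empirical measure stabilize,
# which is the concentration behaviour asserted in (31).
for N in (100, 1000, 10000):
    print(N, simulate(N=N).mean(axis=0))
```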
Acknowledgment. The author thanks Pierre Cardaliaguet for suggesting this problem and for fruitful discussions during the preparation of this work which is part of his PhD thesis.
|
2303.04329 | Graviton-photon production with a massive spin-2 particle | A recent letter Cai et al. [2107.14548] within a phenomenological dark matter
framework with a massive graviton in the external state indicated a divergence
with increasing centre-of-momentum energy arising from the longitudinal
polarizations of the graviton. In this letter we point out that in processes
such as graviton-photon production from matter annihilation, $f\bar{f} \to
G\gamma$, no such anomalous divergences occur at tree-level. This then applies
to other tree-level amplitudes related by crossing symmetry such as $\gamma f
\to Gf$, $Gf \to {\gamma}f$, ${\gamma}\bar{f} \to G\bar{f}$, $f \to fG{\gamma}$ and so on.
We show this by explicitly computing the relevant tree-level diagrams, where we
find that delicate cancellations ensure that all anomalously growing terms are
well-regulated. Effectively at tree-level this is consistent with the operation
of a Ward identity associated with the external photon for such amplitudes. The
same tree-level results apply if the photon is replaced by a gluon. These
results are important for cosmological models of dark matter within the
framework of extra dimensions. | Joshua A. Gill, Dipan Sengupta, Anthony G. Williams | 2023-03-08T02:05:16Z | http://arxiv.org/abs/2303.04329v3 | # Graviton-photon production with a massive spin-2 particle
###### Abstract
A recent letter [1] within a phenomenological dark matter framework with a massive graviton in the external state indicated a divergence with increasing centre-of-momentum energy arising from the longitudinal polarizations of the graviton. In this letter we point out that in processes such as graviton-photon production from matter annihilation, \(f\bar{f}\to G\gamma\), no such anomalous divergences occur at tree-level. This then applies to other tree-level amplitudes related by crossing symmetry such as \(\gamma f\to Gf\), \(Gf\to\gamma f\), \(\gamma\bar{f}\to G\bar{f}\), \(f\to fG\gamma\) and so on. We show this by explicitly computing the relevant tree-level diagrams, where we find that delicate cancellations ensure that all anomalously growing terms are well-regulated. Effectively at tree-level this is consistent with the operation of a Ward identity associated with the external photon for such amplitudes. The same tree-level results apply if the photon is replaced by a gluon. These results are important for cosmological models of dark matter within the framework of extra dimensions.
+
Footnote †: preprint: ADP-23-06/T1215
## I Introduction
In the last few years, there has been a renewed interest in dark matter and phenomenological models with massive spin-2 particles [1; 2; 3]. While some of these approaches are simplified constructions of an underlying compact extra-dimensional theory [4], others are effective field theories with a single massive graviton [5]. In a number of these approaches, it has been shown that there are enhancements in matrix elements and cross-sections due to the longitudinal polarizations of the graviton, which grow like \(\mathcal{O}(s/M_{G}^{2})\) at high energies, where \(s\) is the centre-of-momentum energy and \(M_{G}\) the mass of the massive spin-2 particle [1; 2; 5]. These require a lower bound on the graviton mass \(M_{G}\) in order for the theory to be effective at large \(s\)1. This high energy scaling is expected in a naive Fierz-Pauli theory [8] or extensions like bigravity/dRGT gravity [9; 10; 11]2. However, in Kaluza-Klein (KK) theories with compact extra dimensions [12; 13; 14], spin-2 KK mode scatterings are unitarized due to the underlying higher dimensional diffeomorphism invariance [15; 16; 17]3. In KK theories, these results have been extended to coupling with matter localized on the four-dimensional brane [20].
Footnote 1: The \(M_{G}\to 0\) limit is not smooth and leads to the famous vanDam-Veltman-Zakharov discontinuity [6; 7].
Footnote 2: At high energies the scattering amplitudes of massive gravitons (\(GG\to GG\)) in the Fierz-Pauli theory grow as \(s^{5}/(M_{G}^{8}M_{Pl}^{2})\), which can be estimated from power counting arguments [10; 11]. It can be shown that in extensions like dRGT gravity, this scaling can be improved to \(s^{3}/(M_{G}^{4}M_{Pl}^{2})\) by adding higher order terms in the potential, but not beyond [9].
Footnote 3: It can be shown through a rigorous calculation that these cancellations persist even when the radial mode, the radion, gets a mass via the Goldberger-Wise mechanism [18; 19]
In a recent letter [1], the authors have pointed out that in a simple graviton-gluon production process, \(f\bar{f}\to Gg\), with a gluon and a massive spin-2 particle in the external state, there is a chiral-symmetry-breaking enhancement due to a massive on-shell external fermion. They found that the squared matrix elements for the longitudinal polarizations grow proportional to \(\left[\left(s/M_{Pl}^{2}\right)\left(m_{f}^{4}/M_{G}^{4}\right)\right]\) at high energies, implying an increase with increasing fermion mass \(m_{f}\) and a very strong enhancement with decreasing graviton mass, \(M_{G}\to 0\). This should be compared with a growth of the form \(|\mathcal{M}|^{2}\propto\mathcal{O}(s/M_{Pl}^{2})<1\) for a massless graviton theory since theories involving gravitons are effective field theories which are valid for \(s\ll M_{Pl}\). In Ref. [1] this enhanced result was then used to estimate the relic density in a freeze-in dark matter model with a cosmologically stable light KK graviton. This model then showed a dramatic enhancement in the velocity averaged cross-section for \(M_{G}\ll m_{f}\).
The result also implies that even in a compactified extra-dimensional setup with massive KK modes in the external state, this enhancement should persist even when the full KK spectrum is taken into account, since there is no cancellation mechanism, in contrast to expectations of scaling of KK graviton scatterings respecting higher dimensional gauge/diffeomorphism invariance. The enhancement also has significant phenomenological consequences for the production of KK gravitons at high-energy colliders within extra-dimensional models, which would predict anomalously growing cross-sections for fermion-initiated processes.
In this letter, we explicitly calculate the graviton-photon production process4,
Footnote 4: Graviton photoproduction \(\gamma f\to Gf\) has been calculated previously in [21] for massless gravitons.
\[f\bar{f}\to G\gamma, \tag{1}\]
where \(G\) represents a massive spin-2 particle5, and \(\gamma\) the massless on-shell photon. We show that the full tree-level squared amplitude at high energies grows as \(|\mathcal{M}|^{2}\propto\mathcal{O}(s/M_{Pl}^{2})\), and there are no terms proportional to \(m_{f}^{4}/M_{G}^{4}\), implying no enhancements or divergences as
\(M_{G}\to 0\) for finite fermion masses, contrary to the suggestion in [1]. We demonstrate that although individual terms in the \(s,\ t,\ u\) and contact interactions grow as \(\mathcal{O}(1/M_{G}^{2})\), due to the longitudinal polarizations of the massive graviton, delicate cancellations at tree-level ensure that the full amplitude requires no low-energy cut-off, for all incoming helicities of the fermion and outgoing helicities of the massless photon and the massive graviton. An identical scaling of amplitudes at high energies is observed if a gluon replaces the photon. The only difference is replacing the electromagnetic coupling with the strong coupling. In the following sections, we detail the calculation and present the full amplitude as a function of the centre-of-momentum energy \(s\) and scattering angle \(\theta\).
## II Framework and formalism
We use the'mostly minus' metric convention for the flat four-dimensional Minkowski spacetime background (4D) \(\eta_{\mu\nu}\equiv\text{Diag}(+1,-1,-1,-1)\), which is also used to raise and lower indices. Metric fluctuations \(h_{\mu\nu}(x)\)6 around the flat Minkowski background is expressed as,
Footnote 6: From here on we will drop the spacetime argument \(x\), and in momentum space the argument \(k\), unless explicitly specified.
\[\eta_{\mu\nu}\rightarrow\eta_{\mu\nu}+\kappa h_{\mu\nu}(x)\equiv\tilde{G}_{ \mu\nu}(x), \tag{2}\]
which defines the spin-2 graviton in 4D. The dimensionful coupling \(\kappa\) is related to the fundamental 4D Planck mass as \(\kappa=1/M_{Pl}=\sqrt{16\pi G_{N}}\). A theory of a massive graviton, dubbed the Fierz-Pauli theory, can be expressed as,
\[\mathcal{L}=\frac{M_{Pl}^{2}}{2}\sqrt{-|\tilde{G}|}R+\frac{M_{G}^{2}}{2}(h^{2 }-h_{\mu\nu}^{2}). \tag{3}\]
Here \(|\tilde{G}|\) is the determinant of 4D metric with fluctuations and \(h\equiv\eta^{\mu\nu}h_{\mu\nu}\). The first term represents the Einstein-Hilbert piece, \(R\) being the Ricci scalar, while the second represents the Fierz-Pauli mass term. In theories of compact extra dimensions, the same mass terms for spin-2 KK gravitons appear after compactification, along with the massless graviton. For example, in Randall-Sundrum models in warped extra dimensions, the masses of the \(n^{th}\) modes of the spin-2 KK gravitons are given by (in the large curvature limit) \(m_{n}\simeq x_{n}ke^{-\pi kr_{c}}\), where \(x_{n}\) are the zeros of the Bessel function of the first kind, \(k\) is the curvature and \(r_{c}\) the radius of the compactification.
The couplings of the graviton to matter (scalars, fermions or vectors) can be expressed by the following action,
\[\mathcal{S}_{M}=\int d^{4}x\ \mathcal{L}(\tilde{G},s,v,f), \tag{4}\]
which upon expanding to order \(\kappa\) in the metric fluctuation yields,
\[\mathcal{S}_{M}=-\frac{\kappa}{2}\int d^{4}x\ h_{\mu\nu}T^{\mu\nu}(s,v,f). \tag{5}\]
The stress energy tensor \(T_{\mu\nu}\) is given by,
\[T_{\mu\nu}=\left(-\eta_{\mu\nu}\mathcal{L}+2\frac{\delta\mathcal{L}}{\delta \tilde{G}^{\mu\nu}}\right)|_{\tilde{G}=\eta}. \tag{6}\]
For fermions, the stress-energy tensor must be calculated using the Vielbein formalism as performed in [12; 22]. We follow [12; 22] for the conventions and Feynman rules. The process of interest here is graviton-photon production via the annihilation of a fermion and anti-fermion pair, as expressed in Eq. 1. The four diagrams shown in Fig. 1, \(t\)-, \(u\)-, \(s\)-channels and a contact term, respectively, are the only tree-level interactions. The vertex rules are derived in [12] and are listed in the supplementary material. The coupling between the fermion and the photon is \(g_{f}e\), where \(e\equiv|e|\) is the magnitude of the charge of the electron. The coupling between the fermions and the graviton is \(\kappa=1/M_{Pl}\).
We define the following variable, which will appear in the \(s\)-channel diagram with a gauge parameter \(\xi\), as:
\[W_{\mu\nu\alpha\beta} \left(k_{1},k_{2};\xi\right)=(1/2)\eta_{\mu\nu}\left(k_{1\beta}k_ {2\alpha}-k_{1}\cdot k_{2}\eta_{\alpha\beta}\right)\] \[+\eta_{\mu\alpha}\left(k_{1}\cdot k_{2}\eta_{\nu\beta}-k_{1\beta} k_{2\nu}\right)\] \[+\eta_{\alpha\beta}k_{1\mu}k_{2\nu}-\eta_{\mu\beta}k_{1\nu}k_{2\alpha} \tag{7}\] \[-(1/\xi)\left\{\left(\eta_{\nu\beta}k_{1\mu}k_{1\alpha}+\eta_{ \nu\alpha}k_{2\mu}k_{2\beta}\right)\right.\] \[\left.-(1/2)\eta_{\mu\nu}\left(k_{1\alpha}k_{1\beta}+k_{2\alpha}k _{2\beta}+k_{1\alpha}k_{2\beta}\right)\right\}.\]
The photon propagator is defined as,
\[\Delta_{\mu\nu}\left(Q\right)=-\frac{i}{Q^{2}}\left[\eta_{\mu\nu}+\left(\xi-1 \right)\frac{Q_{\mu}Q_{\nu}}{Q^{2}}\right]. \tag{8}\]
For simplicity, we work in Feynman gauge \(\xi=1\). The fermion propagator with momentum \(Q\) and mass \(m_{f}\) travelling in the direction of the fermion flow is given by,
\[S_{F}\left(Q\right)=\frac{i\left(\not{Q}+m_{f}\right)}{Q^{2}-m_{f}^{2}}. \tag{9}\]
We define the Mandelstam variables such that,
\[s = (p_{1}+p_{2})^{2}=\left(k_{1}+k_{2}\right)^{2}, \tag{10}\] \[t = (p_{1}-k_{1})^{2}=\left(p_{2}-k_{2}\right)^{2},\] (11) \[u = (p_{1}-k_{2})^{2}=\left(p_{2}-k_{1}\right)^{2}. \tag{12}\]
Working in the centre-of-momentum frame with the incoming momenta along the \(\hat{z}\) direction, and with an outgoing massless photon and a massive graviton of mass \(M_{G}\), we can express the four-momenta of the various particles as,
\[p_{1}^{\mu} = (E_{p_{1}},\,|\mathbf{p}|\,\hat{z})\,, p_{1}^{2} = m_{f}^{2}, \tag{13}\] \[p_{2}^{\mu} = (E_{p_{2}},\,-|\mathbf{p}|\,\hat{z})\,, p_{2}^{2} = m_{f}^{2},\] (14) \[k_{1}^{\mu} = E_{k_{1}}\left(1,\,-\hat{k}\right), k_{1}^{2} = 0,\] (15) \[k_{2}^{\mu} = (E_{k_{2}},\,\mathbf{k})\,, k_{2}^{2} = M_{G}^{2}. \tag{16}\]
The momentum \(\mathbf{k}\) of the outgoing graviton (and the direction of the outgoing photon) is given in terms of the inclination and azimuthal angle pair \((\theta,\,\phi)\) as \(\mathbf{k}=|\mathbf{k}|\left(s_{\theta}c_{\phi},\,s_{\theta}s_{\phi},\,c_{\theta}\right)\), where \(c_{\theta}\equiv\cos\theta\) and \(s_{\theta}\equiv\sin\theta\). The polarizations for the external on-shell photon are defined in the usual way,
\[\varepsilon_{\pm 1}^{\mu}\left(k_{1}\right)=\pm\frac{e^{\pm i\phi}}{ \sqrt{2}}\bigg{(}0,-c_{\theta}c_{\phi}\pm is_{\phi},-c_{\theta}s_{\phi}\mp ic_ {\phi},s_{\theta}\bigg{)}. \tag{17}\]
A massive graviton of helicity \(\lambda_{G}\) carries five polarizations \(\varepsilon_{\lambda_{G}}^{\mu\nu}(k)\). These are grouped into two transverse and three longitudinal polarizations; the latter can be split into two helicity-1 modes and one helicity-0 mode, defined respectively as [15],
\[\lambda_{G} = \pm 2,\ \varepsilon_{\pm 2}^{\mu\nu}=\varepsilon_{\pm 1}^{\mu} \varepsilon_{\pm 1}^{\nu}, \tag{18}\] \[\lambda_{G} = \pm 1,\ \varepsilon_{\pm 1}^{\mu\nu}=\frac{1}{\sqrt{2}}\bigg{[} \varepsilon_{\pm 1}^{\mu}\varepsilon_{0}^{\nu}+\varepsilon_{0}^{\mu} \varepsilon_{\pm 1}^{\nu}\bigg{]},\] (19) \[\lambda_{G} = 0,\ \ \ \varepsilon_{0}^{\mu\nu}=\frac{1}{\sqrt{6}}\bigg{[} \varepsilon_{+1}^{\mu}\varepsilon_{-1}^{\nu}+\varepsilon_{-1}^{\mu} \varepsilon_{+1}^{\nu}+2\varepsilon_{0}^{\mu}\varepsilon_{0}^{\nu}\bigg{]}, \tag{20}\]
where \(\varepsilon_{\pm 1}^{\mu}\) are the usual polarization vectors for the photon defined in Eq. 17, while the helicity-0 polarization is defined by,
\[\varepsilon_{0}^{\mu}\left(k_{2}\right)=\frac{E_{k_{2}}}{M_{G}}\bigg{(}\sqrt{ 1-\frac{M_{G}^{2}}{E_{k_{2}}^{2}}},\,\hat{k}\bigg{)}. \tag{21}\]
The polarization vectors for momentum \(k_{2}\) are defined using the same angle pairs \((\theta,\phi)\). Without loss of generality, we have chosen \(\phi=0\) in the calculation.
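These definitions are straightforward to check numerically. The sketch below (our own verification aid; the values of \(E_{k_{2}}\), \(M_{G}\) and \(\theta\) are arbitrary) builds the five tensors of Eqs. (18)-(20) at \(\phi=0\) and confirms that each is transverse, traceless and unit-normalized:

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])  # mostly-minus metric

def graviton_polarizations(E, MG, theta):
    st, ct = np.sin(theta), np.cos(theta)
    # Helicity +/-1 vectors of Eq. (17) at phi = 0, and the helicity-0
    # vector of Eq. (21), for graviton momentum k2 = (E, |k| khat).
    eps_p = (1 / np.sqrt(2)) * np.array([0, -ct, -1j, st])
    eps_m = -(1 / np.sqrt(2)) * np.array([0, -ct, 1j, st])
    kabs = np.sqrt(E**2 - MG**2)
    eps_0 = np.array([kabs / MG, E * st / MG, 0, E * ct / MG], dtype=complex)
    t = np.outer
    return {
        +2: t(eps_p, eps_p),
        -2: t(eps_m, eps_m),
        +1: (t(eps_p, eps_0) + t(eps_0, eps_p)) / np.sqrt(2),
        -1: (t(eps_m, eps_0) + t(eps_0, eps_m)) / np.sqrt(2),
        0: (t(eps_p, eps_m) + t(eps_m, eps_p) + 2 * t(eps_0, eps_0)) / np.sqrt(6),
    }

E, MG, theta = 5.2, 2.0, 0.7
k2 = np.array([E, *(np.sqrt(E**2 - MG**2) * np.array([np.sin(theta), 0, np.cos(theta)]))])
for eps in graviton_polarizations(E, MG, theta).values():
    assert np.allclose((ETA @ k2) @ eps, 0, atol=1e-10)  # k_mu eps^{mu nu} = 0
    assert abs(np.trace(ETA @ eps)) < 1e-10              # eta_{mu nu} eps^{mu nu} = 0
    assert abs(np.einsum("ma,nb,mn,ab->", ETA, ETA, eps, eps.conj()) - 1) < 1e-10
```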
Choosing the centre-of-momentum frame for the incoming particles with four-vectors \(p_{1}\) and \(p_{2}\), and outgoing four-vectors \(k_{1}\) and \(k_{2}\), the outgoing energies \(E_{k_{1}}\) and \(E_{k_{2}}\) can be expressed in terms of the Mandelstam variable \(s\) and the mass of the graviton \(M_{G}\) as,
\[E_{k_{1}}=\frac{s-M_{G}^{2}}{2\sqrt{s}},\qquad E_{k_{2}}=\frac{s+M_{G}^{2}}{2 \sqrt{s}}. \tag{22}\]
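As a sanity check (our own, with illustrative numbers), the four-momenta of Eqs. (13)-(16) and the energies of Eq. (22) can be assembled numerically and tested against momentum conservation and the Mandelstam sum \(s+t+u=2m_{f}^{2}+M_{G}^{2}\):

```python
import numpy as np

def mink(a, b):
    # Minkowski product with the mostly-minus metric
    return a[0] * b[0] - a[1:] @ b[1:]

def momenta(s, theta, mf, MG):
    Ep = np.sqrt(s) / 2
    pmag = np.sqrt(s / 4 - mf**2)
    Ek1 = (s - MG**2) / (2 * np.sqrt(s))      # photon energy, Eq. (22)
    Ek2 = (s + MG**2) / (2 * np.sqrt(s))      # graviton energy, Eq. (22)
    khat = np.array([np.sin(theta), 0.0, np.cos(theta)])
    p1 = np.array([Ep, 0.0, 0.0, pmag])
    p2 = np.array([Ep, 0.0, 0.0, -pmag])
    k1 = np.concatenate(([Ek1], -Ek1 * khat))  # massless photon along -khat
    k2 = np.concatenate(([Ek2], Ek1 * khat))   # |k| = (s - MG^2)/(2 sqrt(s)) = Ek1
    return p1, p2, k1, k2

s, theta, mf, MG = 100.0, 0.7, 0.5, 2.0
p1, p2, k1, k2 = momenta(s, theta, mf, MG)
t = mink(p1 - k1, p1 - k1)
u = mink(p1 - k2, p1 - k2)
assert np.allclose(p1 + p2, k1 + k2)                 # momentum conservation
assert abs(mink(k2, k2) - MG**2) < 1e-9              # on-shell graviton
assert abs(s + t + u - (2 * mf**2 + MG**2)) < 1e-9   # Mandelstam sum
```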
For the Feynman diagrams depicted in Fig. 1, with an incoming fermion \(f(p_{1})\) and anti-fermion \(\bar{f}(p_{2})\) scattering to a photon with polarization \(\varepsilon_{\lambda_{\gamma}}(k_{1})\) and a massive graviton with polarization \(\varepsilon_{\lambda_{G}}^{\mu\nu}\), the matrix elements7 for the \(t\), \(u\), \(s\) and contact diagrams are respectively given by,
Footnote 7: The matrix elements above disagree with [1] only in the \(u\)-channel, Eq. (24). This can be a potential cause of the differing results.
\[\mathcal{M}_{t} = -\frac{\kappa g_{f}e}{4}\bar{v}_{\lambda_{1}}\left(p_{2}\right) \left[\gamma_{\mu}P_{\nu}\!+\!\gamma_{\nu}P_{\mu}\!-\!2\eta_{\mu\nu}\left( \not{P}\!-\!2m_{f}\right)\right]\] \[\times \left(\frac{\not{p}_{1}-\not{k}_{1}+m_{f}}{t-m_{f}^{2}}\right) \not{\varepsilon}_{\lambda_{\gamma}}^{*}\left(k_{1}\right)\varepsilon_{ \lambda_{G}}^{*\mu\nu}\left(k_{2}\right)u_{\lambda_{2}}\left(p_{1}\right),\] \[\mathcal{M}_{u} = -\frac{\kappa g_{f}e}{4}\bar{v}_{\lambda_{1}}\left(p_{2}\right) \not{\varepsilon}_{\lambda_{\gamma}}^{*}\left(k_{1}\right)\left(\frac{\not{p}_{1}-\not{k}_ {2}+m_{f}}{u-m_{f}^{2}}\right)\] \[\times \left[\gamma_{\mu}K_{\nu}\!+\!\gamma_{\nu}K_{\mu}\!-\!2\eta_{\mu \nu}\left(\not{K}\!-\!2m_{f}\right)\right]\varepsilon_{\lambda_{G}}^{*\mu\nu} \left(k_{2}\right)u_{\lambda_{2}}\left(p_{1}\right),\] \[\mathcal{M}_{s} = \frac{\kappa g_{f}e}{s}\bar{v}_{\lambda_{1}}\left(p_{2}\right) \gamma^{\alpha}\big{[}W_{\mu\nu\alpha\beta}\left(Q,k_{1};\xi\right)\] \[+W_{\nu\mu\alpha\beta}\left(Q,k_{1};\xi\right)\big{]}\varepsilon_{\lambda_{\gamma}}^{*\beta}\left(k_{1}\right)\varepsilon_{ \lambda_{G}}^{*\mu\nu}\left(k_{2}\right)u_{\lambda_{2}}\left(p_{1}\right),\] \[\mathcal{M}_{c} = \frac{\kappa g_{f}e}{2}\bar{v}_{\lambda_{1}}\left(p_{2}\right) \left[\gamma_{\mu}\eta_{\nu\alpha}+\gamma_{\nu}\eta_{\mu\alpha}-2\eta_{\mu\nu} \gamma_{\alpha}\right]\] \[\times \varepsilon_{\lambda_{\gamma}}^{*\alpha}\left(k_{1}\right) \varepsilon_{\lambda_{G}}^{*\mu\nu}\left(k_{2}\right)u_{\lambda_{2}}\left(p_{1} \right).\]
Here, \(P\equiv(p_{1}-k_{1}-p_{2})=(k_{2}-2p_{2})\), \(Q\equiv-\left(p_{1}+p_{2}\right)\) and \(K\equiv(p_{1}+k_{1}-p_{2})=(2p_{1}-k_{2})\). The fermions have spin states \(\lambda_{1},\lambda_{2}=\uparrow\) or \(\downarrow\), and the photon has polarization states \(\lambda_{\gamma}=\pm 1\). The graviton has polarization states \(\lambda_{G}=\pm 2,\,\pm 1\) and \(0\). We have chosen Feynman gauge with \(\xi=1\).
## III Results and conclusion
We first note that there are 40 combinations of outgoing helicities, together with the corresponding incoming states of the spinors: 16 each for the helicity-2 and helicity-1 modes, supplemented by 8 for the helicity-0 modes. The helicity-2 modes correspond to polarizations of the massless graviton and have no bad small-mass behaviour. The helicity-0 mode exhibits the worst growth with decreasing graviton mass due to two factors of \(\varepsilon_{0}^{\mu}\), where each factor grows as \(\mathcal{O}(1/M_{G})\). The total matrix element per helicity is the sum of the \(s\), \(t\), \(u\) and contact diagrams, which we therefore expand as a series in the mass of the graviton \(M_{G}\) to analyze if there are any divergences in the massless limit \(M_{G}\to 0\),
\[\mathcal{M}(s,\theta)=\sum_{\sigma\in\mathbb{Z}}M_{G}^{\sigma}\mathcal{M}_{\sigma}(s,\theta). \tag{27}\]
The full matrix element for these diagrams is nontrivial, and thus Mathematica [23] was employed to compute the matrix element for each polarization symbolically. We also observe that several polarization combinations vanish simply by helicity conservation and selection rules, as tabulated in the supplementary materials.
Suppose that we choose some polarization state to investigate, and interrogate the results from each Feynman diagram to determine the origin of the divergence. The leading divergent terms for the longitudinal polarization mode \((u,\bar{v},\gamma,G)=(\uparrow,\uparrow,+1,0)\) are shown in Table 1 for the \(s\), \(t\), \(u\) and contact diagrams. We notice that while each of the \(s\), \(t\) and \(u\) diagrams grows proportional to \((m_{f}/M_{G}^{2})\), as expected from power counting arguments, the sum vanishes identically, leading to regular behaviour in the limit as \(M_{G}\to 0\).
Scanning through every possible combination of the helicities, we find that the divergences in each channel exactly cancel when all channels are summed8. Therefore, the leading order term in the limit as \(M_{G}\to 0\) for all polarization combinations is a constant, including the scalar, vector and longitudinal polarizations of the graviton. Thus, no divergence persists once all diagrams are summed and hence \(\sigma\geq 0\) in Eq. (27).
Footnote 8: It is reasonable to question if this result also holds when the amplitude is squared. The Supplementary Material shows that L’Hopital’s rule holds for arbitrary powers and complex numbers. Therefore, if the matrix element is regular in the limit as \(M_{G}\to 0\), the matrix element squared will also be regular in the same limit by L’Hopital’s rule.
Squaring the amplitude, we find no divergences in the limit as the graviton becomes massless, \(M_{G}\to 0\). The leading-order term in this limit is constant with respect to the graviton mass \(M_{G}\),
\[\lim_{M_{G}\to 0}|\mathcal{M}\left(s,\theta\right)|^{2}=\mathcal{O}\left(M_{G}^{ 0}\right). \tag{28}\]
Considering now the high energy limit with a finite graviton mass \(M_{G}\), the leading high energy contribution to the matrix element for the helicity-0 modes is proportional to the fermion mass and is given by9,
Footnote 9: Similar cancellations occur for the helicity-1 modes and are documented in the supplementary materials.
\[\lim_{s\to\infty}\sum_{\lambda_{G}=0}|\mathcal{M}\left(s,\theta\right)|=\frac {4\kappa g_{f}e}{\sqrt{3}}m_{f}\csc\theta+\mathcal{O}\left(s^{-1}\right). \tag{29}\]
The series expansion in the high energy limit \(\sqrt{s}\to\infty\) is a physically interesting one. For example, we observe no anomalous behaviour in the high energy limit for the unpolarized process10,
Footnote 10: In [22], the cross-section for \(f\bar{f}\to\gamma\ G_{KK_{m}}\) in the \(m_{f}\to 0\) limit is provided, showing no enhancements proportional to \(1/M_{KK}^{2}\). We agree with this result.
\[\lim_{s\to\infty}\sum_{\text{all spins}}|\mathcal{M}\left(s, \theta\right)|^{2}=\frac{\left(\kappa g_{f}e\right)^{2}}{6}\Bigg{\{}6s\left[ 3+\cos\left(2\theta\right)\right] \tag{30}\] \[\qquad\qquad+\Big{[}27M_{G}^{2}-14m_{f}^{2}-6\left[M_{G}^{2}+12m_{ f}^{2}\right]\cos\left(2\theta\right)\] \[\qquad\qquad+3\left[M_{G}^{2}+2m_{f}^{2}\right]\cos\left(4\theta \right)\Big{]}\sec^{2}\theta+\mathcal{O}\left(s^{-1}\right)\Bigg{\}}.\]
We next attempt to understand if there are underlying symmetry arguments that enforce the cancellation of the terms proportional to powers of \(1/M_{G}\). Pathologies in massive gravity theories come primarily from internal propagators [9]. Since none appear at tree-level in the process of interest, it is interesting to contemplate whether some QED-like Ward identity might effectively survive in this situation. The inclusion of an external graviton source in QED does not alter the global \(U(1)\) symmetry and so a conserved current will result. It seems reasonable to anticipate that an effective QED Ward identity might emerge in a careful treatment11, but an attempt at a formal proof of this is left for future work. We have directly verified for our amplitudes that we do have an effective Ward identity operating since we find
Footnote 11: For a derivation of the Ward identity, see for example [24].
\[k_{1}^{\alpha}\mathcal{M}_{\alpha}=0, \tag{31}\]
where the quantity \(\mathcal{M}_{\alpha}\) is defined such that \(\mathcal{M}\equiv\mathcal{M}_{\mu\nu\alpha}\varepsilon^{\alpha}\left(k_{1} \right)\varepsilon^{\mu\nu}\left(k_{2}\right)=\mathcal{M}_{\alpha}\varepsilon ^{\alpha}\left(k_{1}\right)\). This ensures that all contributions that grow as powers of \(1/M_{G}\) in individual diagrams cancel out for any given process.
While we have only explicitly calculated the case of \(f\bar{f}\to G\gamma\), the above results will also apply to other tree-level amplitudes related by crossing symmetry such as \(\gamma f\to Gf\), \(Gf\to\gamma f\), \(\gamma\bar{f}\to G\bar{f}\), \(f\to fG\gamma\) and so on.
The above results will also survive at tree-level when a gluon replaces the photon in the external leg for any of these processes, where the only difference will be the replacement of the electromagnetic coupling by the strong
| Diagram | Coefficient of \(s\left(\kappa g_{f}e/2\sqrt{3}\right)\left(m_{f}/M_{G}^{2}\right)\sin\theta\) |
| --- | --- |
| \(\mathcal{M}_{t}\) | \(1+\cos\theta\sqrt{1-4m_{f}^{2}/s}\) |
| \(\mathcal{M}_{u}\) | \(1-\cos\theta\sqrt{1-4m_{f}^{2}/s}\) |
| \(\mathcal{M}_{s}\) | \(-2\) |
| \(\mathcal{M}_{c}\) | \(0\) |
| \(\sum\mathcal{M}\) | \(0\) |

Table 1: The cancellations for a helicity-0 external graviton, \((u,\bar{v},\gamma,G)=(\uparrow,\uparrow,+1,0)\), are presented. Note that \(\mathcal{M}_{t,u,s,c}\) represent the matrix element contributions for the diagrams depicted in Fig. 1.
coupling constant. We note, however, that this cancellation breaks down when considering two massive graviton emissions, i.e., a process like \(f\bar{f}\to GG\), due to the presence of an \(s\)-channel diagram with a massive graviton in the internal propagator. In this case, there is no mechanism by which the cancellation can take place in a theory of massive gravity [25]. In KK theories, by contrast, a sum over all modes in the internal propagator restores unitarity due to the higher dimensional diffeomorphism invariance.
Therefore, we have demonstrated that there are no enhancements in the limit as \(M_{G}\to 0\) in the matrix elements of massive graviton-photon production with initial fermion states, regardless of whether the fermion is massive or not, contrary to claims in [1]. Consequently, the dark matter scenario for which the authors claim large enhancements in the velocity-averaged cross-section appears inconsistent with our calculation. In passing, we note that in [12], Section 3.4 seems to claim that there may be an enhancement at the cross-section level for the process with a radion external state in a full KK theory for a finite fermion mass. It would be a worthwhile exercise to re-evaluate this result given our findings here, but it is beyond the scope of this paper.
**Acknowledgements** JAG acknowledges the support he has received for his research through the provision of an Australian Government Research Training Program Scholarship. Support for this work was provided by the University of Adelaide and the Australian Research Council through the Centre of Excellence for Dark Matter Particle Physics (CE200100008). DS acknowledges the Mainz Institute of Theoretical Physics workshop 'Towards the Next Fundamental Scale of Nature: New Approaches in Particle Physics and Cosmology", where this project originated. DS and JAG thank Seung J. Lee and Giacomo Cacciapaglia for illuminating conversations. DS also thanks R. Sekhar Chivukula, Xing Wang and Kirtimaan Mohan for the discussions.
|
2301.12332 | Towards Vision Transformer Unrolling Fixed-Point Algorithm: a Case Study
on Image Restoration | The great success of Deep Neural Networks (DNNs) has inspired the algorithmic
development of DNN-based Fixed-Point (DNN-FP) for computer vision tasks. DNN-FP
methods, trained by Back-Propagation Through Time or computing the inaccurate
inversion of the Jacobian, suffer from inferior representation ability.
Motivated by the representation power of the Transformer, we propose a
framework to unroll the FP and approximate each unrolled process via
Transformer blocks, called FPformer. To reduce the high consumption of memory
and computation, we come up with FPRformer by sharing parameters between the
successive blocks. We further design a module to adapt Anderson acceleration to
FPRformer to enlarge the unrolled iterations and improve the performance,
called FPAformer. In order to fully exploit the capability of the Transformer,
we apply the proposed model to image restoration, using self-supervised
pre-training and supervised fine-tuning. 161 tasks from 4 categories of image
restoration problems are used in the pre-training phase. Hereafter, the
pre-trained FPformer, FPRformer, and FPAformer are further fine-tuned for the
comparison scenarios. Using self-supervised pre-training and supervised
fine-tuning, the proposed FPformer, FPRformer, and FPAformer achieve
competitive performance with state-of-the-art image restoration methods and
better training efficiency. FPAformer employs only 29.82% of the parameters used in
SwinIR models, and provides superior performance after fine-tuning. To train
these comparison models, it takes only 26.9% of the time used for training SwinIR
models. It provides a promising way to introduce the Transformer in low-level
vision tasks. | Peng Qiao, Sidun Liu, Tao Sun, Ke Yang, Yong Dou | 2023-01-29T02:59:14Z | http://arxiv.org/abs/2301.12332v1 | # Towards Vision Transformer Unrolling Fixed-Point Algorithm: a Case Study on Image Restoration
###### Abstract
The great success of Deep Neural Networks (DNNs) has inspired the algorithmic development of DNN-based Fixed-Point (DNN-FP) for computer vision tasks. DNN-FP methods, trained by Back-Propagation Through Time or computing the inaccurate inversion of the Jacobian, suffer from inferior representation ability. Motivated by the representation power of the Transformer, we propose a framework to unroll the FP and approximate each unrolled process via Transformer blocks, called FPformer. To reduce the high consumption of memory and computation, we come up with FPRformer by sharing parameters between the successive blocks. We further design a module to adapt Anderson acceleration to FPRformer to enlarge the unrolled iterations and improve the performance, called FPAformer. In order to fully exploit the capability of the Transformer, we apply the proposed model to image restoration, using self-supervised pre-training and supervised fine-tuning. 161 tasks from 4 categories of image restoration problems are used in the pre-training phase. Hereafter, the pre-trained FPformer, FPRformer, and FPAformer are further fine-tuned for the comparison scenarios. Using self-supervised pre-training and supervised fine-tuning, the proposed FPformer, FPRformer, and FPAformer achieve competitive performance with state-of-the-art image restoration methods and better training efficiency. FPAformer employs only 29.82% of the parameters used in SwinIR models, and provides superior performance after fine-tuning. To train these comparison models, it takes only 26.9% of the time used for training SwinIR models. It provides a promising way to introduce the Transformer in low-level vision tasks.
Image restoration pre-training, Vision Transformer, fixed-point, unrolling.
## I Introduction
The popularity and efficiency of Deep Neural Network (DNN) techniques have inspired DNN-based routines for the Fixed-Point (FP) method to handle optimization problems in computer vision [1, 2, 3], decision-making [4] and other domains, achieving remarkable performance thanks to hardware advantages (like GPU computing) and human expertise (like hyper-parameter tuning and network architecture design).
Conventional DNN-FP methods that directly unroll the FP via Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs) are trained by Back-Propagation (BP) [5, 6] or Back-Propagation Through Time (BPTT) [7, 8]. These methods have limited representation ability [5, 6, 9] and suffer from high memory consumption [10] and gradient vanishing/exploding issues [11, 12, 13, 14] as the unrolled depth increases. While Deep EQuilibrium (DEQ) [15, 16] methods unroll the FP with implicit depth and seek the equilibrium point of the FP, they are trained by computing the inverse of the Jacobian of the loss w.r.t. the equilibrium point. To alleviate the heavy computational burden of inverting the Jacobian, an inexact gradient is proposed. However, it yields an undesirable solution.
With the great success of Transformer-based models in Natural Language Processing (NLP) [17, 18, 19, 20, 21, 22] and Computer Vision (CV) [23, 24, 25, 26, 27, 28, 29, 30], it has been shown that Transformer-based models are suitable for modeling sequential relations with powerful representations. Motivated by this fact, we propose to unroll the FP and approximate each unrolled process via a Transformer, called FPformer. Nevertheless, Transformer-based methods increase the consumption of memory and computation.
To handle this issue, we investigate parameter sharing [18] in FPformer, called FPRformer. In this setting, the successive blocks in the Transformer share their parameters, resulting in fewer trainable parameters while maintaining the number of unrolled iterations.
Based on our analysis in Section III-C, we further apply Anderson acceleration [31, 32] to FPRformer via a simplified ConvGRU [13, 33] module to enlarge the unrolled iterations, called FPAformer.
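For reference, classical (type-II) Anderson acceleration for a generic fixed-point map can be sketched as below; FPAformer replaces the least-squares mixing step with a learned, simplified ConvGRU module, and the toy map and parameter values here are illustrative assumptions only.

```python
import numpy as np

def anderson(f, z0, m=5, max_iter=100, tol=1e-8):
    """Minimal Anderson acceleration for z = f(z) on 1-D numpy arrays."""
    zs, fs = [z0], [f(z0)]
    gs = [fs[0] - zs[0]]                 # residuals g_k = f(z_k) - z_k
    for _ in range(max_iter):
        if np.linalg.norm(gs[-1]) < tol:
            break
        mk = min(m, len(zs) - 1)
        if mk == 0:
            z_next = fs[-1]              # plain Picard step
        else:
            F = np.stack(fs[-(mk + 1):], axis=1)
            G = np.stack(gs[-(mk + 1):], axis=1)
            dF, dG = np.diff(F, axis=1), np.diff(G, axis=1)
            # Mixing coefficients from a small least-squares problem
            gamma, *_ = np.linalg.lstsq(dG, G[:, -1], rcond=None)
            z_next = F[:, -1] - dF @ gamma
        zs.append(z_next)
        fs.append(f(z_next))
        gs.append(fs[-1] - zs[-1])
    return zs[-1], len(zs) - 1

# Toy contraction: Anderson typically converges in far fewer steps
# than the plain iteration.
z_star, iters = anderson(lambda z: 0.5 * np.cos(z) + 0.3, np.zeros(4))
print(iters, np.linalg.norm(0.5 * np.cos(z_star) + 0.3 - z_star))
```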
To verify the effectiveness of the proposed FPformer and its variants, we apply them to a set of image restoration tasks as a general image restoration framework. In order to fully exploit the capability of the Transformer, we train FPformer, FPRformer, and FPAformer using self-supervised pre-training and supervised fine-tuning, as widely used in NLP and high-level vision tasks. In the self-supervised pre-training, fixed-point finding for solving image restoration problems provides a natural interpretation of general image restoration, and thus serves as the self-supervised pre-training problem. We use 161 tasks from 4 categories of image restoration problems to pre-train the proposed FPformer, FPRformer, and FPAformer, namely Gaussian denoising in grayscale and color space (noise levels ranging from 0 to 75), single image super-resolution (scale factors 2, 3, 4, and 8), and JPEG deblocking (quality factors 10, 20, 30, 40, and 50). During the supervised fine-tuning, the pre-trained FPformer, FPRformer, and FPAformer are further fine-tuned for a specific comparison scenario, e.g., Gaussian denoising in color space with noise level \(\sigma=25\). Using self-supervised pre-training and supervised fine-tuning, the proposed FPformer, FPRformer, and FPAformer achieve performance competitive with state-of-the-art image restoration methods at better efficiency, as shown in Figure 1, providing a promising way to introduce the Transformer into low-level vision tasks.

Fig. 1: The number of parameters vs SRx4 performance in terms of PSNR on Set5. The **size of markers** is proportional to the number of parameters in each method. Our proposed methods with \({}^{\dagger}\) are fine-tuned for SRx4, while ones with \({}^{*}\) are pre-trained.
## II Related works
### _Fixed-Point via DNNs_
The fixed-point problem is formulated as
\[z^{*}=\mathcal{F}(z^{*}). \tag{1}\]
The fixed-point finding in Algorithm 1 (detailed in the supplementary materials) generates a series \(\{z_{t}\}_{t=1}^{T}\) by successively applying the contraction mapping \(\mathcal{F}(.)\), given an initial point \(z_{0}\). Focusing on the states \(z_{t}\), we can summarize Algorithm 1 as
\[z_{0}\overset{\mathcal{F}(.)}{\longrightarrow}z_{1}\overset{\mathcal{F}(.)}{ \longrightarrow}\cdots\overset{}{\longrightarrow}z_{T-1}\overset{\mathcal{F}(.) }{\longrightarrow}z_{T}. \tag{2}\]
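For concreteness, here is a minimal sketch of the plain fixed-point iteration that Algorithm 1 describes (the exact algorithm, including its stopping rule, is in the paper's supplementary materials; the tolerance name `eps` and the toy contraction below are our own illustration):

```python
import numpy as np

def fixed_point_find(F, z0, T=50, eps=1e-6):
    """Iterate z <- F(z) until successive iterates are eps-close or T steps pass."""
    z = z0
    for t in range(T):
        z_next = F(z)
        if np.linalg.norm(z_next - z) < eps:
            return z_next, t + 1
        z = z_next
    return z, T

# F(z) = 0.5*z + 1 is a contraction with fixed point z* = 2.
z_star, iters = fixed_point_find(lambda z: 0.5 * z + 1.0, np.zeros(3))
print(z_star, iters)  # -> approximately [2. 2. 2.]
```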
Conventional DNN-FP methods directly unroll the FP via Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), i.e., they parameterize \(\mathcal{F}(.)\) with \(\mathcal{F}_{\theta}\). These methods either have limited representation ability [12, 13, 14] or suffer from high memory consumption [15] and gradient vanishing/exploding issues [16, 17, 18, 19] as the number of unrolled iterations increases. While Deep EQuilibrium (DEQ) [11] methods unroll the FP with implicit depth and seek the equilibrium point of the FP, they are trained by computing the inversion of the Jacobian of the loss w.r.t. the equilibrium point. DEQ and its variants [12] suffer from heavy computation, and the inexact gradient can lead to unsatisfactory solutions. In [10], DEQ is applied to solving inverse problems in imaging, where \(\mathcal{F}(.)\) is a specific proximal operator further parameterized via \(\theta\).
In the fixed-point finding method, the iteration process in Algorithm 1 requires many iterations, i.e., a large \(T\), to reach a feasible equilibrium point \(z^{*}\). One should also note that the choice of the hyper-parameters \(\epsilon\) and \(T\) is vitally important for achieving good performance with \(\mathcal{F}_{\theta}\), whose contraction property is not well guaranteed. When applying DNN-FP methods, repeating a modern DNN several times is computation- and time-consuming. For example, DEQ and its variants still consume as much GPU memory as a modern DNN, and require even more computation to run the fixed-point finding algorithm to achieve reasonable performance.
Anderson acceleration ([16, 15], AA) accelerates fixed-point finding by utilizing the previous \(m\) states \(\{z_{t-m+i}\}_{i=1}^{m}\) to estimate the next state \(z_{t+1}\), as shown in Algorithm 2 in the supplementary materials. In [13], Anderson acceleration is integrated into DEQ and parameterized via NNs. In order to explicitly combine the previous \(m\) states, the proposed AA module exploits a bottleneck-like architecture to produce real-valued weights; the previous \(m\) states therefore need to be buffered. The bottleneck-like network of [13] saves storage costs and benefits NLP, but it degrades image processing, because images are 2D data with rich spatial context, whereas natural language only requires 1D data.
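For reference, the following is a standard implementation of classical Anderson acceleration, the scheme the paper's Algorithm 2 refers to; it is our own sketch using the common least-squares formulation over residual differences, not the authors' code:

```python
import numpy as np

def anderson(F, z0, m=3, T=50, eps=1e-8):
    """Anderson acceleration: extrapolate the next iterate from the last m states."""
    zs, fs = [z0], [F(z0)]
    for _ in range(T):
        g = [f - z for f, z in zip(fs, zs)]      # residuals g_i = F(z_i) - z_i
        if np.linalg.norm(g[-1]) < eps:
            break
        k = len(zs)
        if k == 1:
            z_next = fs[-1]                      # plain fixed-point step
        else:
            dG = np.stack([g[i + 1] - g[i] for i in range(k - 1)], axis=1)
            dF = np.stack([fs[i + 1] - fs[i] for i in range(k - 1)], axis=1)
            gamma, *_ = np.linalg.lstsq(dG, g[-1], rcond=None)
            z_next = fs[-1] - dF @ gamma         # weighted combination of states
        zs.append(z_next); fs.append(F(z_next))
        zs, fs = zs[-(m + 1):], fs[-(m + 1):]    # buffer only the last m+1 states
    return zs[-1]

print(anderson(lambda z: 0.5 * z + 1.0, np.zeros(3)))  # converges in a few steps
```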
Fig. 2: The architecture of the proposed FPformer, FPRformer, and FPAformer. **CDN** and **GDN** mean Color Gaussian Denoising and Grayscale Gaussian Denoising, respectively. **JPEG** stands for image JPEG deblocking. **SISR** means Single Image Super-Resolution.
### _Image Restoration_
Conventional image restoration methods address various image degradation problems by minimizing the following energy function,
\[\mathcal{E}(u,f)=\mathcal{D}(u,f)+\lambda\mathcal{R}(u), \tag{3}\]
where \(\mathcal{D}(u,f)\) is the data term related to one specific image restoration problem, \(f\) is the degraded input image, and \(u\) is the restored image. Taking Gaussian denoising as an example, \(\mathcal{D}(u,f)=\frac{1}{2\sigma^{2}}\|u-f\|^{2}\), where \(\sigma\) is the noise level of the specific Gaussian denoising problem. \(\mathcal{R}(u)\) is the regularization term known as the image prior model [36, 37, 38, 39, 40]. Empirically, one can obtain a minimizer of Equation 3 via gradient descent, which can be reformulated as a fixed-point finding scheme when \(\mathcal{I}-\nabla\mathcal{E}\) is a contraction mapping.
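As a concrete illustration of this reformulation (our toy example, not the paper's code), a Tikhonov-style smoothness prior \(\mathcal{R}(u)=\|\nabla u\|^{2}\) yields a fixed-point iteration of the map \(u\mapsto u-\eta\nabla\mathcal{E}(u)\):

```python
import numpy as np

def grad_E(u, f, sigma, lam):
    """Gradient of E(u) = ||u-f||^2/(2*sigma^2) + lam*||grad u||^2 (periodic Laplacian)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    return (u - f) / sigma**2 - 2 * lam * lap

f = np.clip(np.random.rand(32, 32) + 0.1 * np.random.randn(32, 32), 0, 1)  # noisy "image"
u, sigma, lam, eta = f.copy(), 0.1, 0.05, 0.004   # eta small enough for contraction
for _ in range(200):
    u = u - eta * grad_E(u, f, sigma, lam)        # fixed point of I - eta*grad E
```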
Benefiting from machine learning methods, the above hand-crafted methods can be further boosted. For example, the diffusion models in [5, 9] are the learned counterparts of their conventional ones. With the rapid development of CNNs and Transformers, restoration methods are learned in a data-driven manner and provide very impressive performance in image denoising [6, 41], single image super-resolution [42, 43], JPEG deblocking [44, 41, 45], image deblurring [46, 47], etc.
Most CNN-based image restoration methods can be regarded as learning the mapping between the degraded input image \(f\) and its corresponding ground-truth image \(u_{gt}\). These CNN-based methods can therefore be seen as parameterizing only the very first iteration in Equation 2, i.e., \(z_{1}=\mathcal{F}_{\theta}(z_{0})\), where \(z_{0}\) is mapped from the degraded image \(f\) via \(S_{\phi}\), and \(z_{T}\big{|}_{T=1}\) is mapped to the restored image via \(U_{\psi}\). Note that both \(S_{\phi}\) and \(U_{\psi}\) are identity mappings in [6] and convolution operators in [27]. Due to the large number of parameters and low training efficiency, approximating each step \(z_{t}\) using CNNs is not common practice. On the contrary, TNRD [5] and its variants [9] are trained to approximate the unrolling of Equation 2 over a fixed number of iterations; in each iteration, the contraction mapping \(\mathcal{F}_{\theta_{t}}\) is a learned diffusion model parameterized by \(\theta_{t}\). However, the representation ability of TNRD is quite limited. So we need an effective way to approximate each step \(z_{t}\) in Algorithm 1.
### _Vision Transformer_
Multi-Head Self-Attention (MHSA), introduced in [17], has been widely used in natural language processing [19, 20, 21, 22]. In [23], MHSA was adopted in computer vision, resulting in vision transformers. Since then, vision transformers [23, 24, 25, 26] trained by self-supervised pre-training and supervised fine-tuning have achieved better performance than CNN models in high-level vision.
In low-level vision, represented by image restoration problems, the vision transformer architecture has been adapted [27, 28, 29, 30, 48] and also improves performance compared with CNN counterparts [6, 41]. These works train transformer models via supervised training for one specific task, or even one specific task setting. In [27, 30], the proposed methods were trained separately for Gaussian denoising with different noise levels and for single image super-resolution with different scales. In [28], IPT was trained for several image restoration tasks with an auxiliary, task-specific token. In [30], Restormer is proposed and provides a way to efficiently train for high-resolution image restoration problems. Therefore, self-supervised pre-training for image restoration is worth investigating as an efficient way to utilize a vision transformer.
## III Methodology
In this section, we first describe the proposed FPformer, which approximates the unrolling of the FP. To reduce memory consumption, we utilize parameter sharing between successive blocks in FPformer, called FPRformer. To boost the restoration performance, we design a module inspired by the Anderson acceleration algorithm, called FPAformer.
### _Unrolling FP with Transformers: FPformer_
As described above, DEQ and its variants benefit from implicitly infinite iterations to conduct fixed-point finding with a large \(T\) and small \(\epsilon\), but they require huge computational consumption to reach the FP when \(\mathcal{F}\) is approximated via modern DNNs. Directly learning the unrolling of the fixed-point finding over a fixed number of iterations, e.g., DnCNN with \(T=1\) or TNRD with a larger \(T\) below 10, is limited by the unrolled times \(T\) or the representation ability of \(\mathcal{F}_{\theta_{t}}\). We need an effective way to enlarge the unrolled times and strengthen the representation ability.
Having witnessed their success in NLP and CV, Transformer-based models are well suited to modeling sequential relations with powerful representations. Motivated by this fact, we propose to unroll the FP and approximate each unrolled step via a Transformer, called FPformer. To efficiently capture global information, we use Residual Swin Transformer Blocks (RSTB) as proposed in [27] to learn each contraction mapping in Equation 2, \(\mathcal{F}_{\theta_{t}},t=1,\cdots,T\):
\[z_{0}\overset{F_{\theta_{1}}}{\longrightarrow}z_{1}\longrightarrow\cdots \longrightarrow z_{T-1}\overset{F_{\theta_{T}}}{\longrightarrow}z_{T}. \tag{4}\]
The resulting architecture for image restoration is called FPformer and is shown in Figure 2 and Equation 4.
Naturally, the fixed-point finding in Equation 2 for minimizing Equation 3 is agnostic to the image restoration task: it simply tries to recover a degraded image \(f\) to a restored image \(u_{T}\) that is close to the ground-truth clean image \(u_{gt}\). Therefore, to fully explore the capability of the Transformer, we train FPformer on multiple image restoration problems.
The training of FPformer is formulated as
\[\begin{split}&\min_{\Theta}\sum_{s\in S}\mathcal{L}(u_{gt}^{s},u_{T}^ {s})\\ & z_{0}^{s}=S_{\phi}(f_{task}^{s})\\ & z_{T}^{s}=\mathcal{F}_{\theta_{T}}(\cdots(\mathcal{F}_{\theta_{1 }}(z_{0}^{s})))=\text{FPformer}_{\Theta}(z_{0}^{s})\\ & u_{T}^{s}=U_{\psi}(z_{T}^{s})\end{split} \tag{5}\]
where \(\Theta=\{\theta_{1},\cdots,\theta_{T}\}\) denotes the parameters of FPformer and \(\mathcal{L}\) is the loss function that measures the difference between the ground-truth image \(u_{gt}^{s}\) and the restored image \(u_{T}^{s}\). \(u_{gt}^{s},f_{task}^{s}\)
are generated from a specific sample in the dataset \(S\); details are described in Section IV. Following [27], both \(S_{\phi}\) and \(U_{\psi}\) are convolution operators. To be specific, \(S_{\phi}\) maps the degraded image \(f_{task}\in\mathbb{R}^{B\times H\times W\times 3}\) to the first state \(z_{0}\in\mathbb{R}^{B\times H\times W\times C}\), where \(B\) is the size of the minibatch, \(H\) and \(W\) are the height and width of the image (or patch), and \(C\) is the number of feature channels. \(U_{\psi}\) maps the last state \(z_{T}\in\mathbb{R}^{B\times H\times W\times C}\) to the restored image \(u_{T}\in\mathbb{R}^{B\times H\times W\times 3}\).
In FPformer, both \(S_{\phi}\) and \(U_{\psi}\) are shared among image restoration problems and specific task settings. To handle different upscaling factors in single image super-resolution, instead of upscaling the features via upsampling blocks, we upscale the downscaled images with the corresponding scaling factor when preparing the minibatch; details are discussed in Section IV. In SwinIR [27], \(S_{\phi}\) and \(U_{\psi}\) are task-specific convolutional operators, and dedicated upsampling modules are used for single image super-resolution. In IPT [28], \(S_{\phi}\) and \(U_{\psi}\) are also task-specific; only the Transformer blocks are shared. In Restormer [30], the degraded and restored images have the same resolution, which may explain why single image super-resolution is not discussed there. In summary, FPformer can be treated as a general image restoration solver. Detailed performance is discussed in Section IV.
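A skeletal PyTorch rendering of this architecture may help fix ideas; the convolutional `Block` below is only a stand-in for the actual RSTB blocks of [27], and all class and variable names are our own:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Stand-in for one RSTB block playing one contraction map F_{theta_t}."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.GELU(),
                                  nn.Conv2d(c, c, 3, padding=1))
    def forward(self, z):
        return z + self.body(z)

class FPformer(nn.Module):
    """S_phi lifts the degraded image to z_0, T blocks unroll Eq. 4, U_psi maps back."""
    def __init__(self, T=9, c=240):
        super().__init__()
        self.S_phi = nn.Conv2d(3, c, 3, padding=1)
        self.blocks = nn.ModuleList(Block(c) for _ in range(T))
        self.U_psi = nn.Conv2d(c, 3, 3, padding=1)
    def forward(self, f):
        z = self.S_phi(f)              # z_0
        for blk in self.blocks:        # z_t -> z_{t+1}
            z = blk(z)
        return self.U_psi(z)           # u_T, same spatial size as the input

u = FPformer()(torch.randn(1, 3, 48, 48))
```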
### _Sharing parameters: FPRformer_
In ALBERT [18], sharing parameters among Transformer blocks results in fewer model parameters and smaller model sizes. In this way, one can maintain the depth of the Transformer while regularizing the whole model, obtaining a small model that still provides competitive performance.
Inspired by this idea, we enforce successive \(N_{j}\) blocks among the \(T\) blocks of FPformer to share parameters, coined FPRformer. Therefore, we have \(\sum_{j=1}^{R}N_{j}=T\), where \(R\) is the number of unique RSTB blocks and \(N_{j}\) is the number of recurrences of the \(j\)-th unique RSTB block. The number of parameters is about \(R/T\) times that of FPformer with \(T\) RSTB blocks. Note that FPformer can be regarded as a special case of FPRformer with \(R=T\) and \(N_{j}=1,\forall j\in\{1,\cdots,R\}\).
We trained FPRformer with \(R=2\), \(R=3\) and \(R=T\), separately. The details of this ablation study are shown in Section IV-B.
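Continuing the sketch above, the sharing scheme only changes how the blocks are scheduled (again our illustration; the default `N=(1, 9)` mirrors the ablation setting \(R=2\), \(N_{1}=1\), \(N_{2}=T-1\)):

```python
class FPRformer(FPformer):
    """R unique blocks, the j-th applied N_j times: sum(N_j) = T unrolled steps."""
    def __init__(self, N=(1, 9), c=240):
        super().__init__(T=len(N), c=c)   # allocate only R = len(N) unique blocks
        self.N = N
    def forward(self, f):
        z = self.S_phi(f)
        for blk, n in zip(self.blocks, self.N):
            for _ in range(n):            # reuse the same parameters n times
                z = blk(z)
        return self.U_psi(z)
```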
### _Anderson Acceleration: FPAformer_
We present a theorem to characterize the performance of (4); details can be found in the supplementary materials.
**Theorem 1**: _[Informal] Suppose the trained model \((\theta_{t})_{1\leq t\leq T}\) fits \(\mathcal{F}\) well, and let \(z^{*}\) be the fixed point of \(\mathcal{F}\). Then_
\[\|z_{T}-z^{*}\|=\mathcal{O}\Big{(}\rho^{T}+\frac{\delta}{1-\rho}\Big{)} \tag{6}\]
_for some fixed \(0<\rho<1\) and \(\delta\geq 0\) reflecting how well the model fits (a smaller \(\delta\) indicates better fitting)._
Based on Theorem 1, we can immediately get two claims.
* As \(T\) increases, the bound on \(\|z_{T}-z^{*}\|\) gets smaller. This indicates that using a larger \(T\) makes the unrolling yield better results.
* When \(T=\frac{\ln\frac{1-\rho}{\delta}}{\ln\frac{1}{\rho}}\), it holds that \(\|z_{T}-z^{*}\|=\mathcal{O}(\frac{\delta}{1-\rho})\) (see the short derivation below). This means that once \(T\) is fixed at this order, the performance is determined only by how well the model fits.
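The balancing value of \(T\) in the second claim can be recovered by equating the two error terms in (6) (our reconstruction of the informal statement):

\[\rho^{T}=\frac{\delta}{1-\rho}\iff T\ln\rho=\ln\frac{\delta}{1-\rho}\iff T=\frac{\ln\frac{1-\rho}{\delta}}{\ln\frac{1}{\rho}},\]

so that \(\|z_{T}-z^{*}\|=\mathcal{O}\big{(}\rho^{T}+\frac{\delta}{1-\rho}\big{)}=\mathcal{O}\big{(}\frac{\delta}{1-\rho}\big{)}\).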
Following the above two claims, we conclude that the performance of FPRformer may lag behind that of FPformer because of its parameter-sharing setting. To boost the performance of FPRformer while still enjoying parameter sharing, we design a module analogous to Anderson acceleration that explicitly enlarges the number of iterations and makes \(\delta\) smaller.
As described in Algorithm 2, the Anderson acceleration algorithm accelerates the FP using the previous states. It is difficult to directly translate Algorithm 2, especially lines 5-7, into CNNs or RNNs, because the forward and backward computation involves the previous outcomes of the FP and causes nested dependencies; this takes more GPU memory and produces more complicated computational graphs, which greatly harms GPU performance. To this end, we simplify the computation in Algorithm 2 into a recurrent module depending on the current state \(z_{t}\) and a hidden state \(h_{t}\), where the hidden state \(h_{t}\) maintains and summarizes the previous \(m\) states \(\{\mathcal{G}_{t-m_{t}+1},\cdots,\mathcal{G}_{t}\}\). In this way, the simplified Anderson acceleration of the fixed-point finding algorithm is formulated as
\[\begin{split}\hat{z}_{t+1}&=\mathcal{F}_{\theta}(z_ {t}),\\ z_{t+1},h_{t+1}&=\mathcal{H}_{\mu}(z_{t},\hat{z}_{t+ 1},h_{t}),\end{split} \tag{7}\]
where \(\mathcal{H}_{\mu}\) is parameterized by \(\mu\). Line 6 of the Anderson acceleration algorithm (Algorithm 2) determines the weights of the previous states, and Line 7 combines these states with the weights to update the next state. In our simplified version (7), the weight calculation and combination can be summarized as a GRU [13]. Following [33], we adopt a ConvGRU module to learn \(\mathcal{H}_{\mu}\). The proposed simplified ConvGRU module is as follows,
\[\begin{split}\mathcal{G}_{t}&=\mathbf{Conv}(\hat{z} _{t+1}-z_{t}),\\ r_{h}&=\sigma(\mathbf{Conv}(\mathcal{G}_{t})+ \mathbf{Conv}(h_{t})),\\ r_{z}&=\sigma(\mathbf{Conv}(r_{h})),\\ h_{t+1}&=\mathbf{Norm}((1-r_{h})\odot h_{t}+r_{h} \odot\mathbf{Conv}(\mathcal{G}_{t})),\\ z_{t+1}&=\mathbf{Norm}((1-r_{z})\odot\hat{z}_{t+ 1}+r_{z}\odot\mathbf{Conv}(\mathcal{G}_{t})),\end{split} \tag{8}\]
To mimic the \(m\) previous states in Algorithm 2, the hidden state \(h_{t}\in\mathbb{R}^{B\times H\times W\times mC}\) is \(m\) times larger than \(z_{t}\in\mathbb{R}^{B\times H\times W\times C}\). \(\mathbf{Conv}\) denotes a 2D convolutional layer, \(\sigma(.)\) is the sigmoid function, and \(\mathbf{Norm}\) is layer normalization [49]. Layer normalization is commonly used in Transformers and benefits the convergence of training; we therefore apply \(\mathbf{Norm}\) to the outputs \(h_{t+1}\) and \(z_{t+1}\). \(\odot\) is the element-wise product. \(h_{0}\) is initialized with all zero values.
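A direct PyTorch transcription of Equation (8) might look as follows; each \(\mathbf{Conv}\) in (8) is realized as its own layer, and `GroupNorm(1, .)` stands in for the per-position layer normalization (the layer naming and these implementation choices are our assumptions, not the authors' code):

```python
import torch
import torch.nn as nn

class SimplifiedConvGRU(nn.Module):
    """Sketch of H_mu in Eq. (8): z has C channels, h has m*C channels."""
    def __init__(self, c=240, m=3, ks=1):
        super().__init__()
        p = ks // 2
        self.conv_g  = nn.Conv2d(c, c, ks, padding=p)           # G_t = Conv(z_hat - z)
        self.conv_hg = nn.Conv2d(c, m * c, ks, padding=p)       # Conv(G_t) inside r_h
        self.conv_hh = nn.Conv2d(m * c, m * c, ks, padding=p)   # Conv(h_t) inside r_h
        self.conv_z  = nn.Conv2d(m * c, c, ks, padding=p)       # Conv(r_h) inside r_z
        self.conv_uh = nn.Conv2d(c, m * c, ks, padding=p)       # Conv(G_t) in h-update
        self.conv_uz = nn.Conv2d(c, c, ks, padding=p)           # Conv(G_t) in z-update
        self.norm_h = nn.GroupNorm(1, m * c)                    # stand-in for Norm
        self.norm_z = nn.GroupNorm(1, c)
    def forward(self, z, z_hat, h):
        g = self.conv_g(z_hat - z)
        r_h = torch.sigmoid(self.conv_hg(g) + self.conv_hh(h))
        r_z = torch.sigmoid(self.conv_z(r_h))
        h = self.norm_h((1 - r_h) * h + r_h * self.conv_uh(g))
        z = self.norm_z((1 - r_z) * z_hat + r_z * self.conv_uz(g))
        return z, h
```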
## IV Experiments
### _Experimental Setup_
**Image restoration tasks in training.** We use commonly used image restoration tasks in training, i.e., color and gray Gaussian denoising, single image super-resolution (SISR), and image JPEG deblocking. **For color and gray Gaussian denoising**, we obtain noisy images by adding additive white Gaussian noise with noise levels \(\sigma\) ranging from 0 to 75. **For
SISR**, we downscale and upscale\({}^{1}\) images with scale factors 2, 3, 4, and 8. Note that instead of upscaling the features \(z_{t}\) via upsampling blocks in FPformer, we upscale the downscaled images \(f\) with the corresponding scaling factor, so the resulting images \(u_{T}\) have the same spatial resolution as the ground-truth images \(u_{gt}\). **For image JPEG deblocking**, we generate low-quality images using a JPEG encoder with quality factors \(q\)=10, 20, 30, 40, 50. Therefore, we train FPformer on 161 tasks from 4 categories of image restoration problems simultaneously. FPRformer and FPAformer are trained in the same setting.
**Training datasets**. Following [27, 42], we train FPformer, FPRformer, and FPAformer on the above image restoration tasks using randomly cropped patches from 800 images in DIV2K [50], 2560 images in Flickr2K [51], 300 images in BSD500 [52], and all images in WED [53]. For DIV2K, we use the first 800 images; for Flickr2K, the first 2560 images; for BSD500, all 300 images in the training set; for WED, the whole dataset.
Footnote 1: We use code from [https://github.com/fatheral/matlab_imresize/](https://github.com/fatheral/matlab_imresize/).
**Pre-Training.** All training of FPformer, FPRformer, and FPAformer is run on a server with 8 NVIDIA V100 GPUs. The batch size is 16. The patch sizes are \(48\times 48\), \(72\times 72\), and \(120\times 120\) (window size \(8\times 8\)). The RSTB blocks are set as follows. In FPformer, the number of RSTB blocks is 9 (\(T=9\)); in FPRformer and FPAformer, the influence of \(T\) is discussed in Section IV-B. In each RSTB block, the number of Swin Transformer Layers is 6 and the channel number \(C\) is 240. The number of MHSA heads is 8.
When preparing a training minibatch, we first sample clean images from the above training datasets, crop them into patches (with the above patch sizes, e.g., \(48\times 48\)), and augment these patches. Then, for each augmented patch, we randomly choose an image restoration problem from color and gray Gaussian denoising, SISR, and image JPEG deblocking. For the chosen problem, we randomly choose the task setting, i.e., the noise level \(\sigma\), the scale, or the quality factor \(q\), and apply the chosen image degradation with this setting to the augmented patch. The resulting degraded patches and the augmented clean patches form the training pairs.
Note that the degraded patches in training pairs are generated on-the-fly instead of being generated offline as in [27]. We augment the training images using color-space conversion, flipping, rotation, and other data augmentation methods as in [27].
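The on-the-fly pair generation can be sketched as follows (our paraphrase of the recipe using PIL/NumPy; the function name and exact sampling calls are ours):

```python
import io
import random
import numpy as np
from PIL import Image

def degrade(patch):
    """Sample a task and setting, then degrade one clean PIL patch accordingly."""
    task = random.choice(["gauss_color", "gauss_gray", "sisr", "jpeg"])
    if task.startswith("gauss"):
        sigma = random.uniform(0, 75) / 255.0
        arr = np.asarray(patch, np.float32) / 255.0
        if task == "gauss_gray":                 # grayscale kept in 3 channels
            arr = arr.mean(axis=2, keepdims=True).repeat(3, axis=2)
        arr = np.clip(arr + sigma * np.random.randn(*arr.shape), 0, 1)
        return Image.fromarray((arr * 255).astype(np.uint8))
    if task == "sisr":                           # downscale, then upscale back
        s = random.choice([2, 3, 4, 8])
        small = patch.resize((patch.width // s, patch.height // s), Image.BICUBIC)
        return small.resize(patch.size, Image.BICUBIC)
    buf = io.BytesIO()                           # JPEG deblocking input
    patch.save(buf, format="JPEG", quality=random.choice([10, 20, 30, 40, 50]))
    return Image.open(buf)
```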
The initial learning rate is 2e-4 and is halved at [500K, 750K, 900K, 950K, 1000K] steps. We use the Adam [54] optimizer with \(\beta_{1}=0.9\) and \(\beta_{2}=0.99\). The training loss \(\mathcal{L}(u_{gt},u_{T})\) in Equation 5 is the Charbonnier loss [55].
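Both ingredients are standard and can be written down directly; the sketch below (our code, with a placeholder model) uses the usual Charbonnier form and a milestone scheduler for the halving schedule:

```python
import torch

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: a smooth, differentiable variant of the L1 loss."""
    return torch.sqrt((pred - target) ** 2 + eps**2).mean()

model = torch.nn.Conv2d(3, 3, 3, padding=1)   # placeholder for FPformer
opt = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.99))
sched = torch.optim.lr_scheduler.MultiStepLR(
    opt, milestones=[500_000, 750_000, 900_000, 950_000, 1_000_000], gamma=0.5)
```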
**Fine-Tuning.** Once the above pre-training of FPformer, FPRformer, and FPAformer is done, each of them is capable of restoring degraded images from those image restoration tasks. To boost the performance on a specific image restoration problem or task setting, we further fine-tune FPformer, FPRformer, and FPAformer. The initial learning rate and schedule are discussed in Section IV-B.
**Evaluation.** We pad the image in testing so that the image size is a multiple of the window size. We compare the proposed FPformer, FPRformer, and FPAformer with previous state-of-the-art methods in color and gray Gaussian denoising, SISR, and image JPEG deblocking. The performance metrics are PSNR and SSIM [27]. The evaluation details are given in Section IV-C.
### _Ablation Studies_
In ablation studies, we investigate the influence of key hyper-parameters on the performance of FPformer, FPRformer, and FPAformer. The RSTB block is set as above. We compare the performance on Set5 [56] in SISR with scale 4 (SRx4).
**The recurrent times \(T\) and patch size.** We trained FPRformer with \(R=2\) and \(N_{1}=1\), \(N_{2}=T-1\) for different \(T\) (6, 8, 10, and 14). We also trained FPformer with the default RSTB setting for the different patch sizes listed above. The results are summarized in Table I.
The number of parameters in FPformer is the same for all patch sizes, about 27.8M; the number of parameters in FPRformer is 6.5M. Increasing the patch size in training extends the image context, which in turn benefits the performance of FPformer and its variants. We therefore believe the performance of FPformer and its variants would continue to increase with larger patch sizes, as with the progressive training in [30]. Due to limited GPU memory, we were not able to train FPformer and its variants with patch sizes larger than \(120\times 120\).
As \(T\) increases, the performance of FPformer, FPRformer, and FPAformer increases, as shown in Table I and Table II. The performance of FPAformer improves as \(T\) increases, even surpassing FPformer. One can see that the performance of FPRformer and FPAformer is on par with FPformer, while their parameter counts are reduced to \(23.4\%\) and \(30.2\%\) of FPformer's, respectively.
**The number of previous states \(m\) in \(\mathcal{H}_{\mu}\).** We trained FPAformer with different kernel sizes \(ks\) and various numbers of previous states \(m\). The results are summarized in the upper part of Table II. When \(ks=1\), the performance peaks at \(m=3\) as \(m\) increases; \(m\) is set to mimic the number of previous states used in Anderson acceleration, and too large or too small an \(m\) harms the performance. When \(m=3\), the performance degrades as \(ks\) increases. This indicates that spatial fusion via a larger convolutional kernel is not as important as temporal fusion via a larger \(m\).
**Contraction mapping.** Parameter sharing in FPRformer and FPAformer benefits the memory footprint of the models. It is natural to ask why FPRformer and FPAformer use fewer parameters while providing competitive performance, and what else we can learn from this setting.
As shown in Figure 3, the \(l_{2}\) distances between the input \(z_{t}\) and output \(z_{t+1}\) of each layer of FPformer become larger, whereas those of FPRformer and FPAformer narrow down without reaching zero. Meanwhile, the minimum \(l_{2}\) distance of FPAformer is smaller than that of FPRformer. Intuitively, both FPRformer and FPAformer appear to be contractive, which may indicate that FPRformer and FPAformer behave as if seeking the fixed point of the image restoration problem. More curves for Gaussian denoising and image JPEG deblocking are shown in the supplementary materials.
We further investigate the behavior of FPRformer when run for more iterations, \(T=15\), than its training setting, \(T=10\). The results are shown in Figure 4. We ran FPRformer on color Gaussian denoising with \(\sigma\) ranging from 5 to 75 in steps of 5, with the noise added to images in Set5, and averaged the denoising performance for each \(\sigma\) over the images in Set5. Figure 4 indicates that the peak performance is achieved around \(T=10\pm 2\); as the iteration count exceeds \(T=10\), the performance stays almost unchanged. Considering both phenomena, one can shorten \(T\) in FPRformer and FPAformer to balance image restoration performance against inference speed.
These ablation studies reveal that the ability to provide almost the same performance while sharing parameters among blocks comes from the increase of \(T\): as \(T\) gets larger, the distance in the final (\(T\)-th) layer gets smaller and the performance gets better. FPRformer appears to be seeking the contraction point in the feature space. As mentioned in [15], deep equilibrium models implicitly seek the equilibrium point of Equation 2; in contrast, FPformer, FPRformer, and FPAformer are trained to find the equilibrium point of Equation 2 in \(T\) steps, each learned via Transformer blocks. Meanwhile, our proposed methods avoid the computation or approximation of the inverse Jacobian in [15].
**The learning rate schedule in fine-tuning.** After pre-training, one can further fine-tune FPformer, FPRformer, and FPAformer for a specific image restoration task. Fine-tuning also provides an effective way to train, instead of training a model for each task from scratch. We fine-tuned the pre-trained FPformer (patch size 48) using a small learning rate, e.g., 5e-5, for another 100K steps; the performance improves from 32.52dB to 32.57\(\sim\)32.59dB, a quite marginal improvement. We also fine-tuned the pre-trained FPformer (patch size 48) with the large learning rate of 2e-4 used in pre-training, halved at [5K, 105K, 185K, 245K, 285K] in the additional steps; the performance improves from 32.52dB to 32.70dB.

Fig. 3: The \(l_{2}\) distances between the input \(z_{t}\) and output \(z_{t+1}\) of each layer in SRx4. From left to right, the columns show FPformer, FPRformer, and FPAformer. The first row is for the pre-trained FPformer and its variants; the second row is for the models fine-tuned for SRx4.

Fig. 4: The behavior of FPRformer when run for more iterations, \(T=15\), than its training setting, \(T=10\).
In fine-tuning the pre-trained FPformer with patch size 120, we tried three learning rate schedules starting at 2e-4. The results are summarized in Figure 5. The 5-5-3-2 schedule means the learning rate is halved at [5K, 55K, 105K, 135K, 155K]; it is a quick fine-tuning strategy, and the improvement is about 0.1dB. In the 10-8-6-4 schedule, the learning rate is halved at [5K, 105K, 185K, 245K, 285K]; the fine-tuned models are on par with SwinIR. In the 20-16-12-8 schedule, the learning rate is halved at [5K, 205K, 365K, 485K, 565K], which takes about half the steps of pre-training; the performance exceeds SwinIR by 0.6dB, reaching 32.98dB. It seems that pre-training provides a better initialization, and a large initial learning rate with long fine-tuning benefits the performance as well. To balance performance and fine-tuning time, we use the 10-8-6-4 schedule to fine-tune FPformer, FPRformer, and FPAformer; in this schedule, the fine-tuning runtime is about one quarter of pre-training, and color and grayscale Gaussian denoising are jointly fine-tuned. Fine-tuning these pre-trained models for the 13 comparison tasks takes \(26.9\%\) of the time of training each task-specific model from scratch. The pre-train + fine-tune strategy thus provides effective training of Vision Transformer-based image restoration models, saving energy.

Fig. 5: The learning rate schedule in fine-tuning. Fine-tuning is performed on SRx4, and results are summarized on Set5 with PSNR and SSIM.
### _Comparison with state-of-the-art_
In the following comparisons, we choose FPformer with patch size 120; FPRformer\({}^{2}\) with \(T=10\) and patch size 120; and FPAformer with \(T=10\), patch size 120, \(ks=1\), \(m=3\). The pre-trained and fine-tuned FPformer, FPRformer, and FPAformer are compared with other methods on SISR (x2, x3, and x4), color and grayscale Gaussian denoising (\(\sigma\)=15, 25, 50), and image JPEG deblocking (\(q\)=10, 20, 30, 40)\({}^{3}\).
Footnote 2: Due to the limited space, the performance of FPRformer is listed in the supplementary materials.
**Comparison with DEQ and JFB.** We adopted and trained the JFB and DEQ frameworks of [3] for single image super-resolution with scale 2. \(\mathcal{F}_{\theta}\) was built on the network (38.7M parameters) for CIFAR10 in [3], and we tuned the key hyper-parameters \(T\) and \(\epsilon\). The comparison is conducted on Set5 in terms of PSNR and SSIM: JFB achieves 33.67/0.9303 (PSNR/SSIM), and DEQ achieves 38.15/0.9608. As shown in Table III, JFB and DEQ lag far behind the other methods.
**SISR.** We test the compared methods on Set5 [56], Set14 [58], and Manga109 [59] for SISR with scales 2, 3, and 4. Following [27], we report PSNR and SSIM on the Y channel of the YCbCr space, as summarized in Table III. FPformer\({}^{\dagger}\) outperforms IPT (115.5M parameters) and is on par with SwinIR. FPAformer\({}^{\dagger}\) uses fewer model parameters and is on par with SwinIR on Set5, outperforming SwinIR by about 0.02dB on Set14 and 0.06dB on Manga109.
**Image denoising.** We test the compared methods on BSD68 [60] for grayscale denoising with noise levels 15, 25, and 50, and compare color denoising with noise levels 15, 25, and 50 on CBSD68 [60] and Kodak24 [61]. Following [27], we report PSNR on the RGB channels for color denoising and on the Y channel for grayscale denoising. The experimental results are summarized in Table IV. Among task-specific methods (the first 7 in Table IV), FPformer\({}^{\dagger}\) and FPAformer\({}^{\dagger}\) outperform IPT and are on par with SwinIR. Among task-agnostic methods (the last 4 in Table IV), FPformer\({}^{*}\) and FPAformer\({}^{*}\) use fewer model parameters and provide performance competitive with DRUNet (32.7M parameters) and Restormer (25.3M parameters). The number of model parameters of FPAformer is only \(7.3\%\) of that of IPT, \(25.7\%\) of DRUNet, and \(33.2\%\) of Restormer.
## V Conclusion
In this work, we propose to learn the unrolling of a fixed point via Transformer-based models, called FPformer. By sharing parameters, we obtain a lightweight model, FPRformer. A module mimicking Anderson acceleration is proposed to boost the performance of FPRformer, called FPAformer. To fully exploit the capability of the Transformer, we apply the proposed models to image restoration using self-supervised pre-training and supervised fine-tuning. The proposed FPformer, FPRformer, and FPAformer use fewer parameters and achieve performance competitive with state-of-the-art image restoration methods at better training efficiency: FPRformer and FPAformer use only 23.21% and 29.82% of the parameters of the SwinIR models, respectively, and training the comparison models takes only 26.9% of the time of training from scratch.
## Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant No. 61902415.
|
2308.11330 | Dynamical Representation of Frames in Tensor Product of Hardy Spaces | Dynamical Sampling of frames and tensor products are important topics in
harmonic analysis. This paper combines the concepts of dynamical sampling of
frames and the Carleson condition in the tensor product of Hardy spaces.
Initially we discuss the preservation of the frame property under the tensor
product on the Hilbert spaces. Then we discuss the iterative representation of
frames in tensor product of Hardy spaces. The key ingredient of this paper is
the so-called Carleson condition on the sequence $\{ \lambda_k
\}_{k=1}^{\infty} \otimes\{ \gamma_l \}_{l=1}^{\infty} $ in the open unit disc
$\mathbb{D}_1 \otimes \mathbb{D}_2$. Our proof is motivated by the result of
Shapiro and Shields. | Nabin Kumar Sahu, Vishesh Rajput | 2023-08-22T10:04:10Z | http://arxiv.org/abs/2308.11330v1 | # Dynamical Representation of Frames in Tensor Product of Hardy Spaces
###### Abstract
Dynamical Sampling of frames and tensor products are important topics in harmonic analysis. This paper combines the concepts of dynamical sampling of frames and the Carleson condition in the tensor product of Hardy spaces. Initially we discuss the preservation of the frame property under the tensor product on the Hilbert spaces. Then we discuss the iterative representation of frames in tensor product of Hardy spaces. The key ingredient of this paper is the so-called Carleson condition on the sequence \(\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty}\) in the open unit disc \(\mathbb{D}_{1}\otimes\mathbb{D}_{2}\). Our proof is motivated by the result of Shapiro and Shields.
_Keywords_: Frames, tensor product, dynamical sampling, the Carleson condition, Hardy space.
_MSC_: 42C15, 46B15
## 1 Introduction
Dynamical sampling of frames is a technique used in signal processing that deals with recovering a signal from samples taken in space and time. This technique aims
to accurately represent the evolution of the signal over time by selecting frames that best capture its dynamic behavior. By doing so, dynamical sampling of frames can help improve the quality of signal recovery in scenarios where the signal is generated by an evolution process. It is a relatively new topic in applied harmonic analysis but has already garnered considerable interest among researchers and practitioners alike [1, 3, 2, 6, 7, 15, 16, 19].
Aldroubi et al. [1] initiated the study of the dynamical sampling problem and the frame properties of sequences of the form \(\{T^{n}\varphi\}_{n=0}^{\infty}\), where \(T:\mathcal{H}\rightarrow\mathcal{H}\) belongs to certain classes of linear operators and \(\varphi\in\mathcal{H}\), with \(\mathcal{H}\) a separable Hilbert space. The classical dynamical sampling problem is as follows: consider a measurement set \(Y=\{x(i),Tx(i),T^{2}x(i),\ldots,T^{m_{i}}x(i):\ i\in\Omega\}\), where \(T\) is a bounded operator on \(l^{2}(I)\) and \(\Omega\) is an index set. The problem is to find necessary and sufficient conditions on \(T\), \(\Omega\), and \(m_{i}\) such that any \(f\in l^{2}(I)\) can be recovered efficiently from the measurement set \(Y\).
In 2017, Christensen and Hasannasab [9] classified the frames that are generated by a (not necessarily bounded) linear operator. They also discussed dynamically generated dual frames and found that dynamical frames are unstable under the classical perturbation conditions (Paley-Wiener type conditions). Again in 2019, Christensen and Hasannasab [10] proved that every frame which is norm-bounded below can be represented as a finite union of sequences \(\{(T_{j})^{n}\phi_{j}\}_{n=0}^{\infty}\) for some bounded operators \(T_{j}\) and some elements \(\phi_{j}\) in the underlying Hilbert space. Recently, Ashbrock and Powell [4] proved that every redundant finite frame for \(\mathbb{F}^{d}\), where \(\mathbb{F}=\mathbb{R}\) or \(\mathbb{C}\), has infinitely many dynamical dual frames, and introduced a low-complexity error diffusion quantization algorithm based on dynamical dual frames.
The Hardy Hilbert space \(H^{2}(\mathbb{D})\) is the space of all analytic functions on the unit disk \(\mathbb{D}\) with square-summable coefficients, that is, \(H^{2}(\mathbb{D})=\left\{f:\mathbb{D}\rightarrow\mathbb{C}:\ \ f(z)=\sum_{n=0}^{ \infty}a_{n}z^{n},\ \text{and}\ \sum_{n=0}^{\infty}|a_{n}|^{2}<\infty\right\}\). The space \(H^{2}(\mathbb{D})\) is equipped with the inner product \(\langle f(z),g(z)\rangle=\sum_{n=0}^{\infty}a_{n}\overline{b_{n}}\), where \(f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}\) and \(g(z)=\sum_{n=0}^{\infty}b_{n}z^{n}\). The theory of frames has been well investigated in the space \(H^{2}(\mathbb{D})\); for this one may refer to [8, 18, 20] and the references therein. The Carleson condition [12] is the most important ingredient for constructing dynamically generated frames in a separable Hilbert space. We recall that a sequence \(\{\alpha_{k}\}_{k=1}^{\infty}\subset\mathbb{D}\) satisfies the Carleson condition if
\[\inf_{n\in\mathbb{N}}\Pi_{k\neq n}\frac{|\alpha_{k}-\alpha_{n}|}{|1-\overline {\alpha_{k}}\alpha_{n}|}>0.\]
We quote the main result from [12].
**Theorem 1**: Let \(\mathcal{H}\) be a separable Hilbert space with orthonormal basis \(\{e_{k}\}_{k=1}^{\infty}\). Let \(\{\alpha_{k}\}_{k=1}^{\infty}\) be a sequence in the unit disk \(\mathbb{D}\) in the complex plane. Assume that \(\big{\{}\sqrt{1-|\alpha_{k}|^{2}}\big{\}}_{k=1}^{\infty}\in l^{2}(\mathbb{N})\). Let \(T:\mathcal{H}\rightarrow\mathcal{H}\) be a bounded linear operator such that \(Te_{k}=\alpha_{k}e_{k}\). Then the sequence \(\{T^{n}h\}_{n=0}^{\infty}=\Big{\{}\sum_{k=1}^{\infty}\alpha_{k}^{n}\sqrt{1-| \alpha_{k}|^{2}}e_{k}\Big{\}}\) is a frame for the Hilbert space \(\mathcal{H}\) if and only if \(\{\alpha_{k}\}_{k=1}^{\infty}\) satisfies the Carleson condition.
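Theorem 1 can be probed numerically on a finite truncation (a sketch of our own; the exponential sequence \(\alpha_{k}=1-2^{-k}\) is a classical example satisfying the Carleson condition):

```python
import numpy as np

K, N = 8, 400
alpha = 1 - 0.5 ** np.arange(1, K + 1)              # alpha_k = 1 - 2^{-k}
h = np.sqrt(1 - alpha**2)                           # coefficients of h
frame = np.stack([alpha**n * h for n in range(N)])  # row n holds T^n h in basis e_k
s = np.linalg.svd(frame, compute_uv=False)
print("estimated frame bounds:", s[-1] ** 2, s[0] ** 2)  # both positive and finite
```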
In [5], it is proved that the tensor product of two sequences is a frame if and only if each factor of this product is a frame. In [14], it is also proved that if \(\{x_{n}\}_{n\in I}\) and \(\{y_{m}\}_{m\in I}\) are frames for \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), respectively, then \(\{x_{n}\otimes y_{m}\}_{n,m\in I}\) is a frame for \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\).
The above work motivates us to carry out an investigation of dynamically generated frames on the tensor product of two Hilbert spaces. The main contribution of this paper is Section 3, where the Carleson condition is newly formulated, and the relationship between the Carleson condition and the frame properties of an iterated sequence is established.
## 2 Iterative representation of sequences in tensor product of two Hilbert spaces
In this section we consider linearly independent sequences \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}\) in the Hilbert space \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) and explore the existence of a representation \(\{T_{1}^{n}f_{1}\}_{n=0}^{\infty}\otimes\{T_{2}^{n}g_{1}\}_{n=0}^{\infty}\) with a bounded operator \(T_{1}\otimes T_{2}:span\{f_{k}\}_{k=1}^{\infty}\otimes span\{g_{k}\}_{k=1}^{ \infty}\to span\{f_{k}\}_{k=1}^{\infty}\otimes span\{g_{k}\}_{k=1}^{\infty}.\) This generalization is motivated by the results for frames proved in [12, 11]. In general, the existence of a representation of a sequence \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}\) of the form \(\{T_{1}^{n}f_{1}\}_{n=0}^{\infty}\otimes\{T_{2}^{n}g_{1}\}_{n=0}^{\infty}\) for a bounded operator \(T_{1}\otimes T_{2}\) is not closely related to the frame properties of the given sequence \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}\).
**Example 1**: Consider a linearly independent frame \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}\) for an infinite-dimensional Hilbert space \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) and the associated representation \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}=\{T_{1}^{n}f_{1}\}_{n =0}^{\infty}\otimes\{T_{2}^{n}g_{1}\}_{n=0}^{\infty}\) in terms of a linear operator \(T_{1}\otimes T_{2}:\mathrm{span}\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1 }^{\infty}\rightarrow\mathrm{span}\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k= 1}^{\infty}\). Assume that \(\inf_{k\in\mathbb{N}}||f_{k}\otimes g_{k}||=\inf_{k\in\mathbb{N}}||f_{k}||\,||g_ {k}||>0\). Consider the sequence \(\{\phi_{k}\}_{k=1}^{\infty}\otimes\{\psi_{k}\}_{k=1}^{\infty}\subset\mathcal{ H}_{1}\otimes\mathcal{H}_{2}\) given by \(\phi_{k}\otimes\psi_{k}=2^{k}f_{k}\otimes 2^{k}g_{k}\), \(k\in\mathbb{N}\), which admits a frame-like expansion and satisfies the lower frame condition
but fails the upper one. For any \(k\in\mathbb{N}\),
\[\phi_{k+1}\otimes\psi_{k+1} =2^{k+1}f_{k+1}\otimes 2^{k+1}g_{k+1}\] \[=2^{k+1}T_{1}f_{k}\otimes 2^{k+1}T_{2}g_{k}\] \[=4T_{1}(2^{k}f_{k})\otimes T_{2}(2^{k}g_{k})\] \[=4T_{1}(\phi_{k})\otimes T_{2}(\psi_{k})\] \[=4(T_{1}\otimes T_{2})(\phi_{k}\otimes\psi_{k}).\]
This shows that \(\{\phi_{k}\}_{k=1}^{\infty}\otimes\{\psi_{k}\}_{k=1}^{\infty}\) has the representation \(\{W_{1}^{n}\phi_{1}\}_{n=0}^{\infty}\otimes\{W_{2}^{n}\psi_{1}\}_{n=0}^{\infty}\), where \(W_{1}=2T_{1},W_{2}=2T_{2}.\) In particular we can say that the frame \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}\) is represented by a bounded operator if and only if the non-Bessel sequence \(\{\phi_{k}\}_{k=1}^{\infty}\otimes\{\psi_{k}\}_{k=1}^{\infty}\) is represented by a bounded operator.
It is important to note that nice frame properties of a sequence \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}\) and a nice representation \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}=\{T_{1}^{n}f_{1}\}_{n =0}^{\infty}\otimes\{T_{2}^{n}g_{1}\}_{n=0}^{\infty}\) do not necessarily accompany one another, as illustrated by the following example. Although the family of functions \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}\) below is not a frame and cannot provide a frame-like expansion, it has a representation \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}=\{T_{1}^{n}f_{1}\}_{n =0}^{\infty}\otimes\{T_{2}^{n}g_{1}\}_{n=0}^{\infty}\) for a bounded isometric operator \(T_{1}\otimes T_{2}\).
**Example 2**: Consider the sequence \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}\) given by \(f_{k}\otimes g_{k}=(e_{k}+e_{k+1})\otimes(d_{k}+d_{k+1}),k\in\mathbb{N}\), where \(\{e_{k}\}_{k=1}^{\infty}\otimes\{d_{k}\}_{k=1}^{\infty}\) is an orthonormal basis for the Hilbert space \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\). We find that \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}\) is a Bessel sequence but not a frame, despite the fact that \(\overline{span}\{f_{k}\}_{k=1}^{\infty}\otimes\overline{span}\{g_{k}\}_{k=1}^ {\infty}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}.\) Since \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}\) is linearly independent, we define the operator \(T_{1}\otimes T_{2}:span\{f_{k}\}_{k=1}^{\infty}\otimes span\{g_{k}\}_{k=1}^{ \infty}\to span\{f_{k}\}_{k=1}^{\infty}\otimes span\{g_{k}\}_{k=1}^{\infty}\), by \(T_{1}f_{k}\otimes T_{2}g_{k}=f_{k+1}\otimes g_{k+1}\), and we have \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}=\{T_{1}^{n}f_{1}\}_{n =0}^{\infty}\otimes\{T_{2}^{n}g_{1}\}_{n=0}^{\infty}.\) Then, for any \(a_{k},c_{k}\in\mathbb{C}\), and any \(N\in\mathbb{N}\),
\[\left|\left|T_{1}\sum_{k=1}^{N}c_{k}f_{k}\otimes T_{2}\sum_{k=1}^ {N}a_{k}g_{k}\right|\right|^{2} =\left|\left|(T_{1}\otimes T_{2})\left(\sum_{k=1}^{N}c_{k}f_{k} \otimes\sum_{k=1}^{N}a_{k}g_{k}\right)\right|\right|^{2}\] \[=\left|\left|\sum_{k=1}^{N}c_{k}(e_{k+1}+e_{k+2})\otimes a_{k}(d_ {k+1}+d_{k+2})\right|\right|^{2}\] \[=\left|\left|\sum_{k=1}^{N}c_{k}(e_{k}+e_{k+1})\otimes a_{k}(d_{k }+d_{k+1})\right|\right|^{2}\]
\[=\left|\left|\sum_{k=1}^{N}c_{k}f_{k}\otimes\sum_{k=1}^{N}a_{k}g_{k} \right|\right|^{2}\] \[=\left|\left|\sum_{k=1}^{N}c_{k}f_{k}\right|\right|^{2}\left| \left|\sum_{k=1}^{N}a_{k}g_{k}\right|\right|^{2}\]
It follows that \(T_{1}\otimes T_{2}\) has an extension to an isometric operator \(T_{1}\otimes T_{2}:\mathcal{H}_{1}\otimes\mathcal{H}_{2}\rightarrow\mathcal{H }_{1}\otimes\mathcal{H}_{2}\).
Given any sequence \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}\subset\mathcal{H}_{1 }\otimes\mathcal{H}_{2}\), the synthesis operator is defined as
\[V:\mathcal{D}(V)\rightarrow\mathcal{H}_{1}\otimes\mathcal{H}_{2},V(\{c_{k}\}_ {k=1}^{\infty}\otimes\{d_{k}\}_{k=1}^{\infty}):=\sum_{k=1}^{\infty}c_{k}f_{k} \otimes\sum_{k=1}^{\infty}d_{k}g_{k},\]
where the domain \(\mathcal{D}(V)\) consists of all scalar-valued sequences \(\{c_{k}\}_{k=1}^{\infty}\otimes\{d_{k}\}_{k=1}^{\infty}\) for which \(\sum_{k=1}^{\infty}c_{k}f_{k}\otimes\sum_{k=1}^{\infty}d_{k}g_{k}\) is convergent. Here we do not restrict our attention to sequences \(\{c_{k}\}_{k=1}^{\infty}\otimes\{d_{k}\}_{k=1}^{\infty}\) belonging to \(l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\).
Consider the right-shift operators \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\), which act on arbitrary scalar sequences \(\{c_{k}\}_{k=1}^{\infty}\) and \(\{d_{k}\}_{k=1}^{\infty}\) by \(\mathcal{T}_{1}\{c_{k}\}_{k=1}^{\infty}=\{0,c_{1},c_{2},c_{3},\ldots\}\) and \(\mathcal{T}_{2}\{d_{k}\}_{k=1}^{\infty}=\{0,d_{1},d_{2},d_{3},\ldots\}\), respectively. Now define the operator \(\mathcal{T}_{1}\{c_{k}\}_{k=1}^{\infty}\otimes\mathcal{T}_{2}\{d_{k}\}_{k=1}^ {\infty}=\{0,c_{1},c_{2},\ldots\}\otimes\{0,d_{1},d_{2},\ldots\}\). A vector space \(W\) of scalar-valued sequences \(\{c_{k}\}_{k=1}^{\infty}\otimes\{d_{k}\}_{k=1}^{\infty}\) is said to be invariant under \(\mathcal{T}_{1}\otimes\mathcal{T}_{2}\) if \((\mathcal{T}_{1}\otimes\mathcal{T}_{2})(W)\subseteq W\).
The following result generalizes one of the important results in [11, 12] to an arbitrary sequence (not necessarily a frame).
**Theorem 2**: Consider a sequence \(\{f_{k}\}_{k=1}^{\infty}\otimes\{g_{k}\}_{k=1}^{\infty}\in\mathcal{H}_{1} \otimes\mathcal{H}_{2}\) which has representation \(\{T_{1}^{n}f_{1}\}_{n=0}^{\infty}\otimes\{T_{2}^{n}g_{1}\}_{n=0}^{\infty}\) for a linear operator \(T_{1}\otimes T_{2}:span\{f_{k}\}_{k=1}^{\infty}\otimes span\{g_{k}\}_{k=1}^{ \infty}\rightarrow span\{f_{k}\}_{k=1}^{\infty}\otimes span\{g_{k}\}_{k=1}^{ \infty}\). Then the following results hold.
1. If \(T_{1}\otimes T_{2}\) is bounded, then the domain \(\mathcal{D}(V)\) and the kernel \(N(V)\) of the synthesis operator are invariant under the defined operator \(\mathcal{T}_{1}\otimes\mathcal{T}_{2}\), and \(\left\{\frac{||f_{k+1}\otimes g_{k+1}||}{||f_{k}\otimes g_{k}||}\right\}_{k=1} ^{\infty}\in l^{\infty}\) where \(f_{k}\otimes g_{k}\neq 0,\forall k\in\mathbb{N}\).
2. \(T_{1}\otimes T_{2}\) is bounded on \(span\{f_{k}\}_{k=1}^{\infty}\otimes span\{g_{k}\}_{k=1}^{\infty}\) if and only if there exists a positive constant \(K\) such that \(||V(\mathcal{T}_{1}\{c_{k}\}_{k=1}^{\infty}\otimes\mathcal{T}_{2}\{d_{k}\}_{k=1 }^{\infty})||\leq K\,||V(\{c_{k}\}_{k=1}^{\infty}\otimes\{d_{k}\}_{k=1}^{ \infty})||\) for all finite sequences \(\{c_{k}\}_{k=1}^{\infty}\otimes\{d_{k}\}_{k=1}^{\infty}\).
**Proof:** Since \(T_{1}\) and \(T_{2}\) are bounded, \(T_{1}\otimes T_{2}\) is bounded; let \(\widetilde{T_{1}}\otimes\widetilde{T_{2}}\) be the unique extension of the bounded linear operator \(T_{1}\otimes T_{2}\) to \(\overline{span}\{f_{k}\}_{k=1}^{\infty}\otimes\overline{span}\{g_{k}\}_{k=1}^ {\infty}.\)
(1) Assume \(T_{1}\otimes T_{2}\) is bounded and consider a sequence \(\{c_{k}\}_{k=1}^{\infty}\otimes\{d_{k}\}_{k=1}^{\infty}\in\mathcal{D}(V).\) We want to show that \(\mathcal{T}_{1}\{c_{k}\}_{k=1}^{\infty}\otimes\mathcal{T}_{2}\{d_{k}\}_{k=1}^ {\infty}\in\mathcal{D}(V),\) i.e., that \(\sum_{k=1}^{\infty}c_{k}f_{k+1}\otimes\sum_{k=1}^{\infty}d_{k}g_{k+1}\) is convergent. Consider any \(M,N\in\mathbb{N}\) with \(N>M.\) Then
\[\left|\left|\sum_{k=1}^{N}c_{k}f_{k+1}\otimes\sum_{k=1}^{N}d_{k}g _{k+1}-\sum_{k=1}^{M}c_{k}f_{k+1}\otimes\sum_{k=1}^{M}d_{k}g_{k+1}\right| \right|= \left|\left|\sum_{k=M+1}^{N}c_{k}f_{k+1}\otimes\sum_{k=M+1}^{N}d _{k}g_{k+1}\right|\right|\] \[= \left|\left|T_{1}\sum_{k=M+1}^{N}c_{k}f_{k}\otimes T_{2}\sum_{k=M +1}^{N}d_{k}g_{k}\right|\right|\] \[= \left|\left|\left(T_{1}\otimes T_{2}\right)\left(\sum_{k=M+1}^{N }c_{k}f_{k}\otimes\sum_{k=M+1}^{N}d_{k}g_{k}\right)\right|\right|\] \[\leq \left|\left|T_{1}\otimes T_{2}\right|\right|\left|\left|\sum_{k=M+ 1}^{N}c_{k}f_{k}\right|\right|\left|\left|\sum_{k=M+1}^{N}d_{k}g_{k}\right|\right|\] \[\longrightarrow 0\]
as \(M,N\rightarrow\infty.\) Thus \(\sum_{k=1}^{\infty}c_{k}f_{k+1}\otimes\sum_{k=1}^{\infty}d_{k}g_{k+1}\) is convergent, i.e., \(\mathcal{D}(V)\) is indeed invariant under the operator \(\mathcal{T}_{1}\otimes\mathcal{T}_{2}.\)
To prove the invariance of \(\mathcal{N}(V),\) assume that \(\{c_{k}\}_{k=1}^{\infty}\otimes\{d_{k}\}_{k=1}^{\infty}\in\mathcal{N}(V).\) The series \(\sum_{k=1}^{\infty}c_{k}f_{k+1}\otimes\sum_{k=1}^{\infty}d_{k}g_{k+1}\) converges by what has already been proved, and furthermore
\[\sum_{k=1}^{\infty}c_{k}f_{k+1}\otimes\sum_{k=1}^{\infty}d_{k}g_{k+1} =\sum_{k=1}^{\infty}c_{k}T_{1}f_{k}\otimes\sum_{k=1}^{\infty}d_{k }T_{2}g_{k}\] \[=\left(\widetilde{T_{1}}\otimes\widetilde{T_{2}}\right)\left(\sum _{k=1}^{\infty}c_{k}f_{k}\otimes\sum_{k=1}^{\infty}d_{k}g_{k}\right)\] \[=0;\]
this shows that \(\mathcal{T}_{1}\{c_{k}\}_{k=1}^{\infty}\otimes\mathcal{T}_{2}\{d_{k}\}_{k=1}^ {\infty}\in\mathcal{N}(V).\)
Finally, for every \(k\in\mathbb{N},\)
\[\left|\left|f_{k+1}\otimes g_{k+1}\right|\right| =\left|\left|T_{1}f_{k}\otimes T_{2}g_{k}\right|\right|\] \[=\left|\left(T_{1}\otimes T_{2}\right)(f_{k}\otimes g_{k})\right|\] \[\leq\left|\left|T_{1}\otimes T_{2}\right|\right|\left|\left|f_{k} \otimes g_{k}\right|\right|,\]
and thus \(\left\{\frac{\left|\left|f_{k+1}\otimes g_{k+1}\right|\right|}{\left|\left|f_{k} \otimes g_{k}\right|\right|}\right\}_{k=1}^{\infty}\in l^{\infty}.\)
(2) Assume first that \(T_{1}\otimes T_{2}\) is bounded, and let \(\{c_{k}\}_{k=1}^{\infty}\otimes\{d_{k}\}_{k=1}^{\infty}\in\mathcal{D}(V)\). By (1), \(\mathcal{T}_{1}\{c_{k}\}_{k=1}^{\infty}\otimes\mathcal{T}_{2}\{d_{k}\}_{k=1}^ {\infty}\in\mathcal{D}(V)\); furthermore,
\[||V(\mathcal{T}_{1}\{c_{k}\}_{k=1}^{\infty}\otimes\mathcal{T}_{2 }\{d_{k}\}_{k=1}^{\infty})|| =\left|\left|\sum_{k=1}^{\infty}c_{k}f_{k+1}\otimes\sum_{k=1}^{ \infty}d_{k}g_{k+1}\right|\right|\] \[=\left|\left|\sum_{k=1}^{\infty}c_{k}\widetilde{T}_{1}f_{k} \otimes\sum_{k=1}^{\infty}d_{k}\widetilde{T}_{2}g_{k}\right|\right|\] \[=\left|\left|\left(\widetilde{T}_{1}\otimes\widetilde{T}_{2} \right)\left(\sum_{k=1}^{\infty}c_{k}f_{k}\otimes\sum_{k=1}^{\infty}d_{k}g_{k }\right)\right|\right|\] \[\leq\left|\left|\widetilde{T}_{1}\otimes\widetilde{T}_{2}\right| \right|\left|\left|V(\{c_{k}\}_{k=1}^{\infty}\otimes\{d_{k}\}_{k=1}^{\infty}) \right|\right|.\]
Conversely, assume that there is a constant \(K>0\) so that \(||V(\mathcal{T}_{1}\{c_{k}\}_{k=1}^{\infty}\otimes\mathcal{T}_{2}\{d_{k}\}_{k= 1}^{\infty})||\leq K\left|\left|V(\{c_{k}\}_{k=1}^{\infty}\otimes\{d_{k}\}_{k= 1}^{\infty})\right|\right|\) for all finite sequences.
Take an arbitrary \(f\otimes g\in span\{f_{k}\}_{k=1}^{\infty}\otimes span\{g_{k}\}_{k=1}^{\infty}\), i.e., \(f\otimes g=\sum_{k=1}^{N}c_{k}f_{k}\otimes\sum_{k=1}^{N}d_{k}g_{k}\) for some \(N\in\mathbb{N}\) and some \(c_{1},c_{2},\ldots,c_{N},d_{1},d_{2},\ldots,d_{N}\in\mathbb{C}\); letting \(c_{k}=0,d_{k}=0\) for \(k>N\), we have that
\[||T_{1}f\otimes T_{2}g|| =\left|\left|\sum_{k=1}^{\infty}c_{k}f_{k+1}\otimes\sum_{k=1}^{ \infty}d_{k}g_{k+1}\right|\right|\] \[=||V(\mathcal{T}_{1}\{c_{k}\}_{k=1}^{\infty}\otimes\mathcal{T}_{2 }\{d_{k}\}_{k=1}^{\infty})||\] \[\leq K\left|\left|V(\{c_{k}\}_{k=1}^{\infty}\otimes\{d_{k}\}_{k=1 }^{\infty})\right|\right|\] \[\leq K\left|\left|f\otimes g\right|\right|,\]
i.e., \(T_{1}\otimes T_{2}\) is bounded.
## 3 Frames and the Carleson condition in the tensor product of Hardy Spaces
In this section, we consider a class of frames that can be represented via bounded operators \(T_{1}\otimes T_{2}\). We extend the construction that first appeared in Corollary 3.17 of [1] to the tensor product setting. Our purpose is to extend the result of Shapiro and Shields [17] to the tensor product of Hardy spaces. First, we discuss the Carleson condition on sequences \(\{\lambda_{k}\}_{k=1}^{\infty}\) and \(\{\gamma_{l}\}_{l=1}^{\infty}\) in the open unit discs \(\mathbb{D}_{1}\) and \(\mathbb{D}_{2}\), respectively.
### The Carleson Condition and tensor product
Let \(\mathbb{D}_{1}\) and \(\mathbb{D}_{2}\) denote the open unit discs in the complex plane. The Hardy space \(H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2})\) is defined by
\[H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2})=\bigg{\{}f\otimes g: \mathbb{D}_{1}\otimes\mathbb{D}_{2}\rightarrow\mathbb{C}\big{|}f(z)\otimes g( \omega)=\sum_{n=0}^{\infty}a_{n}z^{n}\otimes\sum_{m=0}^{\infty}b_{m}\omega^{m},\]
\[\{a_{n}\}_{n=0}^{\infty}\otimes\{b_{m}\}_{m=0}^{\infty}\in l^{2}(\mathbb{N}_{0 })\otimes l^{2}(\mathbb{N}_{0})\bigg{\}}.\]
The Hardy space \(H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2})\) is a Hilbert space; given \(w_{1},w_{2}\in H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2})\) where \(w_{1}=f_{1}\otimes g_{1},w_{2}=f_{2}\otimes g_{2}\) and \(f_{1},f_{2}\in H_{1}^{2}(\mathbb{D}_{1})\) and \(g_{1},g_{2}\in H_{2}^{2}(\mathbb{D}_{2})\)
\[w_{1}=\sum_{n=0}^{\infty}a_{n}z^{n}\otimes\sum_{m=0}^{\infty}b_{m}\omega^{m},w _{2}=\sum_{n=0}^{\infty}c_{n}z^{n}\otimes\sum_{m=0}^{\infty}d_{m}\omega^{m}\]
The inner product is defined by,
\[\langle w_{1},w_{2}\rangle =\langle\sum_{n=0}^{\infty}a_{n}z^{n}\otimes\sum_{m=0}^{\infty}b _{m}\omega^{m},\sum_{n=0}^{\infty}c_{n}z^{n}\otimes\sum_{m=0}^{\infty}d_{m} \omega^{m}\rangle\] \[=\langle\sum_{n=0}^{\infty}a_{n}z^{n},\sum_{n=0}^{\infty}c_{n}z^{ n}\rangle\langle\sum_{m=0}^{\infty}b_{m}\omega^{m},\sum_{m=0}^{\infty}d_{m} \omega^{m}\rangle\] \[=\sum_{n=0}^{\infty}a_{n}\bar{c_{n}}\cdot\sum_{m=0}^{\infty}b_{m} \bar{d_{m}}\]
Note that \(\{z^{n}\}_{n=0}^{\infty}\otimes\{\omega^{m}\}_{m=0}^{\infty}\) is an orthonormal basis for \(H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2})\); denoting the canonical basis for \(l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\) by \(\{\delta_{n}\}_{n=1}^{\infty}\otimes\{\sigma_{m}\}_{m=1}^{\infty}\), the operator \(\theta:H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2})\to l^{2}( \mathbb{N})\otimes l^{2}(\mathbb{N})\) defined by \(\theta(z^{n}\otimes\omega^{m})=\delta_{n+1}\otimes\sigma_{m+1}\) for \(n,m=0,1,2,3,\ldots\) is a unitary operator from \(H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2})\) onto \(l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\), and it preserves the norms and inner products of vectors.
**Definition 3.1**: Let \(\{\lambda_{k}\}_{k=1}^{\infty}\subset\mathbb{D}_{1}\) and \(\{\gamma_{l}\}_{l=1}^{\infty}\subset\mathbb{D}_{2}\). Then \(\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty}\) satisfies the Carleson condition if
\[\inf_{n,m\in\mathbb{N}}\prod_{(k,l)\neq(n,m)}\frac{\mid\lambda_{k}\gamma_{l}- \lambda_{n}\gamma_{m}\mid}{\mid 1-\overline{\lambda_{k}\gamma_{l}}\lambda_{n} \gamma_{m}\mid}>0. \tag{1}\]
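For finite truncations, the infimum in (1) can be estimated directly (our sketch; a value collapsing toward zero signals failure of the condition, e.g. when two products \(\lambda_{k}\gamma_{l}\) nearly coincide):

```python
import numpy as np

def tensor_carleson_inf(lam, gam):
    """Estimate inf over (n,m) of the product in (1) for finite lam, gam."""
    mu = np.array([l * g for l in lam for g in gam])        # all products lam_k*gam_l
    best = np.inf
    for i, m0 in enumerate(mu):
        d = np.abs(mu - m0) / np.abs(1 - np.conj(mu) * m0)  # pseudo-hyperbolic distances
        best = min(best, np.prod(np.delete(d, i)))          # drop the (k,l)=(n,m) term
    return best

lam = 1 - 0.5 ** np.arange(1, 5)
gam = 1 - 0.3 ** np.arange(1, 5)
print(tensor_carleson_inf(lam, gam))
```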
For a given sequence \(\Lambda=\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty} \subset\mathbb{D}_{1}\otimes\mathbb{D}_{2}\), define the sequence-valued operator \(\varphi_{\Lambda}\) by
\[\varphi_{\Lambda}(f\otimes g)=\{f(\lambda_{k})\sqrt{1-|\lambda_{k}|^{2}}\}_{k= 1}^{\infty}\otimes\{g(\gamma_{l})\sqrt{1-|\gamma_{l}|^{2}}\}_{l=1}^{\infty},\quad f \otimes g\in H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2}). \tag{2}\]
Note that the sequence in equation (2) does not necessarily belong to \(l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\). The following result is motivated by the result of Shapiro and Shields in [17] and Theorem 9.1 of [13].
**Theorem 3**: A sequence \(\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty}\subset \mathbb{D}_{1}\otimes\mathbb{D}_{2}\) satisfies the Carleson condition (1) if and only if the interpolation problem \(f(\lambda_{k})g(\gamma_{l})=z_{k}\omega_{l}\), with \(f\otimes g\) bounded analytic in \(\mathbb{D}_{1}\otimes\mathbb{D}_{2}\), is solvable for arbitrary \(z_{k}\omega_{l}\), i.e., \(\varphi_{\Lambda}(H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2}))= l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\); in the affirmative case, \(\varphi_{\Lambda}\) is bounded.
**Theorem 4**: Let \(\{\lambda_{k}\}_{k=1}^{\infty}\subset\mathbb{D}_{1}\) and \(\{\gamma_{l}\}_{l=1}^{\infty}\subset\mathbb{D}_{2}\) be sequences of distinct numbers with \(\{\lambda_{k}\}_{k=1}^{\infty}\cap\{\gamma_{l}\}_{l=1}^{\infty}=\emptyset\); then \(\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty}\subset \mathbb{D}_{1}\otimes\mathbb{D}_{2}\) is also a sequence of distinct numbers. If there exists \(c\in(0,1)\) such that
\[\frac{1-\mid\lambda_{k}\gamma_{l+1}\mid}{1-\mid\lambda_{k}\gamma_{l}\mid}\leq c <1,\forall k,l\in\mathbb{N} \tag{3}\]
then \(\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty}\) satisfies the Carleson condition. If \(\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty}\) is positive and increasing, then condition (3) is also necessary for \(\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty}\) to satisfy the Carleson condition.
**Proof :** Condition (3) implies that
\[1-\mid\lambda_{k}\gamma_{l+1}\mid\leq c(1-\mid\lambda_{k}\gamma_{l}\mid)\]
\[1-\mid\lambda_{k}\gamma_{l}\mid\leq c^{l-m}(1-\mid\lambda_{n}\gamma_{m}\mid),\quad l>m,\ k\geq n,\ n,m=1,2,3,\ldots \tag{4}\]
In particular, \(\sum_{k}\sum_{l}(1-\mid\lambda_{k}\gamma_{l}\mid)<\infty\). It follows from (4) that for \(l>m\),
\[\mid\lambda_{k}\gamma_{l}\mid-\mid\lambda_{n}\gamma_{m}\mid\geq(1-c^{l-m})(1- \mid\lambda_{n}\gamma_{m}\mid)\]
and
\[1-\mid\lambda_{k}\gamma_{l}\lambda_{n}\gamma_{m}\mid =1-\mid\lambda_{k}\gamma_{l}\mid+\mid\lambda_{k}\gamma_{l}\mid(1- \mid\lambda_{n}\gamma_{m}\mid)\] \[\leq(1+c^{l-m})(1-\mid\lambda_{n}\gamma_{m}\mid)\]
Hence by the lemma
\[\left|\frac{\lambda_{n}\gamma_{m}-\lambda_{k}\gamma_{l}}{1- \overline{\lambda_{k}\gamma_{l}}\lambda_{n}\gamma_{m}}\right| \geq\frac{\mid\lambda_{k}\gamma_{l}\mid-\mid\lambda_{n}\gamma_{m} \mid}{1-\mid\lambda_{k}\gamma_{l}\lambda_{n}\gamma_{m}\mid}\] \[\geq\frac{1-c^{l-m}}{1+c^{l-m}},\ \ \ \ \ l>m\]
for \(l<m\) this inequality takes the form
\[\left|\frac{\lambda_{n}\gamma_{m}-\lambda_{k}\gamma_{l}}{1-\overline{\lambda_{k}\gamma_{l}}\lambda_{n}\gamma_{m}}\right|\geq\frac{1-c^{m-l}}{1+c^{m-l}}\]
Consequently,
\[\prod_{(k,l)\neq(n,m)}\left|\frac{\lambda_{n}\gamma_{m}-\lambda_{k}\gamma_{l}}{1-\overline{\lambda_{k}\gamma_{l}}\lambda_{n}\gamma_{m}}\right|\geq\prod_{N=1}^{\infty}\left(\frac{1-c^{N}}{1+c^{N}}\right)^{2}>0\]
which shows that \(\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty}\) satisfies the Carleson condition. Now suppose \(0\leq\lambda_{1}\gamma_{1}<\lambda_{1}\gamma_{2}<\cdots<\lambda_{2}\gamma_{1}<\lambda_{2}\gamma_{2}<\cdots\) and
\[\prod_{(k,l)\neq(n,m)}\left|\frac{\lambda_{n}\gamma_{m}-\lambda_{k}\gamma_{l}}{1-\lambda_{k}\gamma_{l}\lambda_{n}\gamma_{m}}\right|\geq\delta>0\]
Then,
\[\lambda_{k}\gamma_{l+1}-\lambda_{k}\gamma_{l}\geq\delta(1-\lambda_{k}\gamma_{l}\lambda_{k}\gamma_{l+1}),\qquad k,l=1,2,\ldots\]
So that,
\[1-\lambda_{k}\gamma_{l+1}\leq 1-\frac{\delta+\lambda_{k}\gamma_{l}}{1+\delta \lambda_{k}\gamma_{l}}\leq(1-\delta)(1-\lambda_{k}\gamma_{l})\]
Thus \(\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty}\) satisfies (3) as claimed.
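As an informal numerical companion (entirely our own illustration; the sequences and truncation below are arbitrary choices, and `truncated_carleson_inf` is a hypothetical helper name), the truncated Carleson products of Definition 3.1 can be inspected directly:

```python
import numpy as np

def truncated_carleson_inf(lam, gam):
    """Minimum over (n, m) of the truncated Carleson product
    prod_{(k,l) != (n,m)} |w_kl - w_nm| / |1 - conj(w_kl) * w_nm|
    for the tensor sequence w_kl = lam_k * gam_l of Definition 3.1."""
    w = np.outer(lam, gam)  # w[k, l] = lam_k * gam_l
    best = np.inf
    for n in range(w.shape[0]):
        for m in range(w.shape[1]):
            ratios = np.abs(w - w[n, m]) / np.abs(1 - np.conj(w) * w[n, m])
            ratios[n, m] = 1.0  # drop the excluded (k, l) = (n, m) factor
            best = min(best, float(np.prod(ratios)))
    return best

# Short sequences inside the unit discs; growing the truncation probes the inf.
lam = np.array([0.1, 0.5, 0.8])
gam = np.array([0.2, 0.6, 0.9])
print(truncated_carleson_inf(lam, gam))
```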
### Frame Properties and the Carleson Condition in terms of tensor product
In this section, we extend the result given in Theorem 3.7 of [12], which yields the construction of a class of operators \(T_{1}\otimes T_{2}:l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\to l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\) for which \(\{T_{1}^{n}h_{1}\}_{n=0}^{\infty}\otimes\{T_{2}^{m}h_{2}\}_{m=0}^{\infty}\) is a frame for \(l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\) for certain sequences \(h_{1}\otimes h_{2}\in l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\).
Consider sequences \(\{\lambda_{k}\}_{k=1}^{\infty}\subset\mathbb{D}_{1}\) and \(\{\gamma_{l}\}_{l=1}^{\infty}\subset\mathbb{D}_{2}\), where \(\mathbb{D}_{1},\mathbb{D}_{2}\) are unit discs in arbitrary complex planes, and assume that \(\{\sqrt{1-\mid\lambda_{k}\mid^{2}}\}_{k=1}^{\infty}\in l^{2}(\mathbb{N})\) and \(\{\sqrt{1-\mid\gamma_{l}\mid^{2}}\}_{l=1}^{\infty}\in l^{2}(\mathbb{N})\). Given any Hilbert spaces \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), choose orthonormal bases \(\{e_{k}\}_{k=1}^{\infty}\) and \(\{d_{l}\}_{l=1}^{\infty}\) and consider the bounded linear operators \(T_{1}:\mathcal{H}_{1}\rightarrow\mathcal{H}_{1},T_{2}:\mathcal{H}_{2}\rightarrow\mathcal{H}_{2}\) for which \(T_{1}e_{k}=\lambda_{k}e_{k},T_{2}d_{l}=\gamma_{l}d_{l}\). Let \(h_{1}=\sum_{k=1}^{\infty}\sqrt{1-\mid\lambda_{k}\mid^{2}}e_{k},h_{2}=\sum_{l=1}^{\infty}\sqrt{1-\mid\gamma_{l}\mid^{2}}d_{l}\) and consider the iterated systems
\[\{T_{1}^{n}h_{1}\}_{n=0}^{\infty}=\{\sum_{k=1}^{\infty}\lambda_{k}^{n}\sqrt{1 -\mid\lambda_{k}\mid^{2}}e_{k}\}_{n=0}^{\infty} \tag{5}\]
\[\{T_{2}^{m}h_{2}\}_{m=0}^{\infty}=\{\sum_{l=1}^{\infty}\gamma_{l}^{m}\sqrt{1 -\mid\gamma_{l}\mid^{2}}d_{l}\}_{m=0}^{\infty} \tag{6}\]
We will now state the following result.
**Theorem 5**: Let \(\{\lambda_{k}\}_{k=1}^{\infty}\subset\mathbb{D}_{1},\{\gamma_{l}\}_{l=1}^{ \infty}\subset\mathbb{D}_{2}\) and assume that \(\{\sqrt{1-\mid\lambda_{k}\mid^{2}}\}_{k=1}^{\infty}\in l^{2}(\mathbb{N})\),
\(\{\sqrt{1-\mid\gamma_{l}\mid^{2}}\}_{l=1}^{\infty}\in l^{2}(\mathbb{N})\). Then the sequence \(\{T_{1}^{n}h_{1}\}_{n=0}^{\infty}\otimes\{T_{2}^{m}h_{2}\}_{m=0}^{\infty}\) defined by (5) and (6) is a frame for \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) if and only if \(\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty}\) satisfies the Carleson condition.
**Proof :** Define the synthesis operator \(V:l^{2}(\mathbb{N}_{0})\otimes l^{2}(\mathbb{N}_{0})\rightarrow\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) by \(V\{\{a_{n}\}_{n=0}^{\infty}\otimes\{b_{m}\}_{m=0}^{\infty}\}=\sum_{n=0}^{\infty}a_{n}T_{1}^{n}h_{1}\otimes\sum_{m=0}^{\infty}b_{m}T_{2}^{m}h_{2}\). The sequence \(\{T_{1}^{n}h_{1}\}_{n=0}^{\infty}\otimes\{T_{2}^{m}h_{2}\}_{m=0}^{\infty}\) is a frame for \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) if and only if the operator \(V\) is well defined and surjective.
First assume that \(\{T_{1}^{n}h_{1}\}_{n=0}^{\infty}\otimes\{T_{2}^{m}h_{2}\}_{m=0}^{\infty}\) is a frame for \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\). Take an arbitrary sequence \(\{c_{j}\}_{j=1}^{\infty}\otimes\{f_{r}\}_{r=1}^{\infty}\in l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\). The surjectivity of \(V\) implies that there exists \(\{a_{n}\}_{n=0}^{\infty}\otimes\{b_{m}\}_{m=0}^{\infty}\in l^{2}(\mathbb{N}_{0})\otimes l^{2}(\mathbb{N}_{0})\) such that \(\sum_{n=0}^{\infty}a_{n}T_{1}^{n}h_{1}\otimes\sum_{m=0}^{\infty}b_{m}T_{2}^{m}h_{2}=\sum_{j=1}^{\infty}c_{j}e_{j}\otimes\sum_{r=1}^{\infty}f_{r}d_{r}\). It follows that for each \(k,l\in\mathbb{N}\),
\[\begin{split} c_{k}f_{l}&=\langle\sum_{j=1}^{\infty}c_{j}e _{j}\otimes\sum_{r=1}^{\infty}f_{r}d_{r},e_{k}\otimes d_{l}\rangle\\ &=\langle\sum_{j=1}^{\infty}c_{j}e_{j},e_{k}\rangle\langle\sum_{ r=1}^{\infty}f_{r}d_{r},d_{l}\rangle\\ &=\sum_{n=0}^{\infty}a_{n}\langle T_{1}^{n}h_{1},e_{k}\rangle \sum_{m=0}^{\infty}b_{m}\langle T_{2}^{m}h_{2},d_{l}\rangle\\ &=\sum_{n=0}^{\infty}a_{n}\lambda_{k}^{n}\sqrt{1-|\lambda_{k}|^{ 2}}\cdot\sum_{m=0}^{\infty}b_{m}\gamma_{l}^{m}\sqrt{1-|\gamma_{l}|^{2}},\end{split} \tag{7}\]
where \(c_{k}=\langle\sum_{j=1}^{\infty}c_{j}e_{j},e_{k}\rangle=\sum_{n=0}^{\infty}a_ {n}\left\langle T_{1}^{n}h_{1},e_{k}\right\rangle=\sum_{n=0}^{\infty}a_{n} \lambda_{k}^{n}\sqrt{1-\mid\lambda_{k}\mid^{2}}\) and similarly
\(f_{l}=\sum_{m=0}^{\infty}b_{m}\gamma_{l}^{m}\sqrt{1-\mid\gamma_{l}\mid^{2}}\).
Defining \(f\in H_{1}^{2}(\mathbb{D}_{1})\) and \(g\in H_{2}^{2}(\mathbb{D}_{2})\) by
\[f(z)=\sum_{n=0}^{\infty}a_{n}z^{n},g(\omega)=\sum_{m=0}^{\infty}b_{m}\omega^{m}.\]
Equation (7) turns into
\[f(\lambda_{k})\sqrt{1-|\lambda_{k}|^{2}}\cdot g(\gamma_{l})\sqrt{1-|\gamma_{l }|^{2}}=c_{k}f_{l}.\]
Formulated in terms of the operator \(\varphi_{\Lambda}\) in (2), this means that \(l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\subseteq\varphi_{\Lambda}(H_{1}^{2 }(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2}))\).
On the other hand, take an arbitrary \(f\otimes g\in H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2})\) and choose \(\{a_{n}\}_{n=0}^{\infty}\otimes\{b_{m}\}_{m=0}^{\infty}\in l^{2}(\mathbb{N}_{ 0})\otimes l^{2}(\mathbb{N}_{0})\) such that, \(f(z)=\sum_{n=0}^{\infty}a_{n}z^{n},g(\omega)=\sum_{m=0}^{\infty}b_{m}\omega^{m}\) for every \(k,l\in\mathbb{N}\) we have
\[\begin{split}\langle V\{\{a_{n}\}_{n=0}^{\infty}\otimes\{b_{m}\}_ {m=0}^{\infty}\},e_{k}\otimes d_{l}\rangle&=\langle\sum_{n=0}^{ \infty}a_{n}T_{1}^{n}h_{1}\otimes\sum_{m=0}^{\infty}b_{m}T_{2}^{m}h_{2},e_{k} \otimes d_{l}\rangle\\ &=\langle\sum_{n=0}^{\infty}a_{n}\sum_{j=1}^{\infty}\lambda_{j}^{ n}\sqrt{1-|\lambda_{j}|^{2}}e_{j}\otimes\sum_{m=0}^{\infty}b_{m}\sum_{r=1}^{ \infty}\gamma_{r}^{m}\sqrt{1-|\gamma_{r}|^{2}}d_{r}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad,e_{k}\otimes d_ {l}\rangle\\ &=\sum_{n=0}^{\infty}a_{n}\lambda_{k}^{n}\sqrt{1-|\lambda_{k}|^{2}} \cdot\sum_{m=0}^{\infty}b_{m}\gamma_{l}^{m}\sqrt{1-|\gamma_{l}|^{2}}\end{split}\]
Therefore,
\[\begin{split}\varphi_{\Lambda}(f\otimes g)&=\{\langle V \{\{a_{n}\}_{n=0}^{\infty}\otimes\{b_{m}\}_{m=0}^{\infty}\},e_{k}\otimes d_{l} \rangle\}_{k,l=1}^{\infty}\in l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\\ &\implies\varphi_{\Lambda}(H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2} ^{2}(\mathbb{D}_{2}))\subseteq l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\end{split}\]
Hence,
\[\varphi_{\Lambda}(H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2}))=l^{ 2}(\mathbb{N})\otimes l^{2}(\mathbb{N}),\]
which by [Theorem 3] implies that \(\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty}\) satisfies the Carleson condition.
Conversely, assume that the sequence \(\{\lambda_{k}\}_{k=1}^{\infty}\otimes\{\gamma_{l}\}_{l=1}^{\infty}\subset\mathbb{D}_{1}\otimes\mathbb{D}_{2}\) satisfies the Carleson condition. We first show that \(\sum_{n=0}^{\infty}a_{n}T_{1}^{n}h_{1}\otimes\sum_{m=0}^{\infty}b_{m}T_{2}^{m}h_{2}\) is convergent for all \(\{a_{n}\}_{n=0}^{\infty}\otimes\{b_{m}\}_{m=0}^{\infty}\in l^{2}(\mathbb{N}_{0})\otimes l^{2}(\mathbb{N}_{0})\). Consider the corresponding \(f\in H_{1}^{2}(\mathbb{D}_{1})\) and \(g\in H_{2}^{2}(\mathbb{D}_{2})\) determined by \(f(z)=\sum_{n=0}^{\infty}a_{n}z^{n},g(\omega)=\sum_{m=0}^{\infty}b_{m}\omega^{m}\). By [Theorem 3] we know that \(\varphi_{\Lambda}(H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2}))=l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\). So \(\{f(\lambda_{k})\sqrt{1-|\lambda_{k}|^{2}}\}_{k=1}^{\infty}\otimes\{g(\gamma_{l})\sqrt{1-|\gamma_{l}|^{2}}\}_{l=1}^{\infty}\in l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\). Now for \(N,M\in\mathbb{N}\), consider the truncated sequences \(\{a_{n}\}_{n=0}^{N},\{b_{m}\}_{m=0}^{M}\) and the associated functions \(f_{N}\in H_{1}^{2}(\mathbb{D}_{1}),g_{M}\in H_{2}^{2}(\mathbb{D}_{2})\), respectively, given by \(f_{N}(z)=\sum_{n=0}^{N}a_{n}z^{n},g_{M}(\omega)=\sum_{m=0}^{M}b_{m}\omega^{m}\).
Again \(\{f(\lambda_{k})\sqrt{1-|\lambda_{k}|^{2}}\}_{k=1}^{\infty}\otimes\{g(\gamma_ {l})\sqrt{1-|\gamma_{l}|^{2}}\}_{l=1}^{\infty}\in l^{2}(\mathbb{N})\otimes l ^{2}(\mathbb{N})\), and since \(\varphi_{\Lambda}:H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2}) \to l^{2}(\mathbb{N})\otimes l^{2}(\mathbb{N})\) is bounded by [Theorem 3], there is a constant \(C>0\) such that
\[||\varphi_{\Lambda}(f\otimes g)-\varphi_{\Lambda}(f_{N}\otimes g _{M})||^{2} \leq C||f\otimes g-f_{N}\otimes g_{M}||^{2}\] \[=C||\sum_{n=0}^{\infty}a_{n}z^{n}\sum_{m=0}^{\infty}b_{m}\omega^{ m}-\sum_{n=0}^{N}a_{n}z^{n}\sum_{m=0}^{M}b_{m}\omega^{m}||^{2}\] \[=C||\left(\sum_{n=0}^{N}a_{n}z^{n}+\sum_{n=N+1}^{\infty}a_{n}z^{n} \right)\left(\sum_{m=0}^{M}b_{m}\omega^{m}+\sum_{m=M+1}^{\infty}b_{m}\omega^{m}\right)\] \[\qquad\qquad\qquad-\sum_{n=0}^{N}a_{n}z^{n}\sum_{m=0}^{M}b_{m} \omega^{m}||^{2}\] \[=C||\sum_{n=0}^{N}a_{n}z^{n}\sum_{m=0}^{M}b_{m}\omega^{m}+\sum_{n =0}^{N}a_{n}z^{n}\sum_{m=M+1}^{\infty}b_{m}\omega^{m}\] \[\qquad\qquad\qquad+\sum_{n=N+1}^{\infty}a_{n}z^{n}\sum_{m=0}^{M} b_{m}\omega^{m}+\sum_{n=N+1}^{\infty}a_{n}z^{n}\sum_{m=M+1}^{\infty}b_{m} \omega^{m}\] \[\qquad\qquad\qquad\qquad-\sum_{n=0}^{N}a_{n}z^{n}\sum_{m=0}^{M} b_{m}\omega^{m}||^{2}\] \[=C||\sum_{n=0}^{N}a_{n}z^{n}\sum_{m=M+1}^{\infty}b_{m}\omega^{m}+ \sum_{n=N+1}^{\infty}a_{n}z^{n}\sum_{m=0}^{M}b_{m}\omega^{m}\] \[\qquad\qquad\qquad+\sum_{n=N+1}^{\infty}a_{n}z^{n}\sum_{m=M+1}^{ \infty}b_{m}\omega^{m}||^{2}\]
\[=C\sum_{n=0}^{N}|a_{n}|^{2}\sum_{m=M+1}^{\infty}|b_{m}|^{2}+C\sum_{n=N+1}^{ \infty}|a_{n}|^{2}\sum_{m=0}^{M}|b_{m}|^{2}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+C\sum_{n=N+1}^{\infty}|a_{n }|^{2}\sum_{m=M+1}^{\infty}|b_{m}|^{2}\to 0,N,M\to\infty\]
It follows that
\[\sum_{n=0}^{N}a_{n}T_{1}^{n}h_{1}\otimes\sum_{m=0}^{M}b_{m}T_{2}^{m}h_{2} =\sum_{n=0}^{N}a_{n}\sum_{k=1}^{\infty}\sqrt{1-\mid\lambda_{k}\mid^{2}}\lambda_{k}^{n}e_{k}\otimes\sum_{m=0}^{M}b_{m}\sum_{l=1}^{\infty}\sqrt{1-\mid\gamma_{l}\mid^{2}}\gamma_{l}^{m}d_{l}\] \[=\sum_{k=1}^{\infty}\sqrt{1-\mid\lambda_{k}\mid^{2}}\sum_{n=0}^{N}a_{n}\lambda_{k}^{n}e_{k}\cdot\sum_{l=1}^{\infty}\sqrt{1-\mid\gamma_{l}\mid^{2}}\sum_{m=0}^{M}b_{m}\gamma_{l}^{m}d_{l}\] \[=\sum_{k=1}^{\infty}\sqrt{1-\mid\lambda_{k}\mid^{2}}f_{N}(\lambda_{k})e_{k}\cdot\sum_{l=1}^{\infty}\sqrt{1-\mid\gamma_{l}\mid^{2}}g_{M}(\gamma_{l})d_{l}\] \[\to\sum_{k=1}^{\infty}f(\lambda_{k})\sqrt{1-\mid\lambda_{k}\mid^{2}}e_{k}\sum_{l=1}^{\infty}g(\gamma_{l})\sqrt{1-\mid\gamma_{l}\mid^{2}}d_{l}\quad\text{as }N,M\to\infty\]
This proves that \(\sum_{n=0}^{\infty}a_{n}T_{1}^{n}h_{1}\otimes\sum_{m=0}^{\infty}b_{m}T_{2}^{m}h_{2}\) is convergent, and thus \(V\) is well defined from \(l^{2}(\mathbb{N}_{0})\otimes l^{2}(\mathbb{N}_{0})\) into \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\). In order to prove that \(\{T_{1}^{n}h_{1}\}_{n=0}^{\infty}\otimes\{T_{2}^{m}h_{2}\}_{m=0}^{\infty}\) is a frame, it is enough to show that the synthesis operator \(V:l^{2}(\mathbb{N}_{0})\otimes l^{2}(\mathbb{N}_{0})\to\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) is surjective. Let \(x\otimes y\in\mathcal{H}_{1}\otimes\mathcal{H}_{2}\). By [Theorem 3] there is an \(f\otimes g\in H_{1}^{2}(\mathbb{D}_{1})\otimes H_{2}^{2}(\mathbb{D}_{2})\) such that \(f(\lambda_{k})\sqrt{1-\mid\lambda_{k}\mid^{2}}\cdot g(\gamma_{l})\sqrt{1-\mid\gamma_{l}\mid^{2}}=\langle x\otimes y,e_{k}\otimes d_{l}\rangle\) for all \(k,l\in\mathbb{N}\). Choose \(\{a_{n}\}_{n=0}^{\infty}\otimes\{b_{m}\}_{m=0}^{\infty}\in l^{2}(\mathbb{N}_{0})\otimes l^{2}(\mathbb{N}_{0})\) such that \(f(z)=\sum_{n=0}^{\infty}a_{n}z^{n},g(\omega)=\sum_{m=0}^{\infty}b_{m}\omega^{m}\). Then for each \(k,l\in\mathbb{N}\),
\[\langle V(\{a_{n}\}_{n=0}^{\infty}\otimes\{b_{m}\}_{m=0}^{\infty} ),e_{k}\otimes d_{l}\rangle =\langle\sum_{n=0}^{\infty}a_{n}T_{1}^{n}h_{1}\otimes\sum_{m=0}^{ \infty}b_{m}T_{2}^{m}h_{2},e_{k}\otimes d_{l}\rangle\] \[=\langle\sum_{n=0}^{\infty}a_{n}T_{1}^{n}h_{1},e_{k}\rangle\langle \sum_{m=0}^{\infty}b_{m}T_{2}^{m}h_{2},d_{l}\rangle\] \[=\sum_{n=0}^{\infty}a_{n}\langle\sum_{j=1}^{\infty}\lambda_{j}^{n }\sqrt{1-\mid\lambda_{j}\mid^{2}}e_{j},e_{k}\rangle\] \[\qquad\qquad\qquad\cdot\sum_{m=0}^{\infty}b_{m}\langle\sum_{r=1}^{ \infty}\gamma_{r}^{m}\sqrt{1-\mid\gamma_{r}\mid^{2}}d_{r},d_{l}\rangle\]
\[=\sum_{n=0}^{\infty}a_{n}\lambda_{k}^{n}\sqrt{1{-}\mid\lambda_{k} \mid^{2}}\sum_{m=0}^{\infty}b_{m}\gamma_{l}^{m}\sqrt{1{-}\mid\gamma_{l}\mid^{2}}\] \[=f(\lambda_{k})\sqrt{1{-}\mid\lambda_{k}\mid^{2}}\cdot g(\gamma_{l })\sqrt{1{-}\mid\gamma_{l}\mid^{2}}\] \[=\langle x,e_{k}\rangle\langle y,d_{l}\rangle.\]
Therefore \(V(\{a_{n}\}_{n=0}^{\infty}\otimes\{b_{m}\}_{m=0}^{\infty})=x\otimes y\) and thus \(V\) is surjective, as desired.
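To complement the proof, the frame bounds of a truncated version of the iterated system in Theorem 5 can be estimated numerically. This is our own illustration (all names are hypothetical), using diagonal \(T_{1},T_{2}\) with real eigenvalue sequences; the key fact exploited is that the frame operator of the tensor system factors as \(S_{1}\otimes S_{2}\), so its eigenvalues are pairwise products:

```python
import numpy as np

def tensor_frame_bounds(lam, gam, n_max):
    """Estimate frame bounds of the truncated iterated system
    {T1^n h1} (x) {T2^m h2}, 0 <= n, m <= n_max, from Theorem 5, where
    (T1^n h1)_k = lam_k^n * sqrt(1 - lam_k^2) (and similarly with gam)."""
    V1 = np.sqrt(1 - lam**2)[:, None] * lam[:, None] ** np.arange(n_max + 1)
    V2 = np.sqrt(1 - gam**2)[:, None] * gam[:, None] ** np.arange(n_max + 1)
    # The frame operator of the tensor system factors as S1 (x) S2, so its
    # eigenvalues are all pairwise products of those of S1 and S2.
    s1 = np.linalg.eigvalsh(V1 @ V1.T)
    s2 = np.linalg.eigvalsh(V2 @ V2.T)
    prods = np.outer(s1, s2)
    return prods.min(), prods.max()  # approximate lower/upper frame bounds

lam = np.array([0.1, 0.4, 0.7])  # finitely many real points in D_1
gam = np.array([0.2, 0.5, 0.8])  # finitely many real points in D_2
print(tensor_frame_bounds(lam, gam, n_max=60))
```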
Acknowledgement: This work was supported by the Science and Engineering Research Board, DST, Govt. of India [CRG/2020/003170].
Declaration: I confirm that all authors listed on the title page have contributed equally to the work, have read the manuscript, and agree to its submission.
Conflict of Interest: The authors declare that there is no conflict of interest.
Data Availability: Data availability or data sharing is not applicable to this article as no data sets were generated or analyzed during the current investigation.
|
2305.06868 | Review of the application piezoelectric actuators for SRF cavity tuners | Modern particle accelerators and high-energy physics experiments that
deployed up to several hundred of accelerating superconducting RF cavities
require accurate frequency control. This is achieved by using cavity tuners
typically actuated with the piezoelectric ceramic actuators. Piezoelectric
ceramic actuators have become "standard" components of the SRF cavity tuner and
depending on the application could be operated in different environments: in
air, at cryogenic temperature, in vacuum, and submerged in liquid helium.
Different applications place different requirements on the piezo actuators, but
the important parameters common to all applications are the lifetime and
reliability of the actuators. Several programs targeting the development of
reliable piezo actuators are presented in this contribution. | Yuriy Pischalnikov, Crispin Contreras-Martinez | 2023-05-11T15:11:04Z | http://arxiv.org/abs/2305.06868v1 | # Review of the application piezoelectric actuators for SRF cavity tuners
###### Abstract
Modern particle accelerators and high-energy physics experiments that deploy up to several hundred accelerating superconducting RF cavities require accurate frequency control. This is achieved by using cavity tuners typically actuated with piezoelectric ceramic actuators. Piezoelectric ceramic actuators have become "standard" components of the SRF cavity tuner and, depending on the application, can be operated in different environments: in air, at cryogenic temperature, in vacuum, and submerged in liquid helium. Different applications place different requirements on the piezo actuators, but the important parameters common to all applications are the lifetime and reliability of the actuators. Several programs targeting the development of reliable piezo actuators are presented in this contribution.
Piezoelectric technology, Precision engineering, Automation, Control
## 1 Introduction
Modern particle accelerators such as Eu-XFEL/DESY [1], LCLS-II/SLAC [2], and ESS [3] are large in size. They are built for fundamental physics research and deploy hundreds of accelerating elements known as superconducting radio frequency (SRF) cavities made from niobium. During operation in an accelerator, the SRF cavity frequency must be actively adjusted by changing the length of the ~1 m long cavity at the nanometer level. Frequency tuning systems (or simply tuners) are designed and operated to perform this function (Figure 1) [4].
The SRF cavity tuner system is a combination of a tuning mechanism and an actuator. Typically, such systems are a mechanical frame with piezoelectric elements serving as actuators to compress or stretch the cavity. The ability of the piezo actuator to generate large forces (approx. 4 kN for a stack with cross-section 10×10 mm\({}^{2}\)), deliver strokes of up to tens of \(\mu\)m (with almost unlimited resolution), and withstand pressures up to 200 MPa makes these actuators a good choice for deployment in SRF cavity tuners.
Recently, thousands of piezo actuators have been deployed as fast/fine tuning elements in several large SRF linacs that are in operation or close to being in operation. In large machines (Eu-XFEL, LCLS-II, ESS), piezo actuators are typically deployed close to the SRF cavity, inside the SRF cryomodule, in the insulating vacuum volume and at cryogenic working temperature. The reliability of the piezo actuators has become the most critical parameter, considering the complexity and cost of replacement in case of failure.
## 2 Measurements of piezo stroke versus temperature
As part of the LCLS-II cryomodule commissioning, after cooling the cavities to T=2 K, the detuning of each cavity with piezo actuators was measured [4]. The piezo actuator stroke when cooled to temperatures in the range T=4 K to T=10 K is around 10 \(\mu\)m, almost 2-3 times larger than expected from data presented in several papers [5, 6] and companies' catalogues [7, 8]. To address this discrepancy, the stroke of the actuator P-844K075 [9], made from PZT (lead zirconate titanate) PICMA* stacks of size 10×10×36 mm\({}^{3}\) [10], was measured at different temperatures. The Dark Photon search experiment setup [11] was used to conduct the measurements (Figure 2). The stroke of the actuator P-844K075, measured before installation into the setup at T=295 K and \(V_{piezo}\)=120 V, was 36 \(\mu\)m.
The piezo actuator, when \(V_{piezo}\)=120 V is applied, retunes the single-cell 1.3 GHz SRF cavity by compressing it. Cavity retuning (frequency shift) was measured with a network analyzer (NWA) (Figure 3). The cavity was installed in a dedicated facility and cooled/submerged in liquid helium at T=4 K. Cavity retuning (i.e., the piezo stroke) was measured at several points (at
Figure 1: 3D picture of the LCLS-II accelerator cryomodule with 8 elliptical SRF cavities (A). Each cavity has a tuner system that adjusts the frequency of the SRF cavity to resonance [8]. To compress the cavity (to change the cavity frequency), the encapsulated piezo actuator P-844K075 is deployed (C).
different temperatures) after the liquid helium was evaporated and the cavity/tuner/piezo system slowly warmed up to room temperature (over approximately 24 hours). The temperature of the piezo ceramic was measured with an RTD (CERNOX) installed on the surface of the piezo actuator. The SRF cavity parameter df/dl=2.3 kHz/\(\mu\)m (the cavity's frequency change (df) versus cavity length change (dl)) does not depend on the cavity temperature and thus allowed the cavity to be used as a high-resolution sensor of the piezo stroke at different temperatures. In the temperature range from T=2 K to T=9 K, when the Nb cavity is in the superconducting phase, the quality factor of the SRF cavity is high and the bandwidth is narrow. The cavity's narrow bandwidth led to a cavity retuning accuracy of approximately several tens of Hz, so the accuracy of the piezo stroke measurements through the cold cavity detuning is in the nanometer range. When the cavity goes to the normal conducting state, the quality factor drops and the accuracy of the piezo stroke measurements becomes 1 micrometer or better.
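As a minimal illustration of this conversion (our own sketch; only the df/dl = 2.3 kHz/\(\mu\)m value comes from the text, and the example input is hypothetical):

```python
DF_DL_KHZ_PER_UM = 2.3  # cavity sensitivity df/dl quoted above, in kHz/um

def stroke_um(detuning_khz: float) -> float:
    """Convert a measured cavity detuning (kHz) into an estimated piezo
    stroke (um) using the temperature-independent df/dl of the cavity."""
    return detuning_khz / DF_DL_KHZ_PER_UM

print(stroke_um(23.0))  # a 23 kHz retuning corresponds to a ~10 um stroke
```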
The main objective, however, was not a direct measurement of the piezo stroke in micrometers, but rather to find the dependence of the piezo stroke on temperature, from room temperature down to cryogenic temperature. The main contributions to the measurement errors were due to changes in the cavity and tuner stiffness vs. temperature and the accuracy of the frequency detuning measurements when the cavity was at higher temperature. The estimated measurement errors are below 10%. The results are shown in Figure 4. The piezo response (cavity detuning range) when the SRF cavity is cooled down to T=4 K was just 3-4 times lower than at room temperature, rather than 10-13 times as reported by other authors [5, 6].
## 3 Challenges for operation of piezo actuators at cryogenic temperature and vacuum
When the SRF cavity is operated in RF pulse mode, it is subject to dynamic Lorentz force detuning (LFD) of up to several kHz. To compensate for this level of cavity detuning, the piezo needs to be operated with a stimulus pulse of amplitude up to 120-150 V. Power dissipation inside the piezo ceramic can reach levels of 0.1 W up to 1 W. As demonstrated by several teams [13, 14], the temperature in the centre of the piezo stack, when operated inside vacuum at a high dynamic rate, can quickly rise by \(\Delta\)T~200 K, which can lead to piezo failure. Heating of the piezo when operating at the high dynamic rate required for LFD compensation can therefore significantly decrease the lifetime of the actuator [13, 14, 15]. The FNAL and Physik Instrumente (PI) teams conducted an R&D program to develop a novel piezo actuator for operation at cryogenic temperature, inside vacuum, and at a high dynamic rate. The goal was to develop an encapsulated piezo actuator that removes heat from the surface of the piezo-ceramic stack and prevents positive feedback heating. The result of this R&D program is the actuator P-844K093 (Figures 5 and 6). The encapsulated and preloaded actuator utilizes a PICMA* stack of size 10×10×36 mm\({}^{3}\). Copper foam was selected to remove heat from the side surfaces of the PICMA* stack. The copper foam is installed between the piezo stack and a plate made from aluminum nitride. Aluminum nitride is a good dielectric with excellent heat transfer properties. This material isolates the outside encapsulation from high voltage while at the same time allowing for efficient heat extraction. For efficient operation, a heat sink must be connected to the actuator's copper plate (attached to the aluminum nitride plate) and anchored to a cryomodule cryogenic pipe (at T=2 K, T=4 K, or T=7 K).
Dielectric heating measurements of the PICMA stacks in the two actuators P-844K075 and P-844K093, operated inside vacuum and at cryogenic temperature, were performed at a dedicated facility (Figure 7) [14, 15]. The heating of both actuators when operated with the same sinewave stimulus pulse (f=100 Hz, \(V_{piezo}\)=100 V) is presented in Figure 8. The temperature
Figure 4: Dependence of the PICMA* piezo actuator stroke on the temperature of the piezoceramic stack. The piezo actuator stroke at room temperature (T=295 K) is normalized to 1.0.
Figure 3: Schematic of the setup used for measurements of the piezo stroke through retuning/compression of the SRF cavity. Changes in the cavity frequency when the piezo was operated with \(V_{piezo}\)=0 V and \(V_{piezo}\)=120 V are then converted to piezo stroke.
Figure 2: Dark Photon search experiment setup, with a single-cell 1.3 GHz SRF cavity and a piezo tuner equipped with a piezo actuator, used to measure the piezo response versus the temperature of the piezo ceramic.
of the actuator P-844K075 increased from T=20 K up to T=110 K over 1.5 hours when the stimulus pulse was applied to the piezo actuator. At the same time, the temperature of the P-844K093 (the piezo with copper foam) warmed up from 10 K by \(\Delta\)T=7 K during just the first 10 minutes of operation and did not increase beyond 17 K. The piezo actuator P-844K093 (with copper foam) demonstrated a significant capability to remove the heat generated by the piezo stack. The heat sink was attached to the piezo actuator and the T=4 K plate. Efficient transfer of the heat from the piezo will prevent overheating and will significantly increase the lifetime of the piezo actuator.
## 4 Summary
The results of the tests presented in this paper demonstrate that the stroke of an actuator made from PZT piezo ceramics, when cooled to T=4 K, decreases to ~1/4 of the stroke at room temperature. Previous studies and companies' catalogues reported a drop of the PZT stroke by a factor of ~10 when cooled down to T=4 K. These results will lead to more efficient designs of piezo tuners for future very large SRF accelerators (like the ILC [16]), where tens of thousands of piezo actuators will be deployed.
A novel piezo actuator, P-844K093, has been developed to increase the reliability of piezo actuators in SRF cavity tuners which operate in RF pulse mode. The developed design allows for efficient removal of heat from the piezo ceramic and prevention of positive feedback heating. The tests were conducted by running the piezo actuators at high voltage amplitude and high dynamic rate inside vacuum and in the temperature range of T=10 K to T=20 K. The temperature increase of the P-844K093 due to dielectric heating of the piezoelectric stack was reduced by a factor of 15 compared to the standard (P-844K075) PZT actuator.
Another approach to mitigating dielectric heating of piezo actuators is to develop actuators from different piezoelectric ceramic materials which generate less dielectric heating. There are requests from scientists working in the field of dark matter search experiments for the development of new low-heat-generating piezoceramics. These dark matter experiments are deployed inside dilution refrigerators in the temperature range T=10-20 mK, with heat removal capacities at the level of microwatts.
|
2303.05037 | Gauges and Accelerated Optimization over Smooth and/or Strongly Convex
Sets | We consider feasibility and constrained optimization problems defined over
smooth and/or strongly convex sets. These notions mirror their popular function
counterparts but are much less explored in the first-order optimization
literature. We propose new scalable, projection-free, accelerated first-order
methods in these settings. Our methods avoid linear optimization or projection
oracles, only using cheap one-dimensional linesearches and normal vector
computations. Despite this, we derive optimal accelerated convergence
guarantees of $O(1/T)$ for strongly convex problems, $O(1/T^2)$ for smooth
problems, and accelerated linear convergence given both. Our algorithms and
analysis are based on novel characterizations of the Minkowski gauge of smooth
and/or strongly convex sets, which may be of independent interest: although the
gauge is neither smooth nor strongly convex, we show the gauge squared inherits
any structure present in the set. | Ning Liu, Benjamin Grimmer | 2023-03-09T05:05:54Z | http://arxiv.org/abs/2303.05037v3 | # Gauges and Accelerated Optimization over Smooth and/or Strongly Convex Sets
###### Abstract
We consider feasibility and constrained optimization problems defined over smooth and/or strongly convex sets. These notions mirror their popular function counterparts but are much less explored in the first-order optimization literature. We propose new scalable, projection-free, accelerated first-order methods in these settings. Our methods avoid linear optimization or projection oracles, only using cheap one-dimensional linesearches and normal vector computations. Despite this, we derive optimal accelerated convergence guarantees of \(O(1/T)\) for strongly convex problems, \(O(1/T^{2})\) for smooth problems, and accelerated linear convergence given both. Our algorithms and analysis are based on novel characterizations of the Minkowski gauge of smooth and/or strongly convex sets, which may be of independent interest: although the gauge is neither smooth nor strongly convex, we show the gauge squared inherits any structure present in the set.
## 1 Introduction.
We consider feasibility and optimization problems defined over sets \(S_{i}\) possessing classic structures like smoothness and strong convexity. These structures in sets are much less explored than their function counterparts in the first-order optimization literature. We show these structures in constraint sets lead to the same speedups commonly found in first-order methods for structured functions (i.e., \(O(1/T)\) convergence rates given strong convexity, accelerated \(O(1/T^{2})\) rates given smoothness, and fast linear convergence given both). We propose projection-free algorithms attaining these optimal rates for both feasibility problems
\[\text{Find }x\in\cap_{i=1}^{m}S_{i} \tag{1.1}\]
where all the sets \(S_{i}\) are either smooth, strongly convex, or both, and for optimization problems with similarly structured \(f\) and \(S_{i}\)
\[\begin{cases}\max&f(x)\\ \text{s.t.}&x\in S_{i}\quad\forall i=1\ldots m\.\end{cases} \tag{1.2}\]
Critically, our proposed algorithms are projection-free. The typical first-order method assumption of being able to project onto \(S_{i}\) limits algorithms to simple constraints. Instead, we only assume oracles for one-dimensional linesearches (to evaluate the gauges defined in (1.5)) and to compute normal vectors on the boundary of each \(S_{i}\). In contrast, projected gradient methods require an orthogonal projection oracle. (Projections onto sets parallel the often costly computation of proximal operators of functions, whereas computing normal vectors parallels cheaper gradient calculations.)
For example, if \(S_{i}\) is an ellipsoid, orthogonal projection lacks a closed form, but our linesearch and its normal vectors have closed forms, costing a single matrix-vector multiplication to compute. For polyhedrons, projection is a quadratic program, whereas a normal vector is computed by identifying any one active constraint.
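As a minimal sketch of the ellipsoid case (our own naming and centering convention; we assume \(A\) is invertible so the center \(e\) solves \(Ae=b\)), each oracle costs a matrix-vector product or two:

```python
import numpy as np

def ellipsoid_oracles(A, b):
    """Gauge and unit-normal oracles for E = {x : ||Ax - b||_2 <= 1},
    taking the gauge at the center e solving A e = b (A invertible)."""
    e = np.linalg.solve(A, b)

    def gauge(x):
        # gamma_{E,e}(x) = ||A(x - e)||_2: one matrix-vector product.
        return float(np.linalg.norm(A @ (x - e)))

    def normal(y_bar):
        # At a boundary point y_bar (||A y_bar - b|| = 1), the gradient of
        # ||Ax - b||^2 gives an outward normal; normalize to unit length.
        g = A.T @ (A @ y_bar - b)
        return g / np.linalg.norm(g)

    return gauge, normal

A = np.diag([1.0, 2.0]); b = np.array([0.5, 0.0])
gauge, normal = ellipsoid_oracles(A, b)
x = np.array([1.0, 1.0])
e = np.linalg.solve(A, b)
y_bar = e + (x - e) / gauge(x)  # boundary point along the ray from e to x
print(gauge(x), normal(y_bar))
```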
Throughout, we consider closed convex sets \(S_{i}\subseteq\mathcal{E}\) in some finite-dimensional Euclidean space \(\mathcal{E}\). Smoothness and strong convexity can be intuitively defined as follows (formal local and global definitions are given in Section 2): A convex set \(S\) is \(\beta\)-smooth if every unit normal vector \(\zeta\in N_{S}(x)\) taken at some \(x\) on the boundary of \(S\) yields an inner approximation
\[B(x-\zeta/\beta,1/\beta)\subseteq S\, \tag{1.3}\]
where \(B(x,r)\) is the ball of radius \(r\) around \(x\). This can be viewed as a ball smoothly rolling around inside the set's boundary. Likewise, \(S\) is \(\alpha\)-strongly convex if every unit normal vector \(\zeta\in N_{S}(x)\) yields an outer approximation
\[B(x-\zeta/\alpha,1/\alpha)\supseteq S. \tag{1.4}\]
These definitions in terms of inner and outer approximations given by each normal vector mirror the traditional functional setting, where smoothness and strong convexity correspond to upper and lower quadratic approximations being given by each subgradient.
Our Contributions. Our approach towards utilizing these structures focuses on each set's gauge, translated by some point \(e_{i}\) in the interior of \(S_{i}\) (taking \(e_{i}=0\) gives the classic Minkowski functional):
\[\gamma_{S_{i},e_{i}}(x):=\inf\{\lambda>0\mid x-e_{i}\in\lambda(S_{i}-e_{i})\}. \tag{1.5}\]
1. **Structure of Gauges of Structured Sets.** We show in Theorems 3.1 and 3.2 that any \(\beta\)-smooth or \(\alpha\)-strongly convex set has \(O(\beta)\)-smooth or \(O(\alpha)\)-strongly convex gauge squared \(\frac{1}{2}\gamma_{S,e}^{2}(x)\), respectively. Theorems 3.4 and 3.5 show the converse of this result. Here the big-O notation suppresses constants depending on the size of \(S\) and placement of \(e\in S\).
2. **Fast Algorithms for Reformulations of (1.1) and (1.2)**. Noting \(\gamma_{S,e}(x)\leq 1\) if and only if \(x\in S\), the feasibility problem (1.1) can be rewritten as the unconstrained convex minimization \[\min_{y}\max_{i}\{\gamma_{S_{i},e_{i}}(y)\}\] (1.6) for any \(e_{i}\in S_{i}\). Utilizing the radial duality framework of \([1,2]\), the optimization problem (1.2) can be rewritten as the unconstrained convex minimization \[\min_{y}\max_{i}\{f^{\Gamma}(y),\ \gamma_{S_{i},0}(y)\}\] (1.7) provided \(0\in\text{int }\bigcap_{i}S_{i}\) and \(f(0)>0\), where \(f^{\Gamma}(y)=\sup\{v>0\mid vf(y/v)\leq 1\}\). Applying well-known first-order methods to these unconstrained gauge reformulations, Theorem 4.1 shows a \(O(1/\alpha\epsilon)\) rate for \(\alpha\)-strongly convex \(f\) and \(S_{i}\), and Theorem 4.3 shows an accelerated \(O(\sqrt{\beta/\epsilon})\) rate for \(\beta\)-smooth \(f\) and \(S_{i}\) and a \(O(\sqrt{\beta/\alpha}\log(1/\epsilon))\) rate given both. These rates match the optimal projected method convergence rates while avoiding their more expensive orthogonal projection computations.
3. **Numerical Validation.** We verify our theory numerically, seeing the expected speedups for feasibility problems over \(p\)-norm ellipsoids, which are known to be smooth when \(2\leq p<\infty\) and strongly convex when \(1<p\leq 2\). Further, for synthetic smooth ellipsoid-constrained
optimization instances, we compare our projection-free accelerated method (based on the radial reformulation (1.7)) against standard first-order (SCS and COSMO) and second-order, interior point solvers (Gurobi and Mosek). Despite only using one matrix multiplication per iteration, our method is often competitive with second-order methods. This motivates future work on more practical implementations and real-world validation for accelerated radial methods.
Outline. In the remainder of this section, we discuss important families of smooth/strongly convex sets and related approaches in the literature. Section 2 discusses preliminary definitions and some useful lemmas. Section 3 formalizes and proves our main theorems on gauges of structured sets. Finally, in Sections 4 and 5, we propose and analyze projection-free methods using our Theorems 3.1 and 3.2 for feasibility problems based on a gauge reformulation and constrained optimization based on a radial reformulation, and numerically explore their effectiveness.
### Common Families of Smooth/Strongly Convex Sets.
Many commonly encountered families of constraint sets possess smoothness and/or strong convexity. Below we give three such families and show that affine transformations, Minkowski sums, and intersections preserve this structure. See [3, page 259-297], [4], and [5] as references for the classic analysis of these properties.
p-Norms and Schatten p-Norms.For \(\mathcal{E}=\mathbb{R}^{n}\), denote the \(p\)-norm of a vector by \(\|x\|_{p}^{p}=\sum_{i=1}^{n}|x_{i}|^{p}\). The \(p\)-norm ball \(B_{p}(0,1)=\{x\mid\|x\|_{p}\leq 1\}\) is either smooth or strongly convex depending on the choice of \(p\). This ball is smooth with constant \(\beta=(p-1)n^{\frac{1}{2}-\frac{1}{p}}\) when \(p\in[2,\infty)\) and strongly convex with constant \(\alpha=(p-1)n^{\frac{1}{2}-\frac{1}{p}}\) when \(p\in(1,2]\)[6, Lemma 3]. Similarly, in the space of matrices with trace inner product, the Schatten \(p\)-norm defined by the \(p\)-norm of a matrix's singular values \(\|X\|_{p}^{p}=\sum_{i=1}^{d}\sigma_{i}(X)^{p}\) yields smooth and/or strongly convex unit balls under the same conditions [6, Lemma 6].
Figure 1: Examples of strongly convex and smooth sets with the inner approximation (1.3) in red and the outer approximation (1.4) in blue. The \(p=1.5\)-norm ball in (a) is strongly convex but not smooth. The \(p=2\)-norm ellipsoid in (b) is smooth and strongly convex. The \(p=3\)-norm ball in (c) is smooth but not strongly convex.
Norm constraints have found widespread usage describing trust regions, adding regularization, and inducing approximate sparsity (low-rankness) for (Schatten) norms as \(p\searrow 1\). These will provide a working example throughout for applying our theory in Section 3.1 and for numerics in Section 5. Note the gauge and normal vectors of a (Schatten) \(p\)-norm ball or ellipsoid have simple closed forms, whereas orthogonal projection only does for balls with \(p\in\{1,2,\infty\}\). Hence these will provide a useful test case for our proposed projection-free methods across the range of smooth and/or strongly convex settings.
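Concretely, a minimal sketch of the \(p\)-norm ball oracles just mentioned (function names are ours; the normal uses the standard gradient \(\nabla\|y\|_{p}\propto\mathrm{sign}(y)\,|y|^{p-1}\) at boundary points):

```python
import numpy as np

def pnorm_ball_oracles(p):
    """Gauge and unit-normal oracles for B_p = {x : ||x||_p <= 1},
    1 < p < infinity, with the gauge taken at e = 0."""
    def gauge(x):
        # gamma_{B_p,0}(x) is simply the p-norm of x.
        return float(np.linalg.norm(x, ord=p))

    def normal(y_bar):
        # Gradient of ||.||_p at a boundary point (||y_bar||_p = 1),
        # rescaled to unit Euclidean length.
        g = np.sign(y_bar) * np.abs(y_bar) ** (p - 1)
        return g / np.linalg.norm(g)

    return gauge, normal

gauge, normal = pnorm_ball_oracles(p=3.0)
x = np.array([0.5, -1.5, 1.0])
print(gauge(x), normal(x / gauge(x)))
```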
Functional Constraints.Often, constraint sets are given by a level set of some function \(h\), \(S=\{x\mid h(x)\leq z\}\) for some fixed \(z\in\mathbb{R}\). This level set inherits the smoothness and/or strong convexity of the function \(h\) as stated below.
**Lemma 1.1**.: _[_7_, Proposition 4.14]_ _If \(h\) is \(L\)-smooth, \(\{x\mid h(x)\leq z\}\) is \(L/\inf\{\|\nabla h(x)\|\mid h(x)=z\}\)-smooth. If \(h\) is \(\mu\)-strongly convex, \(\{x\mid h(x)\leq z\}\) is \(\mu/\sup\{\|\nabla h(x)\|\mid h(x)=z\}\)-strongly convex._
Epigraphs of any smooth function.For any \(L\)-smooth function \(h\), its epigraph epi \(h=\{(x,t)\mid h(x)\leq t\}\) is smooth with the same constant \(L\). No similar result applies to the epigraphs of \(\mu\)-strongly convex functions since strongly convex sets must be bounded.
Affine Transformations.For any \(L\)-smooth \(\mu\)-strongly convex function \(h\), its affine transformation \(h(Ax+b)\) is well-known to be \(\lambda_{max}(A^{*}A)L\)-smoothness and \(\lambda_{min}(A^{*}A)\mu\)-strongly convex. Note that if \(A\) has a null space, \(A^{*}A\) has a zero eigenvalue, so strong convexity is lost. A weakened version of these two properties extends to sets, stated below. The strongly convexity part of Lemma 1.2 is given by [4, Corollary 9], the smoothness part is proved in Appendix A.2.1.
**Lemma 1.2**.: _Suppose a closed convex set \(S\) is \(\beta\)-smooth and \(\alpha\)-strongly convex. Then for any linear \(A\colon\mathcal{E}\to\mathcal{E}^{\prime}\), \(b\in\mathcal{E}^{\prime}\), the set \(\{x\mid Ax+b\in S\}\) is \(\beta\frac{\lambda_{\max}(A^{*}A)}{\sqrt{\lambda_{\min}(A^{*}A)}}\)-smooth and \(\alpha\frac{\lambda_{\min}(A^{*}A)}{\sqrt{\lambda_{\max}(A^{*}A)}}\)-strongly convex._
Minkowski Sums.The Minkowski sum of two sets (defined as \(S+T=\{s+t\mid s\in S,t\in T\}\)) inherits smoothness from either set and remains strongly convex only if both sets are. These set properties are the reverse of the typical properties of summing functions. There \(f+g\) is smooth if both \(f\) and \(g\) are but inherits strong convexity if either \(f\) or \(g\) is. The following lemma formalizes this, where nonsmooth sets are \(\beta=\infty\)-smooth and non-strongly convex sets are \(\alpha=0\)-strongly convex. The strongly convexity part of Lemma 1.3 is given by [4, Corollary 2], the smoothness part is proved in Appendix A.2.2.
**Lemma 1.3**.: _Suppose closed convex sets \(S_{i}\) are \(\beta_{i}\)-smooth and \(\alpha_{i}\)-strongly convex for \(i=1,2\). Then \(S_{1}+S_{2}\) is \((\beta_{1}^{-1}+\beta_{2}^{-1})^{-1}\)-smooth and \((\alpha_{1}^{-1}+\alpha_{2}^{-1})^{-1}\)-strongly convex._
In particular, noting the ball \(B_{\epsilon}=\{x\mid\|x\|_{2}\leq\epsilon\}\) is \(1/\epsilon\)-smooth for any \(\epsilon>0\), this lemma gives a natural smoothing of any nonsmooth set \(S\) by \(S+B_{\epsilon}\). However, checking membership in \(S+B_{\epsilon}\) amounts to orthogonally projecting onto \(S\), which is only computable for sufficiently simple \(S\).
Intersections of strongly convex sets.Similar to convex sets, the intersection of several strongly convex sets preserves strong convexity.
**Lemma 1.4**.: _[_5_, Proposition 2]_ _Suppose closed convex sets \(S_{i}\) are \(\alpha_{i}\)-strongly convex. Then \(S=\bigcap_{i}S_{i}\) is \(\min_{i}\{\alpha_{i}\}\)-strongly convex._
### Related Works.
Structural Results on Gauges. There has been much interest in the influence of curvature on computational and statistical efficiency in optimization and machine learning [6, 8, 9, 10, 11]. These works all considered relationships between the strong convexity of a centrally symmetric set and its gauge. Note that this symmetry condition ensures its gauge is a norm. Recently [12] showed a symmetric set is strongly convex w.r.t. its own gauge norm if and only if the set's gauge squared is a strongly convex function. Our results remove this symmetry requirement and generalize to also apply to smooth sets. Without assuming symmetry, there is no gauge norm to consider, so our results describe smoothness and strong convexity with respect to the Euclidean norm.
Optimization over Gauges - Gauge and Radial Duality. A specialized duality theory between optimization problems with objective function and/or constraints defined by gauges was developed by Freund [13] and further advanced by [14, 15]. Our results may offer new insights, showing that these gauge problem formulations (squared) may be more tractable to solve when the sets inducing these gauges are smooth and/or strongly convex.
In [1] and [2], Grimmer established a radial duality theory between nonnegative optimization problems. This theory shows that constrained optimization can be reformulated into an unconstrained problem in terms of the gauge of its constraints. Namely, for any concave \(f\) with \(f(0)>0\) and convex \(S\) with \(0\in\text{int }S\), the primal problem
\[\begin{cases}\max_{x}f(x)\\ \text{s.t. }x\in S\end{cases}\quad=\max_{x\in\mathcal{E}}\min\{f(x),\hat{t}_{S}(x)\} \tag{1.8}\]
where \(\hat{t}_{S}(x)\) is the nonstandard indicator function \(\hat{t}_{S}(x)=\begin{cases}+\infty&\text{ if }x\in S\\ 0&\text{ if }x\notin S\end{cases}\) can be equivalently solved by minimizing the radially dual problem
\[\min_{y\in\mathcal{E}}\max\{f^{\Gamma}(y),\gamma_{S}(y)\} \tag{1.9}\]
where \(f^{\Gamma}(y)=\sup\{v>0\mid vf(y/v)\leq 1\}\) [1, Proposition 4.9]. This reformulation is unconstrained and only depends on the constraints \(S\) via their gauge. Consequently, our structural results on gauges will facilitate the direct use of accelerated optimization methods on this radial dual (squared). Prior radial methods have only used simple subgradient methods [16, 17] or smoothing techniques [2, 18].
Optimization over Gauges - Classic Penalization and Level-set Reformulations. Given \(e\in S\), one can rewrite the set constraint \(x\in S\) as the functional constraint \(\gamma_{S,e}(x)\leq 1\). Then a constrained optimization problem (1.2) can be reformulated as
\[\begin{cases}\min_{x}f(x)\\ \text{s.t. }\gamma_{S,e}(x)\leq 1\.\end{cases} \tag{1.10}\]
The recent works [19, 20] approached this via penalization methods, minimizing \(f(x)+\lambda(\gamma_{S,e}(x)-1)\) in online optimization settings. Their convergence guarantees carefully account for the cost of approximate gauge evaluations via membership oracle calls to \(S\). In contrast, we assume exact evaluations of the gauge (as is possible for many polyhedral, spectral, and polynomial-type constraints), which facilitates access to actual boundary points of \(S\) where normal vectors exist.
Alternatively, a convex function \(f\) can be minimized over a set \(S\) via the value function
\[v(\lambda)=\min_{x}\max\{f(x)-\lambda,\gamma_{S,e}(x)-1\}. \tag{1.11}\]
This has \(v(\lambda)=0\) exactly when \(\lambda\) equals the minimum objective value. Root finding schemes can be effectively applied here [21, Lemmas 2.3.4, 2.3.5, 2.3.6] as well as more careful approaches, always ensuring a feasible solution path [22]. An optimal, parameter-free level-set method was proposed based on a simple parallel restarting scheme by [23].
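A schematic of this root-finding outer loop may help (a minimal sketch of our own; in practice each evaluation of \(v(\lambda)\) is itself an inner optimization solved only approximately, which the cited works carefully account for):

```python
def level_set_root(v, lam_lo, lam_hi, tol=1e-6):
    """Bisection on the level-set value function v of (1.11), which is
    nonincreasing in lam and crosses zero at the optimal value; the
    bracket [lam_lo, lam_hi] is assumed to contain that root."""
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_lo + lam_hi)
        if v(mid) > 0:
            lam_lo = mid  # lam still below the optimal value
        else:
            lam_hi = mid  # lam at or above the optimal value
    return 0.5 * (lam_lo + lam_hi)

# Toy check: minimize f(x) = x^2 + 1 over S = [-1, 1] (optimal value 1),
# evaluating v(lam) by brute force over a fine grid of x values.
v = lambda lam: min(max(x * x + 1 - lam, abs(x) - 1)
                    for x in [i / 1000.0 - 2.0 for i in range(4001)])
print(level_set_root(v, 0.0, 5.0))  # ~1.0
```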
Another root-finding reformulation, proposed by [24], reverses the roles of the constraint and objective functions. Then a convex function \(f\) can be minimized over a set \(S\) via root finding on the value function
\[v(\lambda)=\begin{cases}\min_{x}\gamma_{S,e}(x)\\ \text{s.t. }f(x)\leq\lambda\.\end{cases} \tag{1.12}\]
Namely, the minimum objective value of \(f\) on \(S\) is the smallest \(\lambda\) such that \(v(\lambda)\leq 1\). For any of these penalized or level-set/root-finding optimization techniques, our results on the structure of gauges (squared) for smooth and/or strongly convex sets \(S\) may yield faster algorithms. The details for such an application are beyond this work but provide an interesting future direction.
Functionally Constrained Optimization. Functionally constrained optimization is one natural family of constrained convex optimization problems with smooth and/or strongly convex constraint sets. These are problems minimizing \(f(x)\) subject to \(g_{i}(x)\leq 0\) for convex \(f,g_{i}\). Using Lemma 1.1, smoothness and/or strong convexity of \(g_{i}\) carry over to the level sets \(S_{i}=\{x\mid g_{i}(x)\leq 0\}\) (assuming gradients are well behaved on the boundary of the feasible region, which constraint qualification can ensure). In addition to the radial or level set approaches improved by our theory here, functionally constrained problems can be addressed using switching subgradient, augmented Lagrangian, and other first-order methods [25]. One drawback of such methods directly defined in terms of function and gradient evaluations of \(g_{i}\) is that they are representation dependent (that is, replacing \(g_{i}(x)\leq 0\) by \(2g_{i}(x)\leq 0\) may change how the algorithm behaves). Hence one may need to preprocess constraints, scaling them appropriately, to achieve good performance. Approaches based on evaluating the gauges avoid this issue as \(\{x\mid g(x)\leq 0\}\) is the same set as \(\{x\mid 2g(x)\leq 0\}\).
Conditional and Projected Gradient-Type Methods. When constraints are sufficiently simple, the more complex methodologies like level set methods, radial transformations, switching subgradient methods, augmented Lagrangians, etc. are not needed. Instead, conditional gradient (e.g., Frank-Wolfe [26]) and projected gradient methods can be utilized. The recent works [27, 28] have shown performance improvements for conditional gradient methods whenever the constraint set is uniformly convex (a generalization of the strongly convex sets considered here). To the best of our knowledge, no performance improvements have been shown for projected gradient methods over smooth/strongly convex sets.
These families of methods assume linear optimization or quadratic optimization over the constraint set can be done relatively cheaply in each iteration. For example, consider polyhedral constraints \(S\) either represented by a collection of inequalities of \(\{x\mid a_{i}^{T}x\leq b_{i}\}\) or represented as the convex hull of its extreme points \(co\{x_{i}\}\). In either case, projection onto this region is a quadratic program, which may require an interior point method call. Linear optimization over this region may also require such a call when represented by its inequalities but can be done in linear time when represented by its extreme points. Hence for polyhedrons with relatively few extreme points, Frank-Wolfe can be effective. In contrast, a gauge-based approach is ideal for polyhedrons
represented by relatively few inequalities. The gauge can be computed in linear time via the closed form of \(\max\{a_{i}^{T}y/b_{i}\}\) with normal vectors given by selecting any active constraint.
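For instance, a minimal sketch of these polyhedral oracles (names are ours; we assume every \(b_{i}>0\) so that \(0\in\mathrm{int}\ S\) and the gauge is taken at \(e=0\)):

```python
import numpy as np

def polyhedron_gauge_oracles(A, b):
    """Gauge and unit-normal oracles for S = {x : a_i^T x <= b_i} with
    all b_i > 0, so 0 is interior and the gauge is taken at e = 0."""
    def gauge(y):
        # Closed form gamma_S(y) = max(max_i a_i^T y / b_i, 0): linear time.
        return max(float(np.max(A @ y / b)), 0.0)

    def normal(y_bar):
        # Any constraint active at the boundary point y_bar gives a normal.
        i = int(np.argmax(A @ y_bar / b))
        return A[i] / np.linalg.norm(A[i])

    return gauge, normal

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 2.0, 3.0])
gauge, normal = polyhedron_gauge_oracles(A, b)
y = np.array([2.0, 2.0])
print(gauge(y), normal(y / gauge(y)))  # gauge 2.0; first constraint active
```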
## 2 Definitions and Preliminaries.
Let \(S\subseteq\mathcal{E}\) be a nonempty closed convex set and \(h:\mathcal{E}\to\mathbb{R}\cup\{+\infty\}\) be a closed convex function. We denote the boundary of \(S\) by \(\mathrm{bdry}\;S\), the interior of \(S\) by \(\mathrm{int}\;S\), and the domain of \(h\) by \(\mathrm{dom}\;h\). We denote the subdifferential of \(h\) at \(x\in\mathrm{dom}\;h\) by \(\partial h(x):=\{g\mid h(y)-h(x)\geq g^{T}(y-x)\ \forall y\in\mathcal{E}\}\) and refer to each element as a subgradient. We denote the normal cone of \(S\) at \(x\in S\) by \(N_{S}(x):=\{\zeta\mid\zeta^{T}(y-x)\leq 0,\forall y\in S\}\) and refer to each element as a normal vector.
First, we recall the standard notions of a function being smooth and/or strongly convex, locally and globally. Then we formalize their mirrored notions that apply to sets and discuss several useful characterizations and properties of such sets.
Smooth and Strongly Convex Functions, Locally and Globally.We say a convex differentiable function \(h\) is _(globally) \(L\)-smooth_ if every point \(x\in\mathrm{dom}h\) gives a quadratic upper bound
\[h(y)\leq h(x)+\nabla h(x)^{T}(y-x)+\frac{L}{2}\|y-x\|^{2}\quad\forall y\in \mathcal{E}.\]
Localizing this condition, we say that \(h\) is _(locally) \(L\)-smooth w.r.t._\((x,\nabla h(x))\) if
\[\limsup_{y\to x}\frac{h(y)-(h(x)+\nabla h(x)^{T}(y-x))}{\frac{1}{2}\|y-x\|^{2} }\leq L\.\]
We say \(h\) is _(globally) \(\mu\)-strongly convex_ if every point \(x\in\mathrm{dom}h\) and subgradient \(g\in\partial h(x)\) gives a quadratic lower bound
\[h(y)\geq h(x)+g^{T}(y-x)+\frac{\mu}{2}\|y-x\|^{2}\quad\forall y\in\mathcal{E}.\]
Localizing this condition, we say that \(h\) is _(locally) \(\mu\)-strongly convex w.r.t._\((x,g)\) if
\[\liminf_{y\to x}\frac{h(y)-(h(x)+g^{T}(y-x))}{\frac{1}{2}\|y-x\|^{2}}\geq\mu\.\]
These local properties can be viewed as ensuring that in neighborhoods near \(x\), quadratic upper or lower bounds hold with constants \(\tilde{\mu}\) or \(\tilde{L}\) that converge to \(\mu\) or \(L\) as the neighborhood shrinks. For twice continuously differentiable \(h\), these are equivalent to the following Hessian bounds, which must hold at \(x\) for the local property and everywhere for the global property
\[\mu I\preceq\nabla^{2}h(x)\preceq LI.\]
Smooth and Strongly Convex Sets, Locally and Globally.We say a set \(S\) is _(globally) \(\beta\)-smooth_, if every point \(\bar{y}\in\mathrm{bdry}\;S\) and unit length normal vector \(\zeta\in N_{S}(\bar{y})\) give a ball inner approximation
\[B\left(\bar{y}-\frac{1}{\beta}\zeta,\frac{1}{\beta}\right)\subseteq S. \tag{2.1}\]
Localizing this condition, we say a set \(S\) is _(locally) \(\beta\)-smooth w.r.t._\((\bar{y},\zeta)\) if \(\|\zeta\|=1\) and for all \(\tilde{\beta}>\beta\) the inclusion
\[B\left(\bar{y}-\frac{1}{\tilde{\beta}}\zeta,\frac{1}{\tilde{\beta}}\right) \cap B(\bar{y},\eta)\subseteq S\cap B(\bar{y},\eta)\]
holds, for some small \(\eta>0\). We say a set \(S\) is _(globally) \(\alpha\)-strongly convex_, if every point \(\bar{y}\in\) bdry \(S\) and unit length normal vector \(\zeta\in N_{S}(\bar{y})\) give a ball outer approximation
\[B\left(\bar{y}-\frac{1}{\alpha}\zeta,\frac{1}{\alpha}\right)\supseteq S. \tag{2.2}\]
Localizing this condition, we say a set \(S\) is _(locally) \(\alpha\)-strongly convex w.r.t. \((\bar{y},\zeta)\)_ if \(\|\zeta\|=1\) and for all \(0<\tilde{\alpha}<\alpha\) the inclusion
\[S\cap B(\bar{y},\eta)\subseteq B\left(\bar{y}-\frac{1}{\tilde{\alpha}}\zeta,\frac{1}{\tilde{\alpha}}\right)\cap B(\bar{y},\eta)\]
holds, for some small \(\eta>0\).
The above definition of globally smooth sets is equivalent to the following alternative definition in terms of bounded change in unit normal vectors [3, 29]: A closed convex set \(S\) is \(\beta\)-smooth if and only if for any \(x_{i}\in S\), \(\zeta_{i}\in N_{S}(x_{i})\), \(\|\zeta_{i}\|=1\), \(i=1,2\),
\[\|\zeta_{1}-\zeta_{2}\|\leq\beta\|x_{1}-x_{2}\|.\]
Likewise, our definition for globally strongly convex sets is equivalent to the following alternative definition in terms of bounded change in unit normal vectors [3, 29]: A closed convex bounded set \(S\) is \(\alpha\)-strongly convex if and only if for any \(x_{i}\in S\), \(\zeta_{i}\in N_{S}(x_{i})\), \(\|\zeta_{i}\|=1\), \(i=1,2\),
\[\|\zeta_{1}-\zeta_{2}\|\geq\alpha\|x_{1}-x_{2}\|.\]
Another interesting equivalent definition of globally strongly convex [3] is: A set \(S\) is \(\alpha\)-strongly convex if and only if all \(x,y\in S\), \(\lambda\in[0,1]\), and unit vector \(z\) have
\[\lambda x+(1-\lambda)y+\lambda(1-\lambda)\frac{\alpha}{2}\|x-y\|^{2}z\in S.\]
That is, \(S\) contains a ball of radius \(\lambda(1-\lambda)\frac{\alpha}{2}\|x-y\|^{2}\) centered at \(\lambda x+(1-\lambda)y\).
Lastly, we note the following helpful lemma (with proof in Appendix A.2.3 for completeness).
**Lemma 2.1**.: _Given any point \(z\) and a convex set \(S\) with \(e\in\mathrm{int}\ S\), the gauge \(\gamma_{S,e}(z)\) has the following upper and lower bound:_
\[\|z-e\|/\sup\{\|x-e\|\mid x\in S\}\leq\gamma_{S,e}(z)\leq\|z-e\|/\inf\{\|x-e \|\mid x\notin S\}.\]
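Since Lemma 2.1 brackets the gauge value, it can initialize the one-dimensional linesearch our methods assume. The following is a minimal membership-oracle bisection of our own devising (the paper does not prescribe an implementation; all names and the toy bracket are hypothetical):

```python
import numpy as np

def gauge_by_bisection(member, e, x, lo, hi, tol=1e-9):
    """Evaluate gamma_{S,e}(x) = inf{lam > 0 : e + (x - e)/lam in S} by
    bisection over a membership oracle; [lo, hi] is an initial bracket,
    e.g. the bounds supplied by Lemma 2.1."""
    # Membership of e + (x - e)/lam in S is monotone in lam > 0,
    # so bisection on lam is valid.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if member(e + (x - e) / mid):
            hi = mid  # feasible: the gauge is at most mid
        else:
            lo = mid  # infeasible: the gauge exceeds mid
    return hi

# Toy check: S the Euclidean unit ball with e = 0, so the gauge is ||x||_2.
member = lambda z: np.linalg.norm(z) <= 1.0
x = np.array([3.0, 4.0])
lo, hi = np.linalg.norm(x) / 10.0, np.linalg.norm(x) / 0.1  # crude bracket
print(gauge_by_bisection(member, np.zeros(2), x, lo, hi))  # ~5.0
```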
## 3 The Structure of Gauges of Structured Sets.
In this section, we prove the following pair of characterizations relating a set's strong convexity and smoothness to those of its gauge squared. For notational ease, we suppose \(e=0\) and denote \(\gamma_{S}=\gamma_{S,e}\). Also, note \(\bar{y}=y/\gamma_{S}(y)\) is on the boundary of \(S\) unless \(\gamma_{S}(y)=0\).
The following theorem gives the local result: the gauge squared is strongly convex at \(y\) if the set is at \(\bar{y}\). We defer the proof of this theorem to Section 3.3.1.
**Theorem 3.1**.: _Consider any \(y\in\mathbb{R}^{n}\) and closed convex set \(S\) with \(0\in\mathrm{int}\ S\). If \(\gamma_{S}(y)\neq 0\) and at the point \(\bar{y}=y/\gamma_{S}(y)\), \(S\) is \(\alpha\)-strongly convex w.r.t. \((\bar{y},\zeta)\) for some unit normal vector \(\zeta\in N_{S}(\bar{y})\), then \(\frac{1}{2}\gamma_{S}^{2}\) is strongly convex with parameter_
\[\frac{1}{2(\zeta^{T}\bar{y})^{3}}\left(\zeta^{T}\bar{y}+\alpha\|\bar{y}\|^{2} -\sqrt{(\zeta^{T}\bar{y}+\alpha\|\bar{y}\|^{2})^{2}-4\alpha(\zeta^{T}\bar{y} )^{3}}\right)\]
_w.r.t. \((y,g)\), where \(g=\frac{\zeta}{\zeta^{T}\bar{y}}\in\partial(\frac{1}{2}\gamma_{S}^{2})(y)\). If \(\gamma_{S}(y)=0\) and \(y=0\), then \(\{0\}=\partial(\frac{1}{2}\gamma_{S}^{2})(y)\) and \(\frac{1}{2}\gamma_{S}^{2}\) is \(1/\sup\{\|x\|^{2}\mid x\in S\}\)-strongly convex w.r.t. \((0,0)\). If \(\gamma_{S}(y)=0\) and \(y\neq 0\), \(\frac{1}{2}\gamma_{S}^{2}(y)\) is not strongly convex at \(y\)._
Note that this quantity can be lower bounded by the following \(O(\alpha)\) quantity:
\[\frac{\alpha}{\zeta^{T}\bar{y}+\alpha\|\bar{y}\|^{2}}.\]
Similarly, we find the following guarantee that the smoothness of \(\frac{1}{2}\gamma_{S}^{2}\) at \(y\) follows from the smoothness of \(S\) at \(\bar{y}\). The proof of this theorem is deferred to Section 3.3.2.
**Theorem 3.2**.: _Consider any \(y\in\mathbb{R}^{n}\) and closed convex set \(S\) with \(0\in\mathrm{int}\ S\). If \(\gamma_{S}(y)\neq 0\) and at the point \(\bar{y}=y/\gamma_{S}(y)\), \(S\) is \(\beta\)-smooth w.r.t. \((\bar{y},\zeta)\) for some unit normal vector \(\zeta\in N_{S}(\bar{y})\), then \(\frac{1}{2}\gamma_{S}^{2}\) is smooth with parameter_
\[\frac{1}{2(\zeta^{T}\bar{y})^{3}}\left(\zeta^{T}\bar{y}+\beta\|\bar{y}\|^{2}+ \sqrt{(\zeta^{T}\bar{y}+\beta\|\bar{y}\|^{2})^{2}-4\beta(\zeta^{T}\bar{y})^{3 }}\right)\]
_w.r.t. \((y,g)\), where \(g=\nabla(\frac{1}{2}\gamma_{S}^{2})(y)=\frac{\zeta}{\zeta^{T}\bar{y}}\). If \(\gamma_{S}(y)=0\), then \(\{0\}=\partial(\frac{1}{2}\gamma_{S}^{2})(y)\) and \(\frac{1}{2}\gamma_{S}^{2}\) is \(1/\inf\{\|x\|^{2}\mid x\notin S\}\)-smooth w.r.t. \((y,0)\)._
Note that this quantity can be upper bounded by the following \(O(\beta)\) quantity:
\[\frac{\zeta^{T}\bar{y}+\beta\|\bar{y}\|^{2}}{(\zeta^{T}\bar{y})^{3}}.\]
The above two theorems provide the gauge squared's local strong convexity/smoothness parameters. The gauge squared's global parameters can then be obtained by bounding these quantities over all boundary points \(\bar{y}\). Define some constant \(D>0\) such that \(\|x\|_{2}\leq D\ \forall x\in S\) and \(R>0\) such that there exists a ball \(B(0,R)\) with radius \(R\) centered at the origin with \(B(0,R)\subseteq S\). Note that the convexity of \(S\) implies \(\zeta^{T}\bar{y}\geq R\). Applying these bounds gives the following global bounds on the above local strong convexity and smoothness constants.
**Corollary 3.1**.: _Consider a closed bounded convex set \(S\) with \(0\in\mathrm{int}\ S\). If \(S\) is \(\alpha\)-strongly convex, then \(\frac{1}{2}\gamma_{S}^{2}\) is strongly convex with parameter \(\frac{\alpha}{D+\alpha D^{2}}\)._
**Corollary 3.2**.: _Consider a closed bounded convex set \(S\) with \(0\in\mathrm{int}\ S\). If \(S\) is \(\beta\)-smooth, then \(\frac{1}{2}\gamma_{S}^{2}\) is smooth with parameter \(\frac{R+\beta D^{2}}{R^{3}}\)._
### Examples.
These theorems/corollaries immediately establish the structure for the gauge squared of many common families of constraints. Here we discuss a few such examples, namely, halfspaces \(\mathcal{H}=\{x\mid a^{T}x\leq b\}\), \(p\)-norm unit balls \(B_{p}=\{x\mid\|x\|_{p}\leq 1\}\), and \(p\)-norm ellipsoids \(E_{p}=\{x\mid\|Ax-b\|_{p}\leq 1\}\). Table 1 summarizes these results, showing the smoothness and strong convexity of each \(S\) and its half gauge squared. These examples will also be utilized in our numerical evaluations in Section 5.
**Halfspaces.** Consider any halfspace \(\mathcal{H}=\{x\mid a^{T}x\leq b\}\). Such sets are not strongly convex (\(\alpha=0\)) but are infinitely smooth (\(\beta=0\)) everywhere. To ensure the gauge is well-defined, we require \(0\in\mathrm{int}\ \mathcal{H}\) (that is, \(b>0\)). Note there is no bound \(D\) on the size of \(\mathcal{H}\), but \(R=b/\|a\|_{2}\) does bound the distance from \(0\) to the boundary of \(\mathcal{H}\). Consider any \(\bar{y}\) on the boundary of \(\mathcal{H}\) (i.e., \(a^{T}\bar{y}=b\)) and unit normal \(\zeta=a/\|a\|_{2}\in N_{\mathcal{H}}(\bar{y})\). Note \(\zeta^{T}\bar{y}=b/\|a\|_{2}\). Then Theorem 3.1 vacuously implies \(\mu=0\)-strong convexity and Theorem 3.2 implies \(L=\|a\|_{2}^{2}/b^{2}\)-smoothness at \(\bar{y}\).
In this case, we can directly compute the gauge of the set and verify that our theory produces the tightest smoothness and strong convexity values possible:
\[\gamma_{S}(x)=\begin{cases}\frac{a^{T}x}{b}&\text{if }a^{T}x>0\\ 0&\text{otherwise}\end{cases}\implies\frac{1}{2}\gamma_{S}^{2}(x)=\begin{cases} \frac{(a^{T}x)^{2}}{2b^{2}}&\text{if }a^{T}x>0\\ 0&\text{otherwise}.\end{cases}\]
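As a quick numerical check (our own sketch, not from the paper), note that on \(\{a^{T}x>0\}\) the Hessian of this half gauge squared is the constant matrix \(aa^{T}/b^{2}\), so its extreme eigenvalues recover exactly the constants above:

```python
import numpy as np

# Half gauge squared of H = {x : a^T x <= b}: (max(a^T x, 0))^2 / (2 b^2).
a = np.array([2.0, -1.0, 0.5])
b = 3.0

# On {a^T x > 0} the Hessian is a a^T / b^2, so the largest eigenvalue,
# the smoothness constant, is ||a||_2^2 / b^2, while the smallest is 0
# (no strong convexity), matching Theorem 3.2 and the mu = 0 claim above.
H = np.outer(a, a) / b**2
evs = np.linalg.eigvalsh(H)
assert np.isclose(evs[-1], np.linalg.norm(a)**2 / b**2)
assert np.isclose(evs[0], 0.0)
```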
Note that more general polyhedra \(\{x\mid a_{i}^{T}x\leq b_{i}\}\) with \(b_{i}>0\) are neither smooth nor strongly convex, and similarly, their piecewise linear gauges are neither smooth nor strongly convex when squared. However, since the gauge of an intersection is the maximum of the gauges of its components, the resulting half gauge squared will be a finite maximum of several smooth functions. Section 4 discusses algorithms for such problems.
**\(p\)-Norm Balls.** Consider any \(p\in(1,\infty)\)-norm unit ball \(B_{p}=\{x\mid\|x\|_{p}\leq 1\}\). Depending on \(p\), this ball and its gauge squared are either smooth or strongly convex. Namely, [6, Lemma 4] has shown \(B_{p}\) and \(\frac{1}{2}\gamma_{B_{p}}^{2}\) are both \(\alpha=\mu=(p-1)n^{\frac{1}{2}-\frac{1}{p}}\)-strongly convex whenever \(p\in(1,2]\) and are \(\beta=L=(p-1)n^{\frac{1}{2}-\frac{1}{p}}\)-smooth whenever \(p\in[2,\infty)\). Note that neither of these bounds is tight for \(p\neq 2\). Our theory could be applied to yield tighter, although substantially less elegant, bounds. The details of such an approach are given in Appendix A.1.
**\(p\)-Norm Ellipsoids.** As our last example, we consider generalizing the example above to ellipsoidal sets \(E_{p}=\{x\mid\|Ax-b\|_{p}\leq 1\}\). Note that for the gauge to be well-defined (i.e., having \(0\in E_{p}\)), we require \(\|b\|_{p}\leq 1\). The following lemma, mirroring Lemma 1.2, allows us to bound the strong convexity and smoothness of such a set's gauge squared. Its proof is deferred to the appendix.
**Lemma 3.3**.: _If \(\frac{1}{2}\gamma_{S}^{2}\) is \(\alpha\)-strongly convex or \(\beta\)-smooth at \(x\), then the set \(E=\{A^{-1}x\mid x\in S\}\) has \(\frac{1}{2}\gamma_{E}^{2}(y)=\frac{1}{2}\gamma_{S}^{2}(Ay)\) being \(\lambda_{\min}(A^{*}A)\alpha\)-strongly convex or \(\lambda_{\max}(A^{*}A)\beta\)-smooth at \(A^{-1}x\), respectively._
Using this lemma, it suffices to bound the gauge squared of translated balls \(T_{p}=\{x\mid\|x-b\|_{p}\leq 1\}\) to deduce bounds on \(E_{p}\). First, we compute bounds when \(p=2\). Noting \(T_{2}=\{x\mid\|x-b\|_{2}\leq 1\}\) is \(1\)-strongly convex and \(1\)-smooth, bounds on \(E_{2}\)'s \(\alpha\)-strong convexity and \(\beta\)-smoothness follow from Lemma 1.2.
| Set \(S\) | Structure | \(\alpha\)-s.c. of \(S\) | \(\mu\)-s.c. of \(\frac{1}{2}\gamma_{S}^{2}\) | \(\beta\)-smooth of \(S\) | \(L\)-smooth of \(\frac{1}{2}\gamma_{S}^{2}\) |
| :-: | :-: | :-: | :-: | :-: | :-: |
| \(\mathcal{H}\) |  | \(0\) | \(0\) | \(0\) | \(\Vert a\Vert_{2}^{2}/b^{2}\) |
| \(B_{p}\) | \(p\in(1,2)\) | \((p-1)n^{\frac{1}{2}-\frac{1}{p}}\) | \((p-1)n^{\frac{1}{2}-\frac{1}{p}}\) | \(\infty\) | \(\infty\) |
| \(B_{p}\) | \(p=2\) | \(1\) | \(1\) | \(1\) | \(1\) |
| \(B_{p}\) | \(p\in(2,\infty)\) | \(0\) | \(0\) | \((p-1)n^{\frac{1}{2}-\frac{1}{p}}\) | \((p-1)n^{\frac{1}{2}-\frac{1}{p}}\) |
| \(E_{p}\) | \(p\in(1,2)\) | \(\frac{\lambda_{\min}}{\sqrt{\lambda_{\max}}}(p-1)n^{\frac{1}{2}-\frac{1}{p}}\) | \(O\left(\lambda_{\min}n^{\frac{1}{2}-\frac{1}{p}}\right)\) | \(\infty\) | \(\infty\) |
| \(E_{p}\) | \(p=2\) | \(\frac{\lambda_{\min}}{\sqrt{\lambda_{\max}}}\) | \(\frac{\lambda_{\min}}{(1+\Vert b\Vert_{2})(2+\Vert b\Vert_{2})}\) | \(\frac{\lambda_{\max}}{\sqrt{\lambda_{\min}}}\) | \(\frac{\lambda_{\max}(2-\Vert b\Vert_{2})}{(1-\Vert b\Vert_{2})^{2}}\) |
| \(E_{p}\) | \(p\in(2,\infty)\) | \(0\) | \(0\) | \(\frac{\lambda_{\max}}{\sqrt{\lambda_{\min}}}(p-1)n^{\frac{1}{2}-\frac{1}{p}}\) | \(O\left(\lambda_{\max}\frac{n^{1-\frac{1}{p}}}{(1-\Vert b\Vert_{p}^{p})^{2}}\right)\) |

Table 1: Strong convexity and smoothness of sets and their gauges squared in \(\mathcal{E}=\mathbb{R}^{n}\) (here a parameter of \(\infty\) indicates the property fails). For the \(p\)-norm ellipsoid bounds, \(\lambda_{\max}\) and \(\lambda_{\min}\) denote the maximum and minimum eigenvalues of \(A^{T}A\).
Theorems 3.1 and 3.2 tell us \(\frac{1}{2}\gamma_{T_{2}}^{2}\) is strongly convex and smooth with constants
\[\mu =\inf_{\|\bar{y}-b\|_{2}=1}\frac{1}{(\bar{y}-b)^{T}\bar{y}+\|\bar{y} \|_{2}^{2}}=\frac{1}{(1+\|b\|_{2})(2+\|b\|_{2})}\] \[L =\sup_{\|\bar{y}-b\|_{2}=1}\frac{(\bar{y}-b)^{T}\bar{y}+\|\bar{y} \|_{2}^{2}}{((\bar{y}-b)^{T}\bar{y})^{3}}=\frac{2-\|b\|_{2}}{(1-\|b\|_{2})^{2}}\]
Then following Lemma 3.3, \(\frac{1}{2}\gamma_{E_{2}}^{2}\) is \(\frac{\lambda_{\min}}{(1+\|b\|_{2})(2+\|b\|_{2})}\)-strongly convex and \(\frac{\lambda_{\max}(2-\|b\|_{2})}{(1-\|b\|_{2})^{2}}\)-smooth.
A similar calculation holds for general \(p\), although we lack a closed form for the infimum and supremum over all boundary points. Since \(T_{p}\) is a translated \(p\)-norm ball, it has the same strong convexity/smoothness constants as \(B_{p}\). Lemma 1.2 then ensures the ellipsoid \(E_{p}\) is \(\frac{\lambda_{\min}}{\sqrt{\lambda_{max}}}(p-1)n^{\frac{1}{2}-\frac{1}{p}}\)-strongly convex if \(1<p\leq 2\), and \(\frac{\lambda_{\max}}{\sqrt{\lambda_{min}}}(p-1)n^{\frac{1}{2}-\frac{1}{p}}\)-smooth if \(p\geq 2\). The strong convexity and smoothness constants of functions \(\frac{1}{2}\gamma_{E_{p}}^{2}\) do not have such simple closed forms. In Appendix A.1, we compute bounds on these constants for any translated \(p\)-norm ball \(T_{p}\), from which applying Lemma 3.3 gives \(O\left(\lambda_{\min}n^{\frac{1}{2}-\frac{1}{p}}\right)\)-strong convexity and \(O\left(\lambda_{\max}\frac{n^{1-\frac{p}{2}}}{(1-\|b\|_{p}^{2})^{2}}\right)\)-smoothness when \(1<p\leq 2\) and \(2\leq p<\infty\), respectively.
### Tightness of our Main Theorems and their Converses.
Our main Theorems 3.1 and 3.2 have shown that a strongly convex/smooth set has a strongly convex/smooth gauge function squared. Here we observe that these results are essentially tight in two respects. Theorem 3.3 below shows that no improvement in these constants is possible. Then Theorems 3.4 and 3.5 show the converse results, that if a set's gauge squared is \(\alpha\)-strongly convex or \(\beta\)-smooth then the set itself must be \(O(\alpha)\)-strongly convex or \(O(\beta)\)-smooth, respectively. Proofs of each of these results are deferred to Sections 3.3.3, 3.3.4 and 3.3.5.
**Theorem 3.3**.: _For any values of \(\gamma,R,D>0\), there exists a convex set \(S\), \(\bar{y}\in\mathrm{bdry}\ S\) and unit \(\zeta\in N_{S}(\bar{y})\) such that, \(S\) is \(\gamma\)-strongly convex and \(\gamma\)-smooth with respect to \((\bar{y},\zeta)\) and_
\[\lambda_{min}(\nabla^{2}\frac{1}{2}\gamma_{S}^{2}(\bar{y}))=\frac{1}{2R^{3}} \left(R+\gamma D^{2}-\sqrt{(R+\gamma D^{2})^{2}-4\gamma R^{3}}\right),\]
\[\lambda_{max}(\nabla^{2}\frac{1}{2}\gamma_{S}^{2}(\bar{y}))=\frac{1}{2R^{3}} \left(R+\gamma D^{2}+\sqrt{(R+\gamma D^{2})^{2}-4\gamma R^{3}}\right)\]
_where \(D=\|\bar{y}\|\) and \(R=\zeta^{T}\bar{y}\)._
**Theorem 3.4**.: _Consider any set \(S\) with \(0\in\mathrm{int}\ S\) and \(y\in\mathcal{E}\) with \(\gamma_{S}(y)\neq 0\). If \(\frac{1}{2}\gamma_{S}^{2}\) is \(\mu\)-strongly convex w.r.t. \((y,g)\), then the set \(S\) is strongly convex with parameter \(\mu\zeta^{T}\bar{y}\) w.r.t. \((\bar{y},\zeta)\), where \(\bar{y}=y/\gamma_{S}(y)\), \(\zeta=g/\|g\|\)._
**Theorem 3.5**.: _Consider any set \(S\) with \(0\in\mathrm{int}\ S\) and \(y\in\mathcal{E}\) with \(\gamma_{S}(y)\neq 0\). If \(\frac{1}{2}\gamma_{S}^{2}\) is \(L\)-smooth w.r.t. \((y,g)\), then the set \(S\) is smooth with parameter \(L\zeta^{T}\bar{y}\) w.r.t. \((\bar{y},\zeta)\) where \(\bar{y}=y/\gamma_{S}(y)\), \(\zeta=g/\|g\|\)._
### Proofs of Theorems Characterizing Structured Gauges.
Both strongly convex sets and smooth sets are defined in terms of balls built by a boundary point and corresponding normal vector. This perspective is critical for the proofs of our main theorems.
The following lemmas characterize the gauges of these approximating balls. Their proofs are deferred to Appendix A.2.5 and A.2.6. Note that below, we consider sets \(B(\bar{y}-r\zeta,r),\) which may not contain the origin. As a result, it is important that we defined the gauge as \(\inf\{\lambda>0\mid x\in\lambda S\},\) which may no longer equal \(\sup\{\lambda>0\mid x\not\in\lambda S\}.\)
**Lemma 3.4**.: _Consider any closed convex set \(S\) with \(0\in{\rm int}\ S\) and \(y\in\mathbb{R}^{n}\) with \(\gamma_{S}(y)\neq 0\). For the boundary point \(\bar{y}=y/\gamma_{S}(y)\) and any unit normal vector \(\zeta\in N_{S}(\bar{y})\), the ball \(B=B(\bar{y}-r\zeta,r)\) with radius \(r>0\) has_
* \(\gamma_{S}(y)=\gamma_{B}(y)\)_._
* \(\nabla\frac{1}{2}\gamma_{B}^{2}(y)\in\partial\frac{1}{2}\gamma_{S}^{2}(y)\)_._
* \(\nabla^{2}\frac{1}{2}\gamma_{B}^{2}(y)=\frac{1}{(\bar{\zeta}^{T}\bar{y})^{3}} \left(\left(\bar{\zeta}^{T}\bar{y}+\|\bar{y}\|^{2}\right)\bar{\zeta}\bar{\zeta }^{T}-\bar{\zeta}^{T}\bar{y}(\bar{\zeta}\bar{y}^{T}+\bar{y}\bar{\zeta}^{T})+ \left(\bar{\zeta}^{T}\bar{y}\right)^{2}I\right)\)_, where_ \(\bar{\zeta}=r\zeta\)_._
**Lemma 3.5**.: _Consider any ball \(B=B(\bar{y}-r\zeta,r)\) and \(y\) with \(\bar{y}=y/\gamma_{B}(y)\) and \(\zeta^{T}\bar{y}>0\), the eigenvalues of the Hessian matrix \(\nabla^{2}\frac{1}{2}\gamma_{B}^{2}(y)\), \(\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{n}\), are given by:_
\[\lambda_{1}= \frac{1}{2(\zeta^{T}\bar{y})^{3}}\left(\zeta^{T}\bar{y}+\frac{\| \bar{y}\|^{2}}{r}-\sqrt{\left(\zeta^{T}\bar{y}+\frac{\|\bar{y}\|^{2}}{r} \right)^{2}-\frac{4(\zeta^{T}\bar{y})^{3}}{r}}\right)\geq\frac{\frac{1}{r}}{ \zeta^{T}\bar{y}+\frac{\|\bar{y}\|^{2}}{r}}\] \[\lambda_{i}= \frac{1}{r\zeta^{T}\bar{y}},\qquad\text{for }i=2,...,n-1\] \[\lambda_{n}= \frac{1}{2(\zeta^{T}\bar{y})^{3}}\left(\zeta^{T}\bar{y}+\frac{\| \bar{y}\|^{2}}{r}+\sqrt{\left(\zeta^{T}\bar{y}+\frac{\|\bar{y}\|^{2}}{r} \right)^{2}-\frac{4(\zeta^{T}\bar{y})^{3}}{r}}\right)\leq\frac{\zeta^{T}\bar {y}+\frac{\|\bar{y}\|^{2}}{r}}{(\zeta^{T}\bar{y})^{3}}.\]
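These eigenvalue formulas can be checked against the explicit Hessian from Lemma 3.4 (c). The following sketch (ours) builds that matrix for a random \(\bar{y}\) and unit \(\zeta\) with \(\zeta^{T}\bar{y}>0\) and compares spectra:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 5, 0.7
zeta = rng.normal(size=n)
zeta /= np.linalg.norm(zeta)
ybar = zeta + 0.5 * rng.normal(size=n)   # generic point with zeta^T ybar > 0
c = zeta @ ybar
assert c > 0

# Hessian of (1/2) gamma_B^2 from Lemma 3.4(c), with zeta-bar = r * zeta.
zb = r * zeta
czb = zb @ ybar                           # = r * c
H = ((czb + ybar @ ybar) * np.outer(zb, zb)
     - czb * (np.outer(zb, ybar) + np.outer(ybar, zb))
     + czb**2 * np.eye(n)) / czb**3

# Eigenvalues predicted by Lemma 3.5.
t = c + (ybar @ ybar) / r
disc = np.sqrt(t**2 - 4 * c**3 / r)
lam_lo = (t - disc) / (2 * c**3)
lam_hi = (t + disc) / (2 * c**3)
expected = np.sort(np.array([lam_lo, lam_hi] + [1 / (r * c)] * (n - 2)))
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), expected)
```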
#### 3.3.1 Proof of Theorem 3.1
Consider any \(y\) with \(\gamma_{S}(y)\neq 0,\) and boundary point \(\bar{y}=y/\gamma_{S}(y)\) with unit normal vector \(\zeta\in N_{S}(\bar{y})\). Denote the (locally) outer approximating ball \(B=B(\bar{y}-\zeta/\alpha,1/\alpha).\) Applying the subgradient chain rule, we compute the gradient of this ball's gauge squared \(g\) as
\[\partial\left(\frac{1}{2}\gamma_{B}^{2}\right)(y)=\gamma_{B}(y)\partial\gamma_ {B}(y)=\left\{\frac{\gamma_{B}(y)\zeta}{\zeta^{T}\bar{y}}\right\}\.\]
By Lemma 3.4 (c) and Lemma 3.5, this half gauge squared is locally strongly convex at \(\bar{y}\) with constant
\[\mu=\frac{1}{2(\zeta^{T}\bar{y})^{3}}\left(\zeta^{T}\bar{y}+\alpha\|\bar{y}\| ^{2}-\sqrt{\left(\zeta^{T}\bar{y}+\alpha\|\bar{y}\|^{2}\right)^{2}-4\alpha( \zeta^{T}\bar{y})^{3}}\right)\.\]
For any \(\tilde{\mu}<\mu\), consider a neighborhood such that all \(z\in B(y,\eta)\) have
\[\frac{1}{2}\gamma_{B}^{2}(z)\geq\frac{1}{2}\gamma_{B}^{2}(y)+g^{T}(z-y)+\frac {\tilde{\mu}}{2}\|z-y\|^{2}\.\]
Since \(S\) is strongly convex w.r.t. \((\bar{y},\zeta)\), we have \(S\cap B(\bar{y},\bar{\eta})\subseteq B\left(\bar{y}-\frac{1}{\alpha}\zeta,\frac{1}{\alpha}\right)\cap B(\bar{y},\bar{\eta})\) for some \(\bar{\eta}>0\). Hence for \(z\) near \(y\), \(\gamma_{S}(z)\geq\gamma_{B}(z)\). Applying this bound and Lemma 3.4 (a) and (b), we conclude for any \(\tilde{\mu}<\mu\) that, in some neighborhood of \(y\), all \(z\) have
\[\frac{1}{2}\gamma_{S}^{2}(z)\geq\frac{1}{2}\gamma_{B}^{2}(z)\geq\frac{1}{2} \gamma_{S}^{2}(y)+g^{T}(z-y)+\frac{\tilde{\mu}}{2}\|z-y\|^{2}. \tag{3.1}\]
That is, \(\frac{1}{2}\gamma_{S}^{2}\) is \(\mu\)-strongly convex w.r.t. \((y,g)\).
If \(\gamma_{S}(y)=0\) and \(y=0\), then \(\partial(\frac{1}{2}\gamma_{S}^{2})(y)=\{0\}\). For any point \(z\), Lemma 2.1 ensures \(\frac{1}{2}\gamma_{S}^{2}(z)-\frac{1}{2}\gamma_{S}^{2}(y)-0^{T}(z-y)\geq\frac{1 }{2}\frac{\|z\|^{2}}{\sup\{\|x\|^{2}|x\in S\}}\). Thus \(\frac{1}{2}\gamma_{S}^{2}\) is \(1/\sup\{\|x\|^{2}\mid x\in S\}\)-strongly convex w.r.t. \((0,0)\).
If \(\gamma_{S}(y)=0\) and \(y\neq 0\), then we still have \(\partial(\frac{1}{2}\gamma_{S}^{2})(y)=\{0\}\). However, \(\gamma_{S}(ty)=t\gamma_{S}(y)=0\) for any \(t>0\). Thus \(\frac{1}{2}\gamma_{S}^{2}(y)\) is constant along this ray and consequently cannot be strongly convex at \(y\).
#### 3.3.2 Proof of Theorem 3.2
Following the same technique as in Theorem 3.1, but utilizing the maximum eigenvalue bound of Lemma 3.5 instead of the minimum, ensures that for \(\gamma_{S}(y)\neq 0\) the function \(\frac{1}{2}\gamma_{S}^{2}\) is smooth with the parameter
\[L=\frac{1}{2(\zeta^{T}\bar{y})^{3}}\left(\zeta^{T}\bar{y}+\beta\|\bar{y}\|^{2} +\sqrt{\left(\zeta^{T}\bar{y}+\beta\|\bar{y}\|^{2}\right)^{2}-4\beta(\zeta^{T} \bar{y})^{3}}\right).\]
If \(\gamma_{S}(y)=0\), then \(g=\nabla(\frac{1}{2}\gamma_{S}^{2})(y)=0\); similar to the proof of Theorem 3.1, Lemma 2.1 gives \(\frac{1}{2}\gamma_{S}^{2}(z)\leq\frac{1}{2}\frac{\|z\|^{2}}{\inf\{\|x\|^{2}\mid x\notin S\}}.\) Thus \(\frac{1}{2}\gamma_{S}^{2}\) is \(1/\inf\{\|x\|^{2}\mid x\notin S\}\)-smooth w.r.t. \((y,0)\).
#### 3.3.3 Proof of Theorem 3.3
Let \(S=\mathrm{conv}(\{0\}\cup B(c,1/\gamma))\), where \(c=\bar{y}-\zeta/\gamma\), so that this ball of radius \(1/\gamma\) is \(\gamma\)-strongly convex and \(\gamma\)-smooth. Consider \(\bar{y}=(\sqrt{D^{2}-R^{2}},-R)\in\mathrm{bdry}\ S\) and \(\zeta=(0,-1)\in N_{S}(\bar{y})\). By Lemma 3.4, we get
\[\frac{1}{2}\nabla^{2}\gamma_{S}^{2}(y)=\frac{1}{R^{3}}\begin{bmatrix}\gamma R ^{2}&\gamma R\sqrt{D^{2}-R^{2}}\\ \gamma R\sqrt{D^{2}-R^{2}}&R+\gamma(D^{2}-R^{2})\end{bmatrix}.\]
This positive definite matrix has
\[\lambda_{min}(\nabla^{2}\frac{1}{2}\gamma_{S}^{2}(y))=\frac{1}{2R^{3}}\left(R+ \gamma D^{2}-\sqrt{(R+\gamma D^{2})^{2}-4\gamma R^{3}}\right)\]
and
\[\lambda_{max}(\nabla^{2}\frac{1}{2}\gamma_{S}^{2}(y))=\frac{1}{2R^{3}}\left(R+ \gamma D^{2}+\sqrt{(R+\gamma D^{2})^{2}-4\gamma R^{3}}\right)\]
where \(\zeta\in N_{S}(\bar{y})\) and \(\|\zeta\|=1\).
#### 3.3.4 Proof of Theorem 3.4
Note that subgradients of the half gauge squared are given by
\[\partial\left(\frac{1}{2}\gamma_{S}^{2}\right)(y)=\gamma_{S}(y)\partial\gamma_ {S}(y)=\left\{\frac{\gamma_{S}(y)\zeta}{\zeta^{T}\bar{y}}\mid\zeta\in N_{S}( \bar{y})\right\}\]
by the chain rule of subgradient calculus and the formula for subgradients of a gauge. Hence \(\zeta=g/\|g\|\in N_{S}(\bar{y})\) and \(g=\gamma_{S}(y)\zeta/\zeta^{T}\bar{y}\). For any \(\tilde{\mu}<\mu\), consider a neighborhood such that all \(z\in B(y,\eta)\) have
\[\frac{1}{2}\gamma_{S}^{2}(z)\geq\frac{1}{2}\gamma_{S}^{2}(y)+\left(\frac{ \gamma_{S}(y)\zeta}{\zeta^{T}\bar{y}}\right)^{T}(z-y)+\frac{\tilde{\mu}}{2}\|z -y\|^{2}\.\]
Dividing through by \(\gamma_{S}^{2}(y)\), all \(z\in B(\bar{y},\bar{\eta})\) with \(\bar{\eta}=\eta/\gamma_{S}(y)\) have
\[\frac{1}{2}\gamma_{S}^{2}(z)\geq\frac{1}{2}\gamma_{S}^{2}(\bar{y})+\left(\frac {\zeta}{\zeta^{T}\bar{y}}\right)^{T}(z-\bar{y})+\frac{\tilde{\mu}}{2}\|z-\bar {y}\|^{2}\.\]
Expressing the set \(S=\{z\mid\gamma_{S}(z)\leq 1\}\), we arrive at the local strong convexity containment of
\[S\cap B(\bar{y},\bar{\eta}) =\left\{z\mid\frac{1}{2}\gamma_{S}^{2}(z)\leq\frac{1}{2}\right\} \cap B(\bar{y},\bar{\eta})\] \[\subseteq\left\{z\mid\frac{1}{2}\gamma_{S}^{2}(\bar{y})+\left( \frac{\zeta}{\zeta^{T}\bar{y}}\right)^{T}(z-\bar{y})+\frac{\mu}{2}\|z-\bar{y} \|^{2}\leq\frac{1}{2}\right\}\cap B(\bar{y},\bar{\eta})\] \[=\left\{z\mid\left\|z-\bar{y}+\frac{\zeta}{\mu\zeta^{T}\bar{y}} \right\|^{2}\leq\frac{1}{\mu^{2}(\zeta^{T}\bar{y})^{2}}\right\}\cap B(\bar{y}, \bar{\eta})\.\]
#### 3.3.5 Proof of Theorem 3.5
The argument mirrors the proof of Theorem 3.4: dividing the smoothness inequality for \(\frac{1}{2}\gamma_{S}^{2}\) through by \(\gamma_{S}^{2}(y)\) yields the reverse containment, namely a ball of radius \(1/(L\zeta^{T}\bar{y})\) locally contained in \(S\) at \(\bar{y}\).
## 4 Minimization of Structured Finite Maximums.
Recall that both feasibility problems (1.1) and constrained optimization problems (1.2) can be reformulated into unconstrained optimization problems over a finite maximum of gauges (1.6) and (1.7), respectively. Our Theorems 3.1 and 3.2 establish that these gauges, when squared, are often smooth and/or strongly convex. Moreover, Lemma 2.1 ensures all gauges \(\gamma_{S,e}\) of closed convex sets \(S\) with \(e\in\text{int }S\) are Lipschitz continuous with constant \(1/R(S,e)\) where \(R(S,e)=\inf\{\|x-e\|_{2}\mid x\notin S\}\). Thus both feasibility and constrained optimization problems can be formulated as minimizing a finite maximum of Lipschitz functions that, when squared, are smooth and/or strongly convex (but no longer Lipschitz).
To the best of our knowledge, no prior works have studied this family of problems. Prior works have considered settings of minimizing a finite maximum of Lipschitz smooth functions [30, 31, 32, 33]. A maximum of Lipschitz strongly convex functions is itself Lipschitz and strongly convex, so classic guarantees, like [34], apply. In this section, we show that many standard algorithms generalize to minimize a maximum of Lipschitz functions that, when squared, are smooth and/or strongly convex, and prove the accompanying convergence rate guarantees.
Capturing both the settings of (1.6) and (1.7), here we consider the general problem of
\[p_{*}:=\min_{y}\max_{i=1\ldots m}\{f_{i}(y)\}>0 \tag{4.1}\]
for closed convex nonnegative functions \(f_{1},\ldots,f_{m}\) that are each \(M\)-Lipschitz continuous, and each \(\frac{1}{2}f_{i}^{2}\) is either \(L\)-smooth or \(\mu\)-strongly convex, or both. We denote the whole objective by \(f(y)=\max_{i}\{f_{i}(y)\}\). For our numerics in Section 5, we focus on the case where \(m\) is small (in particular, \(m=2\)), corresponding to feasibility and constrained optimization problems with only a few sophisticated constraints.
To build an appropriate theoretical foundation for problems of the form (4.1), we consider several algorithms based on the following three basic first-order oracles for \(\max_{i}\{\frac{1}{2}f_{i}^{2}(y)\}\), discussed in further detail in the subsequent three subsections:
\[\mathtt{subgrad}(y,\alpha):=y-\alpha f(y)g,\qquad g\in\partial f(y), \tag{4.2}\]
\[\mathtt{gen-grad}(y,\alpha):=\operatorname*{argmin}_{z}\left\{\max_{i}\left\{\frac{1}{2}f_{i}^{2}(y)+f_{i}(y)g_{i}^{T}(z-y)\right\}+\frac{1}{2\alpha}\|z-y\|^{2}\right\},\qquad g_{i}\in\partial f_{i}(y), \tag{4.3}\]
\[\mathtt{level-proj}(y,\bar{f}):=\operatorname*{argmin}_{z}\left\{\|z-y\|_{2}^{2}\;\middle|\;\frac{1}{2}f_{i}^{2}(y)+f_{i}(y)g_{i}^{T}(z-y)\leq\frac{1}{2}\bar{f}^{2}\ \forall i\right\},\qquad g_{i}\in\partial f_{i}(y). \tag{4.4}\]
All three of these updates have closed forms for the setting of our numerics with \(m=2\). The latter two computations generally amount to quadratic programs of size \(m\). For each first-order
update (potentially with additional acceleration steps), we prove convergence guarantees when each \(f_{i}\) squared is smooth, when each is strongly convex, and when both hold (as well as for generic \(f_{i}\)). These new rates, which may be of independent interest, are summarized in Table 2.
**Further Algorithms and Oracle Models (When \(m\) is Large).** We first note two existing accelerated methods applicable when \(m\) is large and each component of the maximum is \(L\)-smooth. The accelerated smoothing method of [30, 31] replaces the finite maximum by a \(1/\epsilon\)-smooth approximation. Then applying Nesterov's accelerated method gives an overall convergence rate of \(O(L/T)\), improving on the subgradient method's \(O(1/\sqrt{T})\). Renegar and Grimmer [36]'s restarting scheme showed that such a method can attain a faster \(L/\mu\log(1/\epsilon)\) rate when \(\mu\)-strong convexity additionally holds. Alternatively, replacing the finite maximum by \(\max_{\theta\in\Delta}\sum\theta_{i}f_{i}(x)\) where \(\Delta\) is the simplex gives an equivalent convex-concave minimax optimization problem. Then the same \(O(L/T)\) speedup can be attained by Nemirovski's convergence rate for the extragradient method [32].
Other sophisticated nonsmooth minimization methods could also be explored. Classical bundle methods [37, 38], which construct cutting plane models used to produce stable descent sequences, could be applied with convergence guarantees following from their recent literature [35, 39, 40, 41, 42]. An alternative scheme for minimizing finite maximums of smooth functions was recently proposed by Han and Lewis [43]. Their approach aims to maintain (at least) \(m\) points, each staying within a smooth region of the objective (where a single \(f_{i}\) attains the maximum). Then each iteration solves a relatively simple second-order cone program to update each point within its region.
### Simple Subgradient Methods Based on (4.2).
First, we consider methods taking "simple subgradient steps" on \(\frac{1}{2}f^{2}(y)\) via (4.2). Note \(\partial f(y)=\{\sum\lambda_{i}g_{i}\ |\ \sum_{i}\lambda_{i}=1,\lambda\geq 0,g_{i} \in\partial f_{i}(y),\lambda_{i}(f(y)-f_{i}(y))=0\}\). The chain rule ensures \(f(y)g\) is a subgradient of \(\frac{1}{2}f^{2}(y)\). Such a subgradient can be computed from any subgradient of a \(f_{i}\) attaining the finite maximum \(f\) at \(y\). Iterating the step (4.2) with some sequence of stepsizes \(\alpha_{k}\) then gives the classic subgradient method applied to \(\frac{1}{2}f^{2}\). The near optimality of the squared problem relates to the near optimality of (4.1) via
\[f(y)-p_{*}=\frac{f^{2}(y)-p_{*}^{2}}{f(y)+p_{*}}\leq\frac{\frac{1}{2}f^{2}(y)- \frac{1}{2}p_{*}^{2}}{p_{*}}. \tag{4.5}\]
| Algorithm | Generic \(\frac{1}{2}f_{i}^{2}\) | \(\mu\)-SC \(\frac{1}{2}f_{i}^{2}\) | \(L\)-Smooth \(\frac{1}{2}f_{i}^{2}\) | Both |
| :-: | :-: | :-: | :-: | :-: |
| Subgradient Method | \(O\left(\frac{MD}{\sqrt{T+1}}\right)\) | \(O\left(\frac{M^{2}p_{*}}{\mu(T+1)}\right)\) | \(O\left(\frac{MD}{\sqrt{T+1}}\right)\) | \(O\left(\frac{M^{2}p_{*}}{\mu(T+1)}\right)\) |
| Gen. Gradient Method | \(O\left(\frac{MD}{\sqrt{T+1}}\right)\) | \(O\left(\frac{M^{2}p_{*}}{\mu(T+1)}\right)\) | \(O\left(\frac{LD^{2}p_{*}}{T+1}\right)\) | \(O\left((1-\mu/L)^{T}\right)\) |
| Accel. Gen. Gradient Method | | | \(O\left(\frac{LD^{2}}{p_{*}(T+1)^{2}}\right)\) | \(O\left((1-\sqrt{\mu/L})^{T}\right)\) |
| Level-Projection Method | \(O\left(\frac{MD}{\sqrt{T+1}}\right)\) | \(O\left(\frac{M^{2}p_{*}}{\mu(T+1)}\right)\) | \(O\left(\frac{MD}{\sqrt{T+1}}\right)\) | \(O\left(\frac{M^{2}p_{*}}{\mu(T+1)}\right)\) |

Table 2: Convergence rates for methods using oracles (4.2), (4.3), or (4.4) minimizing \(M\)-Lipschitz functions \(f_{i}\) with varied structure in \(\frac{1}{2}f_{i}^{2}\) and minimizer \(y^{*}\) with \(\|y_{0}-y^{*}\|\leq D\). Further improvements for Level-Projection Methods in smooth settings may be possible using the ideas of [35].
Under appropriate choice of a stepsize sequence \(\alpha_{k}\), we use the non-Lipschitz convergence results of [44] to show \(y_{k+1}=\mathtt{subgrad}(y_{k},\alpha_{k})\) converges at a rate of \(1/\sqrt{k}\) for generic convex minimization and \(1/k\) for problems that are strongly convex when squared. To the best of our knowledge, such simple subgradient methods do not benefit from the smoothness of squared objective components. The proof of this result is deferred to Appendix A.3.
**Theorem 4.1**.: _For any convex, nonnegative, \(M\)-Lipschitz functions \(f_{i}\), the subgradient method \(y_{k+1}=\mathtt{subgrad}(y_{k},\alpha_{k})\) with \(\alpha_{k}=\frac{D}{\|f(y_{k})g_{k}\|\sqrt{T+1}}\) has_
\[\min_{k\leq T}\{f(y_{k})-p_{*}\}\leq\frac{MD}{\sqrt{T+1}}+\frac{M^{2}D^{2}}{2p _{*}(T+1)}\]
_provided some minimizer \(y^{*}\) of (4.1) has \(\|y_{0}-y^{*}\|\leq D\). If additionally, each \(\frac{1}{2}f_{i}^{2}\) is \(\mu\)-strongly convex, then selecting \(\alpha_{k}=\frac{2}{\mu(k+2)+\frac{M^{4}}{\mu(k+1)}}\) has_
\[\min_{k\leq T}\{f(y_{k})-p_{*}\}\leq\frac{4M^{2}p_{*}}{\mu(T+2)}+\frac{4M^{4}D ^{2}}{\mu p_{*}(T+1)(T+2)}\.\]
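A minimal sketch of this method with the generic stepsize from Theorem 4.1 (the interface names `fs` and `grads` are ours, not the paper's):

```python
import numpy as np

def subgradient_method(y0, fs, grads, D, T):
    """y_{k+1} = subgrad(y_k, alpha_k) with the generic stepsize of Thm 4.1.
    `fs(y)` returns the vector (f_1(y), ..., f_m(y)); `grads(y, i)` returns a
    subgradient of f_i at y."""
    y, best = np.array(y0, dtype=float), np.inf
    for _ in range(T + 1):
        vals = fs(y)
        i = int(np.argmax(vals))        # a component attaining the max
        g = vals[i] * grads(y, i)       # subgradient of (1/2) f^2 at y
        best = min(best, float(vals[i]))
        y = y - D / (np.linalg.norm(g) * np.sqrt(T + 1)) * g
    return best
```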
### Generalized Gradient Methods Based on (4.3).
We consider (accelerated) methods iteratively applying the "generalized gradient step" (4.3) following the development of Nesterov [21] (this can also be viewed as a prox linear step [33] on the composition of the maximum function \(\max\{t_{1},\ldots t_{m}\}\) with the smooth mapping \(y\mapsto\frac{1}{2}(f_{1}^{2}(y),\ldots,f_{m}^{2}(y))\)). Computing this step can be formulated as a quadratic program of dimension \(m\). When \(m=1\), this is exactly a (sub)gradient step on \(\frac{1}{2}f^{2}\) with stepsize \(\alpha\). For \(m=2\), this can be computed in closed form as
\[\mathtt{gen-grad}(y,\alpha)=\begin{cases}y^{(1)}&\text{if }\frac{1}{2}f_{1}^{2}(y) +f_{1}(y)g_{1}^{T}(y^{(1)}-y)>\frac{1}{2}f_{2}^{2}(y)+f_{2}(y)g_{2}^{T}(y^{(1) }-y)\\ y^{(2)}&\text{if }\frac{1}{2}f_{1}^{2}(y)+f_{1}(y)g_{1}^{T}(y^{(2)}-y)<\frac{1}{2} f_{2}^{2}(y)+f_{2}(y)g_{2}^{T}(y^{(2)}-y)\\ y^{(3)}&\text{otherwise}\end{cases}\]
where
\[\begin{cases}y^{(1)}:=y-\alpha f_{1}(y)g_{1}\\ y^{(2)}:=y-\alpha f_{2}(y)g_{2}\\ y^{(3)}:=y-\alpha\left((1-\theta)f_{1}(y)g_{1}+\theta f_{2}(y)g_{2}\right) \end{cases}\]
and \(\theta=\frac{\frac{1}{2\alpha}(f_{2}^{2}(y)-f_{1}^{2}(y))+(f_{1}(y)g_{1}-f_{2}(y)g_{2})^{T}(f_{1}(y)g_{1})}{\|f_{1}(y)g_{1}-f_{2}(y)g_{2}\|_{2}^{2}}\), obtained by equating the two linearized models at the interior solution. For \(i=1,2\), \(y^{(i)}\) is a subgradient step on \(\frac{1}{2}f_{i}^{2}\).
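A direct transcription of this case split (a sketch under our naming; here `f1, f2` are the values \(f_{i}(y)\) and `g1, g2` are subgradients at \(y\)):

```python
import numpy as np

def gen_grad_m2(y, alpha, f1, g1, f2, g2):
    """Closed-form gen-grad step (4.3) for m = 2."""
    d1, d2 = f1 * g1, f2 * g2                        # grads of (1/2) f_i^2
    model = lambda z, fi, di: 0.5 * fi**2 + di @ (z - y)
    y1, y2 = y - alpha * d1, y - alpha * d2
    if model(y1, f1, d1) > model(y1, f2, d2):        # component 1 stays active
        return y1
    if model(y2, f1, d1) < model(y2, f2, d2):        # component 2 stays active
        return y2
    theta = ((f2**2 - f1**2) / (2 * alpha) + (d1 - d2) @ d1) \
        / np.linalg.norm(d1 - d2)**2
    return y - alpha * ((1 - theta) * d1 + theta * d2)
```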
**Generalized Gradient Method.** We find that for appropriately chosen stepsizes \(\alpha_{k}\), iterating \(y_{k+1}=\mathtt{gen-grad}(y_{k},\alpha_{k})\) converges as fast as the preceding subgradient method when \(f_{i}^{2}\) is not smooth and further speeds up when smoothness holds. Again, the proof of this result is deferred to Appendix A.3.
**Theorem 4.2**.: _For any convex, nonnegative, \(M\)-Lipschitz functions \(f_{i}\), the generalized gradient method \(y_{k+1}=\mathtt{gen-grad}(y_{k},\alpha_{k})\) with stepsizes \(\alpha_{k}=\frac{D}{Mp_{*}\sqrt{T+1}}\) and \(T+1\geq M^{2}D^{2}/p_{*}^{2}\) has_
\[\min_{k\leq T}\{f(y_{k})-p_{*}\}\leq\frac{2MD}{\sqrt{T+1}}\]
_provided some minimizer \(y^{*}\) of (4.1) has \(\|y_{0}-y^{*}\|\leq D\). If additionally each \(\frac{1}{2}f_{i}^{2}\) is \(\mu\)-strongly convex, then selecting \(\alpha_{k}=\frac{2}{\mu(k+2)+\frac{M^{4}}{\mu(k+1)}}\) ensures_
\[\min_{k\leq T}\{f(y_{k})-p_{*}\}\leq\frac{4M^{2}p_{*}}{\mu(T+2)}+\frac{2M^{4} \|y_{0}-y^{*}\|^{2}}{\mu p_{*}(T+1)(T+2)}\.\]
_Alternatively, if each \(\frac{1}{2}f_{i}^{2}\) is \(L\)-smooth, then selecting \(\alpha_{k}=1/L\) ensures_
\[f(y_{T})-p_{*}\leq\frac{LD^{2}p_{*}}{T}\.\]
_If both \(L\)-smoothness and \(\mu\)-strong convexity hold, then selecting \(\alpha_{k}=1/L\) ensures_
\[\|y_{T}-y^{*}\|^{2}\leq(1-\mu/L)^{T}D^{2}\.\]
In the smooth settings where descent can be guaranteed, this gradient method can be made parameter-free by utilizing the following Armijo linesearch
\[\begin{cases}s_{k}=\sup\{\tau^{i}\bar{s}\mid\frac{1}{2}f^{2}(\bar{y})\leq\frac {1}{2}f^{2}(y_{k})-c\|\bar{y}-y_{k}\|^{2},\ \bar{y}=\texttt{gen-grad}(y_{k},\tau^{i}\bar{s}),\ i\in\mathbb{Z}\}\\ y_{k+1}=\texttt{gen-grad}(y_{k},s_{k})\end{cases} \tag{4.6}\]
which can be shown to converge at the same rates as above, up to small constants depending on the backtracking parameters \(\bar{s}>0\) and \(\tau\in(0,1)\).
**Accelerated Generalized Gradient Method.** Applying the following accelerated generalized gradient method yields stronger convergence guarantees whenever the squared components \(\frac{1}{2}f_{i}^{2}\) are all \(L\)-smooth (proof in Appendix A.3):
\[\begin{cases}x_{k+1}=\texttt{gen-grad}(y_{k},1/L)\\ t_{k+1}^{2}=(1-t_{k+1})t_{k}^{2}+\frac{\mu}{L}t_{k+1}\\ \beta_{k}=\frac{t_{k}(1-t_{k})}{t_{k}^{2}+t_{k+1}}\\ y_{k+1}=x_{k+1}+\beta_{k}(x_{k+1}-x_{k})\.\end{cases} \tag{4.7}\]
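In code, one sweep of (4.7) looks as follows (a sketch; \(t_{k+1}\) is obtained as the positive root of its defining quadratic, and `gen_grad` is assumed to implement the step (4.3)):

```python
import numpy as np

def accel_gen_grad(y0, gen_grad, L, mu, t0=0.9, iters=100):
    """Accelerated generalized gradient method (4.7)."""
    x_prev = np.array(y0, dtype=float)
    y, t = x_prev.copy(), t0
    for _ in range(iters):
        x = gen_grad(y, 1.0 / L)
        # positive root of t_next^2 + (t^2 - mu/L) t_next - t^2 = 0
        b = t**2 - mu / L
        t_next = 0.5 * (-b + np.sqrt(b**2 + 4 * t**2))
        beta = t * (1 - t) / (t**2 + t_next)
        y = x + beta * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev
```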
**Theorem 4.3**.: _For any convex, nonnegative, \(M\)-Lipschitz functions \(f_{i}\) where \(\frac{1}{2}f_{i}^{2}\) is \(L\)-smooth, the accelerated generalized gradient method (4.7) has_
\[f(y_{k})-p_{*}\leq\frac{\frac{1}{2}f^{2}(y_{0})-\frac{1}{2}p_{*}^{2}+\frac{ \gamma_{0}}{2}\|y_{0}-y^{*}\|^{2}}{p_{*}}\frac{4L}{(2\sqrt{L}+k\sqrt{\gamma_{ 0}})^{2}}\]
_where \(y^{*}\) is the minimizer of (4.1) and \(\gamma_{0}=\frac{t_{0}(t_{0}L-\mu)}{1-t_{0}}\). If additionally each \(\frac{1}{2}f_{i}^{2}\) is \(\mu\)-strongly convex,_
\[f(y_{k})-p_{*}\leq\frac{\frac{1}{2}f^{2}(y_{0})-\frac{1}{2}p_{*}^{2}+\frac{ \gamma_{0}}{2}\|y_{0}-y^{*}\|^{2}}{p_{*}}\left(1-\sqrt{\frac{\mu}{L}}\right)^{ k}\.\]
A parameter-free version of this iteration may be possible by generalizing the backtracking linesearch ideas of Nesterov's universal fast gradient method [45]. Such a universal method may attain both the optimal nonsmooth rates of the direct subgradient method and smooth rates of the accelerated generalized gradient method simultaneously.
### Level-set Projection Methods Based on (4.4).
Lastly, we consider the "level-set projection step" (4.4), which linearizes each \(\frac{1}{2}f_{i}^{2}\) and projects onto the model objective's sublevel set at some fixed height \(\bar{f}\).
For example, in the setting of our feasibility problem (1.6), we only seek \(\gamma_{S_{i},e_{i}}\leq 1=\bar{f}\) (giving a parameter-free method). When the optimal objective value \(f^{*}\) is known, we can set \(\bar{f}=f^{*}\). When \(m=1\), (4.4) with \(\bar{f}=f^{*}\) is exactly a (sub)gradient step with the Polyak stepsize rule. When \(m=2\), this level projection step can still be computed in closed form:
\[\texttt{level-proj}(y,\bar{f})=\begin{cases}y^{(1)}&\text{if }\frac{1}{2}f_{1}^{2}( y)+f_{1}(y)g_{1}^{T}(y^{(1)}-y)>\frac{1}{2}f_{2}^{2}(y)+f_{2}(y)g_{2}^{T}(y^{(1) }-y)\\ y^{(2)}&\text{if }\frac{1}{2}f_{1}^{2}(y)+f_{1}(y)g_{1}^{T}(y^{(2)}-y)<\frac{1}{2}f_{2}^{2 }(y)+f_{2}(y)g_{2}^{T}(y^{(2)}-y)\\ y^{(3)}&\text{otherwise.}\end{cases}\]
where
\[\begin{cases}y^{(1)}:=y-\frac{(\frac{1}{2}f_{1}^{2}(y)-\frac{1}{2}\bar{f}^{2})f_{1}(y)g_{1}}{\|f_{1}(y)g_{1}\|^{2}}\\ y^{(2)}:=y-\frac{(\frac{1}{2}f_{2}^{2}(y)-\frac{1}{2}\bar{f}^{2})f_{2}(y)g_{2}}{\|f_{2}(y)g_{2}\|^{2}}\\ y^{(3)}:=y-(\lambda_{1}f_{1}(y)g_{1}+\lambda_{2}f_{2}(y)g_{2})\end{cases}\]
and the coefficients \(\lambda_{1}\) and \(\lambda_{2}\) are defined as
\[\begin{bmatrix}\lambda_{1}\\ \lambda_{2}\end{bmatrix}=\begin{bmatrix}\|f_{1}(y)g_{1}\|_{2}^{2}&(f_{1}(y)g_{1 })^{T}(f_{2}(y)g_{2})\\ (f_{1}(y)g_{1})^{T}(f_{2}(y)g_{2})&\|f_{2}(y)g_{2}\|_{2}^{2}\end{bmatrix}^{-1} \begin{bmatrix}\frac{1}{2}f_{1}^{2}(y)-\frac{1}{2}\bar{f}^{2}\\ \frac{1}{2}f_{2}^{2}(y)-\frac{1}{2}\bar{f}^{2}\end{bmatrix}.\]
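The corresponding code for this step is equally short (a sketch under our naming; it assumes the current linearized values exceed the target level):

```python
import numpy as np

def level_proj_m2(y, fbar, f1, g1, f2, g2):
    """Closed-form level-projection step (4.4) for m = 2."""
    d1, d2 = f1 * g1, f2 * g2
    model = lambda z, fi, di: 0.5 * fi**2 + di @ (z - y)
    y1 = y - 0.5 * (f1**2 - fbar**2) * d1 / (d1 @ d1)
    y2 = y - 0.5 * (f2**2 - fbar**2) * d2 / (d2 @ d2)
    if model(y1, f1, d1) > model(y1, f2, d2):   # projecting onto model 1 suffices
        return y1
    if model(y2, f1, d1) < model(y2, f2, d2):   # projecting onto model 2 suffices
        return y2
    G = np.array([[d1 @ d1, d1 @ d2],
                  [d1 @ d2, d2 @ d2]])
    rhs = 0.5 * np.array([f1**2 - fbar**2, f2**2 - fbar**2])
    lam = np.linalg.solve(G, rhs)
    return y - (lam[0] * d1 + lam[1] * d2)
```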
Iterating this method for any \(m\) and \(\bar{f}\geq p_{*}\) will converge towards this target objective level as follows (proof deferred to Appendix A.3).
**Theorem 4.4**.: _For any convex, nonnegative, \(M\)-Lipschitz functions \(f_{i}\) and \(\bar{f}\geq p_{*}\), the level-set projection method \(y_{k+1}=\texttt{level-proj}(y_{k},\bar{f})\) has_
\[\min_{k=0...T}\left\{f(y_{k})-\bar{f}\right\}\leq\frac{MD}{\sqrt{T+1}}+\frac{2 M^{2}D^{2}}{\bar{f}(T+1)}\]
_provided some \(\bar{y}\) with \(f(\bar{y})\leq\bar{f}\) has \(\|y_{0}-\bar{y}\|\leq D\). If additionally each \(\frac{1}{2}f_{i}^{2}\) is \(\mu\)-strongly convex and \(T+1\geq\frac{32M^{2}}{\mu}\log(\mu D^{2}/\bar{f}^{2})\), then_
\[\min_{k\leq T}\{f(y_{k})-\bar{f}\}\leq\frac{\sqrt{32}M^{2}\bar{f}}{\mu(T+1)}+ \frac{64M^{4}\bar{f}}{\mu^{2}(T+1)^{2}}\.\]
We expect that faster convergence rates could be proven for this method when each \(\frac{1}{2}f_{i}^{2}\) is smooth (with or without strong convexity). Numerically, in the following section, we will see support for this, as the level projection method performs very well across a range of settings. The ideas of Lan [35] may provide a tractable path to establishing such analysis and to analyzing accelerated level projection methods when the objective components are smooth when squared.
## 5 Applications and Numerics.
In this section, we show the effectiveness of our gauge theory and the above algorithms on several smooth and/or strongly convex feasibility problems and constrained optimization problems. We run all methods in Section 4 on feasibility problems to verify the convergence rates in Table 2. Then,
considering constrained optimization problems, we compare our projection-free accelerated method based on the radial dual (1.7) against standard first-order and second-order solvers. Our numerical experiments are conducted on a two-core Intel i5-6267u CPU using Julia 1.8.5 and JuMP.jl to access the solvers Gurobi 10.0.1, Mosek 10.0.2, COSMO 0.8.6, and SCS 3.2.1. In Appendix A.4, we validate our numerics using Convex.jl instead of JuMP.jl, seeing similar performance.
Footnote 1: The source code is available at https://github.com/nliu15/Gauges-and-Accelerated-Optimization-over-Smooth-Strongly-Convex.
### Application to \(p\)-norm Ellipsoid Feasibility Problems.
First, we illustrate the above algorithms by considering the \(p\)-norm ellipsoid feasibility problem
\[\text{Find }x\text{ s.t. }x\in\{x\mid\|A_{1}x-b_{1}\|_{p_{1}}\leq\tau_{1}\} \cap\{x\mid\|A_{2}x-b_{2}\|_{p_{2}}\leq\tau_{2}\}\, \tag{5.1}\]
setting \(S_{i}=\{x\mid\|A_{i}x-b_{i}\|_{p_{i}}\leq 1\}\). Recall that the discussion of examples in Section 3.1 showed ellipsoidal sets are smooth when \(2\leq p<\infty\), strongly convex when \(1<p\leq 2\), and both when \(p=2\). We consider these three settings separately in dimension \(n=1600\), matching the speedups expected from Section 4.
We generate synthetic ellipsoids corresponding to linear regression uncertainty sets with two differently corrupted data sets \(i=1,2\). We generate \(x_{true}\in\mathbb{R}^{n}\) and \(A_{i}\in\mathbb{R}^{n\times n}\) with standard normal entries and \(b_{i}=A_{i}x_{true}+\epsilon_{i}\) where \(\epsilon_{i}\) has generalized normal distribution with shape parameter \(\beta=p_{i}\). Then we select each \(\tau_{i}\) such that \(x_{true}\in S_{i}\) holds with probability 0.975. Our feasibility problem can then be viewed as seeking any \(x\in\cap S_{i}\) able to provide a reasonable estimate of \(x_{true}\), performing well on each data set. Note setting the shape parameter \(\beta<2\) gives a heavier tail than Gaussian data (with \(\beta=1\) yielding a Laplace distribution) while \(\beta>2\) gives a lighter tail (with \(\beta=\infty\) yielding a uniform distribution).
Following (1.6), we approach finding a feasible point via \(\min_{y}\max_{i}\{\gamma_{S_{i},e_{i}}(y)\}\), where each \(e_{i}\) is computed by 30 iterations of conjugate gradient applied to \(A_{i}x=b_{i}\). Computing \(\gamma_{S_{i},e_{i}}\) (and its gradient) corresponds to polynomial root finding, which can be done efficiently. These roots have closed forms in the cases of \(p\in\{1,2,3,4,\infty\}\).
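For general \(p\), one such evaluator runs a scalar root finder along the ray from \(e\) through the query point; a sketch (ours, using SciPy rather than the paper's Julia implementation):

```python
import numpy as np
from scipy.optimize import brentq

def gauge_pnorm_ellipsoid(z, A, b, p, e):
    """gamma_{S,e}(z) for S = {x : ||Ax - b||_p <= 1}, assuming ||Ae - b||_p < 1.
    Finds the lam > 0 with e + (z - e)/lam on the boundary of S."""
    phi = lambda lam: np.linalg.norm(A @ (e + (z - e) / lam) - b, ord=p) - 1.0
    if phi(1e-12) <= 0:      # z is (essentially) at the center
        return 0.0
    hi = 1.0
    while phi(hi) > 0:       # expand until the ray point enters S
        hi *= 2.0
    return brentq(phi, 1e-12, hi)
```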
We set \((p_{1},p_{2})=(1.5,1.8)\), \((p_{1},p_{2})=(2,2)\), and \((p_{1},p_{2})=(3,4)\) to generate three cases with only strong convexity, both strong convexity and smoothness, and only smoothness. For each setting, the methods in Table 2 are considered: we implement subgradient methods with stepsizes \(\alpha_{k}=\eta\) and \(\eta/\sqrt{k+10}\), generalized gradient methods with stepsizes \(\alpha_{k}=\eta\) and \(\eta/\sqrt{k+10^{2}}\), the accelerated generalized gradient method with smooth and strongly convex parameters \(L\), \(\mu\), and level methods with \(\bar{f}\) set to the optimal value and to one. All parameters are tuned to two significant figures; see the source code for the values used.

Figure 2: The minimum accuracy of \(\max_{i}\{\frac{1}{2}\gamma_{S_{i}}^{2}(x_{k})\}-\max_{i}\{\frac{1}{2}\gamma_{S_{i}}^{2}(x^{*})\}\) of finding the intersection point of two smooth/strongly convex \(p\)-norm balls by subgradient methods, generalized gradient methods, Nesterov accelerated method, and level projection methods. Note that all points below the black line at objective value one correspond to feasible solutions.
The results are shown in Figure 2. The solid black line is \(y=1\), so all solutions under this line correspond to the method having found a feasible point. In the strongly convex setting of Figure 2(a), we can see that the level method with the optimal value performs best. The subgradient and generalized gradient methods with constant stepsize, as well as the accelerated generalized gradient method, all perform reasonably up to a constant accuracy, after which nonsmoothness prevents further progress. For the smooth and strongly convex case shown in Figure 2(b), the accelerated generalized gradient method converges quickly, but the level method remains competitive (the generalized gradient methods are slower, and subgradient methods are slower still). In the smooth-only case, Figure 2(c) looks similar to Figure 2(b), with every method somewhat slower (likely since no strong convexity is present). These performances match our analysis summarized in Table 2, except for the level method outperforming our theory, competing with the accelerated generalized gradient method in smooth settings.
### Application to Trust Region Optimization Problems.
Finally, we compare the best of the previous first-order methods (the accelerated generalized gradient method) against standard solvers for synthetic constrained minimization problems. We consider constrained quadratic minimization problems over a \(p\)-norm ellipsoidal constraint set
\[\begin{cases}\max_{x}&1-\frac{1}{2}x^{T}Qx-c^{T}x\\ \text{s.t.}&\|Ax-b\|_{p}\leq 1\end{cases} \tag{5.2}\]
for positive semidefinite \(Q\in\mathbb{R}^{n\times n}\), and generic \(A\in\mathbb{R}^{m\times n},b\in\mathbb{R}^{m},c\in\mathbb{R}^{n}\). For our numerics, we generate \(Q,c,A,x_{feas},\epsilon\) with standard normal entries and set \(b=Ax_{feas}+\frac{1}{m}\epsilon\). Each solver will be directly applied to (5.2), first with \(p=2\) (making this a simple second-order cone program) and then with \(p=4\) (making this a more general conic program).
For our projection-free method, we apply the accelerated generalized gradient method to the square of the radial reformulation (1.7) to benefit from any smoothness and/or strong convexity present in the constraints. (For this radial transformation to be well-defined, we need the origin to be feasible. To ensure this, we compute \(e\) approximately solving \(Ax=b\) via 30 iterations of the conjugate gradient method and then translate the problem to place \(e\) at the origin.)
**\(p=2\)-norm Ellipsoid Constrained Optimization.** First, we consider the problem of quadratic optimization over a \(p=2\)-norm ellipsoid constraint, giving a smooth and strongly convex objective function and constraints,
\[\begin{cases}\max_{x}&1-\frac{1}{2}x^{T}Qx-c^{T}x\\ \text{s.t.}&\|Ax-b\|_{2}\leq 1\end{cases} \tag{5.3}\]
which has an equivalent unconstrained radially dual problem
\[\min_{y}\max\left\{\frac{c^{T}y+1+\sqrt{(c^{T}y+1)^{2}+2y^{T}Qy}}{2},\,\frac{-b^{T}Ay+\sqrt{(b^{T}Ay)^{2}+(1-\|b\|_{2}^{2})\|Ay\|_{2}^{2}}}{1-\|b\|_{2}^{2}}\right\}. \tag{5.4}\]
Note the second term above is exactly the gauge \(\gamma_{\{x\|Ax-b\|_{2}\leq 1\},0}(y)\). As before, the smoothness and strong convexity of the objective and ellipsoid constraint corresponds to both terms above being smooth and strongly convex when squared. As a result, this can be solved at a linear rate by an accelerated projection-free method.
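For reference, evaluating this radially dual objective requires only a few matrix-vector products; a sketch (ours), using the formula in (5.4):

```python
import numpy as np

def radial_dual_obj(y, Q, c, A, b):
    """Evaluate the max of the two terms in (5.4); assumes the problem has
    been translated so 0 is strictly feasible (||b||_2 < 1)."""
    cy = c @ y + 1.0
    t1 = 0.5 * (cy + np.sqrt(cy**2 + 2.0 * y @ (Q @ y)))
    Ay = A @ y
    s = 1.0 - b @ b
    t2 = (-(b @ Ay) + np.sqrt((b @ Ay)**2 + s * (Ay @ Ay))) / s
    return max(t1, t2)
```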
We generate problem instances with dimensions \((n,m)=(400,200),(800,400),(1600,800)\) and apply this radial accelerated method with tuned smooth/strongly convex constants \((L,\mu)=(110,1.2\mathrm{e}{-3})\), (160, 8.1\(\mathrm{e}{-4}\)), (440, 4.4\(\mathrm{e}{-4}\)), respectively. We compare this method against the default configurations of Gurobi [46] and Mosek [47] (second-order solvers) and COSMO [48] and SCS [49] (first-order solvers) in Figure 3. The two second-order solvers have high iteration costs but converge quickly once they start making progress. In this setting, our radial accelerated method is relatively competitive with Gurobi and COSMO, but always outperformed by Mosek.
**\(p=4\)-norm Ellipsoid Constrained Optimization.** Lastly, we consider a harder quadratic optimization problem with a quartic ellipsoid constraint, making the constraint set smooth but not strongly convex,
\[\begin{cases}\max_{x}&1-\frac{1}{2}x^{T}Qx-c^{T}x\\ \mathrm{s.t.}&\|Ax-b\|_{4}\leq 1\end{cases} \tag{5.5}\]
which has an equivalent unconstrained radially dual problem
\[\min_{y}\max\left\{\frac{c^{T}y+1+\sqrt{(c^{T}y+1)^{2}+2y^{T}Qy}}{2},\gamma_{ S}(y)\right\}. \tag{5.6}\]
where \(S=\{x\mid\|Ax-b\|_{4}\leq 1\}\). The gauge \(\gamma_{S}\) has a closed form given by the quartic formula, and costs only one matrix-vector multiplication with \(A\) to evaluate. In this case, the two terms in (5.6) are only guaranteed to have smoothness when squared.
As before, we consider problem dimensions \((n,m)=(400,200),(800,400),(1600,800)\) and tune the radial accelerated method with constants \((L,\mu)=(7.6,1.2\mathrm{e}{-3})\), (13, 8.6\(\mathrm{e}{-4}\)), (6.7, 3.7\(\mathrm{e}{-4}\)), respectively. The numerical results are shown in Figure 4. The first-order solvers' performance falls off as the problem size grows. Here the second-order solvers' performance is matched by our radial method at all problem sizes, increasingly so as the problem dimension grows.
Figure 3: The minimum relative accuracy \(|f(x_{k})-f^{*}|/|f(x_{0})-f^{*}|\) of (5.2), with size \((n,m)\) equal to \((400,200)\), \((800,400)\), \((1600,800)\) from left to right, seen by the radial accelerated method, Mosek, COSMO, and SCS over 2 seconds, 10 seconds, and 30 seconds, respectively, using JuMP.jl.
This motivates future works, building more practical radial methods and testing their performance on real (non-synthetic) problem instances.
## 6 Conclusion
In this paper, we showed that \(\alpha\)-strongly convex and/or \(\beta\)-smooth sets always have \(O(\alpha)\)-strongly convex and/or \(O(\beta)\)-smooth gauges squared. As a result, feasibility and optimization problems over structured sets can be recast as structured optimization problems minimizing these gauges squared. To benefit from this, we proposed fast first-order methods targeting such squared strongly convex/smooth properties. Consequently, we derived accelerated convergence guarantees of \(O(1/T)\) for problems over strongly convex sets, \(O(1/T^{2})\) for smooth sets, and accelerated linear convergence given both. Numerically, we find these methods are very effective in our synthetic experimental settings. This indicates that future developments of gauge-based methods may provide a competitive alternative to current solvers based on ADMM or interior point approaches.
Additionally, future works may be able to identify further relationships between structured sets and their gauges. We expect that notions of uniform smoothness/convexity of a set will correspond to a Hölder/uniform smoothness of the set's gauge raised to an appropriate power. Such analysis may explain the linear convergence we observe for our radial method in settings with \(p=4\) norms (where strong convexity does not hold, but some uniform convexity notion may hold).
|
2306.00791 | Modeling and Analyzing Scorer Preferences in Short-Answer Math Questions | Automated scoring of student responses to open-ended questions, including
short-answer questions, has great potential to scale to a large number of
responses. Recent approaches for automated scoring rely on supervised learning,
i.e., training classifiers or fine-tuning language models on a small number of
responses with human-provided score labels. However, since scoring is a
subjective process, these human scores are noisy and can be highly variable,
depending on the scorer. In this paper, we investigate a collection of models
that account for the individual preferences and tendencies of each human scorer
in the automated scoring task. We apply these models to a short-answer math
response dataset where each response is scored (often differently) by multiple
different human scorers. We conduct quantitative experiments to show that our
scorer models lead to improved automated scoring accuracy. We also conduct
quantitative experiments and case studies to analyze the individual preferences
and tendencies of scorers. We found that scorers can be grouped into several
obvious clusters, with each cluster having distinct features, and analyzed them
in detail. | Mengxue Zhang, Neil Heffernan, Andrew Lan | 2023-06-01T15:22:05Z | http://arxiv.org/abs/2306.00791v1 | # Modeling and Analyzing Scorer Preferences
###### Abstract
Automated scoring of student responses to open-ended questions, including short-answer questions, has great potential to scale to a large number of responses. Recent approaches for automated scoring rely on supervised learning, i.e., training classifiers or fine-tuning language models on a small number of responses with human-provided score labels. However, since scoring is a subjective process, these human scores are noisy and can be highly variable, depending on the scorer. In this paper, we investigate a collection of models that account for the individual preferences and tendencies of each human scorer in the automated scoring task. We apply these models to a short-answer math response dataset where each response is scored (often differently) by multiple different human scorers. We conduct quantitative experiments to show that our scorer models lead to improved automated scoring accuracy. We also conduct quantitative experiments and case studies to analyze the individual preferences and tendencies of scorers. We found that scorers can be grouped into several obvious clusters, with each cluster having distinct features, and analyzed them in detail.
Automated Scoring, Scorer Models, Bias
## 1 Introduction
Automated scoring (AS), i.e., using algorithms to automatically score student (textual) responses to open-ended questions, has significant potential to complement and scale up human scoring, especially with an ever-increasing number of students. AS algorithms are often driven by _supervised_ machine learning-based algorithms and require a small number of example responses and their score labels to train on. These algorithms mostly consist of two components: a _representation_ component that use either hand-crafted features [8, 17, 21, 27, 28, 37] or language models [24, 25, 34, 36, 42] to represent the (mostly textual) content in questions, student responses, and other information, e.g., rubrics [12] and a _scoring_ component that use classifiers [4, 26] to predict the score of a response from its textual representation. In different subject domains, the representation component can be quite different, from hand-crafted features and neural language model-based textual embeddings in automated essay scoring (AES) [2, 27], automatic short answer grading (ASAG) [35, 47], and reading comprehension scoring [16] to specialized representations in responses where mathematical expressions are present [6, 31, 32, 40]. On the contrary, the scoring model does not vary significantly across different subject domains, often relying on simple classifiers such as logistic regression, support vector machines, random forests, or linear projection heads in neural networks [20]. We provide a more detailed discussion on related work in Section 1.2.
One key factor that limits the accuracy of AS methods is that the scoring task is a _subjective_ one; human scorers are often given a set of rubrics [1] and asked to score responses according to them. However, different individuals interpret rubrics and student responses differently, leading to significant variation in their scores. For example, inter-scorer agreement can be quite high in NAEP reading comprehension question scoring, with a quadratic weighted Kappa (QWK) score of 0.88 [16], and quite low in open-ended math question scoring, with a Kappa score of 0.083 (see Section 3.1 for details and Table 1 for a concrete example). This variation creates a _noisy labels_ problem, which is a common problem in machine learning where one often needs to acquire a large number of labels via crowdsourcing [3, 18, 19]. In educational applications such as AS, this problem is even more important since the amount of labels we have access to is often small, which amplifies the negative impact of noisy score labels. Therefore, there is a significant need to analyze the preferences and tendencies of individual scorers, to not only improve AS accuracy by providing cleaner labels to train on but also understand where the variation in scores comes from and investigate whether we can reduce it.
### Contributions
In this paper, we propose a collection of models for the variation in human scorers due to their individual preferences and tendencies, from simple models that use only a few parameters to account for the bias and variance of each scorer to complex models that use a different set of neural network parameters for each scorer. We ground our work in an AS task for short-answer mathematical questions and show that by adding our model to the classification component of
AS models, we can improve AS accuracy by more than 0.02 in Kappa score and 0.01 in AUC compared to AS methods that do not account for individual scorer differences. We also conduct qualitative experiments and case studies to analyze the individual preferences and tendencies of scorers. We found that scorers can be grouped into several major, obvious clusters, with each cluster having distinct features, which we explain in detail. **We emphasize that our goal is NOT to develop the most accurate AS model; instead, our goal is to show that accounting for the variation across different individual scorers can potentially improve the accuracy of any AS model.**
### Related work
**Noisy labels.** Individual scorers often exhibit different preferences and tendencies, as found in [38]. Some of our models for scorer preference and tendency are closely related to models used in peer grading [30], where students grade each other's work, which is often deployed in settings such as massive open online courses (MOOCs) where a large number of open-ended responses make it impossible for external human scorers to score all responses. Most of these models are inspired by methods in machine learning on combining labels from human labelers with different expertise in crowdsourcing contexts [41]. These models are simple and interpretable, with the most basic version involving a single bias parameter (towards certain score labels) and a single variance parameter (across different score labels) for each scorer. On the contrary, we experiment with not only these models but also more flexible but uninterpretable models, which are compatible with using pre-trained neural language models [13, 29] in the representation component of AS models.
**AS and math AS.** The majority of existing ASAG and AES methods focus on non-mathematical domains [7, 9, 11, 21, 27, 37, 39]. Recently, some AS methods have been developed for specific domains that contain non-textual symbols, e.g., Chemistry, Computer Science, and Physics, which exist in student responses in addition to text, achieving higher and higher AS accuracy [5, 14, 23, 33, 34]. Our work is grounded in the short-answer math question scoring setting, which is studied in prior works [5, 6, 32, 46]. The key technical challenge here is that the mathematical expressions often contained in open-ended student responses can be difficult to parse and understand in the representation component. The authors of [5] proposed a scoring approach for short-answer math questions using sentence-BERT (SBERT)-based representation of student responses and simply ignored mathematical expressions. The authors of [6] developed an additional set of features specifically designed for mathematical expressions and used them in conjunction with the SBERT representations as input to the scoring component. The authors of [32] fine-tuned a language model, BERT [13], further pre-trained on math textbooks, as the representation component; however, this representation was found to not be highly effective in later works [46]. The authors of [46] used a sophisticated in-context meta-training approach for automated scoring by inputting not only the response that needs to be scored but also scored examples to a language model, enabling the language model to learn from examples, which results in significant improvement in AS accuracy and especially generalizability to previously unseen questions.
Another line of related work is about fairness in educational data analysis since scorer preference can be classified as a form of individual bias. Researchers have proposed methods to incorporate constraints and regularization into predictive models to improve parity and mitigate fairness issues [10, 44, 45]. On the contrary, our work does not attempt at reducing biases; our focus is only on identifying a specific source of bias, individual score bias, in the AS context. Therefore, the only approach we use to mitigate biases is to leverage scorer identification information and investigate its impact on AS accuracy, following prior work on using this information in predictive models [43].
## 2 Model
We now detail our models for individual scorer preference and tendency in AS tasks. For all models, we use a BERT model [13] as the corresponding representation component of the AS model, which has been shown to perform well and reach state-of-the-art performance on the short math answer AS task with an appropriate input structure [46]. Let us denote each question-response pair that needs to be scored as \(q_{i}\), while the \(j\)-th scorer assigns a score \(y_{i,j}\in\{1,\ldots,C\}\) where \(C\) denotes the number of possible score categories.
### Baseline
Our base AS model is one that directly uses the output [CLS] embedding of BERT as the representation of the question-response pair \(\mathbf{r}_{i}\in\mathbb{R}^{D}\), where \(D=768\) is the dimension of the embedding. We also use a linear classification head with softmax output [20] for all score categories, i.e.,
\[p(y_{i,j}=c)\propto e^{(\mathbf{w}_{c}^{T}\mathbf{r}_{i})+b_{c}},\]
where \(\mathbf{w}_{c}\) denotes the \(D\)-dimensional parameter for each score category and \(b_{c}\in\mathbb{R}\) is the universal bias toward each score category.
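A minimal PyTorch sketch of this baseline head (ours; \(D=768\) follows the text, while the number of score categories is a placeholder):

```python
import torch
import torch.nn as nn

class BaselineScorer(nn.Module):
    """Linear softmax head over the BERT [CLS] embedding."""
    def __init__(self, dim=768, num_scores=5):
        super().__init__()
        self.head = nn.Linear(dim, num_scores)   # weights w_c and biases b_c

    def forward(self, r):
        # r: (batch, dim) [CLS] embeddings; returns log p(y = c) per category
        return torch.log_softmax(self.head(r), dim=-1)
```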
### Scalar bias and variance with scorer embeddings
The first version of our model is the simplest and most interpretable: we use a scalar temperature, i.e., variance parameter for each scorer, and a scalar offset, i.e., bias parameter on each score category for each scorer, i.e.,
\[p(y_{i,j}=c)\propto e^{\alpha_{j}(\mathbf{w}_{c}^{T}\mathbf{r}_{i}+b_{c,j})}, \tag{1}\]
where \(\alpha_{j}>0\) is the "temperature" parameter that controls the scorer's uncertainty across categories: larger values indicate higher concentration of the probability mass around the most likely score category, which corresponds to more consistent scoring behavior. \(b_{c,j}\in\mathbb{R}\) is the "offset" parameter that controls the scorer's bias towards each score category: larger values indicate a higher probability of selecting the corresponding score category, which corresponds to a more positive/negative scoring preference.
In practice, we found that parameterizing the biases with a set of _scorer embeddings_ leads to better performance than parameterizing them directly as learnable scalars. Specifically, we introduce a high-dimensional embedding for each scorer, \(\mathbf{e}_{j}\in\mathbb{R}^{D}\), and use a \(C\times D\) matrix \(\mathbf{S}\) to map it to a low-dimensional vector that corresponds to the bias terms for all score categories, i.e., \(\mathbf{b}_{j}=\mathbf{S}\mathbf{e}_{j}\). This advantage is likely due to the additional model parameters making the model more flexible and more capable of capturing detailed nuances in scorer preferences and tendencies.
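A minimal sketch of this scorer-specific head is shown below, operating on the baseline logits \(\mathbf{w}_{c}^{T}\mathbf{r}_{i}\). Parameterizing the temperature as the exponential of a learned scalar, which guarantees \(\alpha_{j}>0\), is our assumption of one reasonable choice rather than the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class ScorerSpecificHead(nn.Module):
    """Per-scorer temperature alpha_j and per-category biases b_{c,j} (Eq. 1)."""

    def __init__(self, num_scorers: int, num_categories: int = 5, emb_dim: int = 768):
        super().__init__()
        self.scorer_emb = nn.Embedding(num_scorers, emb_dim)     # scorer embeddings e_j
        self.S = nn.Linear(emb_dim, num_categories, bias=False)  # the C x D matrix S
        self.log_alpha = nn.Embedding(num_scorers, 1)            # one scalar per scorer
        nn.init.zeros_(self.log_alpha.weight)                    # exp(0) = 1: neutral start

    def forward(self, base_logits, scorer_ids):
        # base_logits: (batch, C) tensor of w_c^T r_i from the baseline head
        b = self.S(self.scorer_emb(scorer_ids))                  # biases b_{c,j}: (batch, C)
        alpha = torch.exp(self.log_alpha(scorer_ids))            # alpha_j > 0: (batch, 1)
        return alpha * (base_logits + b)                         # logits of Eq. 1
```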
### Content-driven scorer bias and variance
In the models above, the scorer biases and variances are scorer-dependent but not question/response-dependent, i.e., the bias and variance of a scorer stay the same across all question-response pairs. However, in practice, these parameters may depend on the actual textual content of the question and the student's response. Therefore, we extend the scorer model in Eq. 1 into
\[\mathbf{b}_{i,j}=f_{b}(\mathbf{r}_{i},\mathbf{e}_{j}),\quad\alpha_{i,j}=f_{\alpha}(\mathbf{r}_{i},\mathbf{e}_{j}),\] \[\text{where}\quad f_{b}(\mathbf{r}_{i},\mathbf{e}_{j})=\mathbf{r}_{i}^{T}\mathbf{A}_{b}\mathbf{e}_{j},\quad f_{\alpha}(\mathbf{r}_{i},\mathbf{e}_{j})=\mathbf{r}_{i}^{T}\mathbf{A}_{\alpha}\mathbf{e}_{j},\]
where the bias \(\mathbf{b}_{i,j}\) is now a \(C\times 1\) vector of biases across all score categories and is both question-response pair (\(i\))-dependent and scorer (\(j\))-dependent. \(f_{b}\) and \(f_{\alpha}\) denote functions that map the textual representation of the question-response pair and the scorer embedding to the bias and variance parameters; they can be implemented in any way, from simple linear models to complex neural networks. In this work, we found that using bilinear functions of the question-response pair representation \(\mathbf{r}_{i}\) and the scorer embedding \(\mathbf{e}_{j}\), parameterized by two \(D\times D\) matrices \(\mathbf{A}_{b}\) and \(\mathbf{A}_{\alpha}\), results in the best AS accuracy.
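The sketch below illustrates one way to implement these bilinear functions. Because a single \(D\times D\) matrix yields a scalar rather than a \(C\)-dimensional bias, we assume one \(D\times D\) map per score category for \(f_{b}\); this resolution of the dimensionality, and the exponential used to keep the temperature positive, are our assumptions.

```python
import torch
import torch.nn as nn

class ContentDrivenHead(nn.Module):
    """Bias and temperature as bilinear functions of r_i and the scorer embedding e_j."""

    def __init__(self, num_scorers: int, num_categories: int = 5, emb_dim: int = 768):
        super().__init__()
        self.scorer_emb = nn.Embedding(num_scorers, emb_dim)
        # One D x D bilinear map per score category, giving the C-dimensional bias b_{i,j}.
        self.A_b = nn.Parameter(torch.randn(num_categories, emb_dim, emb_dim) * 1e-3)
        self.A_alpha = nn.Parameter(torch.randn(emb_dim, emb_dim) * 1e-3)

    def forward(self, base_logits, r_i, scorer_ids):
        e_j = self.scorer_emb(scorer_ids)                               # (batch, D)
        b = torch.einsum("bd,cde,be->bc", r_i, self.A_b, e_j)           # b_{i,j}: (batch, C)
        log_alpha = torch.einsum("bd,de,be->b", r_i, self.A_alpha, e_j)
        return torch.exp(log_alpha).unsqueeze(-1) * (base_logits + b)   # content-driven Eq. 1
```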
### Training with different losses
We explore using various different loss functions as objectives to train our AS model, which we detail below.
#### 2.4.1 Cross-entropy
Since the AS task corresponds to a multi-category classification problem, the standard loss function that we minimize is the cross-entropy (CE) loss [20], summed over all question-response pairs and scorers, as
\[\mathcal{L}_{\text{CE}}=-\sum_{i,j}\sum_{c=1}^{C}\mathbf{1}_{y_{i,j}=c}\log p (y_{i,j}=c)\]
where \(\mathbf{1}_{y_{i,j}=c}\) is the indicator function that is non-zero only if \(y_{i,j}=c\). In other words, we are minimizing the negative log-likelihood of the actual score category among the category probabilities predicted by the AS model, \(p(y_{i,j}=c)\).
#### 2.4.2 Ordinal log loss
One obvious limitation of the standard CE loss is that it assumes that the categories are unordered, which works for many applications. Therefore, it penalizes all misclassifications equally. However, for AS, the score categories are naturally ordered, which means that score classification errors are not equal: if the actual score is 1 out of 5, then a misclassified score of 2 is better than 5, but they are weighted equally in the standard CE loss. Therefore, we follow the approach outlined in [15] and use an ordinal log loss (OLL), which we define as
\[\mathcal{L}_{\text{OLL}}=-\sum_{i,j}\sum_{c=1}^{C}|y_{i,j}-c|\log(1-p(y_{i,j}=c )),\]
where we weight the misclassification likelihood, i.e., \(-\log(1-p(y_{i,j}=c))\), according to the difference between the actual score, \(y_{i,j}\), and the predicted score, \(c\). In the aforementioned example, this objective function would increase the penalty of a misclassified score of 5 by four times compared to a misclassified score of 2 when the actual score is 1, which effectively leverages the ordered nature of the score categories.
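A sketch of the OLL objective as a batch loss in PyTorch is given below; the clamp guarding against \(\log 0\) is an implementation detail we add for numerical stability.

```python
import torch
import torch.nn.functional as F

def ordinal_log_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Weight -log(1 - p(y = c)) by the distance |y - c| to the true score."""
    num_categories = logits.size(-1)
    probs = F.softmax(logits, dim=-1)                                # p(y_{i,j} = c)
    categories = torch.arange(num_categories, device=logits.device)  # c = 0, ..., C-1
    dist = (targets.unsqueeze(-1) - categories).abs().float()        # |y_{i,j} - c|
    log_miss = torch.log1p(-probs.clamp(max=1 - 1e-7))               # log(1 - p(y = c))
    return -(dist * log_miss).sum(dim=-1).mean()                     # zero weight at c = y
```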
#### 2.4.3 Mean squared error
Since the score categories are integers and can be treated as numerical values, one simple alternative to the CE loss is the mean squared error (MSE) loss, i.e.,
\[\mathcal{L}_{\text{MSE}}=\sum_{i,j}(y_{i,j}-\sum_{c=1}^{C}p(y_{i,j}=c)c)^{2}, \tag{2}\]
where we simply square the difference between the actual score and the expected (i.e., weighted average) score under the category probabilities predicted by the AS model.
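The corresponding expected-score MSE loss can be sketched in the same style; indexing the categories as \(0,\ldots,C-1\) to match the dataset's 0–4 scale is our assumption.

```python
import torch
import torch.nn.functional as F

def expected_score_mse(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """MSE between the true score and the probability-weighted expected score (Eq. 2)."""
    probs = F.softmax(logits, dim=-1)
    categories = torch.arange(logits.size(-1), device=logits.device).float()
    expected = (probs * categories).sum(dim=-1)   # sum_c p(y_{i,j} = c) * c
    return ((targets.float() - expected) ** 2).mean()
```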
## 3 Quantitative Experiments
We now detail experiments that we conducted to validate the different scoring components of AS models and loss functions that capture score preferences and tendencies. Section 3.1 discusses details on the real-world student response dataset we use and the pre-processing steps. Section 3.2 details the evaluation metrics we use in our experiments. Section 3.3 details our experimental setting, and Section 3.4 details the experimental results and corresponding discussion.
### Dataset
| question_id | question_body | response | scorer_id | score |
| --- | --- | --- | --- | --- |
| 43737 | Chris spent \$9 of the \$12 he was given for his birthday. His sister Jessie says that he has spent exactly 0.75 of the money. Chris wonders if Jessie is correct. Explain your reasoning. | Jessie is correct | 1 | 4 |
| | | Jessie is wrong. | 1 | 0 |
| | | she is correct | 1 | 1 |
| | | Jessie is incorrect. | 2 | 4 |
| | | Jessie is right because if you divide 12 by 9 you get 0.75. | 2 | 2 |

Table 1: Example questions, student responses, and scores. Some scorers assign highly different scores to similar responses.
We use data collected from an online learning platform that has been used in prior work [5, 14], which contains student responses to open-ended, short-answer math questions, together with scores assigned by human scorers. There are a total of 141,612 student responses made by \(25,069\) students to \(2,042\) questions, with 891 different teachers serving as scorers. The set of possible score categories ranges from \(0\) (no credit) to \(4\) (full credit). The dataset mainly contains math word problems, where the answer can be mathematical, such as numbers and equations, or a textual explanation, sometimes in the format of images.
We found that different scorers sometimes assign very different scores to the same response, which motivated this work. As an example, we analyze question-response pairs that are scored by more than one scorer and evaluate the Kappa score between these scorers. The _human_ Kappa score is only \(0.083\), which indicates minimal agreement between different scorers. Although there are only \(523\) such pairs, this case study still shows that even for the exact same response, scorers have highly different individual preferences and tendencies and may assign highly different scores.
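The following sketch shows how such an inter-rater agreement check could be computed with scikit-learn; the dataframe column names are illustrative assumptions about the data layout, and pairing only the first two scores per multiply-scored response is a simplification.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

def human_kappa(df: pd.DataFrame) -> float:
    """Kappa between scorers over question-response pairs scored more than once."""
    first, second = [], []
    for _, group in df.groupby("response_id"):
        scores = group["score"].tolist()
        if len(scores) > 1:          # keep only multiply-scored responses
            first.append(scores[0])
            second.append(scores[1])
    return cohen_kappa_score(first, second)
```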
We also apply a series of pre-processing steps to the original dataset. For example, since some scorers score only a few responses, e.g., fewer than \(100\), there may not be enough information on these scorers for us to model their behavior. Therefore, we remove these scorers from the dataset, which results in \(203\) scorers, \(1,273\) questions, and \(118,079\) responses. The average score is \(3.152\pm 1.417\). Table 1 shows some example data points from this dataset; each data point consists of the question statement, the student's response, the scorer's ID, and the score.
### Metrics
We utilize three standard evaluation metrics for integer-valued scores that are commonly used in the AS task [5, 14]. First, the area under the receiver operating characteristic curve (**AUC**), which we adapt to the multi-category classification problem by treating each possible score category as a separate binary classification problem and averaging the per-category AUC values, following [22]. Second, the root mean squared error (**RMSE**), which simply treats the integer-valued score categories as numbers, as detailed in Eq. 2. Third and most importantly, the multi-class Cohen's **Kappa** metric for ordered categories, which is often used to evaluate AS methods [1].
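A sketch of these three metrics is shown below; computing RMSE from the expected score and using quadratic-weighted Kappa are our assumptions about details the text leaves open.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

def evaluate(prob: np.ndarray, y_true: np.ndarray) -> dict:
    """prob: (N, C) predicted category probabilities; y_true: (N,) integer scores."""
    C = prob.shape[1]
    # One-vs-rest AUC, averaged over categories that occur with both labels
    aucs = [roc_auc_score(y_true == c, prob[:, c])
            for c in range(C) if 0 < (y_true == c).sum() < len(y_true)]
    expected = (prob * np.arange(C)).sum(axis=1)          # expected score per response
    rmse = float(np.sqrt(((y_true - expected) ** 2).mean()))
    kappa = cohen_kappa_score(y_true, prob.argmax(axis=1), weights="quadratic")
    return {"AUC": float(np.mean(aucs)), "RMSE": rmse, "Kappa": kappa}
```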
### Experimental setting
In the quantitative experiment, we focus on studying whether adding scorer information leads to improved AS accuracy. Therefore, when splitting the dataset into training, validation, and test sets, we ensure that every scorer is included in the training set. We divide the data points (question-response pair, scorer ID, score) into \(10\) equally-sized folds for cross-validation. During training, we use \(8\) folds as training data, \(1\) fold for validation and model selection, and \(1\) fold for the final testing to evaluate the AS models.
For a fair comparison, every model uses BERT1 as the pre-trained model for question-response pair representation, which has been shown to achieve state-of-the-art AS accuracy in prior work [46]. We emphasize that our **scorer models** can be added on top of **any** AS method for response representation; applying them to other AS methods is left for future work. We use the Adam optimizer, a batch size of \(16\), and a learning rate of \(1\times 10^{-5}\) for \(10\) training epochs on an NVIDIA RTX8000 GPU. We do not perform any hyper-parameter tuning and simply use the default settings.
Footnote 1: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased)
### Results and discussion
Table 2 shows the mean and standard deviation of each scorer model trained under each loss function. We see that, generally, models with content-driven scorer biases and variances outperform those with scorer-specific biases and variances, which in turn outperform the base AS model that treats every scorer the same with universal values for bias and variance. The improvement in AS accuracy is significant: up to about \(0.02\) in the most important metric, Kappa, for the content-driven biases and variances over the standard AS approach that uses no scorer information. This observation validates the need to account for individual scorer preferences and tendencies in the highly subjective AS task. Meanwhile, since the content-driven scorer bias and variance models outperform the scorer-specific ones, we can conclude that the content of the question and response plays an important role in scorer preference.
| Bias (\(b\)) & Temperature (\(\alpha\)) | Loss Function | AUC | RMSE | Kappa |
| --- | --- | --- | --- | --- |
| Universal (\(b_{c}\), \(\alpha=1\)) | CE | 0.765 ± 0.003 | 0.954 ± 0.014 | 0.614 ± 0.009 |
| Universal (\(b_{c}\), \(\alpha=1\)) | MSE | 0.764 ± 0.003 | 0.946 ± 0.018 | 0.615 ± 0.008 |
| Universal (\(b_{c}\), \(\alpha=1\)) | OLL | 0.768 ± 0.003 | 0.944 ± 0.015 | 0.617 ± 0.006 |
| Scorer-specific (\(b_{c,j}\), \(\alpha_{j}\)) | CE | 0.768 ± 0.005 | 0.928 ± 0.023 | 0.628 ± 0.006 |
| Scorer-specific (\(b_{c,j}\), \(\alpha_{j}\)) | MSE | 0.772 ± 0.005 | 0.926 ± 0.025 | 0.625 ± 0.006 |
| Scorer-specific (\(b_{c,j}\), \(\alpha_{j}\)) | OLL | 0.770 ± 0.003 | **0.916 ± 0.013** | 0.628 ± 0.004 |
| Content-driven (\(b_{c,j}(\mathbf{r}_{i})\), \(\alpha_{j}(\mathbf{r}_{i})\)) | CE | 0.772 ± 0.003 | 0.923 ± 0.016 | 0.631 ± 0.006 |
| Content-driven (\(b_{c,j}(\mathbf{r}_{i})\), \(\alpha_{j}(\mathbf{r}_{i})\)) | MSE | 0.774 ± 0.004 | 0.922 ± 0.021 | 0.629 ± 0.005 |
| Content-driven (\(b_{c,j}(\mathbf{r}_{i})\), \(\alpha_{j}(\mathbf{r}_{i})\)) | OLL | **0.779 ± 0.004** | 0.924 ± 0.013 | **0.641 ± 0.005** |

Table 2: Comparing different scorer models on short-answer math scoring. The combination of content-driven scorer bias and temperature with the OLL loss outperforms the other scorer models and training losses.
We also observe that scorer models trained with the OLL loss outperform those trained with the other losses, while training with the MSE loss does not even lead to the best results on the RMSE metric. This observation suggests that taking into account the ordered nature of the score categories, instead of treating them as parallel, is important for the AS task.
## 4 Qualitative Analysis
Although the content-driven model delivers the highest AUC and Kappa results, the complexity of the information contained in its embedding space makes it difficult to interpret. Consequently, we concentrate our qualitative analysis on the scorer-specific model (detailed in Sec. 2.2).
### Visualization of scorer embedding
Figure 1 shows a 2-D visualization of the learned scorer embedding space; we see that there are clear clusters among the scorers. We then fit the learned scorer embeddings with a mixture-of-Gaussians model via the expectation-maximization (EM) algorithm, using 6 clusters. The subfigures on each side of the main plot show each cluster's average bias towards each score category, which are 0, 1, 2, 3, and 4 from left to right.
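A sketch of this clustering analysis with scikit-learn follows. Projecting to 2-D with PCA before fitting the mixture is our assumption (the paper visualizes the embeddings in 2-D but does not specify the projection), and the random arrays stand in for the learned embeddings and biases.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Illustrative placeholders: in practice these come from the trained scorer-specific model.
rng = np.random.default_rng(0)
scorer_embeddings = rng.normal(size=(203, 768))  # learned e_j for the 203 scorers
bias_matrix = rng.normal(size=(203, 5))          # learned biases b_{c,j}, C = 5 categories

# Project to 2-D and fit a 6-component Gaussian mixture via EM.
xy = PCA(n_components=2, random_state=0).fit_transform(scorer_embeddings)
cluster_ids = GaussianMixture(n_components=6, random_state=0).fit_predict(xy)

# Average learned bias per cluster and score category (the side panels of Figure 1).
for k in range(6):
    members = cluster_ids == k
    print(f"cluster {k}: {members.sum():3d} scorers, "
          f"mean bias = {bias_matrix[members].mean(axis=0).round(2)}")
```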
### Features analysis based on each cluster
Cluster 1 shows a negative scoring profile, with a strong, positive bias towards the lowest score category, 0 (positive \(b_{c,j}\) values), and small, negative biases against the higher scores 1, 2, and 3 (negative \(b_{c,j}\) values). These scorers assign scores of 0 much more often than other score categories, compared to other scorers. The average score across question-response pairs is the lowest for this cluster, at 1.69. Meanwhile, this cluster has a relatively high score variance of 1.64, meaning that these scorers tend to behave inconsistently and assign a wide variety of score labels.
Cluster 2 shows a positive scoring profile, with a strong, positive bias towards the highest score, 4, and moderate negative biases against the other scores. These scorers prefer to assign scores that are overwhelmingly higher compared to other scorers. The average score across question-response pairs is the highest for this cluster, at 3.45. Meanwhile, this cluster has a relatively low score variance of 0.92, meaning that these scorers are consistent in scoring responses higher than other scorers do.
Cluster 3 shows a conservative scoring profile, with small, positive biases towards the middling scores 1, 2, and 3 and a strong, negative bias against the top score 4. The average score across question-response pairs is 2.41 for this cluster with a variance of 1.4, which is high considering that scorers in this cluster rarely use the top score category, indicating that their scoring behavior is not highly consistent.
Cluster 4 shows an unbiased scoring profile, with a low bias towards or against any score category, and a slight preference for the top score category, 4. This cluster contains almost half of the scorers, which means that the majority of scorers are reliable (their scores depend mostly on the actual quality of the response, i.e., the \(\mathbf{w}_{c}^{T}\mathbf{r}_{i}\) term in Eq. 1, rather than the bias term).
Cluster 5 shows a polarizing scoring profile, with strong, positive biases toward both the lowest score, 0, and the highest score, 4, while having strong, negative biases against the score categories in between. Scorers in this cluster often score a response as all or nothing, using the intermediate score values sparingly. The average score across question-response pairs is 2.55 for this cluster with a variance of 1.81, the highest among all clusters, which agrees with our observation that these scorers are highly polarizing and rarely judge a response to be partially correct.

Figure 1: Visualization of the clustering result on the scorer embeddings learned via the scorer-specific model. The left figure shows the 2-D visualization of the scorer embedding space, and the surrounding subfigures show the average bias for each cluster.
Cluster 6 shows a lenient scoring profile, with a strong, negative bias against the lowest score, 0, and a moderate, positive bias towards the next score, 1, with minimal bias across higher score categories. Scorers in this cluster tend to award students a single point for an incorrect response instead of no points at all. The average score across question-response pairs is 2.71 for this cluster with a middling variance of 1.33.
## 5 Conclusions and Future Work
In this paper, we created models to account for individual scorer preferences and tendencies in the automated scoring of short-answer math responses. Our models differ from previous work by focusing on capturing the subjective nature of scoring rather than textual content alone. They range from simple to complex, with some modeling bias and variance as functions of the question and response. Our experiments on a dataset with low inter-rater agreement showed that accounting for scorer preferences and tendencies improved performance by more than 0.02 in the Kappa metric. Qualitative analysis revealed clear patterns among scorers, some with biases towards certain scores, showing that scorer-specific settings can model scorer grading behavior well; in other words, grading behavior is well-represented in the hidden space. One practical extension could be adjusting the learned score bias by substituting a different scorer embedding, steering the model toward a different scorer's grading style. Future work can address limitations of our analysis. Our dataset only provides scorer IDs, lacking gender, race, or location; investigating biases with this additional information is crucial, including how teacher-student relationships or shared demographics impact biases. Our analysis also did not consider student demographic information, which is important for fairness studies. Additionally, our scorer models were only validated with a BERT-based textual representation model, so further testing is needed to determine their adaptability to traditional, feature-based automated scoring methods.
## 6 Acknowledgements
The authors thank the NSF (under grants 1917713, 2118706, 2202506, 2215193) for partially supporting this work.
| Cluster | Temperature | Score (mean ± std) | math tok (%) | img (%) | length |
| --- | --- | --- | --- | --- | --- |
| 1 | 1.013 | 1.685 ± 1.644 | 29.13 | 0.101 | 23.06 |
| 2 | 1.034 | 3.451 ± 0.919 | 32.12 | 1.286 | 24.40 |
| 3 | 0.996 | 2.415 ± 1.400 | 23.51 | 1.311 | 36.16 |
| 4 | 1.033 | 3.074 ± 0.991 | 29.48 | 0.304 | 21.94 |
| 5 | 1.026 | 2.558 ± 1.806 | 45.18 | 5.271 | 14.35 |
| 6 | 1.007 | 2.714 ± 1.331 | 33.83 | 1.403 | 13.34 |

Table 3: Detailed biases and variance (inverse of temperature) for each scorer profile, their observed scoring distributions, and average response features. The per-cluster bias and observed scoring distribution columns of the original table are plots and are omitted here. We normalize the observed scoring distributions to zero-mean, which makes them easier to visually compare against the learned biases. _math tok (%)_ is the percentage of math tokens in the response. _img (%)_ is the percentage of images in the response. _length_ is the number of word tokens in the response.
2305.08845 | Large Language Models are Zero-Shot Rankers for Recommender Systems | Recently, large language models (LLMs) (e.g., GPT-4) have demonstrated
impressive general-purpose task-solving abilities, including the potential to
approach recommendation tasks. Along this line of research, this work aims to
investigate the capacity of LLMs that act as the ranking model for recommender
systems. We first formalize the recommendation problem as a conditional ranking
task, considering sequential interaction histories as conditions and the items
retrieved by other candidate generation models as candidates. To solve the
ranking task by LLMs, we carefully design the prompting template and conduct
extensive experiments on two widely-used datasets. We show that LLMs have
promising zero-shot ranking abilities but (1) struggle to perceive the order of
historical interactions, and (2) can be biased by popularity or item positions
in the prompts. We demonstrate that these issues can be alleviated using
specially designed prompting and bootstrapping strategies. Equipped with these
insights, zero-shot LLMs can even challenge conventional recommendation models
when ranking candidates are retrieved by multiple candidate generators. The
code and processed datasets are available at
https://github.com/RUCAIBox/LLMRank. | Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, Wayne Xin Zhao | 2023-05-15T17:57:39Z | http://arxiv.org/abs/2305.08845v2 | # Large Language Models are Zero-Shot Rankers for Recommender Systems
###### Abstract
Recently, large language models (LLMs) (_e.g.,_ GPT-4) have demonstrated impressive general-purpose task-solving abilities, including the potential to approach recommendation tasks. Along this line of research, this work aims to investigate the capacity of LLMs that act as the ranking model for recommender systems. To conduct our empirical study, we first formalize the recommendation problem as a conditional ranking task, considering sequential interaction histories as _conditions_ and the items retrieved by the candidate generation model as _candidates_. We adopt a specific prompting approach to solving the ranking task by LLMs: we carefully design the prompting template by including the sequential interaction history, the candidate items, and the ranking instruction. We conduct extensive experiments on two widely-used datasets for recommender systems and derive several key findings for the use of LLMs in recommender systems. We show that LLMs have promising zero-shot ranking abilities, even competitive to or better than conventional recommendation models on candidates retrieved by multiple candidate generators. We also demonstrate that LLMs struggle to perceive the order of historical interactions and can be affected by biases like position bias, while these issues can be alleviated via specially designed prompting and bootstrapping strategies. The code to reproduce this work is available at [https://github.com/RUCAIBox/LLMRank](https://github.com/RUCAIBox/LLMRank).
+
Footnote †: Corresponding author.
## 1 Introduction
In the literature on recommender systems, most existing models are trained with user behavior data from a specific domain or task scenario [25; 13; 14] and often suffer from two major issues. First, it is difficult to explicitly understand real user preferences, since existing models mainly capture preferences from historical interaction behaviors, _e.g.,_ clicked item sequences [14; 19; 39; 17], limiting their expressive power for modeling complicated user interests (_e.g.,_ user intentions expressed in natural language). Second, these models are essentially "_narrow experts_", lacking the more comprehensive knowledge needed to solve complicated recommendation tasks that rely on background or commonsense knowledge [11].
To improve recommendation performance and interactivity, there have been increasing efforts that explore the use of pre-trained language models (PLMs) in recommender systems [10; 18; 31]. They
aim to explicitly capture user preference in natural language [10] or transfer rich world knowledge from text corpora [18; 16]. Despite their effectiveness, thoroughly fine-tuning the recommendation models on task-specific data is still a necessity, making them less capable of solving diverse recommendation tasks [18]. More recently, large language models (LLMs) have shown superior capabilities in commonsense reasoning, knowledge utilization, and task generalization [37], along with great potential to serve as zero-shot task solvers [32; 27]. Indeed, there are some preliminary attempts to employ LLMs for solving recommendation tasks [9; 29; 30; 5; 21; 34]. These studies mainly focus on discussing the possibility of building a capable recommender with LLMs, and report promising results based on preliminary experiments. In contrast, our focus is to conduct a more detailed and in-depth analysis of such abilities and to understand the factors behind them, _e.g.,_ how LLMs learn from historical interaction data.
In this paper, we aim to investigate the capacity of LLMs that serve as recommendation models by conducting a more detailed empirical study. Typically, recommender systems are developed in a pipeline architecture [4], consisting of multi-stage candidate generation (_retrieving more relevant items_) and ranking (_ranking relevant items at a higher position_) procedures. This work mainly focuses on the ranking stage of recommender systems, since LLMs are more expensive to run on a large-scale candidate set. Further, the ranking performance is sensitive to the retrieved top-ranked candidate items, which is more suitable to examine the subtle differences in the recommendation abilities of LLMs.
To carry out this study, we first formalize the recommendation process of LLMs as a _conditional ranking_ task. Given prompts that include sequential historical interactions as _"conditions"_, LLMs are instructed to rank a set of _"candidates"_ (_e.g.,_ items retrieved by candidate generation models), according to LLM's intrinsic knowledge about the relationships between candidate items and historically interacted items. Then we conduct controlled experiments to systematically study the empirical performance of LLMs as rankers by designing specific configurations for "conditions" and "candidates", respectively. Overall, we attempt to answer the following key questions:
* Can LLMs capture underlying user preferences from prompts with _sequential_ interactions?
* Can LLMs leverage their intrinsic knowledge to rank candidates retrieved by different practical strategies?
Our empirical experiments are conducted on two widely-used public datasets for recommender systems. Our experiments lead to several key findings that potentially shed light on how to develop LLMs as powerful ranking models for recommender systems. We summarize the key findings of this empirical study as follows:
* LLMs can utilize historical behaviors for personalized ranking, but _struggle to perceive the order_ of the given sequential interaction histories.
* By employing specifically designed promptings, such as recency-focused prompting and in-context learning, _LLMs can be triggered to perceive the order_ of sequential historical interactions, leading to improved ranking performance.
* LLMs outperform existing zero-shot recommendation methods, showing promising zero-shot ranking abilities, especially on candidates retrieved by multiple candidate generation models with different practical strategies.
* LLMs suffer from position bias and popularity bias while ranking, which can be alleviated by prompting or bootstrapping strategies.
## 2 General Framework for LLMs as Rankers
To investigate the recommendation abilities of LLMs, we first formalize the recommendation process as a conditional ranking task. Then, we describe a general framework that adapts LLMs to solve the recommendation task.
### Problem Formulation
Given the historical interactions \(\mathcal{H}=\{i_{1},i_{2},\ldots,i_{n}\}\) of one user (in chronological order of interaction time) as _conditions_, the task is to rank the _candidate_ items \(\mathcal{C}=\{i_{j}\}_{j=1}^{m}\), such that the items of
interest would be ranked at a higher position. In practice, the candidate items are usually retrieved by candidate generation models from the whole item set \(\mathcal{I}\) (\(m\ll|\mathcal{I}|\)) [4]. Further, we assume that each item \(i\) is associated with a descriptive text \(t_{i}\) following [18].
### Ranking with LLMs Using Natural Language Instructions
We use LLMs as ranking models to solve the above-mentioned task in an instruction-following paradigm [32]. Specifically, for each user, we first construct two natural language patterns that contain sequential interaction histories \(\mathcal{H}\) (_conditions_) and retrieved candidate items \(\mathcal{C}\) (_candidates_), respectively. Then these patterns are filled into a natural language template \(T\) as the final instruction. In this way, LLMs are expected to understand the instructions and output the ranking results as the instruction suggests. The overall framework of the ranking approach by LLMs is depicted in Figure 1. Next, we describe the detailed instruction design in our approach.
**Sequential historical interactions.** To investigate whether LLMs can capture user preferences from historical user behaviors, we include the sequential historical interactions \(\mathcal{H}\) in the instructions as inputs to the LLMs. To make LLMs aware of the sequential nature of the historical interactions, we propose three ways to construct the instructions (a minimal construction sketch follows the list):
* **Sequential prompting**: Arrange the historical interactions in chronological order. This way has also been used in prior studies [5]. For example, _"I've watched the following movies in the past in order: '0. Multiplicity', '1. Jurassic Park',..."_.
* **Recency-focused prompting**: In addition to the sequential interaction records, we can add an additional sentence to emphasize the most recent interaction. For example, _"I've watched the following movies in the past in order: '0. Multiplicity', '1. Jurassic Park',... Note that my most recently watched movie is Dead Presidents...."_.
* **In-context learning (ICL)**: ICL is a prominent prompting approach for LLMs to solve various tasks [37], where it includes demonstration examples (possibly with the task description) in the prompt and instructs LLMs to solve a specific task. For the personalized recommendation task, simply introducing examples of other users may introduce noises because different users usually have different preferences. By adapting ICL in our setting, we introduce demonstration examples by augmenting the input interaction sequence itself. In detail, we pair the prefix of the input interaction sequence and the corresponding successor as examples. For example, _"I've watched the following movies in the past in order: '0. Multiplicity', '1. Jurassic Park',..., then you should recommend Dead Presidents to me and now that I've watched Dead Presidents, then..."_.
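The sketch below illustrates how the history pattern could be assembled under each strategy; the exact template wording is paraphrased from the examples above, and the function name is ours.

```python
def build_history_pattern(history: list, strategy: str = "sequential") -> str:
    """Encode the sequential interaction history H as a natural-language pattern."""
    listed = ", ".join(f"'{idx}. {title}'" for idx, title in enumerate(history))
    pattern = f"I've watched the following movies in the past in order: {listed}."
    if strategy == "recency":     # emphasize the most recent interaction
        pattern += f" Note that my most recently watched movie is {history[-1]}."
    elif strategy == "icl":       # use the prefix/successor pair as a demonstration
        prefix = ", ".join(f"'{idx}. {t}'" for idx, t in enumerate(history[:-1]))
        pattern = (f"I've watched the following movies in the past in order: {prefix}, "
                   f"then you should recommend {history[-1]} to me and now that "
                   f"I've watched {history[-1]}, then...")
    return pattern
```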
**Retrieved candidate items.** Typically, the candidate items to be ranked are first retrieved by several candidate generation models [4]. To rank these candidates with LLMs, we also arrange the candidate items \(\mathcal{C}\) in a sequential manner. For example, _"Now there are 20 candidate movies that I can watch next: '0. Sister Act', '1. Sunset Blvd',..."_. Note that, following the classic candidate generation approach [4], there is no specific order for candidate items. We apply a set union to the retrieved results of the different candidate generation models, and randomly assign each candidate item a position. In this work, we consider a relatively small pool of candidates, keeping \(20\) candidate items (_i.e., \(m=20\)_) for ranking. It has been shown that LLMs are sensitive to the order of
Figure 1: An overview of the proposed LLM-based zero-shot personalized ranking method.
examples in prompts [38, 22]. As a result, we generate different orders for the candidate items in the prompts, which enables us to further examine whether the ranking results of LLMs are affected by the arrangement order of the candidates, _i.e.,_ position bias, and how to alleviate it via bootstrapping.
**Ranking with large language models.** Existing studies show that LLMs can follow natural language instructions to solve diverse tasks in a zero-shot or few-shot setting [32, 37]. To use LLMs as ranking models, we finally integrate the above-mentioned patterns into the instruction template \(T\). An example instruction template can be given as: "_[pattern that contains sequential historical interactions \(\mathcal{H}\)] [pattern that contains retrieved candidate items \(\mathcal{C}\)] Please rank these movies by measuring the possibilities that I would like to watch next most, according to my watching history. You MUST rank the given candidate movies. You cannot generate movies that are not in the given candidate list."_.
**Parsing the output of LLMs.** By feeding the instructions into LLMs, we can obtain the ranking results of LLMs for recommendation. Note that the output of LLMs is still natural language text; we parse the output with heuristic text-matching methods and ground the recommendation results in the specified item set. In detail, when the item text is short and discriminative, like movie titles, we can directly perform efficient substring matching, e.g., with KMP [20], between the LLM outputs and the text of the candidate items. Otherwise, we can assign an index to each candidate item and instruct the LLMs to output the ranked indices directly. Although candidate items are included in the prompts, we found that LLMs have a tendency to generate items that are out of the candidate set. However, the proportion of this error is very small for GPT-3.5, about 3%. In this case, we can either remind LLMs of this error or simply treat it as an incorrect recommendation.
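A simplified sketch of this grounding step is shown below; Python's built-in substring search stands in for an explicit KMP implementation, and pushing unmentioned items to the bottom of the ranking is one reasonable fallback rather than the paper's specified behavior.

```python
def parse_ranking(llm_output: str, candidates: list) -> list:
    """Ground the LLM's free-text ranking onto the candidate set via substring matching."""
    keyed = []
    for item in candidates:
        pos = llm_output.find(item)                    # first occurrence of the title
        keyed.append((pos if pos >= 0 else float("inf"), item))
    return [item for _, item in sorted(keyed, key=lambda pair: pair[0])]
```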
## 3 Empirical Studies
We aim to examine the effect of various configurations, including sequential historical interactions \(\mathcal{H}\), candidates \(\mathcal{C}\), and template \(\mathcal{T}\), and focus on answering two research questions: (a) can LLMs capture user preferences from prompts with user sequential historical interactions \(\mathcal{H}\)? (b) can LLMs leverage their intrinsic knowledge to rank candidates \(\mathcal{C}\) retrieved by different practical strategies?
**Datasets.** The experiments are conducted on two widely-used public datasets for recommender systems: (1) the movie rating dataset _MovieLens-1M_[12] (in short, **ML-1M**) where user ratings are regarded as interactions, and (2) one category from the _Amazon Review_ dataset [23] named **Games** where reviews are regarded as interactions. We filter out users and items with fewer than five interactions. Then we sort the interactions of each user by timestamp, with the oldest interactions first, to construct the corresponding historical interaction sequences. The movie/product titles are used as the descriptive text of an item.
**Evaluation configurations.** Following existing works [19, 18, 17], we apply the leave-one-out strategy for evaluation. For each historical interaction sequence, the last item is used as the ground-truth item. We adopt the widely used metric NDCG@N to evaluate the ranking results over the given \(m\) candidates, where \(N\leq m\).
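With a single ground-truth item per user, NDCG@N has a particularly simple form, since the ideal DCG is 1; a minimal sketch:

```python
import math

def ndcg_at_k(ranked_items: list, ground_truth, k: int) -> float:
    """NDCG@k for leave-one-out evaluation with one relevant item."""
    if ground_truth in ranked_items[:k]:
        rank = ranked_items.index(ground_truth)  # 0-based position in the ranking
        return 1.0 / math.log2(rank + 2)         # DCG of one hit; ideal DCG is 1
    return 0.0
```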
**Implementation details.** To ease the reproduction of this work, our experiments are conducted using a popular open-source recommendation library RecBole[36, 35, 33]. For sequential historical user behaviors, we use the most recent \(50\) interactions by default. For LLM-based methods, we randomly sample \(200\) users along with their historical behaviors for each dataset. For conventional
| Dataset | #Users | #Items | #Interactions | Sparsity | Avg. \(\vert\mathcal{H}\vert\) | Avg. \(\vert t_{i}\vert\) |
| --- | --- | --- | --- | --- | --- | --- |
| ML-1M | 6,040 | 3,706 | 1,000,209 | 95.53% | 46.19 | 16.96 |
| Games | 50,547 | 16,859 | 389,718 | 99.95% | 7.02 | 43.31 |

Table 1: Statistics of the datasets after preprocessing. “Avg. \(\vert\mathcal{H}\vert\)” denotes the average length of historical user behaviors. “Avg. \(\vert t_{i}\vert\)” denotes the average number of tokens in the descriptive text of the items.
baseline methods like SASRec [19], we train them on all the interactions in the training set unless otherwise specified. The evaluated LLM is accessed by calling OpenAI's API gpt-3.5-turbo1. The hyperparameter temperature for calling the LLMs is set to \(0.2\). All reported results are the average of at least three repeated runs to reduce the effect of randomness.
Footnote 1: [https://openai.com/blog/introducing-chatgpt-and-whisper-apis](https://openai.com/blog/introducing-chatgpt-and-whisper-apis)
### Can LLMs Understand Prompts that Involve Sequential Historical User Behaviors?
In existing literature, historical user behaviors are mainly modeled as graphs [13] or sequences [14] by specially designed recommendation models. In contrast, our work encodes historical user behaviors as the prompts and feeds them into large language models not specifically trained for recommendation. In this part, we first investigate whether LLMs can leverage these historical user behaviors for making accurate recommendations. By designing different configurations of \(\mathcal{H}\), we aim to examine: (1) whether LLMs can understand prompts with historical behaviors and rank correspondingly, (2) whether the sequential nature is perceived and utilized for understanding user preferences, and (3) whether LLMs can make better use of long-range user histories.
As experiments in this section mainly focus on the effect of historical user behaviors, we employ a simple strategy for constructing the candidate sets to evaluate the LLMs' ranking performance. Specifically, for each ground-truth item, we randomly retrieve \(m-1\) items from the entire item set \(\mathcal{I}\) as negative instances, where \(m=20\). These candidate items are then randomly shuffled before constructing the prompts.
**LLMs can give personalized recommendations corresponding to prompts with historical behaviors.** In this section, we examine whether LLMs can understand prompts with historical user behaviors and give personalized recommendations. Given prompts with sequential user historical behaviors, the task is to rank a candidate set of \(20\) items, including one ground-truth item and \(19\) randomly sampled negatives. By analyzing historical behaviors, items of interest should be ranked at a higher position. We compare the ranking results of three LLM-based methods: (a) _Ours_, which ranks with LLMs as we have described in Section 2.2. Historical user behaviors are encoded into prompts using the "sequential prompting" strategy. (b) _No History_, where the historical user behaviors are removed from instructions, and (c) _Fake History_, where we replace all the items in original historical behaviors with randomly sampled items as fake historical behaviors.
From Figure 2(a), we can see that _Ours_ has better performance than variants with no historical behaviors or fake historical behaviors. The results suggest that LLMs can effectively leverage prompts with historical user behaviors to make personalized recommendations, while unrelated historical behaviors may also hurt the ranking performance of LLMs.
**LLMs struggle to perceive the order of the given historical user behaviors.** In Figure 2(b), we further investigate the ability of LLMs to recognize the sequential nature of user historical behaviors. The variant with the suffix _(Random Order)_ refers to shuffling the historical user behaviors randomly
Figure 2: Analysis of whether LLMs make use of historical user behaviors and whether LLMs perceive the order of interaction histories.
before feeding to the model (either _Ours_ or _SASRec_). By comparing _SASRec_ and _SASRec_ (_Random Order_), we can see that the order of sequential historical interactions is vital for the item ranking. However, the performance of _Ours_ and _Ours_ (_Random Order_) is quite similar, indicating that LLMs are not sensitive to the order of given historical user behaviors.
Moreover, in Figure 2(c), we vary the number of the latest historical user behaviors (\(|\mathcal{H}|\)) used to construct the prompt from \(5\) to \(50\). The results show that increasing the number of historical user behaviors does not improve the ranking performance and can even hurt it. We speculate that this phenomenon arises because LLMs have difficulty understanding the order and instead consider all historical behaviors equally. Therefore, too many historical user behaviors (_e.g.,_\(|\mathcal{H}|=50\)) may overwhelm LLMs and lead to a performance drop. In contrast, a relatively small \(|\mathcal{H}|\) enables LLMs to concentrate on the most recently interacted items, resulting in better recommendation performance. These results can be summarized as our first key observation:
**Observation 1.** LLMs can utilize historical behaviors for personalized ranking, but _struggle to perceive the order_ of the given sequential interaction histories.
**Triggering LLMs to perceive the interaction order.** Based on the above observations, we find that it is difficult for LLMs to perceive the order in interaction histories with a sequential prompting strategy. As a result, we propose two alternative prompting strategies, aiming to elicit the order-perceiving abilities of LLMs. The core idea is to emphasize the recently interacted items. Detailed descriptions of the proposed recency-focused prompting and in-context learning strategies are given in Section 2.2.
In Table 2, we can see that both recency-focused prompting and in-context learning can improve the ranking performance of LLMs. Recency-focused prompting yields better top-\(1\) accuracy, while in-context learning performs better on datasets with longer historical behaviors. The above results can be summarized as the following key observation:
**Observation 2.** By employing specifically designed promptings, such as recency-focused prompting and in-context learning, _LLMs can be triggered to perceive the order_ of historical user behaviors, leading to improved ranking performance.
### How Well Can LLMs Rank Candidate Items in a Zero-Shot Setting?
In this section, we further investigate how well can LLMs rank the candidates. We first conduct benchmarking experiments to compare the ranking performance between different methods on random candidates, including conventional recommendation models, existing zero-shot recommendation
| Type | Method | ML-1M N@1 | ML-1M N@5 | ML-1M N@10 | ML-1M N@20 | Games N@1 | Games N@5 | Games N@10 | Games N@20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| full | Pop | 19.50 | 40.70 | 48.16 | 51.78 | 25.50 | 45.16 | 50.75 | 55.58 |
| full | BPRMF [25] | 32.50 | 54.07 | 60.17 | 61.93 | 39.50 | 59.07 | 62.66 | 65.92 |
| full | SASRec [19] | 57.00 | 73.14 | 76.25 | 77.47 | 52.50 | 70.71 | 73.25 | 74.74 |
| zero-shot | BM25 [26] | 4.00 | 13.14 | 20.53 | 33.70 | 9.00 | 22.85 | 31.08 | 40.55 |
| zero-shot | UniSRec [18] | 9.00 | 20.08 | 26.72 | 38.24 | 22.50 | 37.74 | 42.64 | 51.03 |
| zero-shot | VQ-Rec [16] | 9.50 | 19.52 | 27.11 | 38.72 | 8.00 | 19.36 | 29.43 | 39.06 |
| zero-shot | Sequential | 21.50 | 40.71 | 46.61 | 52.24 | 23.17 | 44.10 | 49.06 | 53.62 |
| zero-shot | Recency-Focused | **23.33** | 42.07 | 48.80 | 53.73 | **23.83** | **45.69** | **50.31** | **55.45** |
| zero-shot | In-Context Learning | 22.67 | **44.51** | **49.97** | **54.60** | 19.67 | 45.30 | 49.68 | 54.05 |

Table 2: Performance comparison of different zero-shot recommendation models on _randomly retrieved candidates_. Ground-truth items are guaranteed to be included in the candidate sets. “full” denotes recommendation models that are trained on the target dataset, and “zero-shot” denotes recommendation models that are not trained on the target dataset but could be pre-trained. The three zero-shot prompting strategies are based on gpt-3.5-turbo. We highlight the best performance among zero-shot recommendation methods in **bold**. N@\(K\) denotes NDCG@\(K\).
methods, and the proposed LLM-based methods. Next, we evaluate LLM-based methods on candidates with hard negatives that are retrieved by different strategies to further investigate what does the ranking of LLMs depend on? Then, we present another benchmark to compare the ranking performance of different methods on candidates retrieved by multiple candidate generation models to simulate a more practical and difficult setting.
**LLMs have promising zero-shot ranking abilities.** In Table 2, we conduct experiments to compare the ranking abilities of LLM-based methods with existing methods. We follow the same setting as in Section 3.1, where \(|\mathcal{C}|=20\) and candidate items (other than the ground-truth item) are randomly retrieved. We include three conventional recommendation models that are trained on the training set, _i.e.,_ Pop (recommending according to item popularity), BPRMF [25], and SASRec [19]. We also evaluate three zero-shot recommendation methods that are not trained on the target datasets, including BM25 [26] (ranking according to the textual similarity between candidates and historical interactions), UniSRec [18], and VQ-Rec [16]. For UniSRec and VQ-Rec, we use their publicly available pre-trained models. We do not include ZESRec [7] because there is no pre-trained model released, and UniSRec can be regarded as an improved version of this model [18]. For LLM-based methods, we include the three variants that use the different prompting strategies described in Section 2.2, named _Sequential_, _Recency-Focused_, and _In-Context Learning_.
From Table 2, we can see that LLM-based methods outperform existing zero-shot recommendation methods by a large margin, showing promising zero-shot ranking abilities. We would highlight that it is difficult to conduct zero-shot recommendation on the ML-1M dataset, due to the difficulty of measuring the similarity between movies merely by the similarity of their titles. We can observe that LLM-based models still achieve promising zero-shot ranking performance on ML-1M, as they can use intrinsic world knowledge to measure the similarity between movies and make recommendations. However, we can also see that there are still gaps between zero-shot recommendation methods and conventional methods, indicating the importance of developing LLM-based recommendation methods that can learn from interaction data [34].
**LLMs rank candidates based on item popularity, text features as well as user behaviors.** To further investigate how LLMs rank the given candidates, we evaluate the ranking performance of LLMs on candidates that are retrieved by different candidate generation methods. These candidates can be viewed as hard negatives for ground-truth items, which can be used to measure the ranking ability of LLMs for specific categories of items. We consider two categories of strategies to retrieve
Figure 3: Ranking performance measured by NDCG@10 (%) on hard negatives retrieved by different strategies.
the candidates: (1) _content-based methods_ like _BM25_[26] and _BERT_[6] retrieve candidates based on the text feature similarities between historical interacted items and candidates, and (2) _interaction-based methods_, including _Pop_ (recommend based on item popularity), _BPRMF_[25], _GRU4Rec_[14], and _SASRec_[19], retrieve items using conventional recommendation models trained on user-item interactions. Given candidates, we compare the ranking performance of the LLM-based model (_Ours_) and representative content-based (_BM25_) and interaction-based (_Pop_ and _SASRec_) methods.
From Figure 3 we can see that the ranking performance of the LLM-based method varies across candidate sets and datasets. (1) On ML-1M, our LLM-based method cannot rank well on candidate sets that contain popular items (_e.g.,_ those retrieved by _Pop_ and _BPRMF_), indicating that the LLM-based method's recommendations largely depend on item popularity on the ML-1M dataset. (2) On Games, we observe that _Ours_ has similar ranking performance on popular candidates and on textually similar candidates, showing that item popularity and text features contribute similarly to the ranking of LLMs. (3) On both datasets, the ranking performance of _Ours_ is affected by hard negatives retrieved by interaction-based candidate generation methods, but not as severely as ranking models that are purely based on interactions, like _SASRec_. The above results demonstrate that LLM-based methods do not consider just a single aspect for ranking, but make use of item popularity, text features, and even user behaviors. The weights of these three aspects in affecting the ranking performance may also vary across datasets.
**LLMs can effectively rank candidates retrieved by multiple candidate generation models.** For real-world two-stage recommender systems [4], the items to be ranked are usually retrieved by multiple candidate generation models. As a result, we also conduct benchmarking experiments in a more practical setting. We use seven candidate generation models to retrieve items, _i.e., Random_, _BM25_, _BERT_, _Pop_, _BPRMF_, _GRU4Rec_, and _SASRec_, covering typical content-based and interaction-based methods. The top-\(3\) items retrieved by each candidate generation model are merged into a candidate set containing a total of \(21\) items. Note that, as a more practical setting, we do not add the ground-truth item to each candidate set as in the setting described in Section 3.1. For _Ours_, inspired by the experiments in Section 3.1, we use the recency-focused prompting strategy to encode \(|\mathcal{H}|=5\) sequential historical interactions into prompts for decent ranking performance.
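The candidate construction can be sketched as below; the `top_k` retriever interface is hypothetical, and padding the union with random items when retrievers overlap is our assumption about how exactly \(21\) candidates are obtained.

```python
import random

def build_candidate_set(retrievers, user, item_pool, size: int = 21):
    """Merge each retriever's top-3 items into one shuffled candidate set."""
    candidates = []
    for retriever in retrievers:
        for item in retriever.top_k(user, k=3):  # hypothetical retriever interface
            if item not in candidates:
                candidates.append(item)
    while len(candidates) < size:                # pad if retrieved sets overlapped
        item = random.choice(item_pool)
        if item not in candidates:
            candidates.append(item)
    random.shuffle(candidates)                   # no specific order for candidates
    return candidates
```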
From Table 3, we can see that the LLM-based ranking model (_Ours_) yields the best performance over the compared recommendation models on most metrics (\(6\) of \(8\)), even beating the conventional recommendation model _SASRec_ that has been trained on the target datasets. The results demonstrate the strong zero-shot ranking ability of LLMs on candidates retrieved by multiple candidate generation models. Given this phenomenon, we conjecture that LLMs can make use of their intrinsic world knowledge to rank the candidates by comprehensively considering popularity, text features, and user behaviors. In comparison, existing models (as _narrow experts_) may lack the ability to rank items in such a complicated setting. The above findings can be summarized as:
| Type | Method | ML-1M N@1 | ML-1M N@5 | ML-1M N@10 | ML-1M N@20 | Games N@1 | Games N@5 | Games N@10 | Games N@20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| full | Pop | 0.00 | 1.19 | 3.32 | 5.28 | 0.00 | 0.39 | 1.57 | 1.69 |
| full | BPRMF [25] | 1.50 | 3.01 | 4.78 | 6.39 | 0.50 | 0.72 | 1.03 | 1.82 |
| full | SASRec [19] | 2.00 | **7.13** | **8.38** | 8.52 | 0.00 | 2.23 | 2.56 | 2.56 |
| zero-shot | BM25 [26] | 0.50 | 0.75 | 2.20 | 4.95 | 0.00 | 0.60 | 1.10 | 1.63 |
| zero-shot | UniSRec [18] | 1.50 | 4.09 | 5.37 | 6.77 | 0.00 | 1.86 | 2.03 | 2.31 |
| zero-shot | VQ-Rec [16] | 0.00 | 1.68 | 3.52 | 5.22 | 0.00 | 0.76 | 1.12 | 1.73 |
| zero-shot | Ours | **3.83** | 6.20 | 7.81 | **8.65** | **1.00** | **2.24** | **2.57** | **2.74** |

Table 3: Performance comparison of different zero-shot recommendation models on _candidates retrieved by multiple candidate generation models_. Ground-truth items are _not_ guaranteed to be included in the candidate sets. “full” denotes recommendation models that are trained on the target dataset, and “zero-shot” denotes recommendation models that are not trained on the target dataset but could be pre-trained. We highlight the best performance among _all_ recommendation methods in **bold**. N@\(K\) denotes NDCG@\(K\).
**Observation 3.** LLMs have promising zero-shot ranking abilities, especially on candidates retrieved by multiple candidate generation models with different practical strategies.
**LLMs cannot rank candidates well when the candidate set is large.** It has been a technical challenge to effectively model the semantics of long sequences by language models [8]. As a result, we would like to investigate whether LLMs can deal with a large set of candidates for ranking. We vary the number of candidates \(|\mathcal{C}|\) from \(5\) to \(50\) and report the ranking performance in Figure 4. We can see that the gap between LLMs and conventional recommendation models (_e.g.,_ SASRec) enlarges as \(|\mathcal{C}|\) increases, indicating that LLMs may face challenges when ranking a large set of candidate items.
### Do LLMs suffer from biases while ranking?
The biases and debiasing methods in conventional recommender systems have been widely studied [2]. For the proposed LLM-based zero-shot recommendation model, there could also be specific biases that affect the ranking of LLMs. In this section, we discuss two kinds of biases that LLM-based recommendation models suffer from, namely position bias and popularity bias. We also make discussions on how to alleviate these biases.
**The order of candidates affects the ranking results of LLMs.** For conventional ranking methods, the order of retrieved candidates usually will not affect the ranking results. However, for our LLM-based approach that is described in Section 2.2, the candidate items are arranged in a sequential manner and encoded into a prompt as inputs of LLMs. It has been shown that LLMs are generally sensitive to the order of examples in the prompts for NLP tasks [38, 22]. As a result, we also conduct experiments to examine whether the order of candidates affects the ranking performance of LLMs. We evaluate the performance of LLMs on the same candidate sets that are used in Section 3.1. The only difference is that we control the order of these candidates in the prompts by purpose, _i.e.,_ we make the ground-truth items appear at a certain position while constructing prompts. We vary the position of ground-truth items at \(\{0,5,10,15,19\}\) and present the ranking results in Figure 5(a). We can see that the ranking performance varies when the ground-truth items appear at different positions. Specifically, the ranking performance drops significantly when the ground-truth items appear at the last few positions. The results indicate that the ranking performance of LLMs is affected by the order of candidates, _i.e., position bias_ for LLM-based rankers, while conventional recommendation models are usually _not_ influenced.
**Alleviating position bias via bootstrapping.** From Figure 5(a), we can see that LLMs tend to rank the candidate items lower if they locate at a later position in the prompts. As the candidate items are randomly assigned to each position, a simple strategy to alleviate position bias is to bootstrap the ranking process. We may rank the candidate set repeatedly for \(B\) times, with candidates randomly shuffled at each round, so that one candidate item may appear at different positions to be ranked.
Figure 4: Ranking performance comparison between LLM-based model (_Ours_) and conventional recommendation model (_SASRec_) on different sizes of candidate sets.
After ranking, an item at a higher position is given a higher score, and we merge the ranking scores across rounds to derive the final ranking. In Figure 5(c), we follow the setting in Section 3.1 and apply the bootstrapping strategy to _Ours_, ranking each candidate set \(3\) times. We can see that bootstrapping improves the ranking performance on both datasets.
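The aggregation can be sketched as a Borda-style score sum over shuffled rounds; `llm_rank` stands for any prompt-and-parse call to the LLM, and the linear position-to-score mapping is our assumption.

```python
import random

def bootstrap_rank(llm_rank, candidates, num_rounds: int = 3):
    """Rank several random shuffles of the candidates and aggregate the positions."""
    scores = {item: 0 for item in candidates}
    for _ in range(num_rounds):
        shuffled = random.sample(candidates, len(candidates))  # fresh random order
        for position, item in enumerate(llm_rank(shuffled)):
            scores[item] += len(candidates) - position         # higher rank, more score
    return sorted(candidates, key=lambda item: -scores[item])
```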
**Popularity degrees of candidates affect ranking results of LLMs.** For popular items, the associated text may also appear frequently in the pre-training corpora of LLMs. For example, a best-selling book would be widely discussed on the Web. Thus, we would like to examine whether the ranking results are affected by the popularity degrees of candidates. However, it is difficult to directly measure the item text popularity in the pre-training corpora. As a result, we hypothesize that the text popularity can be reflected and indirectly measured by item frequency in one recommendation dataset. In Figure 5(b), we report the item popularity score (measured by the normalized item frequency of appearance in the training set) at each position of the ranked item lists. We can see that popular items tend to be ranked at higher positions. Like conventional recommendation models, LLMs also suffer from popularity bias and favor recommending more popular items.
**Making LLMs focus on historical interactions helps reduce popularity bias.** From Figure 5(b), we can see that LLMs tend to rank popular items at higher positions. As the observations in Section 3.2 indicate, the reason could be that LLMs do not leverage historical interactions well and have to make recommendations mainly based on item popularity. From the experiments in Figure 2(c), we know that LLMs make better use of historical interactions when fewer of them are used. We therefore vary the number of historical interactions to see whether popularity bias can be reduced once LLMs focus more on user histories. In Figure 5(d), we compare the popularity scores (measured by normalized item frequency) of the best-ranked items. As the number of historical interactions decreases, the popularity score decreases as well, suggesting that popularity bias can be reduced when LLMs are forced to focus on historical interactions. From the above experiments, we can conclude the following:
**Observation 4.** LLMs suffer from position bias and popularity bias while ranking, which can be alleviated by specially designed prompting or bootstrapping strategies.
### How Do LLMs Gain Recommendation Abilities?
In this section, we explore which factors or techniques contribute to the ranking abilities of LLMs. Specifically, we examine the effect of two factors, instruction tuning and model scaling, because both techniques have been shown to be key to the abilities of LLMs [37, 1, 32]. We take a focused look at how they improve the recommendation performance of LLMs.
**Instruction tuning improves the ranking abilities of LLMs.** Existing works show that instruction tuning significantly improves the generalization abilities of LLMs on unseen tasks [27, 32]. Here we would like to investigate whether the ability to rank items according to historical interactions using
Figure 5: Biases and debiasing methods in the ranking of LLMs. (a) The position of candidates in the prompts influences the ranking results. (b) LLMs tend to recommend popular items. (c) Bootstrapping alleviates position bias. (d) Focusing on historical interactions reduces popularity bias.
LLMs can be improved by instruction tuning. Following the experimental settings in Section 3.1, we replace the default LLM in the proposed LLM-based ranking method, gpt-3.5-turbo, with (1) LLMs that have not been instruction-tuned, like LLaMA [28], and (2) LLMs that have been fine-tuned on instructions, including Vicuna [3] and text-davinci-003\({}^{2}\), and then instruct these LLMs to perform the ranking task. In Figure 6, by comparing Vicuna-7/13B to LLaMA-7/13B, we can see that instruction-tuned LLMs outperform LLMs that have not been instruction-tuned. The results demonstrate that instruction tuning improves the ranking abilities of LLMs, even if the instructions are not specially designed for recommendation tasks.
Footnote 2: [https://platform.openai.com/docs/api-reference](https://platform.openai.com/docs/api-reference)
**Model scaling improves the ranking performance of LLMs.** As the scaling law shows, the performance of LLMs on various downstream tasks generally increases as the model size and the amount of training data are scaled up [24; 15; 28]. We follow the experimental setting in Section 3.1 and replace the base LLM with LLaMA of different sizes (7B, 13B, 33B, and 65B) to investigate the effect of model scaling on the zero-shot recommendation task. From Figure 6, we can see that the ranking performance of LLMs increases with model size (_i.e.,_ LLaMA-65B \(>\) LLaMA-33B \(>\) LLaMA-13B). We can also see that LLMs larger than 100B yield superior ranking abilities, by comparing text-davinci-003 and gpt-3.5-turbo with the smaller LLMs. These results indicate that the zero-shot recommendation task also follows the scaling law: model scaling improves the ranking performance of LLMs.
## 4 Conclusion
In this work, we investigated the capacities of LLMs acting as the ranking model for recommender systems. In detail, we formalized the recommendation task as a conditional ranking task, treating sequential historical interactions as conditions and the items retrieved by candidate generation models as candidates. To rank with LLMs, we constructed natural language prompts containing the historical interactions as well as the candidates, and proposed several specially designed prompting strategies to trigger the ability of LLMs to perceive the order of sequential behaviors. We also introduced a simple bootstrapping strategy to alleviate the position bias that LLM-based ranking models may suffer from. Extensive empirical studies indicate that LLMs have promising zero-shot ranking abilities. We also summarized several key findings that shed light on promising directions for further improving the ranking abilities of LLMs, including (1) better perceiving the order of sequential historical interactions, (2) making better use of more historical interactions and candidates, and (3) alleviating position bias and popularity bias. For future work, we consider developing technical approaches to solve the above key challenges when deploying LLMs as zero-shot rankers. We would also like to develop LLM-based recommendation models that can be efficiently tuned on downstream user behaviors for effective personalized recommendations.
Figure 6: Ranking performance comparison using different LLMs. |
2302.12662 | FedDBL: Communication and Data Efficient Federated Deep-Broad Learning
for Histopathological Tissue Classification | Histopathological tissue classification is a fundamental task in
computational pathology. Deep learning-based models have achieved superior
performance but centralized training with data centralization suffers from the
privacy leakage problem. Federated learning (FL) can safeguard privacy by
keeping training samples locally, but existing FL-based frameworks require a
large number of well-annotated training samples and numerous rounds of
communication which hinder their practicability in the real-world clinical
scenario. In this paper, we propose a universal and lightweight federated
learning framework, named Federated Deep-Broad Learning (FedDBL), to achieve
superior classification performance with limited training samples and only
one-round communication. By simply associating a pre-trained deep learning
feature extractor, a fast and lightweight broad learning inference system and a
classical federated aggregation approach, FedDBL can dramatically reduce data
dependency and improve communication efficiency. Five-fold cross-validation
demonstrates that FedDBL greatly outperforms the competitors with only
one-round communication and limited training samples, while it even achieves
comparable performance with the ones under multiple-round communications.
Furthermore, due to the lightweight design and one-round communication, FedDBL
reduces the communication burden from 4.6GB to only 276.5KB per client using
the ResNet-50 backbone at 50-round training. Since no data or deep model
sharing across different clients, the privacy issue is well-solved and the
model security is guaranteed with no model inversion attack risk. Code is
available at https://github.com/tianpeng-deng/FedDBL. | Tianpeng Deng, Yanqi Huang, Guoqiang Han, Zhenwei Shi, Jiatai Lin, Qi Dou, Zaiyi Liu, Xiao-jing Guo, C. L. Philip Chen, Chu Han | 2023-02-24T14:27:41Z | http://arxiv.org/abs/2302.12662v2 | FedDBL: Communication and Data Efficient Federated Deep-Broad Learning for Histopathological Tissue Classification
###### Abstract
Histopathological tissue classification is a fundamental task in computational pathology. Deep learning-based models have achieved superior performance, but centralized training with data centralization suffers from the privacy leakage problem. Federated learning (FL) can safeguard privacy by keeping training samples locally, but existing FL-based frameworks require a large number of well-annotated training samples and numerous rounds of communication, which hinders their practicability in real-world clinical scenarios. In this paper, we propose a universal and lightweight federated learning framework, named Federated Deep-Broad Learning (FedDBL), that achieves superior classification performance with limited training samples and only one round of communication. By simply associating a pre-trained deep learning feature extractor, a fast and lightweight broad learning inference system and a classical federated aggregation approach, FedDBL can dramatically reduce data dependency and improve communication efficiency. Five-fold cross-validation demonstrates that FedDBL greatly outperforms the competitors with only one-round communication and limited training samples, while it even achieves comparable performance with the ones under multiple-round communications. Furthermore, due to the lightweight design and one-round communication, FedDBL reduces the communication burden from 4.6GB to only 276.5KB per client using the ResNet-50 backbone at 50-round training. Since neither data nor deep models are shared across different clients, the privacy issue is well addressed and model security is guaranteed, with no risk of model inversion attacks. Code is available at [https://github.com/tianpeng-deng/FedDBL](https://github.com/tianpeng-deng/FedDBL).
Federated Learning, Data and Communication Efficiency, Deep-Broad Learning, Histopathological Tissue Classification
## I Introduction
Tissue classification ([1, 2]), also known as tissue phenotyping, aims to use computer algorithms to automatically recognize different tissue types in Whole Slide Images (WSIs). It is one of the fundamental tasks in computational pathology ([3]), as it can parse the landscape of the tumor microenvironment for precise prediction of cancer diagnosis ([4]), prognosis ([5, 6]) and treatment response ([7]). With the advancement of deep learning algorithms and the growing availability of open data ([8, 9]), this problem has been well studied, with outstanding classification performance ([10]). In clinical practice, however, it still faces ethical, regulatory and legal obstacles, as centralized data collection may lead to privacy leakage.
The Federated Learning (FL) framework ([11]) provides a promising solution to protect user privacy by sharing only intermediate results or model parameters instead of the raw data, and has been widely studied in medical image analysis ([12, 13]). However, only a few attempts ([14, 15, 16]) have been made in computational pathology, and the research progress still lags behind other medical image modalities ([17]) due to the following two obstacles.
The first is the data dependency problem. Since most existing FL frameworks are built on deep learning models, they are data-hungry and commonly require a large amount of well-annotated samples. However, labeling histopathological images is time-consuming, expertise-dependent and expensive ([18, 19]). Without enough training samples, existing models may not achieve favorable performance. The other obstacle is the communication overhead. The training procedure of traditional FL models needs multiple cloud-client iterations to achieve global convergence, and deep learning models have tens of millions of parameters, which greatly increases the communication burden over multiple communication rounds. A lack of training samples may further amplify this burden, because deep learning models commonly require more iterations to converge when training samples are limited. Moreover,
frequent communications may increase the chance of attacks, such as man-in-the-middle attacks ([20]).
Therefore, there is an urgent need for a data-efficient and communication-efficient FL model for histopathological tissue classification. In this paper, we propose a simple and effective solution that considers not only the data sharing problem, but also data dependency, communication efficiency, model robustness and resistance to model inversion attacks. Our proposed model, Federated Deep-Broad Learning (_FedDBL_ for short), contains three integrated components: a common federated learning framework, a pre-trained deep learning (DL) backbone and a broad learning (BL) inference system ([21]). The federated learning framework enables decentralized training and avoids data sharing across different medical centers or institutions. The pre-trained DL backbone provides stable and robust deep features when training labels are scarce, while also effectively avoiding model inversion attacks, since no gradients are back-propagated through it. The BL system is a lightweight classifier with good approximation capability, which greatly shortens the transmission time and overcomes the data dependency problem. Fig. 1 comprehensively demonstrates the strengths of FedDBL compared with centralized learning and conventional federated learning. To the best of our knowledge, this is the first FL-based model for histopathological tissue classification.
Extensive experiments with five-fold cross-validation demonstrate the superiority of FedDBL in several aspects, including data dependency, communication efficiency, flexibility and the practicality of model encryption. With enough training data, FedDBL mostly outperforms conventional FL strategies and achieves comparable or even better classification performance than the centralized learning strategy. When the training samples are reduced in the data dependency experiment, FedDBL still maintains high performance and greatly outperforms both centralized learning and conventional FL frameworks, even with only 1% of the training samples. FedDBL is also compatible with any deep learning architecture, supporting data- and communication-efficient histopathological tissue classification. Another spotlight of FedDBL is communication efficiency: compared with conventional FL frameworks under traditional 50-round iterative training, FedDBL's one-round training reduces the upload workload from 4.609GB to 276.5KB (over 17,000 times smaller) with the ResNet-50 backbone. Thanks to the tiny model size, FedDBL is also computationally efficient in model encryption, which further strengthens privacy protection. The main contributions of this paper can be summarized as follows:
* We propose a novel federated learning approach (FedDBL) for histopathological tissue classification to preserve patients' privacy. To the best of our knowledge, FedDBL is the first FL-based approach tailored for histopathological tissue classification.
* FedDBL is a simple, effective and easy-to-use algorithm that associates three classical methods, including a robust pre-trained deep learning feature extractor, a fast broad learning inference system and a simple federated learning framework.
* FedDBL is the first study that considers communication efficiency and data efficiency simultaneously, reducing the communication overhead of each client even with extremely limited training samples.
* Extensive experiments demonstrate that FedDBL drastically relieves the dependence on training data and reduces the communication overhead while maintaining outstanding classification performance, which promotes its clinical practicability.
## II Related Works
### _Histopathological Tissue Classification_
High-resolution WSIs offer a wide range of tissue phenotypes, whose pixel-level annotation is time-consuming and requires a great deal of biomedical knowledge ([3]), making patch-level histopathological tissue classification an alternative solution for automated analysis in computer-aided tumor diagnosis ([22, 23, 8]).
Owing to the rapid development of computer vision, the most popular natural image classification models can be transferred to histopathological tissue phenotyping. However, this approach still suffers from the data dependency problem and a huge annotation burden ([24]). Thus, various approaches have been
Fig. 1: Overall comparison among centralized training, traditional DL-based FL and our proposed FedDBL paradigms. (a) Centralized learning gathers data from all the clients which cannot protect the patient’s privacy. (b) Traditional DL-based FL preserves privacy by transmitting the model parameters to the central server without sharing the raw data. However, the communication overload highly depends on the model size and the number of communication rounds. (c) Our proposed FedDBL not only protects privacy, but also dramatically saves the communication burden through a super lightweight trainable broad learning system.
proposed to reduce the annotation effort. [25] proposed a multi-layer pseudo-supervision approach with a progressive dropout attention mechanism to convert patch-level labels into pseudo pixel-level labels; an extra classification gate mechanism reduced the false-positive rate for non-predominant category classification and improved the segmentation performance in return. [22] utilized a generative adversarial network (GAN) to generate pseudo samples to expand the training data. [26] cropped WSIs into tiles to train an uncertainty quantification model and addressed the problem of domain shift in external validation data. To avoid the need for image annotations, [27] employed unsupervised contrastive learning to obtain a robustly initialized model with a moderate feature representation of the histopathological feature space, incurring no annotation burden. Our previous study ([28]) introduced pyramidal deep-broad learning (PDBL) as a pluggable module for any CNN backbone to further improve histopathological tissue classification performance.
Besides that, another unexplored challenge is the patient privacy issue. Only a few attempts ([14, 29]) have been made in federated learning for computational pathology, which will be discussed in the following subsection. To the best of our knowledge, ours is the first study to consider privacy protection in histopathological tissue classification.
### _Federated Learning_
#### II-B1 Federated Learning in Medical Image Analysis
Because of ethical concerns, federated learning (FL) has been widely adopted in medical applications to preserve patients' privacy ([12, 13, 30]). In medical imaging, FL has witnessed a boost in interest ([31]), for example in MRI reconstruction ([32, 33]) and CT lesion segmentation ([34]). During the COVID-19 pandemic, applications drawing on data from different medical centers or even different countries became the most urgent demand in real-world clinical scenarios, and FL greatly advanced diagnostic performance ([35]). [36] used data from 20 institutes across the globe to predict the future oxygen requirements of symptomatic patients suffering from COVID-19. [37] proposed a federated model to detect COVID-19 lung abnormalities with good generalization capability on unseen multinational datasets.
#### II-B2 Federated Learning in Computational Histopathology
For histopathological images, a swarm learning architecture with blockchain protocols has been proposed to predict mutational status ([14]). However, compared with other medical imaging modalities, few studies ([29]) adopt federated learning for histopathological images, for the following reasons. First, the digitization of pathology is still uncommon: pathological diagnosis largely relies on observing specimens under a microscope. Second, image annotation is an obstacle for histopathological image processing, since only pathologists are qualified to label WSIs, which greatly increases the difficulty of acquiring well-annotated data. Third, due to the gigapixel resolution of WSIs, deep learning models are generally large, which increases the communication burden over the network.
There are technical solutions in FL to the high communication overhead problem, such as compressing the model size ([38, 39]). [38] proposed FedPAQ to reduce the communication overhead of FL by compressing the model to lower bit-precision, and [39] proposed an adaptive quantization strategy to achieve communication efficiency.
However, existing studies assume that enough samples are available for model training, and thus may fail to address communication efficiency and the limited data issue simultaneously ([40, 41]). In this study, we fully consider the specific characteristics of histopathological images, the difficulty of data labeling and the communication efficiency required in real-world clinical scenarios, which have not previously been discussed in decentralized computational pathology.
## III Methodology
In this section, we introduce our framework, Federated Deep-Broad Learning (FedDBL), which is designed for privacy-preserving tissue classification with limited training samples and extremely low communication overhead. In the following subsections, we first describe the motivation and problem setting in Section III-A. The overall framework and methodology of FedDBL are presented in Section III-B. Finally, we give the implementation details in Section III-C.
### _Problem Setting_
Tissue classification is a classical upstream task in computational pathology, and existing approaches have achieved outstanding performance under ideal conditions, i.e., with enough training samples and centralized learning. However, they might face the following obstacles in the real-world clinical scenario.
**Annotation burden:** Collecting enough well-labeled training samples is expensive and time-consuming because it requires labelers with medical background.
**Privacy preservation:** The raw data should not be shared across different medical institutions (or clients) to preserve the patient's privacy. Transmitting raw data may break the principle of medical ethics.
**Communication cost:** The communication overhead has always been a challenge for federated learning models, affected by many compound factors such as the model size, the number of communication rounds, the model convergence speed and the network bandwidth.
To resolve the aforementioned challenges, we propose a simple and effective FL-based framework, illustrated in Fig. 2. First, we abandon the conventional end-to-end training manner, since limited training samples may harm the robustness of the deep learning model and slow its convergence; we therefore separate feature extraction and inference for local training in each client. A pre-trained deep feature extractor (CNN backbone) is introduced so that the extracted features are not affected by training sample bias across clients, guaranteeing their robustness. An independent broad learning inference system ([21, 28]) then performs fast inference. Finally, we apply classical weighted averaging, as in FedAvg ([42]), to fuse the broad learning systems from all clients.
### _FedDBL Architecture and Formulation_
As shown in Fig. 2, FedDBL consists of three modules: the deep feature extraction module (DL-module), the broad inference module (BL-module) and the federated decentralized module (Fed-module). The DL-module and BL-module perform local training on the client side, while the Fed-module is executed on the server side. Algorithms 1 and 2 provide the details of the entire FedDBL pipeline.
Let \(\mathcal{D}_{1},\mathcal{D}_{2},\cdots,\mathcal{D}_{k},\cdots,\mathcal{D}_{K}\) denote the local training sets of \(K\) clients, with dataset size \(n_{k}\) for each client \(k\). The total number of training samples is \(N=\sum_{k=1}^{K}n_{k}\). For each sample \(X\) with ground truth \(Y\) in \(\mathcal{D}_{k}\), the DL-module with pre-trained parameters \(\Theta\) extracts the features and stores them in the feature bank \(\mathbf{B}\). The BL-module then calculates the weights \(W_{client}\) of the broad learning system. Via the federated aggregation approach, we obtain the global weights \(W_{global}\). The workflows of the server and the clients are shown in Algorithm 1 and Algorithm 2, respectively.
```
Input : A set of K clients
Output: A global model W_global

1: Prepare the pre-trained DL backbone parameters Θ
2: Initialize the BL system setting
3: for each client k in parallel do
4:     W_client^k ← ClientExecution(Θ, D_k)
5: end for
6: W_global ← Fed-module(W_client^1, ..., W_client^K)
7: return W_global
```
**Algorithm 1** FedDBL framework (Server Execution)
```
Input : Pre-trained DL backbone Θ, training set D with n training samples
Output: Deep-broad learning model W_client

/* DL-module */
1: for each training sample X in D do
2:     for the s-th stage of Θ do
3:         f_X^s ← Θ^s(X)                         // Feature extraction
4:         e_X^s ← (1 / (H_X^s · W_X^s)) Σ_{i=1..H_X^s} Σ_{j=1..W_X^s} f_X^s(i, j)
                                                   // Adaptive global average pooling
5:     end for
6:     b_X ← e_X^1 ‖ e_X^2 ‖ ... ‖ e_X^S           // Concatenation
7: end for
8: Obtain {b_i | i = 1, 2, ..., n}
9: B ← σ([b_1^T, b_2^T, ..., b_n^T])^T             // Normalization transformation

/* BL-module */
10: Initialize the BL system setting defined by the central server
11: B^+ ← lim_{λ→0} (B B^T + λE)^{-1} B^T          // Solve the pseudo-inverse
12: W_client ← B^+ Y                               // Calculate the BL model weights
13: return W_client
```
**Algorithm 2** FedDBL framework (Client Execution)
#### III-B1 Deep Feature Extraction Module
A large number of samples and repeated backpropagation are required in standard DL training to achieve a good feature representation ability.
Fig. 2: The overall architecture of FedDBL with three modules, deep feature extraction module, broad inference module and federated decentralized module. (a) Deep feature extraction module serves for extracting multi-scale deep-broad features from low level to high level by a pre-trained DL backbone. Features of all the patches are stored in a feature bank. (b) Broad inference module is introduced for fast inference by a broad learning system. (c) Federated decentralized module applies a classical federated average approach to aggregate the broad learning weights from different clients.
When suffering from the insufficient data problem, the model training procedure might be unstable, leading to poor feature representation and model overfitting. Our previous study ([28]) reveals that directly adopting a stable pre-trained model for feature extraction is more favorable to model performance than training the model with limited samples, even if the pre-trained model was trained on an unrelated image domain (ImageNet\({}^{1}\)). Inspired by this idea, we use a pre-trained CNN model with no further training to extract the deep features. Note that the choice of pre-trained model is flexible and can come from any image domain; we conduct an experiment in Section IV to justify this flexibility. Another advantage of using pre-trained models is that model inversion attacks are avoided, since the training samples are never seen by the backbone. To enrich the feature representation, we extract multi-stage features from low level to high level, as detailed below.
Footnote 1: [https://image-net.org/](https://image-net.org/)
As illustrated in the DL-module of Algorithm 2, each client \(k\) (\(k\in[1,\cdots,K]\)) downloads the pre-trained DL backbone as the feature extractor \(\Theta\) and locally extracts the multi-stage deep features \(\mathbf{b}_{X}\) of each training sample \(X\) (we omit \(k\) for simplicity), where \(\mathbf{b}_{X}\) consists of the features of multiple stages \(\Theta^{s}(X)\) (\(s\in[1,\cdots,S]\)). The features of the entire dataset \(\mathcal{D}_{k}\) are stored in the feature bank \(\mathbf{B}\), which is then passed to the broad inference module. Since neither the training data nor the feature bank is shared across clients, there is no privacy leakage risk in the deep feature extraction module.
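As a concrete illustration of this module, the sketch below gives one plausible PyTorch implementation of the multi-stage extraction (Algorithm 2, lines 1–7), assuming the ImageNet-pretrained ResNet-50 backbone and the torchvision (≥ 0.13) weights API; the class and variable names are our own, not the authors' released code. Each client would run this extractor once over its local training set to build the feature bank \(\mathbf{B}\).

```python
import torch
import torchvision.models as models

class MultiStageExtractor(torch.nn.Module):
    """Frozen pre-trained backbone that returns the concatenated,
    globally average-pooled features of each residual stage
    (the deep-broad feature b_X of Algorithm 2)."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.stem = torch.nn.Sequential(backbone.conv1, backbone.bn1,
                                        backbone.relu, backbone.maxpool)
        self.stages = torch.nn.ModuleList(
            [backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4])
        self.pool = torch.nn.AdaptiveAvgPool2d(1)
        for p in self.parameters():
            p.requires_grad_(False)  # no back-propagation: backbone stays frozen

    @torch.no_grad()
    def forward(self, x):                          # x: (batch, 3, 224, 224)
        feats = []
        x = self.stem(x)
        for stage in self.stages:
            x = stage(x)                           # f_X^s: stage-s feature map
            feats.append(self.pool(x).flatten(1))  # e_X^s: global average pooling
        # b_X = e^1 || e^2 || e^3 || e^4 -> (batch, 256+512+1024+2048 = 3840)
        return torch.cat(feats, dim=1)
```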
#### III-B2 Broad Inference Module
With the feature bank \(\mathbf{B}\), each client \(k\) constructs a local BL system ([21]) through the BL-module (Algorithm 2) for fast inference. By solving the optimization problem in Eq. (1), an optimal BL model \(W_{client}\) can be obtained rapidly via the pseudo-inverse method (Eq. (2)).
\[W_{client}=\operatorname*{arg\,min}_{W_{init}}\left\|\mathbf{B}W_{init}-Y \right\|_{2}^{2}+\gamma\left\|W_{init}\right\|_{2}^{2} \tag{1}\]
\[W_{client}=\mathbf{B}^{+}Y=\lim_{\lambda\to 0}\left(\mathbf{B}\mathbf{B}^{ \mathbf{T}}+\lambda E\right)^{-1}\mathbf{B}^{\mathbf{T}}Y \tag{2}\]
where \(Y\) represents the ground-truth label matrix, \(\mathbf{B}\) is the feature bank in matrix form, \(W_{init}\) denotes the initialized broad learning weights, \(E\) is the identity matrix, \(\lambda\) is a constant parameter and \(\gamma\) is the regularization parameter. The pseudo-inverse solution of the BL model considerably reduces the computational burden while achieving high communication efficiency. For inference, after extracting the deep features of the test samples, the predictions are computed as \(Y_{test}=\mathbf{B}_{test}W_{client}\), and the class with the largest probabilistic value is taken as the predicted result.
Thanks to the lightweight broad learning model \(W_{client}\), the communication efficiency is drastically improved compared with the conventional DL-based FL frameworks.
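For illustration, a minimal NumPy sketch of this solver and the corresponding inference step follows. For a tall (samples × features) feature bank we write the pseudo-inverse in the equivalent ridge form \(\left(\mathbf{B}^{\mathbf{T}}\mathbf{B}+\lambda E\right)^{-1}\mathbf{B}^{\mathbf{T}}Y\) with a small fixed \(\lambda\) approximating the limit in Eq. (2); this is our reading of the formulation, not the authors' released code.

```python
import numpy as np

def solve_bl_weights(B, Y, lam=1e-8):
    """Closed-form ridge solution of Eq. (1): W_client = B^+ Y.

    B: (n_samples, n_features) feature bank
    Y: (n_samples, n_classes) one-hot ground-truth label matrix
    """
    d = B.shape[1]
    # A small lam approximates the lambda -> 0 limit while keeping the
    # linear system well-conditioned.
    return np.linalg.solve(B.T @ B + lam * np.eye(d), B.T @ Y)

def predict(B_test, W_client):
    """Y_test = B_test W_client; predict the class with the largest score."""
    return np.argmax(B_test @ W_client, axis=1)
```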
#### III-B3 Federated Decentralized Module
In this module, we conduct a federated learning framework for decentralized learning. Given the broad learning model \(W_{client}^{k}\) of each client \(k\), we first upload the models from all clients to the central server, where general federated aggregation methods can be applied to aggregate them. Here, we use the common weighted averaging adopted in FedAvg ([42]), FedProx ([43]) and FedPAQ ([38]).
\[W_{global}=\sum_{k=1}^{K}\frac{n_{k}}{N}W_{client}^{k} \tag{3}\]
where \(W_{global}\) is the global model on the server, \(n_{k}\) is the number of training samples in client \(k\) and \(N\) is the total number of training samples; a larger training dataset thus contributes more to the global model. Since the broad learning model is shared only once, communication efficiency and patient privacy are both guaranteed.
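The Fed-module thus reduces to a few lines; a sketch of Eq. (3) follows, assuming every client's BL weights arrive as NumPy arrays of identical shape.

```python
import numpy as np

def aggregate_bl_weights(client_weights, client_sizes):
    """Eq. (3): sample-size-weighted average of the client BL models.

    client_weights: list of (n_features, n_classes) arrays W_client^k
    client_sizes:   list of local training-set sizes n_k
    """
    total = float(sum(client_sizes))
    w_global = np.zeros_like(client_weights[0])
    for w_k, n_k in zip(client_weights, client_sizes):
        w_global += (n_k / total) * w_k  # larger datasets contribute more
    return w_global
```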
### _Implementation Details_
All of our experiments are implemented in PyTorch on a workstation with an NVIDIA RTX 3090 and an i9-11900K CPU (16 threads). We use the cross-entropy loss for the baseline centralized training with batch size \(20\). The SGD optimizer is configured as follows: the learning rate is \(1e^{-3}\), the weight decay is \(1e^{-4}\) and the momentum is 0.9. Patches are \(224\times 224\) pixels extracted from WSIs at \(20\times\) magnification. Different client numbers are used depending on the dataset.
We adopt three well-known federated aggregation methods, FedAvg ([42]), FedProx ([43]) and FedPAQ ([38]), for comparison, and train the centralized model as the baseline. FedProx has a parameter \(\mu\) that adjusts the effect of the proximal term on the loss function; here we set \(\mu=1\), which gives better performance.
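For reference, the stated baseline configuration corresponds to the PyTorch setup sketched below; the FedProx proximal term is written from its standard definition with \(\mu=1\), since the exact implementation is not shown in the paper.

```python
import torch
import torchvision

# Baseline fine-tuning setup reported above (ResNet-50 backbone).
model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            weight_decay=1e-4, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()  # used with batch size 20

def fedprox_penalty(local_model, global_params, mu=1.0):
    """Standard FedProx proximal term (mu / 2) * ||w - w_global||^2,
    added to the local loss; mu = 1 as stated above."""
    penalty = torch.zeros(())
    for p, g in zip(local_model.parameters(), global_params):
        penalty = penalty + (p - g).pow(2).sum()
    return 0.5 * mu * penalty
```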
## IV Experiments
In this section, we present the details of the datasets and conduct various experiments to demonstrate the performance and efficiency of the proposed FedDBL. Section IV-A describes the two open datasets and the experimental settings of the federated learning framework. In Section IV-B, we compare FedDBL with centralized learning baselines, conventional federated learning baselines and one-round federated learning baselines. The effectiveness is comprehensively discussed in Section IV-C. We use accuracy and macro F1-score as the evaluation metrics in all experiments.
### _Datasets and Experimental Settings_
**Multi-center CRC:** This is a multi-center dataset ([8, 9]) of colorectal cancer. The Kather dataset ([8]) defines nine tissue types in H&E-stained WSIs: adipose (ADI), background (BACK), debris (DEB), lymphocytes (LYM), mucus (MUC), smooth muscle (MUS), normal colon mucosa (NORM), cancer-associated stroma (STR) and colorectal adenocarcinoma epithelium (TUM). It contains 100k patches extracted from 86 WSIs. Following Kather, [9] released another CRC dataset from three medical centers, including 89.1k patches (85 slides) from The Cancer Genome Atlas (TCGA), 105.1k patches (106 slides) from Guangdong Provincial People's Hospital and 22.5k patches (48 slides) from Yunnan Cancer Hospital. All patches share the same resolution of \(224\times 224\) at \(20\times\) magnification. Table I summarizes the statistics of each dataset.
**BCSS**: We also introduce a breast cancer dataset. Breast Cancer Semantic Segmentation (BCSS) ([44]) is an open challenge released on the Grand Challenge platform\({}^{2}\). It provides 151 ROI images with pixel-level annotations from WSIs retrieved from TCGA. According to the naming convention in the supplementary document of BCSS, the ROIs come from 21 different medical centers/hospitals. To generate a patch-level dataset, we first divide these ROIs into three clients, each covering 7 medical centers. We then crop each ROI into \(224\times 224\)-pixel patches with a sliding window of step size 120 pixels at \(20\times\) objective magnification. Since this dataset is long-tailed, we only keep the tissues from the four predominant classes: tumor (TUM), stroma (STR), lymphocytic infiltrate (LYM), and necrosis or debris (NEC). Patches in which the majority class covers more than 95% of the area are kept, while the others are discarded as ambiguous. Finally, a total of 8,278 patches remain; the size of each client's dataset is shown in Table II.
Footnote 2: [https://bcsegmentation.grand-challenge.org/](https://bcsegmentation.grand-challenge.org/)
**Experimental Settings:** We construct the federated learning environment as follows. Multi-center CRC includes four clients, following the dataset settings of the original papers, while BCSS is separated into three clients due to its limited training samples. For each client, the local dataset is randomly split into a training set and a test set with a ratio of \(7:3\). We then randomly sample seven incremental subsets with proportions \([1\%,5\%,10\%,30\%,50\%,70\%,100\%]\) from the training set. To conduct a five-fold cross-validation experiment, we repeat the random sampling five times for each proportion. In addition, we simply combine the training sets as well as the test sets from all clients for the centralized learning comparison.
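The per-client sampling procedure can be sketched as follows; class stratification is not mentioned in the text, so this illustration draws plain random subsets, and all names are our own.

```python
import random

PROPORTIONS = [0.01, 0.05, 0.10, 0.30, 0.50, 0.70, 1.00]

def make_client_splits(samples, train_ratio=0.7, n_folds=5, seed=0):
    """Split one client's data 7:3 into train/test, then draw the seven
    incremental training subsets, repeated once per cross-validation fold."""
    rng = random.Random(seed)
    data = list(samples)
    rng.shuffle(data)
    cut = int(train_ratio * len(data))
    train, test = data[:cut], data[cut:]
    folds = []
    for _ in range(n_folds):
        folds.append({p: rng.sample(train, max(1, int(p * len(train))))
                      for p in PROPORTIONS})
    return folds, test
```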
### _Comparisons under One-round Communication_
In this experiment, we evaluate the data efficiency, communication efficiency and flexibility of our proposed _FedDBL_. Tables III and IV report the average accuracy and F1-score performance. We compare _FedDBL_ with four FL frameworks and two centralized training approaches under only one-round communication or local training. We first employ ResNet-50 pre-trained on ImageNet as the CNN backbone for _FedDBL_ and all the other competitors.
1. _Centralized_: We fine-tune the pre-trained backbone with a random initialized fully connected (FC) layer in the centralized learning manner.
2. _Centralized-FC_: We freeze the pre-trained backbone while fine-tuning the FC-layer in the centralized learning manner.
3. _FedAvg_: We fine-tune the pre-trained model by FedAvg framework ([42]).
4. _FedProx_: We fine-tune the pre-trained model by FedProx framework ([43]).
5. _FedPAQ_: We fine-tune the pre-trained model by a communication-efficient federated learning framework, FedPAQ ([38]).
6. _FedAvg-FC_: We freeze the pre-trained CNN backbone and only update the FC-layer by FedAvg framework.
**Data efficiency:** As shown in Tables III and IV, with enough training samples (100%) in the one-round training experiment, centralized learning achieves better performance on both datasets than the conventional FL frameworks _FedAvg_, _FedProx_ and _FedPAQ_. This is because centralized learning gathers the training samples from all clients, so the CNN model is trained more stably and converges faster than under existing FL frameworks. When the CNN backbone is frozen, _FedAvg-FC_ returns to a more favorable performance, and _Centralized-FC_ also shows better performance than _Centralized_. This observation shows that with limited communication resources or local training time but sufficient training samples, maintaining a stable CNN feature extractor is better than retraining the entire model; updating only the FC-layer is the better solution under this circumstance. The proposed _FedDBL_ achieves performance comparable to the centralized learning strategies on the Multi-center CRC dataset and even outperforms them on the BCSS dataset. When the training data is reduced, the performance of all approaches except _FedDBL_ drops dramatically, especially with only 1% training samples. _FedAvg-FC_ with the frozen CNN backbone achieves around 0.79 accuracy and F1-score in Multi-center
TABLE II: Statistics of BCSS. #1, #2 and #3 are the datasets of the three clients.

| Client | TUM | STR | LYM | NEC | **Total** |
| --- | --- | --- | --- | --- | --- |
| #1 | 2,016 | 598 | 220 | 217 | 3,051 |
| #2 | 1,962 | 987 | 269 | 372 | 3,590 |
| #3 | 718 | 704 | 127 | 88 | 1,637 |
| **Total** | 4,696 | 2,289 | 616 | 677 | 8,278 |
TABLE I: Statistics of Multi-center CRC. #1 denotes TCGA, #2 denotes Kather, #3 denotes Guangdong Provincial People's Hospital and #4 denotes Yunnan Cancer Hospital.

| Client | ADI | BACK | DEB | LYM | MUC | MUS | NORM | STR | TUM | **Total** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| #1 | 10,065 | 10,736 | 10,603 | 2,340 | 9,398 | 12,974 | 10,003 | 10,081 | 12,899 | 89,099 |
| #2 | 10,407 | 10,566 | 11,512 | 11,577 | 8,896 | 13,536 | 8,763 | 10,446 | 14,317 | 100,000 |
| #3 | 10,000 | 22,565 | 9,999 | 5,831 | 10,737 | 10,000 | 13,368 | 12,584 | 10,000 | 105,084 |
| #4 | 2,500 | 2,500 | 2,500 | 2,500 | 2,500 | 2,500 | 2,500 | 2,500 | 2,500 | 22,500 |
| **Total** | 32,972 | 46,367 | 34,614 | 22,228 | 31,531 | 39,010 | 34,634 | 35,611 | 39,716 | 316,683 |
CRC but is still less effective than _FedDBL_. However, the quantitative results of _FedAvg-FC_ on BCSS with 1% training samples are much worse than those on Multi-center CRC, because Multi-center CRC is around 38 times larger than BCSS. In this experiment, the proposed _FedDBL_ achieves the most stable quantitative results among all approaches on both datasets, and even outperforms centralized learning for most training data proportions. From this experiment, we conclude that with limited network communication resources and training samples, _FedDBL_ is the best solution for histopathological tissue classification.
**Flexibility:** Besides data and communication efficiency, _FedDBL_ is also a flexible framework that can be further upgraded by replacing any module with a superior one if it exists, for example, a more robust feature extractor, a more outstanding classifier or a superior federated aggregation strategy. In this experiment, we demonstrate the flexibility of _FedDBL_ by replacing the ResNet-50 CNN backbone pre-trained on ImageNet with a domain-relevant backbone, CTransPath ([27]), pre-trained on histopathological images. The lower part of each dataset block in Tables III and IV shows the comparisons under CTransPath; here, we compare _FedDBL_ only with the approaches that update the FC-layer alone. On the larger Multi-center CRC dataset, the domain-relevant pre-trained feature extractor CTransPath greatly improves all three approaches. Centralized learning gives the best results for almost all dataset proportions, but _FedDBL_ still consistently outperforms _FedAvg-FC_ under the same circumstances. On the much smaller BCSS dataset, _FedDBL_ demonstrates its superiority and outperforms both _Centralized-FC_ and _FedAvg-FC_. With only 1% training samples in BCSS, CTransPath improves the F1-score of _FedDBL_ from 0.8471 (ResNet-50) to 0.9416, while little or no improvement is observed for the other two approaches.
**Communication efficiency:** Higher communication efficiency benefits not only from fewer communication rounds but also from a smaller model or feature size for transmission. Conventional federated frameworks share either the parameters of the deep learning models or the extracted features. In our proposed _FedDBL_, we share only the lightweight broad learning weights, without sharing any deep learning parameters or deep features. In Table VII, we report the model size and the total upload overhead per client for the entire training phase. The size of ResNet-50 for sharing is 94.4MB, whereas _FedDBL (ResNet-50)_ and _FedDBL (CTP)_ share only 276.5KB and 55.4KB of broad learning weights, respectively, where CTP denotes the CTransPath backbone. With only one-round communication, _FedDBL (ResNet-50)_ reduces the communication overhead by nearly 350 times compared with sharing ResNet-50 once. Since conventional federated frameworks might need multiple training iterations for model convergence, 50-round communication greatly increases the total upload overhead from 94.4MB to 4.609GB. Thanks to the lightweight BL-module and the one-shot training manner, _FedDBL (CTP)_ reduces the upload workload from 4.609GB to 55.4KB, over 87,000 times smaller than that of _FedAvg (ResNet-50)_; even _FedDBL (ResNet-50)_ reduces it by a factor of more than 17,000.
### _Comparisons under Multiple-round Communication_
In this experiment, we compare _FedDBL_ under one-round communication with centralized learning and two federated frameworks under multiple-round communication. Table V
TABLE III: Accuracy of the five-fold cross-validation experiment for different proportions of the training data under one-round training. Centralized methods are used as baselines; **bold** is the highest among federated algorithms and red represents the highest score among all methods including centralized learning. ResNet-50 and CTransPath indicate the CNN backbones pre-trained on ImageNet and pathology images, respectively. The performance of each fold can be found in the supplemental material.

| Datasets | Base model | Models | 1% | 5% | 10% | 30% | 50% | 70% | 100% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Multi-center CRC | ResNet-50 | _Centralized_ | 0.7185 | 0.7913 | 0.8819 | 0.9033 | 0.9306 | 0.9414 | 0.9450 |
| | | _Centralized-FC_ | 0.8334 | 0.8970 | 0.9129 | 0.9260 | 0.9378 | 0.9412 | 0.9458 |
| | | _FedAvg_ | 0.1465 | 0.3095 | 0.4025 | 0.4145 | 0.4461 | 0.4823 | 0.4345 |
| | | _FedProx_ | 0.1900 | 0.2976 | 0.4332 | 0.4621 | 0.4663 | 0.4448 | 0.4818 |
| | | _FedPAQ_ | 0.1984 | 0.3373 | 0.3798 | 0.5208 | 0.4778 | 0.4919 | 0.4994 |
| | | _FedAvg-FC_ | 0.7942 | **0.8552** | 0.8628 | 0.8817 | 0.8900 | 0.8857 | 0.8977 |
| | | _FedDBL (ours)_ | **0.8832** | 0.8456 | **0.9229** | **0.9411** | **0.9410** | **0.9413** | **0.9399** |
| | CTransPath | _Centralized-FC_ | 0.9390 | 0.9594 | 0.9670 | 0.9756 | 0.9788 | 0.9861 | 0.9817 |
| | | _FedAvg-FC_ | 0.9074 | 0.9382 | 0.9455 | 0.9536 | 0.9563 | 0.9577 | 0.9595 |
| | | _FedDBL (ours)_ | **0.9213** | **0.9640** | **0.9654** | **0.9663** | **0.9668** | **0.9669** | **0.9669** |
| BCSS | ResNet-50 | _Centralized_ | 0.7657 | 0.4779 | 0.5726 | 0.8676 | 0.8836 | 0.8944 | 0.9408 |
| | | _Centralized-FC_ | 0.5806 | 0.8106 | 0.8640 | 0.9345 | 0.9490 | 0.9543 | 0.9600 |
| | | _FedAvg_ | 0.5972 | 0.7495 | 0.7062 | 0.6959 | 0.6611 | 0.5889 | 0.6420 |
| | | _FedProx_ | 0.5951 | 0.7277 | 0.7158 | 0.7155 | 0.6654 | 0.6631 | 0.6308 |
| | | _FedPAQ_ | 0.5931 | 0.6597 | 0.6802 | 0.6822 | 0.6254 | 0.6454 | 0.6170 |
| | | _FedAvg-FC_ | 0.5740 | 0.6234 | 0.7714 | 0.8754 | 0.9014 | 0.9259 | 0.9365 |
| | | _FedDBL (ours)_ | **0.9012** | **0.9511** | **0.9603** | **0.9711** | **0.9745** | **0.9731** | **0.9652** |
| | CTransPath | _Centralized-FC_ | 0.6159 | 0.5904 | 0.6692 | 0.8886 | 0.9435 | 0.9609 | 0.9726 |
| | | _FedAvg-FC_ | 0.5858 | 0.5772 | 0.5676 | 0.6689 | 0.8072 | 0.8537 | 0.9072 |
| | | _FedDBL (ours)_ | **0.9593** | **0.9754** | **0.9777** | **0.9770** | **0.9834** | **0.9883** | **0.9910** |
and Table VI report the accuracy and F1-score, respectively. As expected, the centralized learning strategy achieves the best classification performance with enough training data on the Multi-center CRC dataset. _FedDBL (CTP)_, with its domain-relevant feature extractor, consistently surpasses the two federated learning frameworks and even outperforms centralized learning with less than 10% training data. On BCSS, which is much smaller than Multi-center CRC, even _FedDBL_ with the ImageNet pre-trained feature extractor outperforms both centralized learning and the federated frameworks, and _FedDBL (CTP)_ further improves the quantitative results to a remarkable level.
Moreover, we visualize the average accuracy corresponding to Tables V and VI at every epoch in Fig. 3 to show the convergence speed of the existing approaches trained with four representative dataset proportions. Since _FedDBL_ is a one-round communication framework, we show the _FedDBL_ and _FedDBL (CTP)_ results as a gray dashed line and an orange dashed line, respectively, and highlight the most representative region of each sub-figure with a zoom-in window. The convergence speed and the optimal performance of the existing models depend strongly on the proportion of training data. With 100% training data on Multi-center CRC, centralized learning quickly surpasses _FedDBL_ within five epochs and even outperforms _FedDBL (CTP)_; the two federated frameworks need more training epochs to converge and achieve performance comparable to _FedDBL (CTP)_. As the training samples are reduced, convergence becomes slower and the optimal performance decreases. With 1% training samples on BCSS, the three existing models oscillate heavily and cannot even surpass _FedDBL_ within 50 training epochs.
All the above experimental results demonstrate the data efficiency, communication efficiency, model flexibility and model robustness of FedDBL for histopathological image classification. FedDBL, in our opinion, has the potential to
TABLE V: Comparisons with different methods on 50-round training (Accuracy). **Bold** is the highest among federated methods; red is the highest score among all algorithms including centralized learning.

| Datasets | Models | Global Epochs | 1% | 5% | 10% | 30% | 50% | 70% | 100% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Multi-center CRC | _Centralized_ | 50 | 0.8830 | 0.9308 | 0.9508 | 0.9713 | 0.9789 | 0.9817 | 0.9846 |
| | _FedAvg_ | 50 | 0.8747 | 0.9289 | 0.9385 | 0.9542 | 0.9590 | 0.9604 | 0.9629 |
| | _FedProx_ | 50 | 0.8786 | 0.9289 | 0.9409 | 0.9549 | 0.9586 | 0.9615 | 0.9622 |
| | _FedDBL_ | 1 | 0.8832 | 0.8456 | 0.9229 | 0.9411 | 0.9410 | 0.9413 | 0.9399 |
| | _FedDBL (CTP)_ | 1 | **0.9213** | **0.9640** | **0.9654** | **0.9663** | **0.9668** | **0.9669** | **0.9669** |
| BCSS | _Centralized_ | 50 | 0.8291 | 0.9179 | 0.9458 | 0.9650 | 0.9810 | 0.9818 | 0.9838 |
| | _FedAvg_ | 50 | 0.8657 | 0.9540 | 0.9535 | 0.9634 | 0.9730 | 0.9804 | 0.9834 |
| | _FedProx_ | 50 | 0.8700 | 0.9201 | 0.9502 | 0.9671 | 0.9737 | 0.9776 | 0.9830 |
| | _FedDBL_ | 1 | 0.9012 | 0.9511 | 0.9603 | 0.9711 | 0.9745 | 0.9731 | 0.9652 |
| | _FedDBL (CTP)_ | 1 | **0.9593** | **0.9754** | **0.9777** | **0.9770** | **0.9834** | **0.9883** | **0.9910** |
TABLE IV: F1-scores of the five-fold cross-validation experiment for different proportions of the training data under one-round training. Centralized methods are used as baselines; **bold** is the highest among federated algorithms and red represents the highest among all methods including centralized learning. ResNet-50 and CTransPath indicate the CNN backbones pre-trained on ImageNet and pathology images, respectively. The performance of each fold can be found in the supplemental material.

| Datasets | Base model | Models | 1% | 5% | 10% | 30% | 50% | 70% | 100% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Multi-center CRC | ResNet-50 | _Centralized_ | 0.7191 | 0.7863 | 0.8801 | 0.9027 | 0.9300 | 0.9415 | 0.9451 |
| | | _Centralized-FC_ | 0.8301 | 0.8977 | 0.9137 | 0.9265 | 0.9383 | 0.9417 | 0.9463 |
| | | _FedAvg_ | 0.0669 | 0.2203 | 0.3214 | 0.3250 | 0.3240 | 0.3829 | 0.3366 |
| | | _FedProx_ | 0.1086 | 0.2110 | 0.3468 | 0.3576 | 0.3627 | 0.3468 | 0.3981 |
| | | _FedPAQ_ | 0.1146 | 0.2527 | 0.2789 | 0.4285 | 0.3773 | 0.3930 | 0.4032 |
| | | _FedAvg-FC_ | 0.7912 | **0.8527** | 0.8610 | 0.8805 | 0.8896 | 0.8841 | 0.8971 |
| | | _FedDBL (ours)_ | **0.8833** | 0.8502 | **0.9268** | **0.9443** | **0.9443** | **0.9445** | **0.9432** |
| | CTransPath | _Centralized-FC_ | 0.9365 | 0.9595 | 0.9675 | 0.9737 | 0.9738 | 0.9804 | 0.9817 |
| | | _FedAvg-FC_ | 0.9031 | 0.9385 | 0.9459 | 0.9541 | 0.9568 | 0.9582 | 0.9600 |
| | | _FedDBL (ours)_ | **0.9217** | **0.9643** | **0.9657** | **0.9666** | **0.9671** | **0.9672** | **0.9672** |
| BCSS | ResNet-50 | _Centralized_ | 0.5870 | 0.2228 | 0.4280 | 0.8196 | 0.8081 | 0.8245 | 0.8974 |
| | | _Centralized-FC_ | 0.2056 | 0.4846 | 0.7297 | 0.8979 | 0.9231 | 0.9304 | 0.9404 |
| | | _FedAvg_ | 0.2300 | 0.4059 | 0.3689 | 0.3433 | 0.2962 | 0.2176 | 0.2798 |
| | | _FedProx_ | 0.2276 | 0.3968 | 0.3578 | 0.3600 | 0.3024 | 0.3052 | 0.2633 |
| | | _FedPAQ_ | 0.2210 | 0.3020 | 0.3157 | 0.3328 | 0.2766 | 0.2837 | 0.2582 |
| | | _FedAvg-FC_ | 0.2004 | 0.2650 | 0.4161 | 0.7445 | 0.8192 | 0.8764 | 0.8959 |
| | | _FedDBL (ours)_ | **0.8471** | **0.9227** | **0.9413** | **0.9578** | **0.9638** | **0.9621** | **0.9550** |
| | CTransPath | _Centralized-FC_ | 0.2572 | 0.2197 | 0.3272 | 0.7578 | 0.9035 | 0.9392 | 0.9614 |
| | | _FedAvg-FC_ | 0.2148 | 0.1981 | 0.1816 | 0.3272 | 0.4571 | 0.6165 | 0.8061 |
| | | _FedDBL (ours)_ | **0.9416** | **0.9644** | **0.9685** | **0.9639** | **0.9758** | **0.9818** | **0.9859** |
significantly save computational and communication resources, relieve the pathologists' labeling burden and preserve patients' privacy, which greatly promotes its clinical practicability compared with existing approaches.
## V FedDBL for Privacy Preservation
One of the objectives of FedDBL is to preserve patients' privacy. Even though conventional DL-based federated frameworks do not directly share the raw data, they still suffer from various federated attacks, such as model inversion attacks ([45]) and man-in-the-middle attacks ([20]). In contrast, FedDBL can defend against these attacks because the training data is never seen by the DL-module, and the deep features are used to calculate the parameters of the BL-module locally at each client. In effect, the pre-trained DL-module can be regarded as a perturbation process that transforms the data into high-level features, and the features themselves are protected because only the parameters of the BL-module are shared.
Thanks to the lightweight BL-module, we can further protect the parameters by employing an additional encryption algorithm ([46]) to support federated aggregation in the encrypted domain. A corresponding packing encryption ([47]) is used, which exploits the redundancy of the ciphertext space to hold more plaintexts in each encrypted block. Table VIII reports the model accuracy, F1-score and model size of the packing-encrypted FedDBL with 1% of the Multi-center CRC dataset using the CTransPath pre-trained backbone. When
TABLE VI: Comparisons with different methods on 50-round training (F1-score). **Bold** is the highest among federated methods; red is the highest score among all algorithms including centralized learning.

| Datasets | Models | Global Epochs | 1% | 5% | 10% | 30% | 50% | 70% | 100% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Multi-center CRC | _Centralized_ | 50 | 0.8813 | 0.9307 | 0.9512 | 0.9714 | 0.9789 | 0.9816 | 0.9845 |
| | _FedAvg_ | 50 | 0.8720 | 0.9283 | 0.9386 | 0.9545 | 0.9592 | 0.9607 | 0.9632 |
| | _FedProx_ | 50 | 0.8751 | 0.9288 | 0.9409 | 0.9550 | 0.9591 | 0.9618 | 0.9625 |
| | _FedDBL_ | 1 | 0.8833 | 0.8502 | 0.9268 | 0.9443 | 0.9443 | 0.9445 | 0.9432 |
| | _FedDBL (CTP)_ | 1 | **0.9217** | **0.9643** | **0.9657** | **0.9666** | **0.9671** | **0.9672** | **0.9672** |
| BCSS | _Centralized_ | 50 | 0.7671 | 0.8808 | 0.9182 | 0.9554 | 0.9722 | 0.9756 | 0.9761 |
| | _FedAvg_ | 50 | 0.8088 | 0.9006 | 0.899 | 0.9348 | 0.9598 | 0.9721 | 0.9741 |
| | _FedProx_ | 50 | 0.8095 | 0.8851 | 0.9293 | 0.9522 | 0.9599 | 0.9669 | 0.9769 |
| | _FedDBL_ | 1 | 0.8471 | 0.9227 | 0.9413 | 0.9578 | 0.9638 | 0.9621 | 0.9550 |
| | _FedDBL (CTP)_ | 1 | **0.9416** | **0.9644** | **0.9685** | **0.9639** | **0.9758** | **0.9818** | **0.9859** |
Fig. 3: Average global accuracy scores of the 5-fold cross-validation results at each training epoch. Four representative dataset proportions are selected for visualization. Since FedDBL is a one-round communication framework, we show the _FedDBL_ and _FedDBL (CTP)_ results as a gray dashed line and an orange dashed line, respectively. We also highlight the most representative region of each sub-figure for better visualization.
TABLE VII: Model size and total upload overhead per client for the three models.

| | _FedAvg (ResNet-50)_ | _FedDBL (ResNet-50)_ | _FedDBL (CTP)_ |
| --- | --- | --- | --- |
| Model size | 94.4MB | 276.5KB | 55.4KB |
| Total upload size | 4.609GB | 276.5KB | 55.4KB |
the bit-length of the encryption precision is set to 32 bits, the encryption algorithm does not degrade the parameters, so the model accuracy is preserved. Even with packing encryption, the model size remains much smaller than that of ResNet-50 in the plaintext condition (94.4MB), shown in Table VII.
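To illustrate only the packing idea (not the encryption itself), the sketch below packs several 32-bit fixed-point parameters into one large integer that a single ciphertext block could carry. Every constant here is an assumption for illustration, and a real packed homomorphic aggregation would additionally need to reserve guard bits so that summation does not overflow across lanes.

```python
def pack_params(params, bits=32, frac_bits=16, block_bits=2048):
    """Pack fixed-point parameters into big integers so that one
    encrypted block can hold several plaintexts (packing step only;
    the homomorphic encryption of the packed integers is omitted)."""
    per_block = block_bits // bits
    mask = (1 << bits) - 1
    packed, block, count = [], 0, 0
    for p in params:
        q = int(round(p * (1 << frac_bits))) & mask  # two's-complement lane
        block = (block << bits) | q
        count += 1
        if count == per_block:
            packed.append(block)
            block, count = 0, 0
    if count:
        packed.append(block)  # final, partially filled block
    return packed
```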
## VI Discussion and Conclusion
In this paper, we proposed a novel federated framework (FedDBL) for histopathological tissue classification. Thanks to the robust deep learning feature extractor and the flexible broad learning inference system, FedDBL greatly improves classification performance with only one-round communication and extremely limited training samples, significantly reducing data dependency and the communication burden. With the DL-module and BL-module handling local training, each client only needs to solve a lightweight broad learning system, which drastically reduces the training overhead. Sharing the lightweight BL-module in the federated framework not only greatly reduces the communication burden, but also preserves patients' privacy. Experimental results with five-fold cross-validation demonstrate that FedDBL outperforms both centralized learning strategies and federated frameworks under one-round communication, and it even outperforms all competitors trained for 50 rounds when training samples are limited.
Moreover, due to the flexible module design, FedDBL can be further upgraded by replacing any module with a superior one in the future. In this paper, we have shown that a domain-relevant deep feature extractor is more effective than a domain-irrelevant one. Since the federated framework in this study uses the most common federated averaging aggregation strategy, we expect that a more advanced federated aggregation framework could be applied in the future to further improve the performance of FedDBL under extreme data and communication conditions.
## Acknowledgments
This work was supported by Key-Area Research and Development Program of Guangdong Province (No. 2021B0101420006), the National Key R&D Program of China (No. 2021YFF1201003), Regional Innovation and Development Joint Fund of National Natural Science Foundation of China (No. U22A20345), the National Natural Science Foundation of China (No. 82271941, 82072090 and 81772840), the National Science Foundation for Young Scientists of China (No. 62102103), the Natural Science Foundation for Distinguished Young Scholars of Guangdong Province (No. 2023B1515020043), the Natural Science Foundation of Guangdong Province (No. 2023A1515030251), Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B1212010011), Science and Technology Projects in Guangzhou (No. 20220102000 and 202201010513) and High-level Hospital Construction Project (No. DFJHBF202105).
|
2308.07552 | Resilience assessment and planning in power distribution systems: Past
and future considerations | Over the past decade, extreme weather events have significantly increased
worldwide, leading to widespread power outages and blackouts. As these threats
continue to challenge power distribution systems, the importance of mitigating
the impacts of extreme weather events has become paramount. Consequently,
resilience has become crucial for designing and operating power distribution
systems. This work comprehensively explores the current landscape of resilience
evaluation and metrics within the power distribution system domain, reviewing
existing methods and identifying key attributes that define effective
resilience metrics. The challenges encountered during the formulation,
development, and calculation of these metrics are also addressed. Additionally,
this review acknowledges the intricate interdependencies between power
distribution systems and critical infrastructures, including information and
communication technology, transportation, water distribution, and natural gas
networks. It is important to understand these interdependencies and their
impact on power distribution system resilience. Moreover, this work provides an
in-depth analysis of existing research on planning solutions to enhance
distribution system resilience and support power distribution system operators
and planners in developing effective mitigation strategies. These strategies
are crucial for minimizing the adverse impacts of extreme weather events and
fostering overall resilience within power distribution systems. | Shuva Paul, Abodh Poudyal, Shiva Poudel, Anamika Dubey, Zhaoyu Wang | 2023-08-15T03:43:31Z | http://arxiv.org/abs/2308.07552v1 | # Resilience assessment and planning in power distribution systems: Past and future considerations
###### Abstract
Over the past decade, extreme weather events have significantly increased worldwide, leading to widespread power outages and blackouts. As these threats continue to challenge power distribution systems, the importance of mitigating the impacts of extreme weather events has become paramount. Consequently, resilience has become crucial for designing and operating power distribution systems. This work comprehensively explores the current landscape of resilience evaluation and metrics within the power distribution system domain, reviewing existing methods and identifying key attributes that define effective resilience metrics. The challenges encountered during the formulation, development, and calculation of these metrics are also addressed. Additionally, this review acknowledges the intricate interdependencies between power distribution systems and critical infrastructures, including information and communication technology, transportation, water distribution, and natural gas networks. It is important to understand these interdependencies and their impact on power distribution system resilience. Moreover, this work provides an in-depth analysis of existing research on planning solutions to enhance distribution system resilience and support power distribution system operators and planners in developing effective mitigation strategies. These strategies are crucial for minimizing the adverse impacts of extreme weather events and fostering overall resilience within power distribution systems.
+
Footnote †: Equal contribution authors.
In 2022, Hurricane Ian affected approximately 2.7 million customers in Florida [5], while Europe experienced an energy crisis during the 2022 heatwave, with France witnessing a drastic increase in electricity prices, reaching a record of EUR700/MWh [6]. The power outage in Texas in February 2021 left more than ten million people without power, causing a financial impact of about $4 billion on wind farms, significantly more than their annual gross revenue [7]. Iran experienced a summer heatwave in 2021 with temperatures exceeding 122 °F, and a deficit of almost eleven gigawatts of electricity was reported [8]. In 2021, Hurricane Ida caused widespread outages in the Northeastern US, affecting 1.2 million people, and it took almost fifteen days to restore the electric power entirely [9]. In 2022, there were eighteen different billion-dollar disasters in the United States alone, with an estimated loss of approximately $172 billion [10]. Critical customers, such as hospitals and transportation systems, endure substantial economic losses during such calamities. Prolonged power outages also exacerbate vulnerabilities in personal safety and security, highlighting the need to increase resilience to extreme weather events.
The recent challenges faced by the power grid have highlighted the critical need for resilience due to extreme weather events. A resilient system can withstand severe disturbances, recover quickly to its normal operating state, and ensure uninterrupted power supply. It is worth noting that power distribution grids account for more than 80% of power outages due to disruptions caused by extreme weather events [11]. Furthermore, due to the grid modernization initiatives, the bidirectional energy flow architecture of power grids, and the increasing impact of extreme weather events, ensuring the resilience of power distribution systems has become a pressing priority [12]. Although several questions and challenges remain, significant research efforts have been devoted to understanding and improving the resilience of power distribution grids. This review aims to comprehensively summarize existing research contributions in the field of resilient power distribution systems and identify key areas for future research, particularly in light of the anticipated increase in the frequency and severity of natural disasters. Figure 1 a) shows the increasing number of research publications since 1993 on the impact of extreme events on energy systems, specifically related to power distribution systems. Figure 1 b) illustrates the frequency of various extreme events in the United States. However, it should be noted that the increase in the frequency of extreme events is not limited to the US alone and is a global concern. The rise in publications reflects the urgency and importance of addressing the resilience of energy systems to extreme weather events. In particular, research on power distribution systems gained prominence in recent years, with advances in controllable distribution systems. Therefore, a comprehensive understanding of current state-of-the-art resilience assessment and planning methods in power distribution systems is crucial to realize practical implications, including enhanced operational resilience, effective risk management strategies, and informed infrastructure planning. This study also plays an important role in shaping the development of regulations, standards, and guidelines related to resilience planning, emergency response, and strategic infrastructure investments. This allows policymakers, industry stakeholders, and researchers to address the challenges posed by extreme weather events effectively.
Table 1 summarizes some of the existing review studies on resilience-related topics in different critical infrastructure
Figure 1: a) Number of extreme events and trends in publications considering energy systems, power distribution systems, extreme events, and resilience since 1993 contained from the Scopus database. b) Different extreme events and their frequency every five years in the United States since 1993.
systems, including power distribution systems, and outlines their contributions and limitations. These survey articles include related work on resilience quantification and planning applied to different domains. A holistic framework for developing critical infrastructure resilience against external and unexpected disruptive forces is discussed in [13]. The described framework builds the foundation for resilience on a set of resilience policies, considering the influences of policies on the damage prevention, absorption, and recovery stages, and presents implementation methodologies. Similarly, in [14], a review of the resilience of six critical infrastructures is presented, including electric, water, gas, transportation, drainage, and communication networks. The resilience of critical infrastructure elements and their main factors are studied in [15], involving electricity, gas, information and communication technology (ICT), and road
\begin{table}
\begin{tabular}{p{70pt} p{70pt} p{140pt} p{140pt}}
\hline \hline
References & Domain & Summary & Drawbacks \\
\hline
[13] & \multirow{4}{*}{Critical infrastructures} & Holistic frameworks for developing critical infrastructure resilience & Missing distribution system resilience discussion \\
[14, 15] & & Resilience assessment of multi-domain infrastructure systems & Very little analysis of power distribution system resilience; missing discussion of interdependencies \\
[16] & & Discusses the deployment of deep learning techniques for critical infrastructure resilience & Missing a thorough, focused analysis of distribution system resilience, its characterization, and interdependencies \\
[17, 18, 19, 20, 21, 22, 23] & & Discuss critical infrastructure resilience, including transportation, geoscience, and water distribution systems & None of them focus on electric power distribution system resilience or its in-depth analysis \\
\hline
[24] & \multirow{4}{*}{Bulk power systems} & Discusses resilience enhancement strategies using microgrids & Missing focus on distribution system resilience, resilience metric characterization, and understanding of infrastructure interdependencies \\
[25] & & Analyzes a load restoration framework to enhance the resilience of the power system against an extreme event & Lacks metric analysis, resilience characterization, and other resilience enhancement strategies \\
[26, 27] & & Comprehensive studies on power system resilience against natural disasters focusing on forecast models, corrective actions, and restoration strategies & \\
[28] & & Discusses the resilience and security of smart grid infrastructures & Missing interdependency and metric analysis \\
\hline
[29] & \multirow{4}{*}{Power distribution systems} & Comprehensively summarizes distribution system resilience assessment frameworks and metrics & \\
[30] & & Discusses resilience definitions, frameworks, and framework development & \\
[31] & & Discusses resilience enhancement utilizing microgrids in distribution systems and definitions of resilience & Lacks characterization of resilience and resilience enhancement methods \\
[32, 33] & & Comprehensive studies of resilience assessment in power distribution systems & \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Comprehensive resilience studies in the existing literature across different domains, with their contributions and limitations, highlighting the significance of this resilience study.
transportation networks. The deployment of deep learning techniques for critical infrastructure resilience is presented in [16]. Similarly, [17, 18, 19, 20, 21, 22, 23] present comprehensive reviews of the resilience of different critical infrastructures, including, but not limited to, transportation, geology, and water distribution systems. However, none of the existing works specifically addresses resilience enhancement or assessment methods for electric power distribution systems, including the associated infrastructural and operational considerations.
In the domain of power systems, existing articles extensively cover the resilience of the high-voltage bulk power grid [24, 25, 26, 27]. However, there is a noticeable gap in discussing resilience considerations for medium-voltage and low-voltage power distribution systems. Bulk power grids and power distribution systems differ drastically in structure and operation. Thus, the general understanding of the bulk grid does not translate directly to distribution systems, which require specialized analysis. Although some review works touch on the resilience of smart grid infrastructures [28], resilience metrics and quantification [29, 30], microgrid-based resilience assessment and enhancement [31], resilience planning against extreme weather events [34], and resilience assessment frameworks in power distribution systems [35], there is a lack of comprehensive work that compiles the major aspects of resilience analysis, quantification, mitigation, enhancement, and multidomain interdependencies in power distribution systems.
Existing reviews on the resilience of the power distribution system lack several key aspects. First, there is a lack of a systematic framework for evaluating resilience. The existing works do not provide a comprehensive process for resilience analysis, which is one of the most important aspects of characterizing the resilience of any critical infrastructure, including power distribution systems. Furthermore, these works do not highlight resilience metrics specific to power distribution systems. Second, these review articles do not provide insight into the interdependencies among critical infrastructures that can significantly impact the resilience of the power distribution grid. The impact of extreme weather events on one critical infrastructure can have a drastic effect on the others, which can negatively impact the community. Finally, the research gaps and limitations discussed in existing works are broad, making it challenging to identify specific research directions. This study aims to address these gaps and comprehensively review all aspects of resilience in power distribution systems. This review provides valuable information to the scientific and engineering community in addressing the resilience challenges posed by extreme weather events. It is important to clarify that this study is focused solely on the resilience of the power distribution system and does not address other aspects of power system resilience. By narrowing the scope, this work aims to provide a comprehensive analysis and practical recommendations tailored to enhance the resilience of power distribution systems. Specifically, the major contributions of this work are as follows.
1. The conceptual necessity for power distribution system resilience is detailed, specifically in anticipation of high-impact, low-probability (HILP) events, while also discussing the significance of existing definitions and their relevance.
Figure 2: The overview of resilience assessment and planning in power distribution systems.
2. A characteristic representation of distribution system resilience is provided. This includes classifying assessments into qualitative and quantitative evaluations and attribute and performance-based metrics. Furthermore, a systematic resilience analysis process is detailed for power distribution systems.
3. A comprehensive survey of existing strategies to enhance the resilience of power distribution systems is presented. These strategies are categorized into event forecasting, load prioritization, situational awareness (SA), repair and resource allocation, and utilization of microgrids and distributed energy resources (DER) for resilience enhancement.
4. The interdependencies among power distribution systems and various critical infrastructure systems, such as ICT, transportation, natural gas, and water distribution systems, are reviewed for their impacts on the resilience of the power distribution system.
5. Research gaps and potential opportunities for further exploration in power distribution system resilience are identified and reviewed. These focus areas encompass proactive outage management, long-term resilience investment frameworks, investments in smart grid infrastructure, and modeling and forecasting the impact of extreme weather events.
Figure 2 shows the overview of the resilience assessment and planning in power distribution systems, and the rest of the work is organized as follows. Section 2 discusses methods to characterize power distribution grid resilience and resilience assessment techniques. Different types of resilience planning measures, standards, and operational procedures are discussed in section 3. Section 4 focuses on the interdependence of power distribution systems with other critical infrastructures, followed by the potential research directions for resilience assessment in Section 5. Finally, Section 6 concludes the paper with a summary of the contributions of this study to the scientific community.
## 2 Framework for resilience evaluation in power distribution systems
Resilience evaluation of power distribution systems is challenging due to the complex nature of distribution systems. Due to such intricacy, it is crucial to adopt a comprehensive approach that considers both qualitative and quantitative perspectives [36, 37]. The resilience analysis process is vital in assessing the system's ability to maintain specific objectives during adverse grid conditions. Typically, the evaluation is quantified using some metric that aims to capture the system's resilience. However, it is widely acknowledged that a single metric cannot fully capture all the diverse resilience characteristics of power distribution systems [38]. Therefore, this section provides a comprehensive framework that explores various aspects of power distribution systems resilience, evaluation approaches, analysis process, and assessment metrics, aiming to facilitate a holistic understanding of resilience in power distribution systems.
### Resilient power distribution systems
The response of a power distribution system impacted by an extreme weather event, and how that response affects grid resilience, is detailed in this section. Figure 3 illustrates an overall response of the power distribution system, including some corrective actions after an HILP event. The top left portion of Figure 3 illustrates a general curve showing the response of a system to an event. The vertical axis is the figure of merit (FOM) that accounts for the overall resilience of the system, and the horizontal axis represents time. The FOM can be the number of customers online, the number of connected components, the electrical load fed by the system, etc. The FOM is observed at every instance, before, during, and after an extreme event, and is commonly referred to as the system's resilience in this work. From Figure 3, it can be seen that the system's resilience drastically decreases immediately after the event occurs. Appropriate infrastructure planning and hardening measures can slow the degradation of the system's resilience. When the system approaches the emergency response stage, a system with planned infrastructure shows better resilience than a traditional one with limited or no infrastructural development. An adequately hardened system with knowledge of previous events, which can effectively counteract the unwanted changes and restore the disrupted system in the shortest possible time, is assumed to have higher resilience than a system that lacks these capabilities. Here, for the traditional system, \(FOM^{-}\) represents the FOM before the event, \(FOM_{D}\) represents the FOM before any remedial actions are taken, and \(FOM^{+}\) represents the FOM after corrective actions are taken.
To strengthen the effective response of the system to an event and minimize the potential impacts and necessary investments, it is essential to have an efficient long-term plan. One such investment is grid hardening [39], which refers
to making physical changes in its structure, e.g., replacing overhead lines with underground lines, elevating substations to prevent them from being flooded, and so forth. Although the effective response of the system might improve with planning measures, the occurrence, impact, and location of events are still not certain. Therefore, immediate corrective actions or emergency responses are needed during or after an event. Corrective action can include dispatch of mobile energy resources [40, 41], load shedding [42, 43], and intentional islanding [44, 45, 46, 47, 48]. Intentional islanding can effectively provide continuous supply to critical loads in the system with the help of distributed generators (DG) while isolated from the main grid [49, 50, 51, 52]. While critical loads are islanded, the operators can dispatch other generating units to pick up additional system loads. With improved impact modeling, the appropriate system response can be proposed to counteract the negative effect of the event. As seen in Figure 3, all the impact modeling, long-term planning, and emergency response processes are interconnected and work together to improve the resilience of the distribution system. Advanced distribution management systems (ADMS) facilitate the interconnection of these processes by continuously interacting with the distribution grid through supervisory control and data acquisition (SCADA) [53]. Furthermore, the system operators are directly connected to the customers, who can provide additional information about the current situation of the distribution grid when an extreme event hits the grid.
### Qualitative and quantitative assessment of resilience
Qualitative assessment of resilience considers aspects of energy infrastructure, information systems, the fuel supply chain, business structure, etc. The system capabilities involved in qualitative resilience assessment include preparedness, mitigation, response, and recovery. Qualitative resilience assessment frameworks can guide policymakers in implementing long-term energy policies [25]. Existing work includes numerous qualitative resilience assessment methods. For example, the work in [54, 55] assesses resilience at the system and regional levels using questionnaires, investigations, and individual ratings to address individual, industrial, federal, and infrastructural resilience. A scoring matrix is developed in [56] to evaluate the functionality of the system from different perspectives. The analytical hierarchy process is employed in [57] to convert subjective opinions into comparable quantities, eventually aiding operational decision-making.
Quantitative assessment methods aim to numerically assess the resilience of critical infrastructure systems such as the power grid [58]. Specific to power distribution systems, many studies have been conducted to assess system resilience quantitatively. The intrinsic characteristics of resilience can be defined as stress resistance and strain compensation. Stress resistance is further split into hardness and asset health, while strain compensation is
Figure 3: Response of power distribution system during HILP events.
characterized by capacity and efficacy [59]. Another way to measure resilience in the electric power grid, one that quantifies the efficacy of the recovery process, is the ratio of recovered functionality to lost functionality [60]. The resilience analysis process can be interpreted in multiple stages over time. The corresponding indices include expected hazard frequency, initial failure scale, maximum level of impact, recovery time, and recovery cost according to stage. A functional description of resilience is obtained in terms of initial failure scale, maximum impact level, and recovery time [61]. When considering extreme weather events, the resilience assessment metric can be expressed as a function of the expected number of power grid outages during the event, the probability of loss of load, the expected demand not served, and the level of difficulty of the grid recovery processes [62]. Similarly, other resilience measures emphasize multiple system properties, including recovery time, loads not served, etc. [63, 64, 65, 66, 67].
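To make this concrete, the recovery-efficacy ratio of [60] can be computed directly from a sampled FOM trajectory of the kind sketched in Figure 3. The snippet below is a minimal illustration in Python, not code from the cited works; the hourly FOM values are hypothetical.

```python
import numpy as np

# Hypothetical hourly FOM trajectory (e.g., fraction of load served).
fom = np.array([1.00, 1.00, 0.55, 0.40, 0.40, 0.45, 0.70, 0.90, 0.98, 1.00])

fom_pre = fom[0]     # FOM^- : performance before the event
fom_d = fom.min()    # FOM_D : performance before corrective actions
fom_post = fom[-1]   # FOM^+ : performance after corrective actions

# Recovery efficacy: recovered functionality relative to lost functionality.
recovery_ratio = (fom_post - fom_d) / (fom_pre - fom_d)
print(f"recovery ratio = {recovery_ratio:.2f}")  # 1.00 indicates full recovery
```

The same trajectory supports the staged indices above: the sample index of the minimum gives the initial failure scale and time to maximum impact, and the index at which the FOM returns to its pre-event value gives the recovery time.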
### Resilience analysis process
This section briefly discusses and extends the resilience analysis process, initially introduced for the 2015 quadrennial energy review in [68]. The analysis framework evaluates the power system's capacity to handle potential future disturbances. It helps to prioritize planning decisions, investment endeavors, and response actions based on this assessment. In doing so, this study also highlights the available research and techniques that concentrate on every aspect of the analysis procedure to define resilience goals for utilities, choose suitable metrics that align with those objectives, collect essential data for the metrics, and ultimately determine the optimal approach to making resilience-based decisions. The conceptual framework and analysis process for creating forward-looking resilience metrics, which are based on extensive simulations that measure the impact on grid operations and power delivery, are shown in Figure 4. It provides a clear roadmap that outlines the journey from establishing resilience goals to effectively achieving them, with multiple interconnected components in between.
Figure 4: The resilience analysis process – Setting up resilience goals, measuring resilience, and planning for resilience enhancement.
#### 2.3.1 Resilience goals/objectives
The resilience analysis process starts with defining resilience goals. The goal could be to improve the resistance and capability of a regional electric grid to withstand extreme events, evaluate a utility's investment plan for resilience enhancement, reduce the cost of recovery (monetary and time), ensure power availability to critical services, and/or reduce the overall interruption cost. For example, in [36], resilience is defined as the sum of the availability of individual critical loads during an event. In [69; 70; 71], the focus is on optimal investment decision-making capability to enhance resilience. Other studies have suggested various indicators of resilience, such as the optimal duration for repairing critical components [72], disruptions in the energy supply following extreme events [73], the general delivery of essential resources and uninterrupted power supply to critical customers after a disaster [74], and the recovery of infrastructure and operations [75; 76]. Table 2 summarizes these indicators, and a comprehensive list of examples is available in [77]. These indicators can be instrumental in defining resilience goals and developing relevant metrics to measure them.
#### 2.3.2 Hazard characterization
The grid disturbance event could be initiated by various natural threats. Several efforts have been made to create better ways to evaluate the impacts of these threats. It is worth noting that different threats to energy infrastructure vary in probability and impact. For instance, a wind storm can cause significant damage to overhead structures but may not impact underground systems such as cables and substations. Consequently, it is critical to understand hazard characterization as part of the resilience analysis process. The Hazard US (HAZUS) and CLIMate ADAPtation (CLIMADA) tools can be used effectively to model or simulate extreme weather events, including but not limited to hurricanes, floods, and earthquakes [78, 79, 80]. When evaluating resilience against multiple hazards effectively, it is crucial to consider two key factors: (1) the probability of each potential threat scenario and (2) how the intensity of an event maps onto the resulting consequences at the system level.
#### 2.3.3 Event to impact mapping
Once the onset of the event is identified, the next step is to use the characteristics or attributes of the event to determine the impact at the component level. Different components in the power grid have predefined thresholds (an intrinsic property of each device) for withstanding the disturbances to which they are subjected. These thresholds depend on the type of component and the duration of exposure. Although the thresholds are not directly measurable, historical data can be used to derive the failure probabilities [81]. These component-level failure probabilities can then be mapped to characterize system-level impacts. For example, a Monte Carlo simulation approach can be used to assess the spatiotemporal impacts of grid disturbances using fragility curves [82, 83, 84]. Grid disturbance theory and functional form modeling capture the spatiotemporal effects of any possible scenario [85]. Although functional models can describe grid disturbance, event onset must be carefully analyzed to capture the most meaningful impacts and responses. This is crucial because different disruptive events can have varying effects on a system, requiring specific strategies for resistance and recovery. The work in [86] reviews some widely used impact models for energy systems planning and operations under extreme weather events.
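As an illustration of this mapping, the sketch below draws Monte Carlo samples of pole failures from a lognormal fragility curve, a common functional form for wind fragility; the median wind speed, dispersion, pole count, and forecast gust speed are hypothetical placeholders rather than values from the cited studies.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(seed=1)

def p_fail(v, median_v=45.0, beta=0.25):
    """Hypothetical lognormal fragility: P(pole failure | gust speed v, m/s)."""
    return lognorm.cdf(v, s=beta, scale=median_v)

n_poles, n_trials = 200, 10_000
wind = 40.0  # forecast gust speed over the feeder (m/s), hypothetical

# Each trial samples an independent failure outcome for every pole.
failed = rng.random((n_trials, n_poles)) < p_fail(wind)
counts = failed.sum(axis=1)
print(f"mean poles failed: {counts.mean():.1f}, "
      f"95th percentile: {np.percentile(counts, 95):.0f}")
```

Repeating such trials over an ensemble of hazard scenarios yields the distribution of system-level damage that feeds the consequence calculations in the next step.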
#### 2.3.4 Calculate consequences: performance measures and resilience metrics
In this stage of the resilience analysis process, the consequence category is defined, which forms the basis for the development of the metrics. Resilience metrics are then determined for each category of consequences related to the technical, societal, organizational, or economic impacts of an event. Some commonly used metrics are demand/energy not served, recovery time and cost, load recovery factor, revenue loss, and customers not served [87; 88]. Other indices, such as restoration efficiency, vulnerability index, degradation index, and microgrid resilience index, are also widely studied [89]. It should be noted that the impact of potential threats is dependent on the system's capacity to (1) anticipate, prevent, and mitigate them before being affected by the event, (2) adapt to, absorb, and survive the threat when it occurs, and (3) recover, restore, reconfigure, and repair itself afterwards [35]. Since a single resilience metric cannot capture all possible aspects of the threat response, these metrics are a function of resilience goals, the operating conditions and intrinsic characteristics of the system, and new investment planning initiatives.
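As a minimal illustration, several of these consequence metrics can be computed directly from an outage log, as sketched below; the outage records and the critical-facility flag are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Outage:
    customers: int   # customers interrupted
    load_kw: float   # average load interrupted (kW)
    hours: float     # outage duration (h)
    critical: bool   # serves a critical facility?

# Hypothetical outage log for a single event.
outages = [
    Outage(1200, 900.0, 6.0, False),
    Outage(300, 2500.0, 14.0, True),   # e.g., a hospital feeder
    Outage(4500, 3200.0, 30.0, False),
]

customer_hours = sum(o.customers * o.hours for o in outages)
energy_not_served = sum(o.load_kw * o.hours for o in outages)           # kWh
critical_ens = sum(o.load_kw * o.hours for o in outages if o.critical)  # kWh

print(f"customer-hours of outage: {customer_hours:,.0f}")
print(f"energy not served: {energy_not_served / 1e3:,.1f} MWh")
print(f"critical-customer ENS: {critical_ens / 1e3:,.1f} MWh")
```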
#### 2.3.5 Evaluate resilience improvements
Once the resilience metrics are established, the potential investment for resilience enhancement is studied. Given limited resources and high risk, resilience can be improved by focusing more on low-probability events, investment
prioritization, and consequence management. These studies allow system analysts to decide on investments based on evolving research to identify the most impactful decision in improving resilience while minimizing long-term costs and stranded investment [91]. In recent years, it has been realized that investments should focus on quantitative analysis, the ability to incorporate the uncertainty of grid disturbance, and bottom-up approaches where efforts for resilience enhancement should start from the grid-edge [92].
### Resilience metrics
This section details the desired characteristics of a metric defining distribution system resilience and the categories of resilience metrics for distribution system resilience planning. A metric is essential for resilience planning, as it quantifies the impacts of potential HILP events on the grid and helps evaluate and compare planning alternatives to improve operational resilience [93, 94, 95]. Measuring progress towards a more resilient infrastructure requires developing and deploying metrics that can be used to assess infrastructure planning, operations, and policy changes [96, 54]. The resilience metric should focus on HILP events, consider the likelihood and consequences of threats, and evaluate the system's performance. Furthermore, the metric should consider the uncertainties inherent in response and planning activities while quantifying the consequences of power grid failures [97, 68].
It is incredibly challenging to adopt a unified metric that can capture several contributing factors such as uncertainty, spatiotemporal features of a threat, and intrinsic system properties to deal with possible threats [98, 99]. Current resilience measures can be classified into two main types: a) attribute-based metrics, which assess the attributes of the power system such as adaptiveness, resourcefulness, robustness, SA, and recoverability [35], and b) performance-based metrics, which evaluate the ability of the system to remain energized (commonly referred to as availability [100]), often represented by the conceptual resilience curve [101].
#### Attribute-based metrics
Attribute-based metrics are relatively simple in mathematical formulation, and the required data collection is also easier than performance-based metrics. The fundamental question that attribute-based metrics aim to answer is "What makes the system more/less resilient than other systems?". Attribute-based metrics are used to provide a baseline understanding of the system's current resilience and are driven by the properties that increase the resilience of the concerned system. The properties of the system comprise robustness, resourcefulness, adaptivity, recoverability, and SA. For example, the ratio of underground feeders to overhead feeders, the proportion of distributed resources to critical consumers, the number of advanced metering infrastructures/sensors, path redundancy, and overlapping branches result in increased robustness, resourcefulness, and SA, thus improving the resilience of the system to HILP events [102, 36]. Some of the widely used attribute-based resilience metrics are described below:
1. System robustness: This metric evaluates the ability of the power distribution system to withstand shocks or disturbances without failure. For example, the robustness of the system can be evaluated based on the strength of the system infrastructure, such as the resilience of poles, wires, and transformers to severe weather events [83].
\begin{table}
\begin{tabular}{l l}
\hline \hline
Category of consequence & Resilience metrics \\
\hline
Electric service & Total customer-hours of outage \\
 & Total customer energy demand not served \\
 & Average/percentage of customers experiencing a power outage during a specified period \\
\hline
Recovery & Time and cost of recovery \\
\hline
Monetary & Loss of revenue, cost of damages, repair and resource allocation \\
 & Loss of assets and business interruption cost \\
 & Impact on gross municipal/regional product \\
\hline
Community function & Critical services without power \\
 & Critical customer energy not served \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Some widely used examples of the category of consequences and resilience metrics [90].
2. System flexibility: This metric evaluates the system's ability to adapt to changes or disturbances. For example, the flexibility of the system can be evaluated based on its ability to manage power supply and demand during peak hours, integrate renewable energy sources, and/or extract demand flexibility during scarcity [103].
3. System redundancy: This metric evaluates the system's ability to maintain power supply even when one or more system components fail. For example, redundancy can be evaluated based on the number of backup power sources, such as generators or batteries, available to maintain the power supply during outages [104] or tie switches to utilize the power from the feeder [74].
4. Customer satisfaction: This metric evaluates the ability of the power distribution system to meet customer expectations during and after disruptions. For example, customer satisfaction can be evaluated based on the system's ability to provide timely and accurate information during outages and the quality of customer service provided by the system's operators.
The definition of resilience plays an important role in the evaluation of resilience metrics based on the attributes of the system [68]. When forming the attribute-based resilience metric, the spatiotemporal features of an HILP event on power distribution systems are also taken into account [25, 105]. The uncertainty associated with HILP events is a critical attribute that is often represented using probabilistic measures [106]. These attributes, incorporated into the metric, are valuable for decision-making in planning and policy-making processes [25, 68, 98]. When utilizing attribute-based metrics, it becomes possible to compare different systems, both with and without resilience enhancement strategies. Attribute-based resilience enhancement metrics need to be able to adapt to advances in technology, ensuring their continued relevance and effectiveness.
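One simple way to operationalize such attribute-based comparisons is a normalized, weighted score over an attribute inventory, as sketched below; the feeder names, attribute values, and weights are illustrative assumptions, not data or a scoring rule from the cited works.

```python
# Hypothetical attribute inventory for two feeders.
attributes = {
    "feeder_A": {"underground_ratio": 0.10, "der_per_critical_load": 0.5,
                 "sensors_per_mile": 0.2, "tie_switches": 1},
    "feeder_B": {"underground_ratio": 0.60, "der_per_critical_load": 1.2,
                 "sensors_per_mile": 1.0, "tie_switches": 3},
}
# Illustrative policy weights reflecting robustness, resourcefulness, and SA.
weights = {"underground_ratio": 0.4, "der_per_critical_load": 0.3,
           "sensors_per_mile": 0.2, "tie_switches": 0.1}

# Normalize each attribute by its fleet-wide maximum, then take a weighted sum.
maxima = {k: max(f[k] for f in attributes.values()) for k in weights}
for name, attrs in attributes.items():
    score = sum(w * attrs[k] / maxima[k] for k, w in weights.items())
    print(f"{name}: attribute-based resilience score = {score:.2f}")
```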
#### Performance-based metrics
It is recommended that grid resilience metrics be defined based on system performance. They should (1) be forward-looking, (2) quantify the consequences of disruptions, (3) incorporate uncertainties that can affect the response of the system and planning decisions, and (4) be flexible enough to use data from historical analysis and system models [90]. Such performance-based metrics follow the quantitative approaches to evaluating system resilience described above. The performance of the electric grid during major shocks, such as natural disasters, can be described by outage frequency, the number of customers impacted, outage duration, or a combination of these. The North American Electric Reliability Corporation (NERC) published separate metrics to evaluate system performance against reliability standards [107, 108]. The growing occurrence of extreme events has emphasized the importance of developing metrics to assess the performance of the power system during HILP events. Eventually, entities are expected to appropriately include all events that affect the power system, considering each event's probability and impact on the communities. Some of the widely used performance-based resilience metrics are described below:
1. Energy at risk: Energy at risk is a metric that quantifies the amount of energy that may not be supplied during extreme events. It provides a forward-looking assessment of the potential consequences of disruptions by estimating the amount of energy that can be lost due to outages during extreme events [109].
2. Probabilistic risk assessment: Probabilistic risk assessment is a metric that assesses the likelihood and consequences of disruptions due to various failure scenarios. It reflects inherent uncertainties by considering the probability of various failure scenarios and the potential consequences of these failures [82]. It can use historical analysis and system modeling to quantify the likelihood of different scenarios and the potential consequences of these events.
3. Flexibility margin: This metric measures the ability of the system to respond to changes in demand and supply. It is forward-looking and reflective of inherent uncertainties by assessing the system's ability to respond to unexpected changes in demand and supply, especially during scarcity and emergency conditions [110].
4. Restoration time: Restoration time is a metric that measures the time it takes for the power system to restore power following a disruption [111]. It is forward-looking and quantifies the consequences of disruptions by assessing the time required to restore service to customers. Historical analysis and system modeling can be used to estimate the time required to restore customer service in various scenarios.
A resilience performance curve is widely used to define performance-based metrics [112]. Several studies have adopted a resilience triangle in the past to determine the system performance where only two different states are presented [33]. Lately, it has been realized that the one-dimensional character of the resilience triangle is not very helpful and can only capture the recovery from an event. It is equally important to capture other highly critical resilience dimensions such as "how fast resilience degrades and how long the system remains in the degraded state before the recovery stage" [113, 105]. A conceptual resilience curve has recently been used to assess and define performance-based metrics [114, 115].
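The sketch below shows how these additional dimensions (degradation rate, time spent in the degraded state, recovery rate, and the cumulative performance loss) can be read off a sampled resilience curve; the hourly FOM trajectory and the detected event start are hypothetical.

```python
import numpy as np

# Hypothetical hourly FOM trajectory around an event (cf. Figure 3).
fom = np.ones(48)
fom[6:10] = np.linspace(1.0, 0.35, 4)    # degradation phase
fom[10:20] = 0.35                        # degraded state
fom[20:36] = np.linspace(0.35, 1.0, 16)  # restoration phase

event_start = 6                          # assumed known from event detection
nadir = int(fom.argmin())
degraded_end = nadir + int(np.argmax(fom[nadir:] > fom.min()))
recovered = nadir + int(np.argmax(fom[nadir:] >= fom[0]))

degradation_rate = (fom[event_start] - fom[nadir]) / (nadir - event_start)
time_degraded = degraded_end - nadir                  # hours at minimum FOM
recovery_rate = (fom[recovered] - fom[nadir]) / (recovered - degraded_end)
resilience_loss = float(np.sum(fom[0] - fom))         # area above curve (FOM*h)

print(degradation_rate, time_degraded, recovery_rate, resilience_loss)
```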
In conclusion, performance-based approaches are commonly used in cost-benefit analysis and planning studies to assess the advantages and drawbacks of proposed resilience improvements and investments. While attribute-based and performance-based approaches have distinct definitions and can be used independently depending on utility preferences, combining these approaches allows for a more comprehensive analysis of grid resilience, considering the potential consequences of disturbances. Attribute-based metrics provide a broad characterization of grid resilience, while performance-based approaches assess tailored options for enhancing resilience, integrating economic, social, and regional factors. By combining attribute-based and performance-based metrics, a baseline evaluation of resilience, recovery efforts, planning, and investment activities can be effectively maximized, leading to improved grid resilience [116, 38]. Unlike these methods, some recent works also explore data-driven methods to characterize resilience in power distribution systems [117]. Data-driven approaches are likely to gain popularity, as critical information from electric utilities is rarely publicly available.
## 3 Power distribution system resilience - Tools for planning and enhancement
The resilience enhancement of power distribution systems depends on various tools and strategies. These include accurately forecasting natural events, recognizing critical loads that require uninterrupted power supply, maintaining situational awareness, and implementing proper planning and restoration measures. The combination of these measures and tools contributes to the overall enhancement of system resilience. These measures can be further categorized into operational and infrastructural resilience, as detailed in Table 3. The stages of a resilience curve -- avoid, react, and recover -- are illustrated in Figure 5, indicating where these tools can be applied to enhance resilience [85]. This section details related work on critical measures and tools for resilience planning and enhancement in power distribution systems.
### Event forecast
Extreme weather forecasting helps utility planners make appropriate operational decisions to reduce damage to the power grid. Recently, advances in observation networks such as satellite remote sensing have significantly improved the accuracy of short-term weather forecasting models [154]. Additionally, advances in data analytics, accurate weather modeling, and enhanced computing resources make extreme weather forecasting efficient and reliable [155].
\begin{table}
\begin{tabular}{c c c}
\hline \hline
References & Resilience category & Planning \& enhancement strategy \\
\hline
[118] & Operational resilience & Load prioritizing \\
\hline
[119]–[144] & \multirow{2}{*}{Infrastructural resilience} & Remote units deployment \\
[145, 146, 147] & & Deployment of mobile energy resources \\
\hline
[147] & \multirow{4}{*}{Both operational \& infrastructural resilience} & Smart distribution systems \\
[148, 149, 150] & & Repair scheduling, optimal switching, crew dispatching \\
[151, 152] & & Networked microgrids \\
[153] & & Repair and restoration using DGs, switches, and crews \\
\hline \hline
\end{tabular}
\end{table}
Table 3: State-of-the-art resilience planning and enhancement strategies for power distribution systems.
However, long-horizon weather prediction is still an active area of research and requires further consideration when used for infrastructural planning purposes. Nevertheless, with advancements in short-term predictions, utility planners can take advantage of accurate event forecasting to make appropriate operational decisions. These decisions may include resource scheduling, dispatching crews and other resources for maintenance, and stocking backup resources. Essentially, a weather-grid impact model can be developed to understand and simulate the effects of an extreme weather event on the power grid. The existing research on weather-grid impact models can be categorized into statistical and simulation-based models. These models include detailed modeling of power systems, extreme weather events, damage assessment, and restoration after extreme natural events [156, 157, 158].
### Load prioritization
Hazardous incidents that impact the operation of the distribution system disrupt the supply to critical loads, thus affecting grid resilience. To ensure a high level of resilience for critical infrastructures, federal and state authorities are actively working to identify and provide guidance for enhancing critical infrastructure security [159]. Critical customers such as hospitals, fire departments, water suppliers, and other emergency units are recognized and prioritized for the prompt restoration of power supply within the utility's service area. This underscores the significance of enhancing the resilience of critical infrastructures through the increased availability of supply to essential customers and their critical loads. The objective is threefold: 1) facilitate a rapid response to grid disturbances, 2) reduce the magnitude of harm and challenges experienced by communities, and 3) accelerate the restoration of critical functions [160]. Moreover, due to the significant number of grid outages caused by catastrophic events, it is impossible to prevent threats to all assets. Therefore, prioritizing the critical assets of the grid becomes a clear choice for utilities [77]. Critical load identification allows operators to selectively disconnect low-priority loads and maintain backup resources within their generation capacity while sustaining vital facilities for an extended duration [161, 162]. In such cases, various advanced techniques can be employed to restore the prioritized loads while considering topology and operational constraints, thereby enhancing the overall system resilience [163].
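As a toy illustration of priority-based load pickup within a limited backup capacity, consider the greedy sketch below; the load names, sizes, priority weights, and the 5 MW capacity are all hypothetical, and practical restoration additionally enforces the topology and operational constraints mentioned above.

```python
# Hypothetical prioritized load list for an islanded feeder section.
loads = [
    {"name": "hospital",      "kw": 1800, "priority": 10},
    {"name": "water_pumping", "kw": 1200, "priority": 8},
    {"name": "fire_station",  "kw": 400,  "priority": 8},
    {"name": "shelter",       "kw": 900,  "priority": 6},
    {"name": "residential_1", "kw": 2500, "priority": 2},
]
capacity_kw = 5000  # available backup generation, hypothetical

# Greedy pickup: energize the highest-priority loads that still fit.
served, used = [], 0
for load in sorted(loads, key=lambda l: l["priority"], reverse=True):
    if used + load["kw"] <= capacity_kw:
        served.append(load["name"])
        used += load["kw"]

print(f"served: {served} ({used} kW of {capacity_kw} kW)")
```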
### Situational awareness
It is crucial to have adequate SA of the system conditions for effective and timely decision-making to counteract the impacts of HILP events [164]. Several incidents discussed in Section 1 illustrate the severity of the consequences when system operators are not fully aware of relevant information. Inadequate SA increases the probability that the system enters the cascading blackout phase [119]. In light of these events, several industrial stakeholders have significantly advanced information systems to enhance SA. One key element of this is state estimation, which is critical to enabling continuous
Figure 5: Different stages of resilience planning and enhancement strategies.
and reliable monitoring and control of the distribution systems, particularly in the presence of DER penetration [165]. To date, the observability of the system downstream of the substations is very limited. Hence, only a limited number of utility companies have implemented control room applications, including, but not limited to, state estimation, topology identification, fault management, and cyber attack mitigation [121]. Furthermore, it is unclear how to adapt existing SA technologies from the bulk power system to power distribution due to unknown network models, poor observability, and incorrect measurements [166].
Nevertheless, progress in power distribution systems has brought together various innovative technologies like digital relays, phasor measurement units, intelligent electronic devices, automated feeder switches, voltage regulators, and smart inverters for DER. These advancements pave the way for improved end-to-end awareness and visualization of the network. Furthermore, the regular polling and on-demand retrieval of customer interval demand data through the advanced metering infrastructure contribute to enhancing the accuracy of the online model for the distribution network [123, 124]. The integration of advanced metering infrastructure information in SA tools is supplemented with conventional supervisory control and data acquisition, which provides additional data to improve the system's SA [167, 168]. In the future, most of the electric power grid will have installed communication networks, intelligent monitoring, and distributed sensors along feeders to provide additional data for an improved SA [169]. It is also envisioned that such digital networks will ultimately lead to greater levels of communication between the end-users, the utilities, and with other physical infrastructures [170]. The use of drones and unmanned air vehicles for damage assessments is also gaining popularity [171]. The improved SA and increased automation in a smart grid paradigm will assist the control room operators with real-time decision-making, thus improving the grid resilience [172].
### Resilience planning and restoration
Resilience planning and restoration involve various strategies and actions aimed at improving the ability of a distribution system to withstand and recover from extreme events. These strategies can be broadly categorized into three main areas: long-term resilience investment, short-term pre-event preparation, and post-event restoration.
#### 3.4.1 Long-term resilience investment
Long-term resilience investment involves strategic planning to make the distribution system more resilient to uncertain and extreme events. This includes infrastructure reinforcements or system hardening methods such as installing underground lines, elevating substations, and other upgrades to improve the system's reliability and robustness [173, 174, 175]. However, unlike transmission systems, distribution systems have unique characteristics, such as radial topology, low redundancy, and inability to incorporate DC power flow methods, which require specific considerations in resilience planning [71]. Distribution systems have received less attention compared to transmission systems, with limited literature on resilient distribution system design [176, 177, 178].
Most of the studies in resilience planning and upgrades apply two different types of modeling techniques, namely robust [71] and stochastic modeling [179, 180, 181, 182]. The scenario-based stochastic methods and other network interdiction models facilitate optimal hardening strategies in the distribution system [69, 70, 71, 183]. In other works, DG siting/sizing and automatic switch placement strategies are simultaneously formulated to minimize the overall expected cost [184, 185]. However, with the growing need for resilience enhancement, investment decisions should be based on HILP events rather than on the expected cost over several possible events. Furthermore, resource planning should be carried out to fulfill the need for operational flexibility, and specialized power distribution system models should be integrated with advanced operations [68, 186]. Works have shown that sensing and control technologies can also be deployed in the planning phase to enhance the resilience of the distribution system [187, 188]. Integrating these resources in the planning phase and observing the trade-off between these resources against uncertain or extreme events can provide a better planning portfolio.
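As a toy illustration of the scenario-based reasoning above, the sketch below enumerates hardening plans for a few candidate lines under a budget and selects the plan with the lowest expected load shed across weather scenarios. The line costs, scenario probabilities, and shed values are hypothetical placeholders, not outputs of any cited model, and real formulations are mixed-integer programs over full networks.

```python
from itertools import combinations

# Hypothetical candidate lines with hardening costs in M$ (illustrative values).
harden_cost = {"L1": 4.0, "L2": 3.0, "L3": 5.0, "L4": 2.0}

# Hypothetical scenarios: probability and load shed (MWh) if a line is NOT hardened.
scenarios = [
    {"prob": 0.05, "shed_if_unhardened": {"L1": 120, "L2": 40, "L3": 90, "L4": 10}},
    {"prob": 0.02, "shed_if_unhardened": {"L1": 300, "L2": 10, "L3": 200, "L4": 80}},
    {"prob": 0.93, "shed_if_unhardened": {"L1": 0, "L2": 0, "L3": 0, "L4": 0}},
]

def expected_shed(plan):
    """Expected unserved energy given a set of hardened lines."""
    total = 0.0
    for sc in scenarios:
        total += sc["prob"] * sum(v for line, v in sc["shed_if_unhardened"].items()
                                  if line not in plan)
    return total

budget = 8.0
best = min(
    (frozenset(p) for r in range(len(harden_cost) + 1)
     for p in combinations(harden_cost, r)
     if sum(harden_cost[l] for l in p) <= budget),
    key=expected_shed,
)
print(f"Best plan within budget: {sorted(best)}, expected shed = {expected_shed(best):.1f} MWh")
```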
#### 3.4.2 Short-term pre-event preparation
Short-term pre-event preparation focuses on resource allocation and planning strategies that can be implemented just before an extreme weather event to enhance the resilience of the distribution system. This includes activities such as pre-staging of resources, crew dispatching, and network reconfiguration to minimize the impact of the event and expedite restoration. One approach to short-term pre-event preparation is the placement of resources, such as emergency response generators and crews, in strategic locations before an extreme weather event occurs. This allows for quicker deployment and utilization of resources immediately after the event, reducing the time required for restoration. For example, the U.S. Federal Emergency Management Agency (FEMA) pre-stages emergency response generators before hurricanes or other major events, but the effectiveness of this strategy depends on the accuracy of the event forecast and the optimal placement of resources [189]. Mathematical models, such as mixed-integer linear programming models, can be used to optimize the pre-staging of resources based on various factors, such as forecasted event severity, expected outage duration, and available resources [148, 149, 150].
Short-term preparation also involves proactive network reconfiguration, where the distribution system is strategically reconfigured so that damaged sections can be quickly isolated and power can be restored to unaffected areas once the event occurs. This can be achieved through automated switching actions or remote-controlled switches, allowing for quicker restoration without needing a physical presence on site [147, 190]. Optimization models can be used to determine the optimal switching actions and reconfiguration plans considering system topology, available resources, and outage duration [126, 148]. Furthermore, repair crew deployment and coordination is an essential part of the preparation plan to ensure efficient restoration efforts. Crews must be strategically deployed to affected areas based on event severity, expected restoration time, and available resources. Mathematical models can be used to optimize crew deployment and scheduling to minimize restoration time and maximize the utilization of available resources [153]; a simple scheduling heuristic is sketched after this paragraph. There are other efforts in pre-allocating mobile resources as a short-term preparation process [145, 146, 191]. The main idea is to restore critical loads by forming small MGs with the available mobile resources for a resilient emergency response to natural disasters. Such intentionally islanded MGs will have voltage-supporting grid-forming resources that can energize the islanded loads until the restoration and repair tasks are completed.
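A minimal sketch of the scheduling heuristic referenced above: repairs are ordered by restored load per hour of repair time (a weighted-shortest-processing-time rule), which is only a single-crew stand-in for the mixed-integer formulations cited. All job data are hypothetical.

```python
# Minimal sketch of a crew-scheduling heuristic: repair jobs are ordered by
# restored load per hour of repair time. Job data are hypothetical.

jobs = [
    {"id": "feeder_A", "repair_h": 4.0, "restored_kw": 1200},
    {"id": "feeder_B", "repair_h": 2.0, "restored_kw": 300},
    {"id": "lateral_C", "repair_h": 1.0, "restored_kw": 500},
    {"id": "substation_D", "repair_h": 8.0, "restored_kw": 4000},
]

def schedule_single_crew(jobs):
    """Order jobs by restored kW per repair hour (WSPT rule); return the schedule
    and an energy-not-served proxy (kW-hours of waiting before restoration)."""
    order = sorted(jobs, key=lambda j: j["restored_kw"] / j["repair_h"], reverse=True)
    t, ens = 0.0, 0.0
    for job in order:
        t += job["repair_h"]              # crew finishes this repair at time t
        ens += job["restored_kw"] * t     # this load waited t hours in total
    return [j["id"] for j in order], ens

order, ens = schedule_single_crew(jobs)
print(f"Repair order: {order}; energy-not-served proxy: {ens:.0f} kWh")
```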
#### 3.4.3 Post-event restoration
Post-event restoration involves activities carried out after an extreme weather event to restore the distribution system to normal operation. It is one of the most critical components of a system's resilience, as a quick and effective restoration is the payoff of an efficient planning strategy. Hence, planning and pre-event preparation are vital in ensuring a quick and efficient restoration process. Restoration includes repairing damaged infrastructure, reconfiguring the network, and optimizing allocated resources to expedite restoration [74]. The repair crews must assess the damage, repair or replace damaged components, and restore the system to its normal operating condition following an event to minimize downtime and expedite restoration. Network reconfiguration also plays a crucial role in post-event restoration. The distribution system must be reconfigured to restore power to affected areas and isolate damaged sections. This can be achieved through automated switching actions or remote-controlled switches. Mixed-integer programming models can be leveraged to optimize the repair and restoration process by coordinating crew assignments, resource allocation, and network reconfiguration to minimize restoration time and ensure efficient utilization of available resources [148, 149, 150].
The research on post-event restoration also focuses on resilience enhancement against natural disasters using DERs and MGs, with varying restoration objectives [127, 128]. MGs provide an effective solution for DER management and utilization during system restoration after extreme weather events [192, 193, 194, 195, 196, 197, 198]. Networked MGs consider dispatchable as well as non-dispatchable resources for service restoration in power distribution during long-duration outages [151, 152]. MGs can also be adaptive, in which case the formation of the MG and the load switching sequence are guided by the nature of the extreme event [199]. MGs and DERs can also be used for sequential service restoration [133, 134]. DG dispatch and network switching can be coordinated to generate a feasible restoration sequence. Furthermore, such restoration can also be performed in multiple or hierarchical stages [132, 153]. Service restoration can be achieved via dynamic changes in the boundaries of MGs within a distribution network that includes synchronous-machine DGs [129, 200]. It is vital to maintain the grid frequency throughout the restoration process [201]. For this purpose, the correlation between switching actions and frequency deviations is considered, and a switching sequence is formulated that adheres to the dynamic frequency nadir limit. Some controllers, such as grid-friendly appliance controllers, can avoid the large transients associated with switching operations in low-inertia microgrids [45].
## 4 Interdependence between critical infrastructures and power distribution systems
The inherent interdependencies between power distribution systems and other critical infrastructures contribute to the resilience of the community [202]. Figure 6 shows a high-level overview of the interdependencies of critical infrastructures with the power distribution system. It is crucial to understand these interdependencies for effective disaster response and recovery planning, as disruptions in the power distribution system can have cascading effects on other critical infrastructures, amplifying the overall impact on the community [203]. However, there is little understanding of the complex dynamics, vulnerabilities, and emerging threats associated with these interdependencies. This section investigates and analyzes these infrastructure interdependencies, highlights the contributions that have been made so far in this problem space, and addresses some open challenges.
### Information and communication technologies, and power distribution systems
Interdependent power distribution systems and ICT networks offer opportunities to mitigate vulnerabilities and leverage infrastructural convergence. Key trends in analyzing their interdependencies include increasing data volumes, faster system dynamics, hidden feedback, renewable portfolio standards, variable energy resources penetration, network cybersecurity, co-simulation needs, reliability coordination, and real-time/data-in-motion analytics. Figure 6 illustrates the interdependencies between cyber layers and other physical layers of the power distribution system. Prioritizing cybersecurity is crucial for federal research and industry. Investigating these interdependencies helps to develop assessment tools to specify ICT requirements for advanced grid functionalities and build a strong foundation for new grid management tools. The deployment of advanced sensing and measurement technologies facilitates data collection and understanding while improving the resilience of the system with strategic decisions. The transmission of data streams, such as measurement data, from field devices to the control center for monitoring, analysis, and control purposes faces security risks, including data leaks, hacks, and adversarial intrusion [204, 205]. There are four domains -- cyber, social, energy market, and distribution system networks -- that interact to enable advanced grid functionalities and revolutionize grid modernization. However, such interaction also exposes the network to severe security threats.
Failure or hacking in the cyber layer can have cascading effects on the physical layer, affecting equipment and services. Understanding the relationship between the distribution grid's architecture and control systems is crucial. During extreme weather events, the power grid becomes highly vulnerable. If cyber intruders breach the control and communication system while it is impacted by an extreme event, the power grid can critically collapse [206]. Loss of communication makes damage assessment and asset management nearly impossible, leading to incomplete situational awareness. Limited situational awareness during extreme events hinders decision-making. Additionally, considering cyber layer constraints can enhance the resilience of the power system, as the cyberinfrastructure is interconnected with the physical layer [207]. Advanced distribution management systems and networked microgrid control paradigms should coordinate to address operational conflicts and reconnect microgrids to the distribution networks after events. In addition, the future of distribution management systems will see an increased dependence on distributed resources and distributed architecture, necessitating robust communication capabilities [208].
Figure 6: Interdependence among various critical infrastructures, namely ICT network, transportation network, natural gas network, and physical power grid.
### Transportation and power distribution systems
It is essential to model the critical-infrastructure dependency between power distribution and transportation: if the transportation network is inaccessible when a region is affected by an extreme event, crew dispatch and other mitigation processes are delayed. In Figure 6, the interdependency between the transportation layer and the physical layer (power grid) is represented. The transportation system, and the power grid's operational dependency on it, plays a critical role in resilience enhancement during extreme weather events, especially when preparing for an upcoming storm by allocating mobile resources (mobile energy storage, poles, distribution lines, etc.) and dispatching repair crews for optimal service restoration; this calls for thorough research into understanding and modeling the related interdependencies. During extreme weather events, critical operations, such as optimal resource planning for rapid system restoration and emergency evacuation mechanisms, require proper modeling of the transportation network associated with the power distribution system. To investigate the critical elements that need upgrading or expansion, studying the influence of contingencies on traffic flow and power flow can be a valuable addition to interdependence modeling [209]. Additionally, the optimal selection of emergency stations (including distribution centers, power supply recovery, emergency supplies, medical centers, etc.) requires specific attention during interdependence modeling for disaster management [210, 211, 212]. Such an interdependence assessment can greatly help disaster-impacted areas develop a rapid recovery plan. For example, after Hurricane Maria in Puerto Rico, it took 11 months to restore power to its nominal state. One of the primary reasons for such a lengthy restoration was the lack of proper modeling of the transportation systems, preventing efficient crew dispatch [213].
### Natural gas and power distribution systems
The primary reasons for the interdependence between natural gas pipelines and power distribution systems are attributed to space heating with natural gas in residential, commercial, and industrial buildings. Simultaneous damage to both networks due to a natural disaster can cause severe concern to affected communities. Another example involves rare winter heating days, when gas distribution utilities may see price spikes, leading to reliability challenges. For instance, in the 2004 New England cold snap, the real-time price of electricity rose to ISO New England's bid cap [214]. Day-ahead gas prices in New England's gas system rose to nearly ten times their normal range, along with the electricity price. The interdependence should be modeled to jointly identify the overall patterns of electricity generation and distribution, focusing on the energy requirements of electricity and natural gas. A framework for the interdependence between natural gas and power distribution systems requires assessment of integration and automation in these infrastructures (such as threat/hazard identification and data acquisition), identification of potential impact zones, initial and cascading effects on infrastructural assets from failure events, and identification of the propagation paths of these disruptive events [215]. Furthermore, power distribution systems energize the constituent components of the gas distribution system. The operation of natural gas processing plants and other relevant assets, including electricity-powered compressor stations, depends on the power distribution networks [216]. Failures in the corresponding power distribution network may propagate to the gas distribution network due to the strong interdependencies [217].
### Water and power distribution systems
The manageability and resilience of power and water distribution systems are challenged by their increasing interdependence and inter-connectivity, as widely studied within water-energy nexus activities [218]. While power system research has made considerable progress over the years through dedicated research efforts and active community participation, the tools used in existing studies on water distribution networks are comparatively less advanced [219]. Critical infrastructures such as water and power distribution systems are widely interdependent since they share energy, computing, and communication resources. In heavily loaded power distribution systems, failures on distant power lines can cause severe water supply shortages. The water distribution network comprises several hydraulic components, including pipes, pumps, and tanks, some of which are energized by the power system. A power failure can trigger cascading losses by interrupting the electrical supply to these hydraulic components. Research is being carried out to understand and assess this interdependence using different performance metrics [220]. One such metric is the demand satisfaction ratio, which measures the impact of power failure on the interconnected water distribution network [221]. The security framework of such critical interdependent infrastructures must be modeled using different state-of-the-art techniques, such as game-theoretic methods [222]. The operation of multi-purpose reservoirs in water distribution networks requires interdependent resource allocation between water and power distribution networks; it can be solved as a multi-objective optimization problem [223]. Furthermore, multi-infrastructural interdependence modeling paves the way for a more sophisticated resilience analysis for power distribution systems [202].
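As a concrete reading of the demand satisfaction ratio mentioned above, the sketch below computes it for a toy water network whose pumps lose power when their supplying feeder fails. The pump-to-feeder mapping and demand figures are hypothetical.

```python
# Minimal sketch: demand satisfaction ratio (DSR) of a water network under a
# power outage. Pump-to-feeder mapping and demands are hypothetical.

pumps = [
    {"id": "P1", "feeder": "F1", "served_demand_m3h": 400},
    {"id": "P2", "feeder": "F2", "served_demand_m3h": 250},
    {"id": "P3", "feeder": "F2", "served_demand_m3h": 150},
    {"id": "P4", "feeder": "F3", "served_demand_m3h": 200},
]

def demand_satisfaction_ratio(pumps, failed_feeders):
    """Fraction of total water demand still served when some feeders are down."""
    total = sum(p["served_demand_m3h"] for p in pumps)
    served = sum(p["served_demand_m3h"] for p in pumps
                 if p["feeder"] not in failed_feeders)
    return served / total

print(f"DSR with feeder F2 out: {demand_satisfaction_ratio(pumps, {'F2'}):.2f}")
```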
## 5 Research gaps and opportunities
There is a significant research gap in standardizing resilience quantification, modeling, and planning for critical infrastructures such as power distribution systems. There are several areas where extensive research can improve the resilience quantification and analysis process. The research gaps specific to the analysis and enhancements of power distribution system resilience are summarized in the following sections.
### Proactive decision-making/operational planning
From the utility's perspective, planning for an upcoming HILP event is desirable to enhance resilience proactively. Planning includes mechanisms to reduce the impacts of an upcoming event or resource allocation to assist in faster recovery. For instance, staging the repair crews at appropriate locations, analyzing the availability of supply resources, and deciding on an appropriate restoration scheme before the event can help reduce the event's impact on the system and accelerate recovery [153]. The necessity and effectiveness of proactive decision-making have been extensively discussed for both the bulk grid [224, 225, 226] and distribution systems [227, 228, 229, 230]. In general, proactive outage management can help the system recover and restore faster in the aftermath of an event, thus reducing the overall impact of the HILP event. However, solving the resulting problem requires addressing some crucial modeling and algorithmic challenges. The uncertain and time-varying nature of HILP events must be appropriately modeled in the problem formulation. Robust algorithms are needed to solve the related decision-making problems, which typically involve stochastic, non-linear, and often mixed-integer optimization problems that are computationally expensive. It is also important to model the spatiotemporal characteristics of the available resources (both human and automated) in the proactive decision-making process. For instance, fully utilizing a currently available resource for a specific outage can impact operation in the future due to the inherent stochasticity of the resource and the changing nature of the event. In such a case, an optimal solution will include multi-stage planning to obtain the optimal allocation at a given time step, considering future requirements and uncertainties. This is an even more complex problem to scale computationally for large-scale nonlinear systems such as the power grid. Additional research is needed to efficiently solve the resulting problem considering all the uncertainties related to the event and resources while appropriately modeling the complex operational decision-making problems of large-scale power systems.
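A minimal sketch of the multi-stage flavor of this problem, assuming a single mobile resource that can be committed to the current outage or held back for an uncertain second event. The probabilities and benefit figures are hypothetical, and real formulations are stochastic mixed-integer programs rather than this two-branch expectation.

```python
# Minimal two-stage sketch: commit a mobile generator to the current outage now,
# or hold it for a possible second event. All numbers are hypothetical.

benefit_now = 100.0          # load relief (MWh) if committed to the current outage
p_second_event = 0.3         # probability a second, larger event follows
benefit_second = 260.0       # relief if held back and the second event occurs

expected_commit = benefit_now
expected_hold = p_second_event * benefit_second

decision = "commit now" if expected_commit >= expected_hold else "hold in reserve"
print(f"E[commit]={expected_commit}, E[hold]={expected_hold:.1f} -> {decision}")
```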
### System planning for resilience/long-term planning
Resilience planning is a long-term goal for the efficient and robust operation of electric power distribution systems. Investment plans are required for infrastructure hardening, including vegetation management, smart device installations, pole maintenance, upgrades, etc. Investments are also required to install weather stations in high-risk areas to help anticipate high-impact extreme events. State-level regulators need to reevaluate their efforts in prioritizing investments for resilience enhancement by utility companies and reassess resilience assessment techniques from the perspective of regulatory decisions that might impact state-level grid investments (such as DERs). Although electric utilities are projected to invest approximately $1 trillion in the U.S. electric power grid between 2020 and 2030, these investments must be implemented so that, from both economic and national security perspectives, they promote resilience by design. Utility companies require significant investment to improve security against potential vulnerabilities in distribution systems. These investments can make the distribution grid more resilient to HILP events [231].
However, several uncertainties are associated with the planning process; incorporating them inappropriately can lead to sub-optimal investment decisions. Therefore, the planning problem should appropriately model such uncertainties and the associated risks imposed on the power delivery systems. Additionally, it is important to model those risks in the optimization framework to achieve realistic outcomes from the planning process. Recent work incorporates a convex risk measure, namely the conditional value-at-risk, when solving the distribution system restoration problem [232, 233, 234]. Similarly, other works explore the uncertainties associated with the system in the development of restoration approaches [235, 236, 237]. However, the risk associated with the uncertainties should also be quantified for a more accurate resilience assessment. Hence, while modeling the stochastic nature of events or available resources, it is equally important to model the associated risks [238].
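To make the risk measure concrete, the sketch below computes the empirical conditional value-at-risk of scenario losses, i.e., the average loss in the worst \((1-\alpha)\) tail of equiprobable scenarios. The scenario losses are hypothetical.

```python
import numpy as np

# Hypothetical equiprobable scenario losses (e.g., MWh of unserved energy).
losses = np.array([0, 0, 0, 2, 3, 5, 8, 12, 40, 150], dtype=float)
alpha = 0.9  # confidence level

def empirical_cvar(losses, alpha):
    """Average loss in the worst (1 - alpha) tail of equiprobable scenarios."""
    sorted_losses = np.sort(losses)
    k = int(np.ceil((1 - alpha) * len(sorted_losses)))  # number of tail scenarios
    return sorted_losses[-k:].mean()

print(f"VaR_{alpha}: {np.quantile(losses, alpha):.1f}")
print(f"CVaR_{alpha}: {empirical_cvar(losses, alpha):.1f}")
```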
The long-term planning problem presents significant challenges in terms of modeling, solving, and scalability [239]. Unlike operational planning, long-term planning requires the consideration of multiple scenarios and depends on the number of scenarios and the time horizon being considered. Additionally, long-term planning decisions must account for the full profile of extreme weather events, necessitating a multi-hazard model that encompasses various events occurring independently or simultaneously [240]. Furthermore, it is crucial to have a robust forecast model with minimal errors to justify future planning decisions [241]. However, one major challenge in multi-hazard modeling and forecasting is the need for a large number of scenarios to represent such hazards. Balancing computational efficiency and accuracy is critical, as incorporating sufficient scenarios is necessary to capture the high uncertainty associated with HILP events. The long-term planning model typically includes stages before, during, and after the event, but the complexity grows exponentially with the time horizon, the number and nature of resources, and the size of the power grid under consideration. Uncertainties arise from factors such as future weather conditions, solar irradiance, battery state-of-charge, and operating conditions of DERs. To address these challenges, sophisticated tools for scenario generation and reduction are required to model the problem effectively while retaining essential information. Future research should also focus on scalable algorithms and leveraging high-performance computing resources to enhance computational tractability [242, 243].
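As a toy version of the scenario-reduction step mentioned above, the sketch below greedily selects \(k\) representative scenarios by farthest-point sampling; this is a simplified stand-in for the fast-forward selection algorithms used in practice, and the scenario vectors are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
scenarios = rng.normal(size=(200, 24))  # synthetic: 200 scenarios of a 24-h profile

def greedy_reduce(scenarios, k):
    """Pick k representatives: start from the scenario closest to the mean, then
    repeatedly add the scenario farthest from the current selection."""
    chosen = [int(np.argmin(np.linalg.norm(scenarios - scenarios.mean(0), axis=1)))]
    while len(chosen) < k:
        d = np.min(
            np.linalg.norm(scenarios[:, None, :] - scenarios[chosen][None, :, :], axis=2),
            axis=1,
        )
        chosen.append(int(np.argmax(d)))
    return chosen

print("Representative scenario indices:", greedy_reduce(scenarios, k=5))
```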
### Modeling and forecasting the impact of natural disasters
The recent and rapid changes in weather and the frequent occurrence of natural disasters are alarming, as these events can have long-lasting impacts. Therefore, modeling and forecasting the effects of extreme weather events play a major role in the resilient operation of the power distribution system. Most previous works consider hurricanes as the extreme event when assessing the resilience of the distribution system. However, different types of extreme weather events differ significantly in their impact on power distribution systems. Hence, a generalized impact-modeling framework that considers all the parameters affecting distribution system resilience remains challenging. Additionally, there is a gap in appropriately modeling the impacts of extreme weather events on power distribution systems when designing system hardening solutions. Most impact models are based on the topology of the power system and lack detail about localized geographical information [26]. Weather forecasting is vital for routine operations and for balancing production and demand. It is also essential for warning of extreme events, making it possible to better manage demand and supply, prepare a response, and accelerate recovery times. Utilities lack improved models that downscale global information to the local level. Comprehensive research is required to develop tools and skills to interpret the data and understand how meteorological uncertainty affects current and future operations. Moreover, since the frequency of these events is extremely low, the available data to characterize them are limited. Novel methods are needed to use limited data to model the component- and system-level impacts of extreme weather events. There is also a critical need for accurate predictive event and damage models that use limited data. Weather data, meteorological data, historical outage reports, and other useful data sources must be integrated to improve resilience planning and preparation models for an upcoming event. Furthermore, the study should also have provisions for integrated studies with multiple critical infrastructures [222]. The interdependencies between power distribution and other critical infrastructures, as discussed in Section 4, will be crucial, as there is a lack of a comprehensive impact assessment model for interdependent infrastructures subject to extreme events.
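A common building block for such component-level impact models is a wind fragility curve mapping gust speed to failure probability. The sketch below uses a lognormal form with hypothetical median-capacity and dispersion parameters; these values are illustrative, not fitted to any data set.

```python
import math

def pole_failure_probability(wind_ms, median_ms=45.0, beta=0.25):
    """Lognormal fragility curve: P(fail | wind) = Phi(ln(v / median) / beta).
    Median capacity and dispersion are hypothetical illustration values."""
    z = math.log(wind_ms / median_ms) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for v in (30, 40, 45, 55, 65):
    print(f"wind {v} m/s -> P(fail) = {pole_failure_probability(v):.2f}")
```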
### Smart grid operation for enhanced resilience
Improved SA and controllability at the distribution level provide an additional venue to enhance resilience through smart grid operations. In recent years, advanced metering infrastructure and intelligent electronic devices such as sensors and telemetered controllers have been deployed in distribution systems, providing utilities with access to large and increasing amounts of data [244, 245]. Furthermore, phasor measurement units, which provide real-time synchrophasor data, are expected to be widely deployed in distribution grids [246]. These innovative grid-edge technologies enhance SA compared to traditional approaches, which are labor-intensive and time-consuming. However, it is not practical to successfully deploy such devices across the network due to the associated cost and geographical difficulties [247]. Therefore, a cost-benefit analysis is needed for the design and deployment of improved state estimation technologies. This can enhance system monitoring performance and facilitate model validation with post-event analysis, allowing for accurate decision-making [248].
Another aspect of resilient smart grid operation is enabling non-traditional ways of operating grids using microgrids and other DERs such as PV, storage, and flexible loads. Recently, DERs have been extensively examined for resilience enhancement because they support critical loads during extreme events independent of the bulk power system in an islanded mode. These DERs can also be effectively engaged with the help of transactive energy systems. Transactive energy systems can address operational challenges during abnormal conditions when new mechanisms are designed for contingencies [249]. Although conventional approaches for resilience enhancement are usually prepared from the system operator's standpoint, the transactive energy systems mechanism, if appropriately designed, can be utilized to incentivize customers to engage in activities that shift load to where it is needed the most and reduce peak loads, thus relieving stress on the grid during scarcity [250, 251, 252].
## 6 Conclusions
Resilience planning and assessment for power distribution systems have emerged as promising but challenging research topics within the community. The intricacies associated with these systems require a thorough investigation of the resilience assessment, quantification, and analysis processes. This work highlights the significance of examining the resilience analysis process of power distribution networks during extreme weather events. As this review developed through the discussion and analysis of the resilience assessment process and the challenges associated with different domains, it became increasingly evident that a robust and resilient power grid is not merely a desirable characteristic but an absolute requirement. It should be noted that certain aspects of power system resilience, such as transmission system resilience, were beyond the scope of this work. Future research should explore these areas to further advance the understanding and implementation of holistic power system resilience strategies. To summarize, the following key aspects are addressed in this review:
* A comprehensive review is presented, highlighting the state-of-the-art resilience assessment processes and their limitations within the context of power distribution systems. This includes an overview of resilience assessment and quantification methods, an introduction to resilience analysis frameworks, and an examination of existing resilience metrics.
* The aspects of resilience planning in power distribution systems are thoroughly discussed. This encompasses event forecasting, load prioritization, SA, and resource planning and allocation. The interdependence of critical infrastructure systems is also analyzed, highlighting the interconnectedness between power distribution systems, ICT networks, transportation systems, natural gas distribution systems, and water distribution networks.
* Finally, critical research gaps are identified, and potential opportunities are proposed for future contributions in this field. By highlighting these gaps, it aims to guide current and future research to address the pressing challenges faced by the power systems community.
In conclusion, this research provides a comprehensive overview of resilience planning and assessment in power distribution systems, offering valuable insights and paving the way for further advancements in this critical area. The outcomes of this work have significant implications for multiple stakeholders in the energy sector. Industry stakeholders can leverage the resilience analysis framework to enhance their resilience planning, infrastructure investments, and operational strategies to mitigate the impacts of extreme events, as existing utility planning measures do not incorporate risks [253]. Policymakers can utilize the insights to shape regulations, standards, and policies that promote the resilience of power distribution systems, aligning with global energy, environment, and sustainability goals. Furthermore, the research underscores the urgency of coordinated action at the international level to address the increasing frequency and severity of extreme events, emphasizing the need for collaborative efforts to build resilient energy systems that contribute to a sustainable and climate-resilient future.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgement
This work is supported by the U.S. National Science Foundation (NSF) under grant ECCS-1944142.
|
2310.12473 | Relativistic calculation of non-dipole effects in high harmonic
generation | We present results of relativistic calculations of even order harmonic
generation from various atomic targets. The even order harmonics appear due to
the relativistic non-dipole effects. We take these relativistic effects into
account by using an approach based on the solution of the time-dependent Dirac
equation. The spectra of the non-dipole even harmonics look qualitatively
similar to the spectra of the dipole harmonics obeying the same classical
cutoff rule. The temporal dynamics of the formation of the non-dipole harmonics
is, however, distinctly different from the process of dipole harmonics
formation. Even order harmonics emission is strongly suppressed at the
beginning of the laser pulse, and the emission times of the non-dipole
harmonics are shifted with respect to the bursts of the dipole emission. These
features are partly explained by a simple modification of the classical
three-step model which takes into account selection rules governing the
emission of harmonic photons. | I. A. Ivanov, Kyung Taec Kim | 2023-10-19T05:13:44Z | http://arxiv.org/abs/2310.12473v1 | # Relativistic calculation of non-dipole effects in high harmonic generation
###### Abstract
We present results of relativistic calculations of even order harmonic generation from various atomic targets. The even order harmonics appear due to the relativistic non-dipole effects. We take these relativistic effects into account by using an approach based on the solution of the time-dependent Dirac equation. The spectra of the non-dipole even harmonics look qualitatively similar to the spectra of the dipole harmonics obeying the same classical cutoff rule. The temporal dynamics of the formation of the non-dipole harmonics is, however, distinctly different from the process of dipole harmonics formation. Even order harmonics emission is strongly suppressed at the beginning of the laser pulse, and the emission times of the non-dipole harmonics are shifted with respect to the bursts of the dipole emission. These features are partly explained by a simple modification of the classical three-step model which takes into account selection rules governing the emission of harmonic photons.
## I Introduction
One can expect relativistic effects to play an important role in the dynamics of the processes of atomic or molecular interactions with strong laser pulses for laser intensities over \(10^{18}\) W/cm\({}^{2}\) [1], when, with increasing ponderomotive energy, the electron velocity can approach the speed of light in vacuum. It has been realized since the pioneering paper by Reiss [2], however, that relativistic effects may reveal themselves even for moderately intense (\(10^{13}\)-\(10^{14}\) W/cm\({}^{2}\)) low frequency infrared (IR) laser fields. For instance, even for the IR laser fields of intensity of the order of \(10^{13}\) W/cm\({}^{2}\), the relativistic effects are visible in the photo-electron spectra [3; 4; 5; 6; 7; 8; 9] in the tunneling regime of ionization, characterized by the values \(\gamma\lesssim 1\), where \(\gamma=\omega\sqrt{2I_{p}}/E_{0}\) is the Keldysh parameter [10], and \(\omega\), \(E_{0}\) and \(I_{p}\) are the field frequency, field strength and ionization potential of the target system expressed in atomic units. These relativistic non-dipole effects are due to the influence of the magnetic field component of the laser pulse, which induces a non-negligible momentum transfer to the photoelectrons [11; 12]. Alternatively, if we prefer the photon picture of light, one might say that an IR photon carries small momentum, but a large number of the photons participating in the process of the tunneling ionization [13] deliver non-negligible momentum to the ionized electron [14; 15].
The momentum delivered by the photons to the photo-electron was measured experimentally under the typical parameters of the tunneling ionization regime [16]. This momentum manifests itself, on average, as a shift of the photo-electron momentum distributions (PMD) in the pulse propagation direction. More detailed picture, which emerges as a result of the complex interplay of the magnetic and Coulomb forces, includes the so-called direct electrons which never recollide with the parent ion and are driven in the direction of the laser photon momentum, and the slow electrons which experience recollisions and may acquire momentum opposite to the photon momentum [5].
Theoretical study of these effects clearly necessitates methods which go beyond the commonly used non-relativistic dipole approximation. A number of theoretical procedures allowing to consider the relativistic non-dipole effects have been described in the literature, including the relativistic strong-field approximation [8; 9; 17; 18], time-dependent Schrodinger equation (TDSE) with non-dipole corrections [6; 14; 15; 19], an approach based on the non-dipole strong-field-approximation Hamiltonian [20], and the time-dependent Dirac equation (TDDE) [21; 22; 23; 24].
The non-dipole effects manifest themselves as well in other processes occurring when atoms or molecules interact with laser fields. The process which will interest us in the present work is the process of High Harmonic Generation (HHG). The non-dipole effects are known to produce several modifications in the HHG spectra. It was found [25] that the non-dipole interactions lead to a decrease of harmonic intensity and a shift of the odd order harmonics in the spectra. A detailed investigation of the effect of the pulse magnetic field on harmonic spectra was reported in [26; 27; 28]. It was found [26] that the non-dipole magnetic field effects result in the emission of photons polarized along the propagation direction which, for the laser pulse wavelength of 800 nm and intensity of the order of \(5\times 10^{15}\) W/cm\({}^{2}\), is several orders of magnitude weaker than the photon emission polarized parallel to the driving pulse polarization direction. For stronger pulses with intensities of the order of \(10^{17}\) W/cm\({}^{2}\), the magnetic field effects start playing a crucial role [28]. Electron drift in the laser propagation direction due to the magnetic-field component of the laser pulse prevents recollisions, and hence, as one could expect on the basis of the picture provided by the celebrated three-step model of HHG [29; 30], leads to a decrease of the harmonic emission.
Perhaps one of the most striking manifestations of the non-dipole effects is appearance of even order harmonics in the HHG spectra [31; 32; 33], presenting an example of a relatively small perturbation producing not only relatively minor quantitative modifications of the spectra, but introducing a qualitative change: harmonics with frequencies forbidden in the dipole approximation. The appearance of the even order harmonics can be understood as a result of the break-up of the well-known symmetry which the electron trajectories responsible for the emission of the harmonic photons exhibit in the dipole approximation [29]. Magnetic field effects break this symmetry, and thus make possible generation of even order harmonics. These harmonics were studied theoretically in [32], using perturbative treatment of the non-dipole effects.
In the present paper we report a systematic theoretical study of the non-dipole effects, in particular the generation of even order harmonics, from various atomic targets. We use the TDDE as our main calculational tool, building on the previously developed procedure for the numerical solution of the time-dependent Dirac equation [23; 24]. The approach based on the TDDE provides a complete non-perturbative description of the non-dipole, as well as other relativistic, effects.
Atomic units with \(\hbar=1\), \(e=1\), \(m=1\), and \(c\approx 137.036\) (here \(e\) and \(m\) are the charge and mass of the electron, and \(c\) is the speed of light) are used throughout the paper.
## II Theory
### Numerical solution to the time-dependent Dirac equation
We solve the TDDE:
\[i\frac{\partial\Psi(r,t)}{\partial t}=\hat{H}\Psi(r,t) \tag{1}\]
following the procedure we described in [23; 24], which we briefly recapitulate below for the reader's convenience. In Eq. (1) \(\Psi(r,t)\) is a four-component bispinor and the Hamiltonian operator has the form:
\[\hat{H}=\hat{H}_{\rm atom}+\hat{H}_{\rm int}\, \tag{2}\]
with:
\[\hat{H}_{\rm atom}=c\mathbf{\alpha}\cdot\hat{\mathbf{p}}+c^{2}(\beta-I)+I\ V(r)\, \tag{3}\]
and
\[\hat{H}_{\rm int}=c\mathbf{\alpha}\cdot\hat{\mathbf{A}}\, \tag{4}\]
In Eq. (3):
\(\mathbf{\alpha}=\left(\begin{array}{cc}\mathbf{0}&\mathbf{\sigma}\\ \mathbf{\sigma}&\mathbf{0}\end{array}\right)\), \(\beta=\left(\begin{array}{cc}\mathbf{I}&\mathbf{0}\\ \mathbf{0}&-\mathbf{I}\end{array}\right)\), \(I=\left(\begin{array}{cc}\mathbf{I}&\mathbf{0}\\ \mathbf{0}&\mathbf{I}\end{array}\right)\), \(\mathbf{\sigma}\) are Pauli matrices, \(\mathbf{0}\) and \(\mathbf{I}\) are \(2\times 2\) null and identity matrices, \(V(r)\) is the atomic potential, and \(c=137.036\) is the speed of light. We subtracted from the field-free atomic Hamiltonian (3) the constant term \(Ic^{2}\) corresponding to the rest mass energy of the electron.
We use a laser pulse linearly polarized in the \(z\)-direction and propagating in the \(x\)-direction. The vector potential of the pulse is defined in terms of the pulse electric field:
\[\mathbf{A}(x,t)=-\hat{\mathbf{e}}_{z}\int\limits_{0}^{u}E(\tau)\ d\tau\, \tag{5}\]
where \(u=t-x/c\). At any given point in space the pulse has a finite duration \(T_{1}\), so that \(E(\tau)\) in Eq. (5) is non-zero only for \(0<\tau<T_{1}\). As targets, we consider below a model atom with a short range (SR) Yukawa-type potential \(V(r)=-1.903e^{-r}/r\), the hydrogen atom, and the helium atom described by means of an effective potential [34]. The target atom is initially in the ground \(s\)-state \(|\phi_{0}\rangle\) with the ionization potential (IP) of 0.5 a.u. for the hydrogen and Yukawa atoms and an IP of 0.902 a.u. for the He atom.
The solution to Eq. (1) is expanded as a series in the basis bispinors:
\[\Psi(\textbf{r},t)=\sum_{j,\;l=j\pm 1/2}\ \sum_{M=-j}^{j}\Psi_{jlM}(\textbf{r},t), \tag{6}\]
where:
\[\Psi_{jlM}(\textbf{r},t)=\left(\begin{array}{c}g_{jlM}(r,t)\Omega_{jlM}( \textbf{n})\\ f_{jlM}(r,t)\Omega_{jl^{\prime}M}(\textbf{n})\end{array}\right), \tag{7}\]
and the two-component spherical spinors are defined as \(\Omega_{jlM}(\textbf{n})=\left(\begin{array}{c}C^{jM}_{l\,M-\frac{1}{2}\,\frac{1}{2}\,\frac{1}{2}}\,Y_{l,M-\frac{1}{2}}(\textbf{n})\\ C^{jM}_{l\,M+\frac{1}{2}\,\frac{1}{2}\,-\frac{1}{2}}\,Y_{l,M+\frac{1}{2}}(\textbf{n})\end{array}\right)\) (here \(C^{jM}_{l\,m\,\frac{1}{2}\,\mu}\) are the Clebsch-Gordan coefficients, \(Y_{lm}(\textbf{n})\) are spherical harmonics, and \(\textbf{n}=\textbf{r}/r\)). The parameters \(l\) and \(l^{\prime}\) in Eq. (7) must satisfy the relation \(l+l^{\prime}=2j\).
To take into account the non-dipole effects due to the spatial dependence of the laser fields, vector potential (5) is expanded in a series of spherical harmonics at every time-step of the integration procedure. Substituting expansion (6) and expansion for the vector potential in the TDDE (1), and using well-known properties of spherical spinors [35; 36], one obtains a system of coupled differential equations for the radial functions \(g_{jlM}(r,t)\) and \(f_{jlM}(r,t)\) in Eq. (7). This system has been solved using a relativistic generalization of the well-known matrix iteration method (MIM) [37], which we described in detail in [23].
Appropriate choice of the propagation technique is essential, as the Dirac equation, as is well known, possesses some properties which are absent in the case of the non-relativistic wave equation. These properties are due to the presence of the continuum of negative energy states in the Dirac Hamiltonian, which makes the Dirac Hamiltonian unbounded from below. One problem which this fact entails is the well-known problem of the collapse to the negative-energy continuum [38], which may manifest itself when basis set methods are used to construct approximations to the bound states of the Dirac Hamiltonian [38]. We avoid this problem, since we do not rely on basis set methods. The initial state of the system is prepared in our calculation by solving numerically the eigenvalue equation for the field-free Dirac Hamiltonian employing the shooting method. A related problem is the so-called Zitterbewegung problem [39]. The presence of a superposition of states with positive and negative energies implies that a solution to the TDDE should exhibit very fast oscillations with characteristic frequencies of the order of \(c^{2}\). Such oscillations are indeed present, and we can reproduce them in the framework of our numerical procedure by using a sufficiently small integration time-step \(\Delta\) [23]. We had to use a time-step \(\Delta\) of the order of \(10^{-6}\) a.u. in [23] to reproduce these oscillations. Use of such small values for \(\Delta\), were it imperative, would of course make any practical calculations impossible. Fortunately, one can bypass this problem by using an appropriate time-propagation technique. We discussed this issue in greater detail in [23; 40]. For the reader's convenience we present the core of the argument below. From the purely numerical point of view, the presence of fast oscillating terms in a system of ordinary differential equations (ODE) gives us an example of a numerically stiff system of ODE, i.e. a system in which vastly different time-scales are present. To solve such a system of ODE we must use a stable integration method [41], which ensures that while the numerical solution does not reproduce the very fast oscillations, it describes accurately the overall behavior of the true solution. The integration procedure that we use provides such stability. We can illustrate this point using a simple example of a stiff system of two ODE:
\[i\dot{\mathbf{y}}=\mathbf{A}\cdot\mathbf{y}, \tag{8}\]
with Hermitian matrix \(\mathbf{A}=\mathrm{diag}(\lambda_{1}(t),\lambda_{2}(t))\). To mimic the problem at hand, let us assume that \(\lambda_{1}\) is of order 1, while \(\lambda_{2}\) has a large negative value on the interval of time that we consider. The short-time propagator in the MIM method is the unitary Crank-Nicolson (CN) propagator [42], which relates the solution vectors \(\mathbf{y}_{n+1}=\mathbf{y}(t_{n+1})\) and \(\mathbf{y}_{n}=\mathbf{y}(t_{n})\) at times \(t_{n}\) and \(t_{n+1}=t_{n}+\Delta\) as follows:
\[\mathbf{y}_{n+1}=\frac{1-\frac{i\Delta}{2}\mathbf{A}(t_{n+1/2})}{1+\frac{i\Delta}{2} \mathbf{A}(t_{n+1/2})}\mathbf{y}_{n}\, \tag{9}\]
where \(t_{n+1/2}=t_{n}+\Delta/2\). One can see from Eq. (9) that if at the \(n\)-th step of the propagation the second component of the vector \(\mathbf{y}\) acquires a numerical error \(\delta y_{n}^{(2)}\), the unitarity of the CN propagation matrix in Eq. (9) makes this error remain bounded at all later steps \(m>n\).
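A minimal numerical sketch of this stability argument: the Crank-Nicolson step below is applied to the diagonal system (8) with a stiff \(\lambda_{2}\), a small error is injected into the second component, and its magnitude remains bounded. The \(\lambda\) values and step size are arbitrary illustration choices, not the parameters of the actual TDDE calculation.

```python
import numpy as np

# Crank-Nicolson propagation of i y' = A y with A = diag(lambda1, lambda2).
# lambda2 is large and negative (stiff); values below are illustrative only.
lam = np.array([1.0, -2.0e4])
dt = 0.05

def cn_step(y, lam, dt):
    """One unitary Crank-Nicolson step for a diagonal Hermitian A."""
    return (1 - 0.5j * dt * lam) / (1 + 0.5j * dt * lam) * y

y = np.array([1.0 + 0j, 0.0 + 0j])
y[1] += 1e-8  # inject a small numerical error into the stiff component

errors = []
for n in range(1000):
    y = cn_step(y, lam, dt)
    errors.append(abs(y[1]))

# Unitarity keeps |y2| at the injected 1e-8 level; it is never amplified.
print(f"max |error component| over 1000 steps: {max(errors):.2e}")
```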
Spatial variables in the coupled differential equations for the radial functions \(g_{jlM}(r,t)\) and \(f_{jlM}(r,t)\) were discretized on a grid with the step size \(\delta r=0.05\) a.u., the radial variable was restricted to an interval \((0,R_{\text{max}})\), with \(R_{\text{max}}=400\) a.u., and angular momenta \(j\) up to 70 were included in the expansion (6) in the calculations below. The propagation time-step \(\Delta\) was 0.05 a.u. Before proceeding to the description of the results of this calculation, it is instructive, however, to discuss an alternative treatment of the non-dipole effects based on the leading order perturbation theory (LOPT) expansion, as it provides a more transparent physical picture of the non-dipole effects than the complete Dirac equation. LOPT calculation described below was also used as an accuracy test for our solution to the TDDE.
### LOPT treatment of the non-dipole effects.
We are interested in a LOPT solution to the TDDE considering the non-dipole effects as relativistic corrections.
The leading order relativistic corrections describing the non-dipole effects in atom-field interaction can be obtained by expanding the minimal coupling atom-field interaction Hamiltonian [43; 5; 44] in the velocity gauge:
\[\hat{H}_{\text{int}}^{\text{min}}(t)=\hat{\mathbf{p}}\cdot\mathbf{A}(\mathbf{r},t)+\frac{ \hat{\mathbf{A}}^{2}(\mathbf{r},t)}{2} \tag{10}\]
in powers of \(c^{-1}\)[6]:
\[\hat{H}_{\text{min}}(t)=\hat{p}_{z}A(t)+\frac{\hat{v}_{z}xE(t)}{c}+\frac{A^{2} (t)}{2}+O(c^{-2})\, \tag{11}\]
where \(E(t)=-\frac{\partial A(t)}{\partial t}\) is the electric field of the pulse, and the velocity operator \(\hat{\mathbf{v}}=\hat{\mathbf{p}}+\mathbf{A}(t)\) has been introduced. The last term on the r.h.s. of Eq. (11) is a function of time only and can be removed by a unitary transformation of the wave-function.
Including spin effects in the interaction Hamiltonian is not necessary if we are interested in the effects of the leading order in powers of \(c^{-1}\) [5; 25]. The fact that the spin degrees of freedom can be neglected in the leading order of the \(c^{-1}\) expansion can be understood using the semi-classical picture of the spin effects, in which the additional force acting on the electron due to the presence of the spin degrees of freedom is \(\mathbf{F}=-\nabla U_{m}\), where \(U_{m}=-\mathbf{\mu}\cdot\mathbf{H}\) is the energy of the spin-magnetic field interaction. Here \(\mathbf{H}\) is the magnetic field and \(\mathbf{\mu}\) is the electron's magnetic moment, related to the expectation value of the electron's spin by \(\mathbf{\mu}=-2\mathbf{S}/c\). The spatial gradient of \(\mathbf{H}\) introduces an additional factor of \(c^{-1}\), making the contribution of the force \(\mathbf{F}\) an effect of higher order in \(c^{-1}\). As for the relativistic corrections to the field-free atomic Hamiltonian, the so-called Breit-Pauli Hamiltonian [44], it adds terms of the order of \(c^{-2}\) to the non-relativistic atomic Hamiltonian. We do not have, therefore, to include these corrections in the LOPT treatment. To the leading order in powers of the \(c^{-1}\)-expansion, the dynamics of the system can thus be described by the time-dependent Schrodinger equation (TDSE):
\[i\frac{\partial\Psi(\mathbf{r},t)}{\partial t}=\left(\hat{H}_{\rm atom}+\hat{H}_{ \rm d}(t)+\hat{H}_{\rm nd}(t)\right)\Psi(\mathbf{r},t)\, \tag{12}\]
where
\[\hat{H}_{\rm atom}=\frac{\mathbf{\hat{p}}^{2}}{2}+V(r) \tag{13}\]
is atomic field-free Hamiltonian,
\[\hat{H}_{\rm d}(t)=\hat{p}_{z}A(t) \tag{14}\]
is the dipole part of the atom-field interaction and
\[\hat{H}_{\rm nd}(t)=\frac{\hat{v}_{z}xE(t)}{c} \tag{15}\]
is the non-dipole part of the atom-field interaction containing the effects of the order of \(c^{-1}\).
It is easy to check that the LOPT solution to the equation (12), with the non-dipole term (15) considered as a perturbation, can be written as:
\[\Psi^{\rm LOPT}(\mathbf{r},t)=\Psi_{\rm d}(\mathbf{r},t)+\Psi^{(1)}_{\rm nd}(\mathbf{r},t )\, \tag{16}\]
where the LOPT non-dipole correction is given by the expression:
\[\Psi^{(1)}_{\rm nd}(\mathbf{r},t)=-i\int\limits_{0}^{t}\hat{U}_{\rm d}(t,\tau)\hat {H}_{\rm nd}(\tau)\Psi_{\rm d}(\mathbf{r},\tau)\ d\tau. \tag{17}\]
As can be seen from Eq. (15) for the operator \(\hat{H}_{\rm nd}\), this correction is of the order of \(c^{-1}\). In Eq. (16) and Eq. (17), \(\Psi_{\rm d}(\mathbf{r},t)\) is the zero-order solution to the non-relativistic TDSE taking into account only the dipole part of the atom-field interaction, and \(\hat{U}_{\rm d}(t,\tau)\) is the evolution operator describing evolution of the system driven by the non-relativistic dipole Hamiltonian. \(\hat{U}_{\rm d}(t,\tau)\) satisfies the operator equation:
\[i\frac{\partial\hat{U}_{\rm d}(t,\tau)}{\partial t}=\left(\hat{H}_{\rm atom}+ \hat{H}_{\rm d}(t)\right)\hat{U}_{\rm d}(t,\tau)\, \tag{18}\]
and the initial condition \(\hat{U}_{\rm d}(\tau,\tau)=\hat{I}\). In practice, we need not solve the operator equation (18). All we have to do to compute the expression under the integral on the r.h.s. of Eq. (17) for given \(\tau\) and \(t\) is to first propagate the initial state wave-function on the interval \((0,\tau)\) using the non-relativistic TDSE with the Hamiltonian (14), obtaining thus a state vector \(\Psi_{\rm d}(\tau)\). We then act on this vector with the operator \(\hat{H}_{\rm nd}(\tau)\) and propagate it further in time until the moment \(t\). The non-relativistic TDSE was solved using the well-tested numerical procedure described in [45].
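The recipe just described can be condensed into a few lines of pseudo-numerics. The sketch below mimics it with small random Hermitian matrices standing in for the discretized dipole Hamiltonian, non-dipole coupling, and velocity operator, and assumes a time-independent dipole propagator for brevity; it illustrates only the structure of Eq. (17) and Eq. (23), not actual atomic physics.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim, nt, dt = 8, 200, 0.05

def herm(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

H_d = herm(dim)      # stand-in for H_atom + H_dipole (taken time-independent here)
H_nd = herm(dim)     # stand-in for the non-dipole coupling operator
v_x = herm(dim)      # stand-in for the velocity operator component

U = expm(-1j * H_d * dt)            # short-time dipole propagator
psi0 = np.zeros(dim, complex); psi0[0] = 1.0

# Zero-order dipole evolution psi_d(t_n) stored for all time steps.
psi_d = [psi0]
for n in range(nt):
    psi_d.append(U @ psi_d[-1])

def vx_lopt(n):
    """LOPT v_x(t_n): build the correction of Eq. (17) as a Riemann sum, where
    each H_nd psi_d(tau) is carried forward to t_n by repeated dipole steps."""
    acc = np.zeros(dim, complex)
    for m in range(n):
        chi = H_nd @ psi_d[m]
        for _ in range(n - m):          # U_d(t, tau) applied step by step
            chi = U @ chi
        acc += chi * dt
    corr = -1j * acc                    # Psi_nd^(1)(t_n) per Eq. (17)
    return 2 * np.real(np.vdot(psi_d[n], v_x @ corr))

print("v_x(t) at step 50 (toy model):", vx_lopt(50))
```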
### Calculation of electron velocity and HHG spectra
Once the solution to the TDDE (1) is obtained, expectations value of the electron velocity can be obtained as [46]:
\[\mathbf{v}(t)=c\langle\Psi(t)|\mathbf{\alpha}|\Psi(t)\rangle. \tag{19}\]
Harmonic spectra can then be calculated using the usual semi-classical approach, in which the spectral intensity of the harmonic emission can be expressed in terms of the Fourier transform of electron's velocity:
\[S_{a}(\Omega)\propto\left|\int\limits_{0}^{T_{1}}v_{a}(t)W(t)e^{i\Omega t}\ dt \right|^{2}. \tag{20}\]
where \(v_{a}(t)\) is either the \(x\)- or \(z\)-component of the electron velocity for the non-dipole and dipole harmonic intensities \(S_{x}(\Omega)\) and \(S_{z}(\Omega)\), respectively. In the velocity form for the harmonic intensity which we use here, we do not need to introduce additional powers of the harmonic frequency, which would be present had we used the length or acceleration forms [47]. The factor \(W(t)\) in Eq. (20) is the window function [48], for which we employ the Hann form: \(W(t)=\sin^{2}\left(\frac{\pi t}{T_{1}}\right)\).
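As a quick illustration of Eq. (20), the sketch below evaluates the windowed Fourier integral for a synthetic velocity trace containing a strong 3rd and a weak 5th harmonic component. The trace and the four-cycle duration are artificial choices that merely show how the Hann window and the discretized integral are applied.

```python
import numpy as np

omega = 0.057                      # driving frequency (a.u.), as in the text
T1 = 4 * 2 * np.pi / omega         # synthetic 4-cycle pulse duration (assumed)
t = np.linspace(0.0, T1, 8192)
dt = t[1] - t[0]

# Synthetic velocity: a strong 3rd and a weak 5th harmonic component.
v = np.cos(3 * omega * t) + 0.05 * np.cos(5 * omega * t)

W = np.sin(np.pi * t / T1) ** 2    # Hann window of Eq. (20)

def spectrum(Omega):
    """Discretized |integral of v(t) W(t) exp(i Omega t) dt|^2 of Eq. (20)."""
    return np.abs(np.sum(v * W * np.exp(1j * Omega * t)) * dt) ** 2

for N in (1, 3, 5, 7):
    print(f"S({N}w) = {spectrum(N * omega):.3e}")
```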
The most noticeable effects which the relativistic non-dipole corrections produce are the appearance of harmonic photons polarized in the laser propagation direction [26; 27; 28] and the appearance of even order harmonics in the HHG spectra [33; 49]. The LOPT picture allows these features to be explained transparently. Substituting the expression Eq. (16) for the LOPT wave-function into the matrix element:
\[\langle\Psi^{\rm LOPT}(t)|\hat{\mathbf{v}}|\Psi^{\rm LOPT}(t)\rangle\approx\hat{\mathbf{x}}\,v_{x}(t)+\hat{\mathbf{y}}\,v_{y}(t)+\hat{\mathbf{z}}\,v_{z}(t)\, \tag{21}\]
defining the leading order contributions to the expectation value of electron velocity, one obtains:
\[v_{z}(t)=\langle\Psi_{\rm d}(t)|\hat{v}_{z}|\Psi_{\rm d}(t)\rangle. \tag{22}\]
For the geometry we use, the evolution operator \(\hat{U}_{\rm d}(t,\tau)\) commutes with \(\hat{l}_{z}\), the \(z\)-component of the angular momentum; i.e., \(l_{z}\) is a conserved quantity for the quantum evolution driven by the dipole Hamiltonian (13) and (14). \(\hat{l}_{z}\), therefore, has the definite value \(l_{z}=0\) in the state described by the wave-function \(\Psi_{\rm d}(t)\), and the matrix element \(\langle\Psi_{\rm d}(t)|\hat{v}_{x}|\Psi_{\rm d}(t)\rangle\) vanishes because of the well-known dipole selection rules [44]. The leading-order contribution to \(v_{x}(t)\) is, therefore, of order \(c^{-1}\), and is given by the expression:
\[\begin{split} v_{x}(t)&=\langle\Psi_{\rm d}(t)|\hat{v}_{x}|\Psi_{\rm nd}^{(1)}\rangle+\langle\Psi_{\rm nd}^{(1)}(t)|\hat{v}_{x}|\Psi_{\rm d}\rangle\\ &=2{\rm Re}\langle\Psi_{\rm d}(t)|\hat{v}_{x}|\Psi_{\rm nd}^{(1)}\rangle\\ &=2{\rm Im}\left(\int\limits_{0}^{t}\langle\Psi_{\rm d}(t)|\hat{p}_{x}\hat{U}_{\rm d}(t,\tau)\hat{H}_{\rm nd}(\tau)|\Psi_{\rm d}(\tau)\rangle\ d\tau\right)\.\end{split}\tag{23}\]
In the last line of Eq. (23) we used expression (17) for \(\Psi_{\rm nd}^{(1)}\). The same dipole selection rules [44] and the structure of Eq. (17) ensure that the contribution of order \(c^{-1}\) to \(v_{y}(t)\) is zero. The leading contribution of the non-dipole effects is, therefore, non-zero only for the \(x\)-component of the electron velocity. This relativistic contribution to the electron velocity thus results in the appearance of harmonic photons polarized in the propagation direction, in accordance with the observations made in [26; 27; 28].
As we mentioned above, the appearance of the even-order harmonics can be understood as a result of the violation of the symmetry of the electron trajectories responsible for the emission of harmonic photons in the dipole approximation [29]. From the LOPT perspective this effect can be explained as follows. As one can see from Eq. (14) and Eq. (15), the dipole interaction operator (14) has odd parity, i.e. it couples states of different parities, while the non-dipole operator (15) has even parity. Employing somewhat loose language, we might say that the presence of these two atom-field interaction Hamiltonians can be described as the presence of two kinds of photons: the "dipole" photons and the "non-dipole" photons, whose emission and absorption are governed by the operators (14) and (15), respectively. Using these notions and the LOPT expression for \(v_{x}(t)\) in Eq. (23), the contribution of the non-dipole interaction to the formation of the \(N\)-th harmonic can be described as absorption of \(N-1\) "dipole" photons and one "non-dipole" photon, with subsequent recombination to the atomic ground state accompanied by emission of a harmonic photon with frequency \(N\omega\). Using the informal terminology which we adopted, one might say that the emitted harmonic photon is of a "dipole" nature, since spontaneous emission satisfies the dipole selection rules. Conservation of the total parity for the combined system of the atom and the "dipole" and "non-dipole" photons then implies that \(N\) must necessarily be even.
Besides providing a simple physical picture of the appearance of even harmonics, the LOPT approach described above can be used as a test of the accuracy of our solution to the TDDE. To perform such a test we calculated the expectation values of the electron velocity using the TDDE and LOPT approaches for the cosine-pulse form shown in Fig. 1, with the vector potential in Eq. (5) given by \(\mathbf{A}(x,t)=-\mathbf{e}_{z}\frac{E_{0}}{\omega}\sin^{2}\left(\frac{\pi u}{T_{1}}\right)\sin\omega u\), where \(\omega=0.057\) a.u., \(E_{0}=0.0534\) a.u., and \(u=t-x/c\). A comparison of the TDDE results obtained using Eq. (19) and the LOPT results obtained using Eq. (23) for the \(x\)-component of the electron velocity is shown in Fig. 2. The LOPT results prove to be virtually identical to the TDDE results, which is not surprising given that the relativistic corrections are expected to be small for the field parameters we consider.
## III Results
We report below results obtained from our TDDE calculations for the dipole \(S_{z}(\Omega)\) and non-dipole \(S_{x}(\Omega)\) harmonic intensities for different targets. HHG spectra were obtained by computing the electron velocity as prescribed by Eq. (19) and using Eq. (20) to compute harmonic intensities. Calculations were performed using the sine waveform shown in Fig. 1, with the electric field given by the equation \(E(u)=E_{0}\sin^{2}\left(\frac{\pi u}{T_{1}}\right)\sin\omega u\). Results are given for the base frequencies \(\omega=0.114\) a.u. (wavelength of 400 nm) and \(\omega=0.057\) a.u. (wavelength of 800 nm).
In Fig. 3 we show the HHG spectra obtained for the driving pulse wavelength \(\lambda=400\) nm and different field strengths for various targets. Fig. 3 shows both the dipole \(S_{z}(\Omega)\) and non-dipole \(S_{x}(\Omega)\) harmonic intensities. The vertical lines in the figures show the positions of the classical cutoffs given by the well-known \(3.17U_{p}+I_{p}\) rule of the three-step model [29; 30] (here \(U_{p}=E_{0}^{2}/4\omega^{2}\) and \(I_{p}\) are the ponderomotive and ionization energies, respectively). In Fig. 4 we zoom in on parts of the harmonic spectra to demonstrate the presence of odd and even harmonics in the dipole and non-dipole spectra, respectively.
Quite expectedly, the behavior of the dipole intensity \(S_{z}(\Omega)\) shown in Fig. 3 agrees very well with the three-step model predictions, exhibiting a sharp drop in magnitude after reaching the classical cutoff. The non-dipole \(S_{x}(\Omega)\) spectra mimic this behavior very closely. This is not surprising if we again make use of the LOPT picture of the formation of the non-dipole harmonics presented above, which relies on the notions of 'dipole' and 'non-dipole' photons, with the operators describing their interactions with an atom given by Eq. (14) and Eq. (15), respectively. We recall that in this picture the \(N\)-th non-dipole harmonic is produced as a result of the absorption of \(N-1\) "dipole" photons and one "non-dipole" photon. As far as the harmonic spectra are concerned, the mechanism responsible for the non-dipole harmonic emission thus differs from the mechanism of the dipole harmonic emission only in the replacement of one 'dipole' photon with a 'non-dipole' one. This replacement turns the odd-order harmonics in the spectra into even-order ones and results in an overall drop in magnitude of the harmonic spectra due to the additional factor of \(c^{-1}\) in the non-dipole interaction operator (15).
The energy and parity conservation considerations which led us to the general conclusions about the character of the non-dipole spectra do not tell us anything about the temporal dynamics of the formation of the non-dipole harmonics. We can get a glimpse of this temporal dynamics by analyzing Gabor transforms [50] of the dipole and non-dipole velocities:
\[T_{a}(\Omega,t)=\int\limits_{0}^{T_{1}}v_{a}(\tau)\Phi^{*}(t,\tau,\Omega)d\tau\, \tag{24}\]
where \(\Phi(t,\tau,\Omega)=\exp\left\{i\Omega\tau-(t-\tau)^{2}/2(x_{0}T)^{2}\right\}\), the parameter \(x_{0}\) determines the resolution in the temporal domain, and \(T\) is an optical cycle of the laser field. The Gabor transform, like the closely related wavelet transform, allows us to look simultaneously at the time and frequency domains and to determine, in particular, when different harmonics are emitted [51; 52; 53]. We used \(x_{0}=0.1\) in the calculations below. This value of \(x_{0}\) gives rather poor resolution in the frequency domain, but high resolution in the time domain, which is what interests us here.
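A direct, unoptimized evaluation of Eq. (24) takes only a few lines of code; the sketch below, assuming velocity samples `v` on the grid `tgrid`, is quadratic in the grid size but adequate for the coarse frequency resolution used here.

```python
import numpy as np

def gabor_transform(v, tgrid, omegas, T, x0=0.1):
    """|T_a(Omega, t)| of Eq. (24) on a (frequency, time) grid.

    Phi(t, tau, Omega) = exp{ i Omega tau - (t - tau)^2 / (2 (x0 T)^2) },
    and Eq. (24) integrates v(tau) * conj(Phi) over tau.
    """
    sigma2 = (x0 * T) ** 2
    out = np.empty((len(omegas), len(tgrid)))
    for i, Om in enumerate(omegas):
        osc = v * np.exp(-1j * Om * tgrid)           # v(tau) e^{-i Omega tau}
        for j, tc in enumerate(tgrid):
            win = np.exp(-((tc - tgrid) ** 2) / (2.0 * sigma2))
            out[i, j] = np.abs(np.trapz(osc * win, tgrid))
    return out
```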
The absolute values \(|T_{a}(\Omega,t)|\) for both dipole and non-dipole velocities are shown in Fig. 5 and Fig. 6 for the SR Yukawa and hydrogen atoms. One can see that, dynamically, the formation of dipole and non-dipole harmonics proceeds quite differently. For both the Yukawa and hydrogen atom systems, emission of the non-dipole harmonics is strongly suppressed at the early stages of the pulse development, and the emission times for the non-dipole harmonics are shifted with respect to the dipole radiation bursts. Such behavior could be anticipated from Fig. 2, which shows that the \(x\)-component of the velocity actually starts to respond to the field only for times approaching the midpoint of the pulse. The reason for this can be traced back to the character of the fully quantum expression for the velocity component \(v_{x}\) in the second LOPT equation (23), with the time integration on the right-hand side of this equation smoothing out high-frequency oscillations. To elucidate this issue further we performed a simple classical calculation of the emitted photon energy as a function of the recombination time, using the physical picture provided by the three-step model. We assume that the electron is ionized at the moment of time \(t_{ion}\) and returns to the parent ion at the moment of time \(t_{ret}\), emitting a harmonic photon with energy \(E_{ret}+I_{p}\). As is usually assumed in three-step model calculations, we consider only the effect of the external field (5) on the electron motion, completely neglecting the ionic potential. The only difference between our calculation and the traditional three-step model analysis of the harmonic emission is that we take into account the effect of the Lorentz force due to the magnetic field of the pulse. We simulate the electron motion in a plane (the \((x,z)\)-plane for the geometry we employ), solving the set of classical Newton equations, which for the field configuration, geometry, and atomic unit system we employ can be written as:
\[\begin{split}\ddot{x}&=-\frac{v_{z}}{c}E(t)\,\\ \ddot{z}&=-E(t)+\frac{v_{x}}{c}E(t)\.\end{split}\tag{25}\]
Following the prescription of the traditional three-step model, we solve equations (25) with zero initial conditions imposed at the ionization time: \(v_{x}(t_{ion})=v_{z}(t_{ion})=0\) and \(x(t_{ion})=z(t_{ion})=0\). We assume that the electron trajectory returns to the origin if, at the moment of time \(t_{ret}\), the \(z\)-coordinate of the trajectory changes sign.
Fig. 7(a) shows the results of such a simulation, which agree qualitatively with the dynamics of the dipole harmonic emission shown in Fig. 5 and Fig. 6, with bursts of harmonic emission occurring every half cycle of the laser pulse. To apply this classical analysis to the emission of the non-dipole harmonics we must, however, introduce one essentially quantum ingredient into the model described by the classical equations (25). Emission of the non-dipole radiation differs from emission of the dipole harmonics in one important aspect. For the geometry we employ, the dipole harmonic photon emission process satisfies the selection rule \(\Delta M=0\), where \(M\) is the \(z\)-projection of the electron angular momentum. On the other hand, emission of a non-dipole harmonic photon, as can be seen from the LOPT analysis presented above, must satisfy the selection rule \(\Delta M=\pm 1\). This means that for the ground \(s\)-state that we consider, non-dipole radiation can be emitted only by electrons with non-zero angular momentum. We can incorporate this fact into our classical model by introducing a filter parameter \(f\) in the simulations, and considering only those returning trajectories for which, at the moment of time \(t_{ret}\), the squared classical angular momentum exceeds the threshold value set by the filter parameter \(f\). Results of such calculations are shown in Fig. 7(b-d) for different values of the filter parameter \(f\). One can see that by increasing the value of the filter parameter, we make the classical picture in Fig. 7 look more like the Gabor transform results shown in Fig. 5 and Fig. 6. In particular, Fig. 7(c-d) show the absence of non-dipole harmonic emission during the first two cycles of the laser pulse, a feature which is also demonstrated by the quantum analysis based on the Gabor transform in Fig. 5 and Fig. 6. Applying a non-zero filter parameter does not, however, change the maximum energy \(E_{ret}\) of the returning electron, which explains why the non-dipole harmonic emission spectra exhibit essentially the same cutoffs as the dipole harmonic emission spectra. This simple classical picture of the formation of the non-dipole harmonics, which takes as its only quantum ingredient the requirement that the electron angular momentum on the returning trajectories exceed a certain threshold value, thus agrees qualitatively with the fully quantum picture.
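A minimal sketch of this classical simulation is given below. The pulse parameters, the pulse duration, and the simple Euler integrator are illustrative assumptions rather than the exact numerical settings behind Fig. 7; the filter is applied exactly as described, by discarding returns whose squared planar angular momentum does not exceed \(f\).

```python
import numpy as np

c = 137.036                      # speed of light in atomic units
omega, E0 = 0.057, 0.0534        # assumed pulse parameters (see Sec. III)
T1 = 6 * 2 * np.pi / omega       # assumed total pulse duration

def E_field(t):
    return E0 * np.sin(np.pi * t / T1) ** 2 * np.sin(omega * t)

def returns(t_ion, f=0.0, dt=0.05):
    """Integrate Eq. (25) by simple Euler steps from ionization at t_ion.

    Starts with zero initial conditions; records (t_ret, E_ret, L2) whenever z
    changes sign, keeping only returns whose squared planar angular momentum
    L2 exceeds the filter parameter f.
    """
    x = z = vx = vz = 0.0
    events, t, z_prev = [], t_ion, 0.0
    while t < T1:
        E = E_field(t)
        ax = -vz / c * E                  # magnetic (Lorentz-force) term
        az = -E + vx / c * E
        vx += ax * dt; vz += az * dt
        x += vx * dt;  z += vz * dt
        if z_prev * z < 0.0:              # z changes sign: a return to the ion
            L2 = (x * vz - z * vx) ** 2   # squared angular momentum normal to (x, z)
            if L2 > f:
                events.append((t, 0.5 * (vx ** 2 + vz ** 2), L2))
        z_prev = z
        t += dt
    return events                         # photon energy at each return: E_ret + I_p
```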
We also performed TDDE calculations for the pulse base frequency \(\omega=0.057\) a.u. (corresponding to a wavelength of 800 nm). In Fig. 8 and Fig. 9 we show the harmonic spectra obtained from the TDDE for the SR Yukawa and hydrogen atoms. Fig. 10 shows the results of the analysis of the temporal dynamics of the harmonic formation based on the Gabor transform (24). These figures show essentially the same picture as the results presented above for the driving pulse wavelength of 400 nm. The spectra of the non-dipole harmonics follow closely the classical dipole cutoff rule, and differ in this respect from the dipole emission spectra only in their intensity. The temporal pictures of the harmonic formation in the dipole and the non-dipole cases are, however, totally different. The main difference is, just as in the case of the 400 nm driving pulse, the absence of harmonic emission at the early stages of the pulse development, a feature which we explained above using the results of the classical calculations shown in Fig. 7(c,d).
The factor responsible for the difference in intensity between the dipole and non-dipole harmonics is the additional factor of \(c^{-1}\) which, as one can see from Eq. (23) and Eq. (15), is present in the LOPT formula for the \(x\)-component of the velocity. The presence of this factor in \(v_{x}\) leads to a dampening factor of \(c^{-2}\) in the expression for the non-dipole harmonic intensity. It is rather difficult to obtain more detailed insight into the relative magnitudes of the dipole and non-dipole harmonic intensities from the cumbersome LOPT expressions Eq. (22) and Eq. (23). One can, however, obtain a simple estimate using reasoning based not on the Schrödinger picture that we have used so far, but on the equivalent Heisenberg picture of quantum mechanics (QM). In the latter, we recall, the operators evolve in time, while the state vectors do not. We obtain, of course, the same expectation values for all physical observables in both pictures.
In the Heisenberg picture, the time-evolution of the operators \(\hat{\mathbf{r}}(t)\) and \(\hat{\mathbf{p}}(t)\) is described by the equations [54]: \(i\dot{\hat{\mathbf{r}}}=[\hat{\mathbf{r}},\hat{H}]\), \(i\dot{\hat{\mathbf{p}}}=[\hat{\mathbf{p}},\hat{H}]\), where the Hamiltonian operator in our problem is \(\hat{H}=\hat{H}_{\rm atom}+\hat{H}_{\rm d}(t)+\hat{H}_{\rm nd}(t)\), with \(\hat{H}_{\rm atom}\), \(\hat{H}_{\rm d}(t)\) and \(\hat{H}_{\rm nd}(t)\) given by Eq. (13), Eq. (14), and Eq. (15), respectively. Calculating the commutators, one obtains the following equations of motion:
\[\begin{split}\dot{\hat{x}}&=\hat{p}_{x}\,\\ \dot{\hat{p}}_{x}&=-i[\hat{p}_{x},\hat{V}]-\frac{\hat{v}_{z}}{c}E(t)\,\\ \dot{\hat{z}}&=\hat{p}_{z}+A(t)+\frac{\hat{x}}{c}E(t)\,\\ \dot{\hat{p}}_{z}&=-i[\hat{p}_{z},\hat{V}]\,\end{split}\tag{26}\]
where \(\hat{V}\) is the atomic potential operator, \(\hat{v}_{z}=\hat{p}_{z}+A(t)\), and \(A(t)\) and \(E(t)\) are the vector potential and the electric field of the pulse. Eq. (26) is the quantum-mechanical analogue of the classical equations describing electron motion in the potential \(V\) in the presence of the Lorentz force. It contains the same physical information and is, therefore, equivalent to the LOPT equations Eq. (22) and Eq. (23), but it provides a clearer physical picture and can be used as a starting point for making simplifying assumptions.
From the first two equations (26) one obtains:
\[\ddot{\hat{x}}=-i[\hat{p}_{x},\hat{V}]-\frac{\hat{v}_{z}}{c}E(t)\. \tag{27}\]
We will make the assumption that one can omit the commutator \([\hat{p}_{x},\hat{V}]\) in Eq. (27). Some justification for this operation can be provided in the case of the SR Yukawa atom, for which the potential function \(V(\mathbf{r})\) is effectively zero everywhere except in a small neighborhood of the atom. We then obtain from Eq. (27) a relation between the expectation values of the electron acceleration \(a_{x}=\langle\phi_{0}|\ddot{\hat{x}}|\phi_{0}\rangle\) and velocity \(v_{z}=\langle\phi_{0}|\hat{v}_{z}|\phi_{0}\rangle\):
\[a_{x}=-\frac{v_{z}}{c}E(t)\, \tag{28}\]
where \(|\phi_{0}\rangle\) is the initial atomic state, which does not evolve in time in the Heisenberg picture. Assuming further that \(E(t)\) is a monochromatic wave: \(E(t)=E_{0}\cos\omega t\) and calculating Fourier transforms of both sides of Eq. (28), we obtain a relation between the Fourier transforms \(\tilde{v}_{x}(\Omega)=\int v_{x}(t)e^{i\Omega t}\ dt\) and \(\tilde{v}_{z}(\Omega)=\int v_{z}(t)e^{i\Omega t}\ dt\):
\[-i\Omega\tilde{v}_{x}(\Omega)=\frac{E_{0}}{2c}\left(\tilde{v}_{z}(\Omega+ \omega)+\tilde{v}_{z}(\Omega-\omega)\right)\, \tag{29}\]
from which, using the fact that for any complex numbers \(z_{1}\), \(z_{2}\): \(|z_{1}+z_{2}|^{2}\leq(|z_{1}|+|z_{2}|)^{2}\), we obtain an inequality:
\[\Omega^{2}S_{x}(\Omega)\leq\frac{E_{0}^{2}}{4c^{2}}(\sqrt{S_{z}(\Omega+\omega)}+ \sqrt{S_{z}(\Omega-\omega)})^{2} \tag{30}\]
We see from Eq. (30) that for \(\Omega>\omega\) we have:
\[R(\Omega)=\frac{4c^{2}\omega^{2}}{E_{0}^{2}}\frac{S_{x}(\Omega)}{(\sqrt{S_{z}( \Omega+\omega)}+\sqrt{S_{z}(\Omega-\omega)})^{2}}\leq 1. \tag{31}\]
Introducing the magnitude \(A_{0}=E_{0}/\omega\) of the pulse vector potential, we can rewrite inequality (31) as:
\[\frac{S_{x}(\Omega)}{(\sqrt{S_{z}(\Omega+\omega)}+\sqrt{S_{z}(\Omega-\omega)}) ^{2}}\leq\frac{A_{0}^{2}}{4c^{2}} \tag{32}\]
The ratio \(R(\Omega)\) defined in Eq. (31) is shown in Fig. 11(a) for the SR Yukawa potential and various pulse parameters. Of course, we cannot expect Eq. (31) to provide a rigorous upper bound, since in deriving it we neglected the atomic potential in Eq. (27), which constitutes a rather drastic approximation. As one can see from Fig. 11(a), inequality (31) can indeed be violated. One can see, nevertheless, that Eq. (31), and consequently Eq. (32), provide reasonably accurate estimates of the relative magnitude of the intensities of the dipole and non-dipole harmonics.
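Evaluating \(R(\Omega)\) from the computed spectra amounts to a few array operations; a sketch, assuming \(S_{x}\) and \(S_{z}\) are sampled on a common frequency grid and interpolating \(S_{z}\) at \(\Omega\pm\omega\):

```python
import numpy as np

def ratio_R(Omega, Sx, Sz, E0, omega, c=137.036):
    """R(Omega) of Eq. (31) for Omega > omega, with Sx, Sz sampled on Omega."""
    Sz_plus = np.interp(Omega + omega, Omega, Sz)    # S_z(Omega + omega)
    Sz_minus = np.interp(Omega - omega, Omega, Sz)   # S_z(Omega - omega)
    return (4.0 * c ** 2 * omega ** 2 / E0 ** 2) * Sx \
        / (np.sqrt(Sz_plus) + np.sqrt(Sz_minus)) ** 2
```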
While the non-zero expectation value \(v_{x}\) and the appearance of the non-dipole harmonics are entirely relativistic phenomena, the non-dipole effects also slightly modify the velocity component \(v_{z}\). The magnitude of this effect is of the order of \(c^{-2}\). This can be seen most easily from the Heisenberg equations of motion (26). The equation for \(v_{z}(t)\) (the third of equations (26)) contains the term \(\hat{x}E(t)/c\) on the right-hand side. Since the expectation value of \(x\) is itself of the order of \(c^{-1}\), the resulting effect on \(v_{z}(t)\) is of the order of \(c^{-2}\), which produces a relativistic correction of the order of \(c^{-2}\) to the dipole harmonic intensity. We may expect, therefore, that the normalized difference:
\[\frac{\Delta S_{z}(\Omega)}{S_{z}(\Omega)}=\frac{S_{z}(\Omega)-S_{z}^{\rm nr}( \Omega)}{S_{z}^{\rm nr}(\Omega)}\, \tag{33}\]
where \(S_{z}(\Omega)\) is the dipole harmonic intensity obtained in the present TDDE calculation and \(S_{z}^{\rm nr}(\Omega)\) is the result of the non-relativistic TDSE calculation, should be of the order of \(c^{-2}\), i.e., we may expect \(\Delta S_{z}(\Omega)/S_{z}(\Omega)\sim 10^{-4}\). That this is indeed the case can be seen from Fig. 11(b), where we show the results of TDDE and TDSE calculations performed with the same pulse parameters for the Yukawa atom.
The analysis based on the Heisenberg equations of motion (26) also allows us to give a simple explanation for the behavior of \(v_{x}(t)\) shown in Fig. 2, where the \(x\)-component of the electron velocity starts responding to the field only for times approaching the midpoint of the pulse. Integrating Eq. (28), we obtain for the expectation value \(v_{x}=\langle\phi_{0}|\hat{v}_{x}|\phi_{0}\rangle\) (assuming that it has zero value at \(t=0\)):
\[v_{x}(t)=-\frac{1}{c}\int\limits_{0}^{t}v_{z}(\tau)E(\tau)\ d\tau. \tag{34}\]
We could have obtained the same equation by integrating the first of the classical equations (25), which is not surprising given the great formal similarity between classical mechanics and QM in the Heisenberg picture. We show in Fig. 12(a) the expectation value \(v_{z}(t)\) obtained in the LOPT calculation for the cosine pulse with \(E_{0}=0.0534\) a.u. and \(\omega=0.057\) a.u. We show only the LOPT result: just as in the case of \(v_{x}(t)\) shown in Fig. 2, the TDDE and LOPT results for \(v_{z}(t)\) differ only very slightly. In Fig. 12(b) we show the LOPT expectation value \(v_{x}(t)\), as well as the estimate for \(v_{x}(t)\) that we obtain if we substitute the LOPT value for \(v_{z}(\tau)\) under the integral sign in Eq. (34). One can see that the estimate thus obtained reproduces fairly well the general behavior of \(v_{x}(t)\). In particular, it reproduces the feature mentioned above: the \(x\)-component of the velocity begins to deviate from zero appreciably only for times approaching the midpoint of the pulse. We recall that the effect of the atomic potential on the motion in the \(x\)-direction was neglected in the Heisenberg equation of motion (28) which we used to obtain the estimate (34). The fact that the estimate (34) reproduces the qualitative behavior of the \(x\)-component of the electron velocity shown in Fig. 2 tells us, therefore, that this behavior is a result of the interplay of the motions in the \(x\)- and \(z\)-directions, which are mutually interconnected due to the presence of the Lorentz force.
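The estimate of Eq. (34) reduces to a cumulative quadrature of the product \(v_{z}(\tau)E(\tau)\); a one-routine sketch, assuming the LOPT \(v_{z}\) and the field \(E\) are sampled on the grid `t`:

```python
import numpy as np

def vx_estimate(vz, E, t, c=137.036):
    """Eq. (34): v_x(t) = -(1/c) int_0^t v_z(tau) E(tau) dtau (trapezoid rule)."""
    g = vz * E
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
    return -cum / c
```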
## IV Conclusion
We have presented results of the relativistic calculations of even harmonic generation from various atomic targets. Our approach was based on the numerical solution of the TDDE. The
HHG spectra of the non-dipole even-order harmonics were found to be qualitatively similar to the spectra of the dipole harmonics, obeying the same classical cutoff rules. The temporal formation of the non-dipole harmonics, however, was found to be quite different. The results of the Gabor transform analysis show that the formation of the non-dipole harmonics is strongly suppressed at the beginning of the laser pulse, and that the bursts of non-dipole radiation are shifted in time with respect to the bursts of dipole emission. These features are partly explained by a simple generalization of the classical three-step model which takes into account the selection rules governing emission of harmonic photons. We modeled the effect of these selection rules by using a filter parameter which selects the trajectories with angular momentum exceeding a certain threshold value at the recollision time.
For the field parameters we considered, the relativistic effects are still relatively weak and can be described perturbatively. LOPT provides, as we have seen, an adequate description of the non-dipole effects responsible for the even-order harmonic emission. Use of the TDDE, however, is technically simpler than calculations based on LOPT, and opens the prospect of an excursion into the truly relativistic domain in the future. We relied, therefore, on the TDDE-based approach in the present work. The present approach can also be generalized relatively easily to include some quantum electrodynamical (QED) effects, such as vacuum polarization effects, or the QED strong Coulomb field radiative corrections, which can be taken into account by using effective potentials such as the Uehling potential [55; 56] or the radiative potential proposed in [57]. The procedure we apply to solve the Dirac equation can also be used to study the process of electron-positron pair production (PP) in strong electromagnetic fields, which occurs when the field strength reaches the characteristic Schwinger field strength of \(1.3\times 10^{16}\) V/cm. The process of PP in both homogeneous and inhomogeneous electric fields has received considerable interest in the literature [58]. Theoretical treatment of PP in the semiclassical approximation relies on a solution of the TDDE for a given field configuration [59]. Our procedure might prove useful for this purpose, especially in the case of spatially inhomogeneous fields, which have been found to play an important role in PP [58; 60].
The numerical procedure we employ can be generalized relatively easily to the case of the many-electron relativistic Hamiltonians used in quantum chemistry calculations [61; 62]. Use of a representation of the wave-function analogous to the expansion (6) would, of course, be impractical for systems with more than one electron if we wanted to use such expansions to represent the wave-function in the whole space. One may use, however, the idea of the \(R\)-matrix approach, which separates the coordinate space into an inner region, where a suitable basis set representation can be used to represent many-electron wave-functions, and an outer region, where one has to concentrate on the description of single-electron motion, for which the finite difference method might be better suited. Such a strategy has been implemented with success in the framework of the so-called R-Matrix incorporating Time (RMT) method [63], which allows one to solve the non-relativistic TDSE for many-electron systems. One can use a similar approach in the relativistic case, relying on the results of stationary quantum chemistry calculations [61; 62] for the description of the inner region, where many-electron effects are important, and using the present procedure to solve the TDDE describing electron propagation in the outer region.
###### Acknowledgements.
This work was supported by the Institute for Basic Science grant (IBS-R012-D1) and the National Research Foundation of Korea (NRF), grant funded by the Korea government (MIST) (No. 2022R1A2C3006025). Computational works for this research were performed on the IBS Supercomputer Aleph in the IBS Research Solution Center.
|
2308.08661 | Answering Ambiguous Questions with a Database of Questions, Answers, and
Revisions | Many open-domain questions are under-specified and thus have multiple
possible answers, each of which is correct under a different interpretation of
the question. Answering such ambiguous questions is challenging, as it requires
retrieving and then reasoning about diverse information from multiple passages.
We present a new state-of-the-art for answering ambiguous questions that
exploits a database of unambiguous questions generated from Wikipedia. On the
challenging ASQA benchmark, which requires generating long-form answers that
summarize the multiple answers to an ambiguous question, our method improves
performance by 15% (relative improvement) on recall measures and 10% on
measures which evaluate disambiguating questions from predicted outputs.
Retrieving from the database of generated questions also gives large
improvements in diverse passage retrieval (by matching user questions q to
passages p indirectly, via questions q' generated from p). | Haitian Sun, William W. Cohen, Ruslan Salakhutdinov | 2023-08-16T20:23:16Z | http://arxiv.org/abs/2308.08661v1 | # Answering Ambiguous Questions with a Database of
###### Abstract
Many open-domain questions are under-specified and thus have multiple possible answers, each of which is correct under a different interpretation of the question. Answering such ambiguous questions is challenging, as it requires retrieving and then reasoning about diverse information from multiple passages. We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia. On the challenging ASQA benchmark, which requires generating long-form answers that summarize the multiple answers to an ambiguous question, our method improves performance by 15% (relative improvement) on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs. Retrieving from the database of generated questions also gives large improvements in diverse passage retrieval (by matching user questions \(q\) to passages \(p\) indirectly, via questions \(q^{\prime}\) generated from \(p\)).
Second, we show that this database of more-specific, better-grounded generated questions can be used to indirectly retrieve passages relevant to a question, by finding passages that generate questions similar to a query question \(x\). This retrieval approach leads to more diverse passages and higher answer recall on two benchmark QA datasets with ambiguous questions, AmbigQA (Min et al., 2020) and WebQuestionsSP (Yih et al., 2016). Third, we show that retrieving from generated questions, and incorporating these generated questions as context, leads to improvement on the ASQA task (Stelmakh et al., 2022), a very challenging long-form question answering task that requires summarizing and comparing the answers of ambiguous questions. Overall we improve the baselines by 1.9 points in DR and achieve a new state of the art on ASQA.
## 2 Related Work
In previous work, Lewis et al. (2021) constructed a database of 67M probably asked questions (PAQ) from Wikipedia and proposed to answer open-domain questions by retrieving similar questions from PAQ. Later work Chen et al. (2022); Wu et al. (2022) proposed alternative approaches to using databases of QA pairs in QA systems, proposing pre-training methods that are more statistically-efficient (Chen et al., 2022) or more computationally efficient (Wu et al., 2022), and developing multi-hop QA models (Chen et al., 2022) that retrieve multiple times from a QA memory. However, prior QA-memory based models (Lewis et al., 2021; Chen et al., 2022; Wu et al., 2022) focused on the traditional QA setting of unambiguous questions with unique answers. This leads to many differences in emphasis: for example, the PAQ dataset purposely removes generated questions with multiple answers.
Another efficient way to store world knowledge is to build knowledge bases (KB) or knowledge graphs (KG), such as Freebase and Wikidata, where information is stored with entities and relations as triples, e.g. ("Charles Darwin", "author of", "On the Origin of Species"). Knowledge bases have been commonly used in many knowledge intensive tasks due to its structured nature (Sun et al., 2018; Min et al., 2019). Knowledge bases, however, lack the expressiveness in representing complex relationships that involve multiple pieces of information, and often do not contain information in a format that naturally reflects users' questions.
To resolve this problem, Dhingra et al. (2020); Sun et al. (2021) proposed to construct virtual knowledge bases that are not restricted to pre-defined vocabularies of entities and relations. Dhingra et al. (2020) proposed to store entity-centric information as vectors and build a large database of vectors by iterating through passages in Wikipedia. Sun et al. (2021) encoded pair-wise relationship between entities as vectors. Both methods support a similar reasoning process as regular knowledge bases. Others have argued for use of entity-linked QA pairs as a formalism for storing knowledge, as a representation that is more aligned with users' information needs, but still are closely related to traditional AI representations like KBs and KGs (Chen et al., 2022).
Recent interest in QA for ambiguous questions poses new challenges for retrieving and representing knowledge. In such datasets, models are required to not only find one of the correct answers, but also comply with additional requirements associated with the need to choose between multiple answers. For example, Temp-LAMA Dhingra et al. (2022) requires models to answer time-sensitive questions under given time constraints; (Zhang and Choi, 2021) contains questions with geographic and temporal constraints; ConditionalQA Sun et al. (2022) contains constraints based on user scenarios; and ROMQA (Zhong et al., 2022) requires QA subject to different combinations of constraints.
This work builds especially on the AmbigQA dataset (Min et al., 2020), which contains NQ questions that have multiple interpretations, and the ASQA dataset (Stelmakh et al., 2022). The ASQA dataset contains ambiguous questions with long-form answers, where the answers explain in text what the alternative interpretations of the original question are, and what the answer is for each interpretation.
## 3 Method
In this section, we first discuss our approach to constructing a database of questions from Wikipedia, and then propose methods which use the generated questions for two important tasks in answering ambiguous questions: retrieving passages with diverse answers, and generating long-form answers to ambiguous questions.
### Question Generation from Wikipedia
We generate questions and answers from Wikipedia passages. The construction process involves three stages: answer detection, question generation, and answer verification. We discuss each stage in detail in this section and compare each stage to another popular database of questions, PAQ (Lewis et al., 2021). We name our database of generated questions SIXPAQ (**S**ynthesized question **I**nterpretations to **e**X**tend **P**robably **A**sked **Q**uestions).
#### 3.1.1 Source
We use the dump of Wikipedia preprocessed by DPR (Karpukhin et al., 2020) as inputs to generate questions. In DPR's dump, Wikipedia articles are chunked into passages which contain 100 tokens. The preprocessed dump of Wikipedia contains 21 million passages. Since many of the question interpretations in ASQA involve less popular entities, we generate questions from all 21 million passages.
In contrast, PAQ generates questions from only 9.1 million passages (filtered by a learned question selection model).
#### 3.1.2 Stage 1: Answer Detection
A Wikipedia passage usually contains multiple pieces of information, and thus questions can be asked from different perspectives. To reflect this observation, we make the question generation process answer-conditioned. During generation, possible answers are first detected, and then questions are generated for every detected answer (Lewis et al., 2021; Chen et al., 2022).
We model the answer detection step as a sequence-to-sequence (seq2seq) generation problem with a T5 model (Raffel et al., 2020).2 We do not use a Named Entity Recognition (NER) model for answer prediction, as is done in PAQ (Lewis et al., 2021). The input of the generation task is a Wikipedia passage and the output is a text span that is likely to be the answer to some question. The answer detection model (with a pretrained T5) is finetuned on NQ (Kwiatkowski et al., 2019). We use beam search with a beam size of 32 to generate multiple outputs, but filter these outputs with two heuristics. First, we require the generated spans to be sub-strings of the Wikipedia passage. Second, we merge spans which are identical after removing articles and punctuation. We end up with 283 million answers detected from 21 million Wikipedia passages.
Footnote 2: We use the pretrained T5-11B model for all subtasks in constructing SIXPAQ to ensure the high quality of data.
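A sketch of this stage using the Hugging Face transformers library is shown below. The checkpoint name and the exact input prompt are placeholders (we finetune our own T5 on NQ), but the beam search and the two filtering heuristics follow the description above.

```python
import string
from transformers import T5ForConditionalGeneration, T5Tokenizer

# placeholder checkpoint; the actual model is a T5 finetuned on NQ
tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")

def _norm(s):
    """Lower-case and drop punctuation and articles, for merging near-duplicates."""
    s = s.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(w for w in s.split() if w not in {"a", "an", "the"})

def detect_answers(passage, num_beams=32):
    inputs = tokenizer("context: " + passage, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, num_beams=num_beams,
                             num_return_sequences=num_beams, max_length=32)
    spans = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    spans = [s for s in spans if s in passage]   # heuristic 1: span must occur
    seen, unique = set(), []
    for s in spans:                              # heuristic 2: merge duplicates
        if _norm(s) not in seen:
            seen.add(_norm(s))
            unique.append(s)
    return unique
```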
#### 3.1.3 Stage 2: Question Generation
Given answers detected from a Wikipedia passage, we then train a model to generate questions for the specific answers. Again, we finetune a T5 model for the question generation task. An input for question generation contains a passage and a target answer, e.g. "answer: Michigan Stadium context: The Michigan Stadium is the home stadium...". An expected output should first repeat the target answer and then generate a question, e.g. "answer: Michigan Stadium question: Where is the home stadium...". In preliminary experiments, encouraging the model to first repeat the answers generally improves the quality of questions by making generated questions more specific to target answers.
We use question and answer (QA) pairs from AmbigQA (Min et al., 2020) to train the question generation task. Questions in AmbigQA originate from NQ but are revised by adding additional answer-specific information from the passages to remove ambiguity. This departs from most prior QG work, which generally trains models on SQuAD (Rajpurkar et al., 2016) or NQ (Kwiatkowski et al., 2019). While AmbigQA is smaller than either of these datasets, its questions are more natural than SQuAD's (where questions were _formulated_ by crowdworkers looking at the passage) and better grounded than NQ's (since questions are _revised_ by crowdworkers looking at the passage), which seems to be a happy medium for producing natural questions with minimal hallucination (Bandyopadhyay et al., 2022).
We use greedy search at inference time to generate one question per answer. While PAQ used beam search (with a beam size of 4) to increase the number of generated questions (Lewis et al., 2021), we find that questions generated from beam search are often very similar to each other. Having near-duplicate questions makes the database larger but does not increase the utility of the database for most downstream tasks.
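The generation step can be sketched analogously; the checkpoint below is again a placeholder for a T5 finetuned on AmbigQA pairs, and the prompt and output formats follow the text.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# placeholder checkpoint, standing in for a T5 finetuned on AmbigQA QA pairs
qg_tok = T5Tokenizer.from_pretrained("t5-large")
qg_model = T5ForConditionalGeneration.from_pretrained("t5-large")

def generate_question(passage, answer):
    """Greedy generation in the 'answer: ... context: ...' format described above."""
    inputs = qg_tok(f"answer: {answer} context: {passage}",
                    return_tensors="pt", truncation=True)
    out = qg_model.generate(**inputs, max_length=64)   # greedy search, one question
    text = qg_tok.decode(out[0], skip_special_tokens=True)
    # expected output: "answer: <answer> question: <question>"
    if "question:" in text:
        return text.split("question:", 1)[1].strip()
    return None   # drop malformed generations
```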
#### 3.1.4 Stage 3: Answer Verification
Questions generated in the previous step are sometimes invalid, i.e. some questions may not be answerable from the provided passages, or the correct answers to the generated questions differ from the answers from which the questions were generated. Therefore, an additional answer verification step is needed.
We train a question answering (QA) model in the reading comprehension setting to perform the answer verification task. In particular, the model takes a passage and a generated question to predict an answer. If an answer does not exist in the passage, the model should predict "not answerable". We finetune a T5 model on SQuAD v2 (Rajpurkar et al., 2018), a reading comprehension dataset which contains unanswerable questions. During verification, we drop questions if their predicted answers are "not answerable" or different from their original answers.3 After the verification step, 156 million questions are left.
Footnote 3: We normalize the original and predicted answers before comparison using scripts provided by Rajpurkar et al. (2016).
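The verification filter can be sketched as below; `reader` is a placeholder for the SQuAD-v2-finetuned T5 reader, and the normalization mirrors the SQuAD evaluation script.

```python
import re
import string

def normalize_answer(s):
    """SQuAD-style normalization: lower-case, strip punctuation and articles."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def keep_pair(question, passage, answer, reader):
    """reader(question, passage) returns a span or the string 'not answerable'."""
    pred = reader(question, passage)
    if pred == "not answerable":
        return False
    return normalize_answer(pred) == normalize_answer(answer)
```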
The question generation process often produces questions that are ambiguous in an open-book setting, i.e. they have multiple answers. This is expected, since many NQ questions are themselves ambiguous (Min et al., 2020) when considered carefully. In PAQ, questions that have multiple open-book answers are filtered by running an open-book QA system and discarding questions with an open-book answer different from the one used for generation. This has several disadvantages: it is expensive, since open-book QA is expensive to run; it is noisier than our proposed reading-comprehension-based filter, since open-book QA is less accurate than machine-reading style QA; it filters out more than 76% of the generated questions; and it is not actually appropriate for some downstream applications (such as the ones considered in §3.3), where questions are used in conjunction with the passages from which they were generated.
#### 3.1.5 Statistics
We merge the question and answer pairs by merging pairs with identical questions, and end up with 127 million unique questions, among which 14.3 million questions have more than one answer mention, and 5.8 million questions have more than one unique answer.4

Footnote 4: Merging is performed by word matching, even though many questions are semantically the same.
### Retrieval of Diverse Passages
One common problem with existing retrieval models (Karpukhin et al., 2020; Ni et al., 2021) for open-domain QA is the lack of diversity of the retrieved results (Min et al., 2021), i.e. only a subset of the correct answers is covered by the top-retrieved passages. This restricts models' performance in predicting multiple answers and in comparing different answers. We show that we can get more diverse passages _indirectly_, by first retrieving similar generated questions \(q^{\prime}\) given an input question \(x\), and then using as the final retrievals the passages from which the \(q^{\prime}\)'s were generated.
Retrieving questions \(q^{\prime}\) given a question \(x\) is analogous to retrieving passages from text corpora, so any existing retrieval method can be applied. In this paper, we use a sparse retrieval model, BM25, and a state-of-the-art dense retrieval model, GTR (Ni et al., 2021). GTR was originally designed for passage retrieval, but the query encoder and passage encoder in GTR share parameters, so we can directly use it to encode and retrieve questions as well. We use GTR-large and finetune the checkpoint of GTR on NQ-open (Kwiatkowski et al., 2019) in our experiments.
Questions retrieved from SIXPAQ are then mapped to passages where those questions were generated. With an input question \(x\), the score for a passage \(p_{i}\) is
\[s(x,p_{i})=\text{max}_{q^{\prime}\in\text{GEN}(p_{i})}f(x,q^{\prime}) \tag{1}\]
where \(\text{GEN}(p_{i})\) is the set of questions generated from the passage \(p_{i}\) and \(f(x,q^{\prime})\) is the retrieval score of the question \(q^{\prime}\in\text{GEN}(p_{i})\) from BM25 or GTR. We denote this method as "max" in our experiments (§4.1).
In addition, we propose another simple heuristic to map questions to passages: it returns the passages from which the largest number of the top-\(k\) retrieved questions were generated. We use \(k=50\) in our experiments. This method is denoted as "count" in our experiments (§4.1).
\[s_{c}(x,p_{i})=|\{\text{GEN}(p_{i})\cap\text{argmax}_{k,q^{\prime}}f(x,q^{ \prime})\}| \tag{2}\]
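Both mappings are straightforward to implement; a sketch, assuming the retriever returns a dictionary of question scores \(f(x,q^{\prime})\) and that each SIXPAQ question records the passage it was generated from (names are illustrative):

```python
from collections import Counter

def score_max(retrieved, gen_passage):
    """Eq. (1): s(x, p_i) = max over q' in GEN(p_i) of f(x, q').

    `retrieved` maps each retrieved question q' to its score f(x, q');
    `gen_passage` maps q' to the passage it was generated from.
    """
    scores = {}
    for q, f_xq in retrieved.items():
        p = gen_passage[q]
        scores[p] = max(scores.get(p, float("-inf")), f_xq)
    return scores

def score_count(retrieved, gen_passage, k=50):
    """Eq. (2): rank passages by how many top-k questions they generated."""
    topk = sorted(retrieved, key=retrieved.get, reverse=True)[:k]
    return Counter(gen_passage[q] for q in topk)
```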
### Ambiguous QA with Long Outputs
In the second task, we investigate the challenging task of answering ambiguous questions with long outputs summarizing multiple answers to the questions. For ambiguous questions that have different answers, one practical way to answer such questions is to specify under what conditions answers are correct. For example, for the question "Where is the home stadium of Michigan Wolverines?", in addition to predicting a list of answers, {"Crisler Center", "Michigan Stadium",...}, a QA system should clarify that "Crisler Center" is the home stadium of the Michigan basketball team while the "Michigan Stadium" is the home of the football team. The ASQA task proposed to answer ambiguous questions by summarizing the multiple answers into short paragraphs, e.g. "The home stadium of Michigan Wolverines men's football is the Michigan Stadium, while the stadium of its men's basketball team is the Crisler Center. Crisler Center is also the home stadium for Michigan Wolverines women's basketball".
Previous models simply retrieve passages from a text corpus and generate answers from the retrieved results. However, the retrieved passages are usually long and contain information irrelevant to the answers. We propose to retrieve questions from SIXPAQ as a concise representation of question-specific information from passages. We additionally propose a question revision step which operates on the retrieved questions to include more detailed information for the disambiguation task.
#### 3.3.1 Question Revision
While the questions in SIXPAQ are fairly unambiguous, we also explored approaches to make the questions include more information from the passages from which they were generated. We trained a sequence-to-sequence model to extract answer-specific information from passages where SIXPAQ questions are generated and rewrite the questions to include such information. Examples of questions before and after revision are shown in Table 1: e.g., the model locates the information "men's football" from the context "... is the home stadium for the University of Michigan men's football team (Michigan Wolverines) in Ann Arbor..." and adds it to the initial question. The revised question, "Where is the home stadium of the Michigan Wolverines men's football team built in 1927?", contains information {"men's football", "built in 1927?"} that is specific to the answer "Michigan Stadium". Compared to passages with hundreds of tokens, the revised questions are more concise in capturing information that is specific to answers.
We finetune a T5 model to perform the question revision task. The T5 model is trained with data provided as auxiliary information in the ASQA dataset (Stelmakh et al., 2022), which contains revised questions for different answers \(a_{i}\) and passages \(p_{i}\) provided to human annotators to write the revised questions \(q^{\prime}_{i}\).5 The question revision model takes an
ambiguous question \(q\), an answer \(a_{i}\), and a passage \(p_{i}\) to generate a revised question \(q^{\prime}_{i}\). The input and output of the model are shown below.
\[\begin{split}\text{input}=\text{ question: }q+\text{ answer: }a_{i}+\text{ passage: }p_{i}\\ \text{output}=\text{ answer: }a_{i}+\text{ revised: }q^{\prime}_{i} \end{split}\]
At inference time, we repeat the revision process \(k\) times to increase the amount of information added to the original questions. In our experiments we use \(k=2\), because we observe that the model tends to generate identical questions for \(k>2\). The revised questions have an average length of 14.5 words, compared to 9.0 words for the original questions.
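The iterative revision loop can be sketched as follows; `revise` is a placeholder wrapper around the finetuned T5, consuming and producing strings in the format given above.

```python
def revise_iteratively(question, answer, passage, revise, k=2):
    """Apply the revision model k times (k = 2 here), stopping at a fixed point.

    `revise` is a placeholder wrapper around the finetuned T5: it takes the
    formatted input string and returns "answer: <answer> revised: <question>".
    """
    revised = question
    for _ in range(k):
        out = revise(f"question: {revised} answer: {answer} passage: {passage}")
        if "revised:" not in out:
            break
        candidate = out.split("revised:", 1)[1].strip()
        if candidate == revised:    # converged; further rounds are no-ops
            break
        revised = candidate
    return revised
```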
#### 3.3.2 Long-form Answer Generation
After revision of the top-retrieved SIXPAQ questions, we perform a generation task to summarize the differences between the multiple answers of the ambiguous questions. In addition to the revised questions from SIXPAQ, we also retrieve a few passages from Wikipedia for generating long-form answers. We find that retrieving passages is necessary for ASQA, perhaps because annotators were encouraged to include background information in the long-form answers. Such information is not specific to any answer, so merely retrieving from SIXPAQ does not provide it. To mitigate this problem, we follow the baseline (Stelmakh et al., 2022) and also include the top \(n\) passages retrieved by JPR (Min et al., 2021) from Wikipedia.6 The inputs to the generation model are thus a concatenation of the original question \(q\), the answers and retrieved questions \(\{(a_{i},q^{\prime}_{i})\}\), and the retrieved passages \(\{p_{j}\}\). The target outputs are the long answers provided in ASQA. We finetune a T5-large model (Raffel et al., 2020) for this generation task.
Footnote 6: JPR is an auto-regressive reranking model aiming for increasing the diversity of retrieved passages.
\[\text{input}=\text{question: }q+\text{conditions: }a_{1},\ q^{\prime}_{1},\ \ldots+\text{passages: }p_{1},\ \ldots\]
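Assembling this input is a plain string concatenation; a sketch (the exact separators are an assumption):

```python
def build_generator_input(question, answer_question_pairs, passages):
    """Assemble the generator input; exact separators are an assumption."""
    conditions = ", ".join(f"{a}, {q}" for a, q in answer_question_pairs)
    return (f"question: {question} "
            f"conditions: {conditions} "
            f"passages: {' '.join(passages)}")
```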
## 4 Experiments
In this section, we discuss the experimental results for retrieving diverse passages and generating long-form answers for ambiguous questions.
### Retrieval of Diverse Passages
#### 4.1.1 Dataset
We use AmbigQA (Min et al., 2020) and WebQuestionsSP (Yih et al., 2016) in our experiments. AmbigQA is an open-domain QA dataset derived from NQ (Kwiatkowski et al., 2019) which contains questions that are ambiguous and thus have multiple possible answers. WebQuestionsSP (WebQSP) (Yih et al., 2016) is another dataset which contains open-domain questions asked by web users, and a subset of the questions have multiple answers. We only evaluate on multi-answer questions (in both datasets) in this experiment: 1172 questions in the AmbigQA dev set and 809 questions in the WebQuestionsSP test set have multiple answers.7
Footnote 7: We consider a question to have multiple answers if at least one of the annotators finds multiple answers.
#### 4.1.2 Evaluation
To measure the diversity of retrieval, we evaluate models' performance as the recall of answers. Similar to traditional passage-level retrieval models (Karpukhin et al., 2020), the
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline Answers & Context & Revision 1 & Revision 2 \\ \hline Michigan Stadium & Michigan Stadium, nicknamed “The Big House”, is the home stadium for the University of Michigan _men’s football team_ (Michigan Wolverines) in Ann Arbor, Michigan. Michigan Stadium was _built in 1927_, and it is the largest stadium in the US... & Where is the home stadium of Michigan Wolverines _men’s football team_? & Where is the home stadium of Michigan Wolverines _men’s football team_ _built in 1927_? \\ \hline Crisler Center & Crisler Center (formerly known as the University Events Building and Crisler Arena) is an _indoor_ arena located in Ann Arbor, Michigan. It is the home arena for the Michigan Wolverines _men’s and women’s basketball teams_... & Where is the _indoor_ home stadium of Michigan Wolverines _men’s and women’s basketball?_ & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of questions before and after revision; answer-specific information extracted from the context is shown in italics.
recall is measured as the percentage of correct answers that are mentioned in the retrieved passages.
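In code, this metric reduces to a containment check over the concatenated top-k passages; the sketch below uses simple lowercased string matching as a stand-in for the exact answer-matching rule.

```python
def answer_recall_at_k(passages, gold_answers, k=10):
    """Percentage of gold answers mentioned in the top-k retrieved passages."""
    text = " ".join(passages[:k]).lower()
    hits = sum(answer.lower() in text for answer in gold_answers)
    return 100.0 * hits / len(gold_answers)
```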
#### 4.1.3 Results
Experimental results are presented in Table 2. Numbers in the first block (passage-based retrieval) show the performance of the baseline models, BM25, DPR (Karpukhin et al., 2020) and GTR (Ni et al., 2021), in directly retrieving passages from Wikipedia. DPR is another popular dense retrieval method with separate query and candidate encoders, trained with hard negatives. We re-run the open-sourced DPR code and evaluate the retrieved results. We also run GTR-large for passage retrieval on both datasets. For question-based retrieval, we apply the proposed method to both PAQ (Lewis et al., 2021) and our SIXPAQ dataset. Again, we use GTR-large in our method. Experiments with (max) refer to the method of directly mapping top-retrieved questions to the passages where they were generated (Eq. 1), while ones with (count) refer to returning the passages where most top-retrieved questions were generated (Eq. 2). Compared to passage-based retrieval methods, indirect retrieval with SIXPAQ yields better performance than using BM25 or GTR directly. In particular, the recall@10 with BM25 improves from 44.4 to 45.8 on AmbigQA and from 32.0 to 34.4 on WebQuestionsSP. The performance with GTR is also better with SIXPAQ: on AmbigQA, the recall@10 improves from 61.9 to 63.4, and the improvement is larger on WebQuestionsSP, with an increase from 46.3 to 53.0. We conjecture that the improvement with GTR is less significant on AmbigQA because GTR is pretrained on NQ, which is a superset of AmbigQA.
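The two question-to-passage mappings can be sketched as follows; this is our reading of Eq. 1 ("max") and Eq. 2 ("count"), which are defined earlier in the paper, with hypothetical variable names.

```python
from collections import defaultdict

def rank_source_passages(retrieved, source_passage, mode="count"):
    """retrieved: (question_id, similarity) pairs sorted by similarity;
    source_passage: maps each generated question to its source passage."""
    scores = defaultdict(float)
    for qid, sim in retrieved:
        pid = source_passage[qid]
        if mode == "max":            # Eq. 1: best-matching generated question
            scores[pid] = max(scores[pid], sim)
        else:                        # Eq. 2: how many top questions it generated
            scores[pid] += 1.0
    return sorted(scores, key=scores.get, reverse=True)
```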
### AmbigQA with Long Outputs
#### 4.2.1 Dataset
The ASQA dataset contains 4353 train and 948 dev examples. Each example contains an ambiguous question, a list of disambiguated questions and answers (short text spans), and a long-form answer which discusses the difference between short answers. Due to the high variance of long-form answers, each example in ASQA was annotated by two human annotators and the better score among the two annotations is recorded. The average length of answers is 65.0 white-space split tokens. Each question has an average of 3.4 different short answers.
#### 4.2.2 Evaluation
In ASQA, predicted outputs are evaluated from a few different perspectives. First, as a long-output prediction task, it evaluates the similarity of predicted outputs with reference outputs using ROUGE-L scores. Second, it measures the recall of answers in the predicted outputs (named STR-EM): all possible answers must be mentioned in the predicted output in order to receive a full STR-EM score. Third, it introduces a learned metric, DISAMBIG-F1, with the goal of measuring whether the disambiguating information about answers in the outputs is accurate. To compute DISAMBIG-F1, the ASQA dataset uses a learned QA model to find the answers to a sequence of disambiguated questions (provided by annotators) from the generated output. The output receives a full DISAMBIG-F1 score if all predicted answers from the QA model match the oracle answers of the disambiguated questions. Finally, they compute an overall score, DR, as the geometric mean of ROUGE-L and DISAMBIG-F1. In addition, LEN (words) measures the average length of outputs in terms of words. Shorter outputs with higher DR scores are preferred.
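The overall score is therefore a single line of arithmetic; as a check, the sketch below reproduces the DR of the re-implemented 5-passage baseline in Table 3.

```python
import math

def dr_score(rouge_l, disambig_f1):
    """Overall DR: geometric mean of ROUGE-L and DISAMBIG-F1 (0-100 scale)."""
    return math.sqrt(rouge_l * disambig_f1)

print(round(dr_score(43.0, 26.4), 1))   # -> 33.7, matching Table 3
```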
#### 4.2.3 Results
We evaluate the finetuned T5-large model on the quality of predicted long-form answers. To show the effectiveness of the retrieved questions and answers from SIXPAQ, we compare to the outputs generated from retrieved passages only.
Results are presented in Table 3. The first group of results (DPR@1 and JPR@1) directly returns the top-1 passage retrieved by DPR and JPR. The second group of results, e.g. T5 (5 passages), shows the performance of directly generating outputs with the top 5 retrieved passages with T5-large. Both groups of numbers are copied from the original paper (Stelmakh et al., 2022).
To check whether higher recall from more retrieved passages leads to better outputs, we re-implement the baseline model to run it on more retrieved passages. The results
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{AmbigQA} & \multicolumn{2}{c}{WebQSP} \\ k & 5 & 10 & 5 & 10 \\ \hline _passage - based retrieval_ & & & & \\ Wikipedia + BM25 & 35.5 & 44.4 & 23.4 & 32.0 \\ Wikipedia + DPR & 50.7 & 57.7 & 36.2 & 43.0 \\ Wikipedia + GTR & 55.0 & 61.9 & 38.8 & 46.3 \\ \hline _question - based retrieval_ & & & & \\ PAQ + BM25 (max) & 36.9 & 43.6 & 26.7 & 32.7 \\ PAQ + GTR (max) & 43.7 & 51.3 & 33.2 & 39.6 \\ SIXPAQ + BM25 (max) & 35.5 & 45.8 & 24.8 & 34.4 \\ SIXPAQ + GTR (max) & 53.6 & 60.4 & 45.3 & 51.8 \\ SIXPAQ + BM25 (count) & 36.4 & 47.0 & 25.7 & 35.8 \\ SIXPAQ + GTR (count) & **55.9** & **63.4** & **46.7** & **53.0** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Recall@k of retrieving diverse answers for multi-answer questions in AmbigQA and WebQuestionsSP. Experiments with “max” (Eq. 1) and “count” (Eq. 2) use different methods to map top-retrieved questions to passages. We use GTR-large in the baselines and our method.
with 5 passages from our implementation match the numbers reported in the original paper (Stelmakh et al., 2022). However, as shown in Table 3, as the number of passages increases, both STR-EM and DISAMBIG-F1 drop (40.1 to 39.2, 26.4 to 25.9).
In the third group of experiments, we retrieve questions from SIXPAQ and add top 10 answers with their conditions to the input. Without changing the model, the performance of long-answer generation with information retrieved from SIXPAQ increases by 0.8 in ROUGE-L and 2.5 in DISAMBIG-F1. In addition, the output length of the model with information from SIXPAQ is also \(\sim\)10% shorter than the baseline but covers more answers in its outputs.
#### 4.2.4 Ablations
**Recall of Answers** In the first ablation experiment, we justify the claim that retrieving questions from SIXPAQ can improve the diversity of answers. We report the recall of answers with and without questions retrieved from SIXPAQ in terms of the number of tokens (see Figure 1). With 10 questions from SIXPAQ added to 5 passages, the recall of answers improves from 65.5 to 71.1, which leads to around 2 additional points in the final DR metric. From another perspective, with as few as 10 questions and 2 passages, the recall becomes comparable to 5 passages (66.1 vs. 66.5). Furthermore, the total length of 10 answers plus 2 passages is 43% less than 5 passages (574.5 vs. 1008.8), since the information from the revised questions is more concise. This eventually leads to a 1-point increase in the final DR metric (34.6 vs. 33.7), as shown in Table 3.
**Accuracy vs. Input Length** We additionally compare the performance of models in DISAMBIG-F1 under different numbers of tokens. Results are shown in Figure 1 (right). With SIXPAQ questions, models get better DISAMBIG-F1 performance with shorter inputs. The DISAMBIG-F1 with 10 questions and 3 passages (775.6 tokens) is 28.2, better than the DISAMBIG-F1 of 27.2 with 5 questions and 4 passages (899.9 tokens) and DISAMBIG-F1 of 26.4 with 0 question and 5 passages (1008.8 tokens).
**Revised vs. Unrevised Questions** We further ablate our model to investigate the importance of the question revision step proposed in §3.3.1. The results are shown in Table 1 (right). The model's performance with 10 revised questions is consistently better than with unrevised questions.
Figure 1: Left: Recall of answers from the retrieved results with a varying number of passages. Right: DISAMBIG-F1 of predicted long answers with a varying number of passages. Both figures show that, for a given number of input tokens, inputs with more questions yield better outputs, in both answer recall and DISAMBIG-F1.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & LEN (words) & ROUGE-L & STR-EM & DISAMBIG-F1 & DR \\ \hline DPR @ 1\({}^{\dagger}\) & 99.9 & 31.1 & 30.1 & 16.7 & 22.8 \\ JPR @ 1\({}^{\dagger}\) & 196.8 & 27.9 & 45.0 & 25.8 & 26.9 \\ \hline T5 (1 passage)\({}^{\dagger}\) & **63.0** & 40.3 & 33.6 & 21.2 & 29.2 \\ T5 (5 passages)\({}^{\dagger}\) & 71.1 & 42.7 & 39.9 & 25.1 & 32.7 \\ T5 (5 passages)\({}^{\dagger}\) & 71.6 & 43.0 & 41.0 & 26.4 & 33.7 \\ T5 (5 passages) * & 68.1 & 43.0 & 40.1 & 26.4 & 33.7 \\ T5 (7 passages) * & 69.3 & 43.0 & 39.5 & 25.5 & 33.1 \\ T5 (10 passages) * & 68.9 & 43.0 & 39.2 & 25.9 & 33.2 \\ \hline _ours_ & & & & & \\ T5 (1 passage + 10 questions) & 58.3 & 41.6 & 39.4 & 26.5 & 33.2 \\ T5 (2 passages + 10 questions) & 62.0 & 42.9 & 41.8 & 28.0 & 34.6 \\ T5 (3 passages + 10 questions) & 63.3 & 42.9 & 41.5 & 28.2 & 34.8 \\ T5 (5 passages + 10 questions) & 63.5 & **43.8** & **42.4** & **28.9** & **35.6** \\ \hline T5 (oracle) & 82.6 & 46.6 & 88.7 & 59.2 & 52.5 \\ Human & 64.8 & 49.4 & 98.4 & 77.4 & 61.8 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of long-form answer generation with retrieved answers and passages. All models are finetuned on T5-large. \({}^{\dagger}\) Numbers copied from the baseline (Stelmakh et al., 2022). * Numbers obtained by re-implementing the baseline models.
**Question Disambiguation** In the next ablation experiment, we turn to a more straightforward task to investigate whether the additional information retrieved from SIXPAQ helps disambiguate questions. Here, we study a question revision sub-task, using auxiliary data provided in ASQA.8 In this sub-task, the model should revise an ambiguous question using information provided in a passage such that it differentiates the provided answer from other answers. The task is similar to revising questions when constructing SIXPAQ (§3.3.1), except that the additional information added to the revised question should be contrastive, i.e., it should differentiate its answer from the other possible answers to the ambiguous question. For example, to differentiate the answers "Michigan Stadium" and "Crisler Center", one should provide the additional information "_basketball team_" vs. "_football team_", but not "_built in 1927_" vs. "_basketball team_".
Footnote 8: This information is also available in the question revision sub-task in AmbigQA (Min et al., 2020).
A naive model simply takes an ambiguous question, an answer, and the provided passage as input to predict the revised question. We instead retrieve similar questions, along with their answers and conditions, from SIXPAQ, and augment the provided passage with the retrieved information, similar to §3.3. The additional information should provide background knowledge for models to determine how to revise the question. Again, we finetune a T5 model for this question revision task. The model's output is measured with EDIT-F1, a metric proposed by Min et al. (2020) that evaluates only the edits made in the revised questions.9
Footnote 9: Please refer to Min et al. (2020) for more information on “EDIT-F1”.
The results are shown in Table 4. In addition to the naive baseline of only taking the provided passage as input (passage-only), we experiment with a few other options that can potentially improve the coverage of information for the question revision task. First, we retrieve the top 1 passage from Wikipedia (top-1 Wikipedia). Second, we expand the provided passage with preceding and following tokens to double the length of inputs (adjacent context). Third, we deliberately add a passage which contains different answers to the same question (contrasting passage).10 The results in Table 4 show that adding questions retrieved from SIXPAQ is the most effective method for revising questions, justifying our claim that questions from SIXPAQ are concise and provide sufficient background information for models to differentiate answers.
Footnote 10: Also available as auxiliary information in ASQA.
## 5 Conclusion
In this paper, we proposed to use a database of questions constructed from Wikipedia to answer ambiguous questions. We experiment on two different tasks to show its efficacy in answering questions with multiple answers. In the first task, we show that retrieving from generated questions can increase the diversity of retrieval results, reflected in an increased recall of answers. In the second task, we show that the increase in recall, along with the concise information contained in the revised questions, improves the performance of models on a challenging long-form QA task of summarizing and comparing the different answers of ambiguous questions.
|
2305.09583 | EIT spectra of Rydberg atoms dressed with dual-tone radio-frequency
fields | We examine spectral signatures of Rydberg atoms driven with near-resonant
dual-tone radio-frequency (RF) fields in the regime of strong driving. We
experimentally demonstrate and theoretically model a variety of nonlinear and
multiphoton phenomena in the atomic Rydberg response that manifest in the EIT
spectra. Our results echo previous studies of two-level atoms driven with
bichromatic optical fields. In comparison to the optical studies, the RF-driven
Rydberg system utilizes a more complex excitation pathway, and electromagnetic
fields from two different spectral regimes: a two-photon optical excitation
continuously creates highly excited Rydberg atoms, while RF fields drive
resonant coupling between the Rydberg levels and generate strong mixing. Yet,
our spectra reflect nearly identical effects of the dual-tone RF fields on the
atomic Rydberg observables, showing detuning-dependent splittings and Rabi
frequency dependent peak numbers and relative strengths, and avoided crossings
at subharmonic resonances. We thus validate previous two-state models in this
more complex physical system. In the context of Rydberg electrometry, we use
these investigations to explore a technique where we tune a known RF field to
observe spectra which give frequency and power of an unknown RF field using the
complex dual-tone spectra. | Maitreyi Jayaseelan, Andrew P. Rotunno, Nikunjkumar Prajapati, Samuel Berweger, Alexandra B. Artusio-Glimpse, Matthew T. Simons, Christopher L. Holloway | 2023-05-16T16:31:03Z | http://arxiv.org/abs/2305.09583v1 | # EIT spectra of Rydberg atoms dressed with dual-tone radio-frequency fields
###### Abstract
We examine spectral signatures of Rydberg atoms driven with near-resonant dual-tone radio-frequency (RF) fields in the regime of strong driving. We experimentally demonstrate and theoretically model a variety of nonlinear and multiphoton phenomena in the atomic Rydberg response that manifest in the EIT spectra. Our results echo previous studies of two-level atoms driven with bichromatic optical fields. In comparison to the optical studies, the RF-driven Rydberg system utilizes a more complex excitation pathway, and electromagnetic fields from two different spectral regimes: a two-photon optical excitation continuously creates highly excited Rydberg atoms, while RF fields drive resonant coupling between the Rydberg levels and generate strong mixing. Yet, our spectra reflect nearly identical effects of the dual-tone RF fields on the atomic Rydberg observables, showing detuning-dependent splittings and Rabi frequency dependent peak numbers and relative strengths, and avoided crossings at subharmonic resonances. We thus validate previous two-state models in this more complex physical system. In the context of Rydberg electrometry, we use these investigations to explore a technique where we tune a known RF field to observe spectra which give frequency and power of an unknown RF field using the complex dual-tone spectra.
## I Introduction
Two-level atoms driven by intense bichromatic optical fields have been the subject of extensive experimental [1; 2; 3; 4] and theoretical [5; 6; 7; 8; 9; 10; 11] investigation. In these systems, both resonance fluorescence and absorption spectra have shown physics beyond the single-frequency Rabi splitting characteristic of atoms subject to monochromatic driving, including detuning-dependent and Rabi frequency independent spectral splittings, subharmonic resonances, and phase-dependent atomic dynamics. These investigations point to a wealth of multiphoton dynamics that are accessible to systems driven by multiple frequencies: bichromatic electromagnetically induced transparency (EIT) has been demonstrated in both cold atoms [12] and in hot vapors [13], and bichromatic and multifrequency fields have been employed in novel cooling methods for alkali atoms [14]. Looking beyond atomic vapors, the spectra of bichromatically driven solid state systems with single-molecule impurities [15] and nuclear spins of nitrogen vacancy centers in diamond [16] have revealed well-resolved subharmonic resonances and multiphoton effects, while a bichromatically driven quantum dot system demonstrated predicted quantum interference effects in fluorescence spectra [17; 18].
In the radio-frequency (RF) and microwave domains (MHz and GHz), dual-tone dressing has been used to demonstrate a dynamically modulated Autler-Townes (AT) effect in superconducting qubits, providing an enhanced experimental toolbox for qubit manipulation and control [19]. It has also been investigated in the context of alignment-based magnetic resonance spectra in cesium, where dual-tone driving between the ground state hyperfine levels modifies the standard AT splitting of the system dressed by a single field [20]. In Rydberg atoms, dual-tone microwave dressing was used to achieve a polarizability nulling effect [21]. An aspect of Rydberg systems that has been less explored is their behaviour under near-resonant dual- and multi-tonal RF dressing.
Over the last decade, continuously detected Rydberg EIT systems have proven to be an invaluable technology for sensitive, external calibration-free electrometry using the AT splitting (Rabi splitting) of Rydberg energy levels dressed by an RF field [22]. Under single tone resonant driving, the AT spectra display splittings proportional to field strength, providing a direct measurement of electric field; the number and relative strengths of spectral peaks are independent of field strength in this case. For strong driving with dual-tone RF dressing, multiphoton effects may be expected to yield spectra that are qualitatively different than those obtained with single tone driving.
Here we extend the physics of bichromatic optical dressing of atoms to the RF regime, using a two-level Rydberg system probed by EIT in a warm atomic vapor of \({}^{85}\)Rb. Atomic population is driven from the ground electronic state into a Rydberg state via an intermediate energy level using a two-photon optical excitation scheme. Two near-resonant RF fields couple this Rydberg state to an adjacent Rydberg state. While the full dynamics of this system is rather complex, we show that the experimental spectra may be modeled more simply by treating the two Rydberg levels as an isolated two-level system driven by a dual-tone electromagnetic field. We use a Floquet analysis to model the response of the dressed Rydberg levels, obtaining good agreement with the rich experimental spectra.
We emphasize that the experimental spectra are optically detected (EIT) Rydberg state energy spectra. The two-level dressed-atom physics of the RF-Rydberg system is probed with electromagnetic frequencies belonging to a different spectral range than those fields that create the multiphoton Floquet spectra. Our results thus validate the applicability of two-level dressed atom physics in the RF domain, where the optical fields that create the highly excited Rydberg atoms may in turn be regarded as indirect probes that do not significantly alter the relatively long-lived two-level system dynamics. Since the Rydberg system displays a wide range of resonances from GHz to MHz frequencies, these models may be validated for a wide range of dressing field frequencies. Further, the large electric dipole moments of highly excited Rydberg states allow the multiphoton dynamics of dual-tone dressed atoms to be demonstrated with modest RF field amplitudes when compared to optical dressing.
We distinguish our experiments from the "atom mixer" configurations which used two RF tones applied to the atoms to transfer an intermediate frequency into the optical domain, providing phase sensitive detection for RF fields [23]. The atom mixer used a strong resonant local oscillator (LO) and a weaker signal field several kHz detuned and well within the EIT linewidth. Further, only the resulting beat signal on the optical frequency of the probe was detected. Here, we operate in the strong-driving regime, where both RF fields contribute non-trivially to the multiphoton dynamics that are integral to the spectra we observe. We explore configurations where one or both fields are far (\(\lesssim\!200\) MHz) off-resonance from the Rydberg transition. We further distinguish these experiments from previous bichromatic EIT configurations [12; 13] that used optical fields in a three-level \(\Lambda\) configuration in contrast to our experiment where the dual-tone RF-induced Rabi splitting of two Rydberg levels is probed by a cascade EIT system, and from the dual-tone driving of atomic energy levels employed in Ref. [20] where the coupling is through the magnetic dipole term.
We begin with a description of the experimental Rydberg EIT setup in Sec. II, and present our dual-tone two-level Floquet theoretical model in Sec. III. Experimental results are presented against computed spectra in Sec. IV for 'symmetric' cases, and for 'asymmetric' cases in Sec. V, with an eye toward applications. We conclude in Sec. VI. Extensions of our model to include atomic fine structure and magnetic structure are in Appx. A.
## II Experiment
We use a two-photon excitation scheme to create excited Rydberg atoms in a \({}^{85}\)Rb vapor cell at room temperature. The excitation pathway \(5S_{1/2}\to 5P_{3/2}\to nD_{5/2}\) is shown in Fig. 1. We use a probe laser at 780 nm, locked to the \(F=3\to F^{\prime}=4\) transition on the \(D_{2}\) line of \({}^{85}\)Rb, and a counter-propagating coupling laser at 480 nm to excite atoms into the Rydberg state. We employ cascade EIT between the atomic ground state and the Rydberg state as our detection scheme: as the coupling laser is scanned through the Rydberg manifold, EIT of the probe beam appears when the two-photon system is resonant to a Rydberg state. Both lasers are power-locked with acousto-optic modulators. We isolate the EIT signal using differential detection of two power-balanced probe beams; one beam overlapped with the coupling laser for EIT, and the other an absorption reference.
In this experiment, we investigate the response of the atomic Rydberg states to a dual-tone RF field addressing the Rydberg transition \(61D_{5/2}\to 62P_{3/2}\) with a transition dipole moment \(\wp=2366\,ea_{0}\) and a resonant transition frequency of 9.226 GHz, calculated using the ARC software package [24]. The Rydberg transition resonance frequency \(\omega_{0}\) is verified using a single RF field at moderate power and balancing the two AT split peaks. RF fields are applied to the vapor cell using a horn antenna that is oriented such that the RF fields propagate perpendicular to the direction of propagation of the optical fields. The RF fields and the probe and coupling lasers are all linearly polarized in the \(\hat{z}\) direction, perpendicular to the plane of the optical table. The two RF tones whose effects we investigate in this work are outputs from a dual-output signal generator with independently controllable powers and detunings. The two outputs are combined with a power combiner and applied to the RF horn antenna.
For our data, we use a frequency scale set by a scan of the coupling laser detuning (\(\delta_{c}\)) over the states \(61D_{3/2}\) and \(61D_{5/2}\), which have a fine-structure splitting of 50.339 MHz, calculated using Ref. [24]. These EIT scans, simultaneously collected in a reference cell away from the horn, also provide a frequency reference to correct for offsets of the scans due to laser drift and other environmental effects. A slight residual 'drift' towards negative \(\delta_{c}\) is observed with increased field, potentially as a result of residual field-dependent shifting in the reference cell, which was poorly shielded from RF reflections. The two-level theory that we employ does not reproduce these shifts; however the spectral characteristics we emphasize in this work remain unaffected.
Figure 1: Rydberg atom with applied dual-tone RF fields.
## III Theoretical framework: Floquet Hamiltonian
We now discuss a simplified theoretical model that we will use to analyse our experimental data. We restrict this discussion to the dynamics of a two-level atomic system composed of two Rydberg states dressed by a dual-tone RF field. The two-level model reproduces the main features of the experimental spectra with good agreement, which is one of the main results of this work. Extensions to the two-level model that consider atomic structure are discussed in Appx. A.
Consider two Rydberg states \(\ket{a}\) and \(\ket{b}\) dressed with a dual-tone RF field of the form
\[\mathbf{E}(t)= \left(|E_{1}|\cos(\omega_{1}t)+|E_{2}|\cos(\omega_{2}t+\Phi) \right)\mathbf{\hat{z}}, \tag{1}\]
where \(|E_{i}|\) is the magnitude of the field at frequency \(\omega_{i}\), and \(\Phi\) is the relative phase between the two fields. We denote the bare Rydberg atomic resonance as \(\omega_{0}\), so that \(\delta_{1}=\omega_{0}-\omega_{1}\) and \(\delta_{2}=\omega_{0}-\omega_{2}\) are the detunings of the two field components from the Rydberg resonance. We note that the trigonometric identity \(\cos(\theta)+\cos(\phi)=2\cos\left(\frac{\theta+\phi}{2}\right)\cos\left(\frac {\theta-\phi}{2}\right)\) suggests the interpretation of our dual-tone setup as a carrier at the mean frequency, amplitude modulated at a rate of half their difference.
Defining the Rabi frequencies of the two components as \(\Omega_{i}=-\langle a|\mathbf{d}\cdot\mathbf{\hat{z}}|b\rangle|E_{i}|/\hbar\), where \(\wp_{a,b}=\langle a|\mathbf{d}|b\rangle\) is the atomic dipole moment and \(\mathbf{\hat{z}}\) is the direction of linear polarization of the applied RF fields, the interaction Hamiltonian in the rotating wave approximation (RWA) is [25]:

\[H_{\text{RWA}}=\frac{\hbar}{2}\begin{pmatrix}0&\Omega_{1}e^{-i\delta_{\delta}t/2}+\Omega_{2}e^{i(\delta_{\delta}t/2+\Phi)}\\ \Omega_{1}^{*}e^{i\delta_{\delta}t/2}+\Omega_{2}^{*}e^{-i(\delta_{\delta}t/2+\Phi)}&-\Sigma_{\delta}\end{pmatrix},\]
where we define the difference and sum of the two RF detunings \(\delta_{1,2}\):
\[\delta_{\delta} \equiv\delta_{2}-\delta_{1}=\omega_{2}-\omega_{1} \tag{2}\] \[\Sigma_{\delta} \equiv\delta_{1}+\delta_{2}=2\omega_{0}-(\omega_{1}+\omega_{2})\,. \tag{3}\]
We set \(\Phi=0\) in the subsequent analysis. Note that setting \(|\Omega_{1}|=|\Omega_{2}|\) reduces \(H_{\text{RWA}}\) to the lab-frame Hamiltonian of a two-level system with energy separation \(\Sigma_{\delta}\) and coupled by a field at frequency \(\delta_{\delta}/2\). A particular case is for \(\delta_{1}=-\delta_{2}\), where the Floquet modulation frequency is equal to the detunings as \(\delta_{\delta}/2=|\delta_{1}|=|\delta_{2}|\), and in the RWA the states are degenerate as \(\Sigma_{\delta}=0\).
Unlike the case of a two-level atom dressed by a monochromatic field, the dual-tone dressed system in the rotating frame shows a residual time dependence in the off-diagonal coupling terms, so that the Schrodinger equation for the system cannot be integrated directly. Nevertheless, the time-periodicity of the residual driving confers a symmetry that allows a conserved quasienergy for the system. We treat this residual time dependence using a Floquet picture, following Shirley's approach to obtaining the Floquet modes and quasienergies of the periodically modulated system [26]. This approach promotes the time-dependent Hamiltonian from the state space \(\mathcal{H}\) to an extended Floquet space where the eigenvectors are now labeled by two indices: the eigen index of the bare Hamiltonian and the "photon number" index of the Floquet mode of \(N\) photons with a frequency at the Floquet frequency (\(\omega_{F}\equiv\delta_{\delta}/2\)). In this extended space, the harmonic components of the Schrodinger eigenvalue equation obey a recursion relation:
\[\begin{pmatrix}0&0\\ 0&-\Sigma_{\delta}/2\end{pmatrix}\begin{pmatrix}a_{N}\\ b_{N}\end{pmatrix}+\begin{pmatrix}0&\Omega_{1}/2\\ \Omega_{2}^{*}/2&0\end{pmatrix}\begin{pmatrix}a_{N+1}\\ b_{N+1}\end{pmatrix}+\begin{pmatrix}0&\Omega_{2}/2\\ \Omega_{1}^{*}/2&0\end{pmatrix}\begin{pmatrix}a_{N-1}\\ b_{N-1}\end{pmatrix}=(\epsilon+N\hbar\omega_{F})\begin{pmatrix}a_{N}\\ b_{N}\end{pmatrix} \tag{4}\]
In the dressed state basis, and for equal Rabi frequencies \(|\Omega_{1}|=|\Omega_{2}|=|\Omega|\), the ladder of states separated by the Floquet frequency \(\omega_{F}\) has a tri-diagonal representation that leads to a Bessel function representation \(d_{N}\propto J_{N}\left(\Omega/\omega_{F}\right)\) within a dressed state manifold, where \(d_{N}\) are the expansion coefficients of the atom-field coupled dressed states in the bare state basis [9].
Using Eq. 4 we may build an infinite-dimensional time-independent Floquet Hamiltonian \(\mathcal{H}_{F}\) that satisfies the eigenvalue equation \(\mathcal{H}_{F}\mathbf{\varphi}_{N}=\epsilon_{N}\mathbf{\varphi}_{N}\). In practice, the infinite-dimensional Hamiltonian is truncated to some large value \(N_{max}\) for which the solutions converge. All two-level models in this work use \(N_{max}=50\), which was more than sufficient for reasonable convergence. An example cutoff criterion is that the population in the \(N_{max}^{\text{th}}\) sideband is small, i.e. \(<0.1\%\) of the population.
We obtain the eigenvectors and eigenenergies by diagonalizing the Floquet Hamiltonian \(\mathcal{H}_{F}\). These eigenvectors and eigenenergies are the Floquet modes and quasienergies \(\epsilon_{N}\), represented in the expanded Floquet basis.
To compute the Floquet mode occupation given a specific input state, we project the initial state into the Floquet basis and compute the square of these amplitudes [26]. We plot the mode occupation against quasienergy \(\epsilon\) to generate the theory waterfall plots. The Floquet modes and quasienergies are sensitive to the detunings and the individual Rabi frequencies of the applied RF fields. In this work, we examine the spectral signatures of the Rydberg response as these parameters are varied.
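A minimal NumPy sketch of this procedure is given below. It assembles the block-tridiagonal matrix implied by Eq. (4), truncates at \(N_{max}\), and projects the initial bare state onto the Floquet modes; the function and variable names are ours, and the example parameters mimic the symmetric, power-balanced configuration of Sec. IV.

```python
import numpy as np

def dual_tone_floquet(omega1, omega2, delta1, delta2, n_max=50):
    """Quasienergies and mode occupations for the two-level dual-tone
    problem of Eq. (4); all inputs are angular frequencies (hbar = 1)."""
    w_f = 0.5 * (delta2 - delta1)            # Floquet frequency delta_delta/2
    sigma = delta1 + delta2                  # Sigma_delta
    h0 = np.diag([0.0, -0.5 * sigma]).astype(complex)
    v_plus = 0.5 * np.array([[0.0, omega1],  # V+: couples block N to N + 1
                             [np.conj(omega2), 0.0]])
    n_blocks = 2 * n_max + 1
    H = np.zeros((2 * n_blocks, 2 * n_blocks), dtype=complex)
    for i, n in enumerate(range(-n_max, n_max + 1)):
        b = slice(2 * i, 2 * i + 2)
        H[b, b] = h0 - n * w_f * np.eye(2)
        if i + 1 < n_blocks:
            b1 = slice(2 * i + 2, 2 * i + 4)
            H[b, b1] = v_plus
            H[b1, b] = v_plus.conj().T       # equals V- of Eq. (4); H Hermitian
    eps, modes = np.linalg.eigh(H)
    # Occupation: square of the projection of the initial state |a, N=0>
    # onto each Floquet mode; raise n_max until the weight reaching the
    # outermost blocks falls below the 0.1% cutoff quoted above.
    occupation = np.abs(modes[2 * n_max, :]) ** 2
    return eps, occupation

# Symmetric, balanced example (cf. Fig. 2): peaks at multiples of delta.
MHz = 2 * np.pi * 1e6
eps, occ = dual_tone_floquet(105 * MHz, 105 * MHz, 55 * MHz, -55 * MHz)
```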
## IV Experimental results: symmetric detuning and power-balanced fields
Our main results are the observed agreement between experimental Rydberg EIT spectra and a theoretical model of the dual-tone driven Rydberg system. We predict and observe many nonlinear and multiphoton effects due to the dual-tone RF dressing.
We present experimental data alongside theoretical predictions, using similar axes with different units, matching experimental and theory parameters. The energy spectrum is scanned experimentally by laser detuning \(\delta_{c}\), and state theory energy is given in quasienergy \(\epsilon\). The color/darkness axis represents experimental transmittance (EIT), alongside theory population projections of the \(61D_{5/2}\) state into the Floquet modes. The waterfall axis scans applied field strength in \(\sqrt{\mathrm{m}\mathrm{W}}\), pairing with Rabi frequencies \(\Omega\). We convert between Rabi frequency and applied field with a two-point function using Rabi frequencies of \(2\pi\cdot\{70,160\}\) MHz for applied RF powers of \(\{0.24,6.12\}\) dBm, which agrees well with the experimental data, and corresponds to a linear region of a single-field Autler-Townes scan of Rabi frequencies. We compensate the powers sent to the horn to maintain field strength (\(\sqrt{\mathrm{m}\mathrm{W}}\) or \(|\Omega|\)) across the horn's frequency-dependent gain curve.
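For reference, this conversion can be sketched as below; we assume the "two-point function" is linear in the field amplitude \(\sqrt{P}\), which is our reading of the text rather than the authors' exact calibration.

```python
import numpy as np

p_dbm = np.array([0.24, 6.12])            # calibration powers from the text
rabi_mhz = np.array([70.0, 160.0])        # corresponding Omega / (2 pi)
amp = np.sqrt(10.0 ** (p_dbm / 10.0))     # field amplitude in sqrt(mW)
slope, offset = np.polyfit(amp, rabi_mhz, 1)

def rabi_from_power(p):
    """Rabi frequency in MHz (over 2 pi) for an applied RF power p in dBm."""
    return slope * np.sqrt(10.0 ** (p / 10.0)) + offset
```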
In the following sections we discuss the spectral features of experimentally obtained EIT spectra for a variety of configurations of RF fields. The main features of our experimental spectra are very well modeled by a simple two-state Hamiltonian that includes just the Rydberg states \(61D_{5/2}\) and \(62P_{3/2}\), coupled with a linearly polarized dual-tone RF field. In this Section, we demonstrate "symmetric" RF dual-tone fields that are symmetrically detuned (\(\delta_{1}=-\delta_{2}=\delta\)) from the Rydberg resonance frequency of 9.226 GHz, and power balanced, with equal Rabi frequencies (\(|\Omega_{1}|=|\Omega_{2}|=|\Omega|\) ). In Sec. IV.1, we scan Rabi frequency \(\Omega\) for a few fixed detunings \(\delta\), and in Sec. IV.2, we scan detuning \(\delta\) for a few fixed Rabi frequencies \(\Omega\).
Figure 2: Experimental (a1, b1, c1) and theoretical (a2, b2, c2) waterfall plots showing Rydberg EIT spectra obtained through a simultaneous scan of \(|\Omega_{1}|=|\Omega_{2}|=|\Omega|\) with detunings \(\delta_{1}=-\delta_{2}=\delta\) kept constant. The spectral features appear at coupling laser detunings spaced by the symmetric detuning \(\delta\). The columns correspond to \(\delta=35\) MHz, \(55\) MHz, and \(85\) MHz. These locations are marked on the \(x\) axis to emphasize the appearance of the spectral peaks at these locations. The features are reproduced in the theoretical Floquet quasienergy spectra, with waterfall plots over \(\Omega\) showing the mode occupations of Floquet modes, against Floquet quasienergy \(\epsilon\). The Floquet modes appear at quasienergies spaced by \(\delta\) and this spacing remains constant as we scan \(\Omega\). The mode occupation has a sensitive dependence on \(\Omega\).
#### iii.1.1 Scanning Rabi frequencies
Our first set of results is presented in Fig. 2, which shows waterfall plots of Rydberg EIT spectra as the symmetric detuning \(\delta\) is held constant while both power-balanced Rabi frequencies \(\left|\Omega\right|\) are simultaneously scanned. We present data for three different values of \(\delta\).
For this case, the two-state model predicts equal and opposite energy shifts of the states at each Rabi frequency; the eigenvalues of the Floquet Hamiltonian remain identically the diagonal elements. The quasienergies therefore remain unchanged as the Rabi frequency is scanned, resulting in spectra where the spacing of the Floquet modes is set entirely by the detunings, and this mode spacing remains constant as Rabi frequencies are swept.
The Floquet mode occupation is, however, sensitive to the Rabi frequencies: higher Rabi frequencies populate modes with higher photon number. The mode occupations follow the Bessel functions in this two state model (the inclusion of more states causes deviations from this Bessel function behaviour): \(d_{N}\propto J_{N}\left(\Omega/\delta\right)\), where \(N\) signifies the Floquet mode index, and \(\Omega\) and \(\delta\) are the balanced Rabi frequencies and symmetric detunings of the two RF tones [9]. The Bessel functions are visually represented in the data of Fig. 2, as seen by following the peak heights corresponding to a specific value of detuning \(\delta\) as we scan the dressing field power (that is, looking at a vertical slice of a waterfall plot at the locations \(\delta_{c}=N\delta\)).
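These weights are easy to tabulate; the short sketch below evaluates the predicted two-state peak heights along a waterfall scan (scipy assumed available).

```python
import numpy as np
from scipy.special import jv

# Predicted height of the peak at delta_c = N*delta as Omega/delta is
# scanned; the weights |J_N|^2 sum to 1 over all N (Bessel identity).
x = np.linspace(0.0, 4.0, 400)                  # Omega / delta
weights = {N: jv(N, x) ** 2 for N in range(-3, 4)}
```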
We note that the original investigations of bichromatic optical driving of two-level atoms showed distinct linewidths for the odd and even Floquet modes; these widths were later shown to depend on the ratio of the Rabi frequency to the Floquet frequency (\(\delta_{\delta}/2\)), as well as the natural widths of the atomic states [9]. In our data, the two Rydberg levels have very similar decay rates, and state decay linewidths in the kHz regime--the state lifetimes at room temperature, computed using Ref. [24], are \(\tau_{61D_{5/2}}\approx 110\,\mu\)s and \(\tau_{62P_{3/2}}\approx 145\,\mu\)s--and we observe no obvious difference in the linewidths of the even and odd peaks, which are primarily set by the Doppler-broadened EIT linewidth (typically several MHz).
#### iii.1.2 Scanning Detunings
Our second set of results is presented in Fig. 3, which shows waterfall plots of Rydberg EIT spectra as the power-balanced equal Rabi frequencies of the two fields are constant while the symmetric detunings \(\delta_{1}=-\delta_{2}=\delta\) are simultaneously scanned. We present data for three different values of \(\Omega\). We note that the spectra are primarily visible between \(-\Omega<\delta_{c}<+\Omega\).
Again, the symmetric detunings each produce equal and opposite energy shifts of the states, and the eigenvalues of the Floquet Hamiltonian are determined by its
Figure 3: Experimental (a1–c1) and theoretical (a2–c2) waterfall plots showing a simultaneous scan of \(\delta_{1}=-\delta_{2}=\delta\), while the two Rabi frequencies \(\left|\Omega_{1}\right|\)= \(\left|\Omega_{2}\right|\)= \(\Omega\) are kept constant. The mode quasi-energies are shown to increase with \(\delta\) in each waterfall plot, and the splitting between the Floquet modes is linear in \(\delta\) for the range of \(\Omega\) shown. Increasing Rabi frequency has the effect of populating Floquet modes of higher Floquet photon index. At higher driving Rabi frequencies we see an overall shift of the spectra towards lower energy (as seen in (c1)). We attribute this shift to imperfect shielding of our reference cell from the applied RF fields.
diagonal elements. These eigenvalues are thus spaced by multiples of the Floquet frequency \(\omega_{F}\equiv\delta_{\delta}/2\). The mode spacing changes linearly over the waterfall plot with detuning--this is in contrast to Fig. 2 where the constant detunings (labeling each plot) resulted in equal-spaced spectral features over the waterfall with Rabi frequency.
Here we see that for the three different values of Rabi frequency the behaviour of the quasienergies with detuning remains the same. However, higher Rabi frequencies drive population to higher-order Floquet modes, resulting in a larger fan-out of the spectra; note the different scales of the colormaps for the three cases, as population is spread out among a greater number of Floquet modes. The Bessel functions determining the mode occupation, \(d_{N}\propto J_{N}\left(\Omega/\delta\right)\), are less visually apparent in these spectra as the denominator of the Bessel function argument changes over the waterfall.
## V Application scenario: asymmetric and unbalanced fields
With an eye toward application scenarios, we consider the case where one field parameter is held fixed while the other, controlled field is swept in power or detuning, and we observe the dual-tone Rydberg EIT spectrum. We note that any particular spectrum can give information about both tones; moreover, if one can 'tune' into the 'symmetric' and 'balanced' cases, the known field's strength and frequency give the 'unknown' RF signal's parameters. We illustrate here the trends which appear as one moves away in either the 'asymmetric' detuning or 'unbalanced' power scenarios.
In Sec IV.1 we demonstrated the dependence of the symmetric dual-tone spectra on detuning and Rabi frequency. The quasienergies remain constant with Rabi frequency for symmetric detunings and unbalanced Rabi frequencies. In general, the spectra are more complex functions of these parameters. For instance, for asymmetric detunings or Rabi frequencies, the quasienergies oscillate as the symmetric Rabi frequencies are simultaneously scanned. The spectra of Fig. 2 and Fig. 3 offer a tool for characterizing an unknown RF frequency through frequency and power matching with scans of the Rabi frequencies and detuning.
#### v.0.1 Unbalanced Power with Symmetric Detuning
Our third set of results is presented in Fig. 4, which shows waterfall plots of Rydberg EIT spectra for unbalanced but constant Rabi frequencies as the symmetric detunings \(\delta_{1}=-\delta_{2}=\delta\) are simultaneously scanned. In these plots, \(|\Omega_{2}|=1\sqrt{\mathrm{m}\mathrm{W}}\) was held constant, and we present data for five different values of \(|\Omega_{1}|\).
The mismatched Rabi frequencies cause an asymmetric spectrum where the shift is indicative of the relative Rabi frequencies. A similar behaviour can be observed for mismatched detunings. These asymmetric spectra thus allow a determination of the detuning of an unknown RF field based on the symmetry of the quasienergy curves.
Figure 4: Experimental (a1–e1) and theoretical (a2–e2) waterfall plots show a simultaneous scan of \(\delta_{1}=-\delta_{2}=\delta\), while the two Rabi frequencies \(|\Omega_{1}|\), as labeled, and \(|\Omega_{2}|=1\sqrt{\mathrm{m}\mathrm{W}}\) are kept constant. The imbalance in Rabi frequencies is apparent from the asymmetric spectra, and the sense of the imbalance shifts as \(|\Omega_{1}|\) is larger or smaller than \(|\Omega_{2}|\).
Figure 5: (a1–g1): Experimental waterfall plots show the obtained spectra against coupling laser frequency (\(\delta_{c}\)) as the detuning \(\delta_{1}\) is scanned for constant \(\delta_{2}\) and Rabi frequencies \(|\Omega_{1}|\)\(=|\Omega_{2}|\)\(=105\) MHz. (a2–g2): Numerically obtained Floquet quasienergy spectra showing Floquet mode occupations against quasienergies (\(\epsilon\)) as \(\delta_{1}\) is scanned. The overlaid red lines indicate equal detunings of the two fields (\(\delta_{1}=\delta_{2}\)). The arrows and overlaid dashed blue lines in (d2) indicate subharmonic resonances.
#### v.0.2 Asymmetric Detuning with Balanced Power
Our final set of results is presented in Fig. 5, which shows waterfall plots of Rydberg EIT spectra where the equal Rabi frequencies of the two fields and the detuning \(\delta_{2}\) are held constant, while \(\delta_{1}\) is swept through resonance. We present data for seven different values of \(\delta_{2}\).
This is a realistic scenario, for instance, when spurious or jamming signals are present. More complex spectra arise in this case. Below, we make several observations about the main features of such spectra in the two-field case.
First, when \(\delta_{2}\) is far off resonance, the spectrum is essentially the Autler-Townes spectrum of a single field as it sweeps through the Rydberg resonance. The spectra are largely mirrored for the cases where the constant detuning is \(\delta_{2}\) vs. \(-\delta_{2}\), for example Fig. 5 (a1) and (g1).
Second, around the region of equal detunings (marked with red lines in Fig. 5), we observe a phase diffusion effect in a region where the two RF fields are within a linewidth. This is due to the two fields inherently acting as a field at a single frequency but modulated according to their relative random phase. The measured spectrum is then essentially a time-averaged spectrum with random relative phases that effectively produce a random amplitude modulation of the Rabi splitting.
Third, in Fig. 5 (d1) and (d2), we have the case where \(\delta_{1}=0\), that is, one field is on resonance with the Rydberg transition. The red lines thus indicate the case where both fields are on resonance. This case is interesting in that the data on either side of this red line (and with \(\delta_{2}\) detuned up to several tens of kHz) corresponds to the atom-mixer configurations typically used to detect RF fields within a heterodyne configuration [23].
Avoided crossings appear at subharmonic resonances (marked with dashed blue lines in Fig. 5 (d2)) of the Rabi frequency \(\Omega\); when \(\delta_{1}=0\) these are resonances at \(\delta_{2}=\pm\Omega/k\), with small shifts from these values caused by ac Stark shifts from the applied fields. In our spectra, these subharmonic resonances are further shifted due to the different generalized Rabi frequencies of our fields, since the field strengths were kept equal as the detuning of a single field was scanned.
## VI Discussion and Conclusion
In this work we have explored the EIT spectra of Rydberg atoms dressed with intense dual-tone RF fields. These spectra are qualitatively different from the Autler-Townes spectra of Rydberg atoms dressed with a single field. The large dipole moments of Rydberg atoms allow these effects to be investigated with modest RF powers. Our analysis validates the broad applicability of theoretical models that treat the two-level dynamics of the RF-dressed Rydberg levels, while the optical fields act as indirect probes of the RF-induced multiphoton dynamics. This will allow a simplified treatment of dual-tone and multi-tone RF dressing in the Rydberg system that circumvents computationally intensive steady-state calculations with the full system density matrix. Such steady-state calculations, moreover, are non-trivial to perform in the case of dissipative systems subject to driving outside the high-frequency regime, where the interplay of driving and dissipation becomes important.
Dual-tone RF dressing of Rydberg states unlocks a new toolbox that we foresee has implications for novel detection schemes in low frequency RF field sensing. For instance, we may realise phase-sensitive detection of fields in the 100 MHz range by coupling the energy levels of the synthetic Floquet dimension with the low frequency field to be detected, creating a three-frequency loop scheme. Previous schemes for electric field detection in the low-frequency and dc regimes have employed similar Floquet sideband techniques [27; 28; 29]. However in those studies, the Stark shifting of energies caused field-dependent shifts of the atomic spectra, which necessitates careful calculation and re-tuning due to Stark shifts at the higher fields necessary to generate higher-order Floquet sidebands. In contrast, the symmetric shifts caused by symmetric dual-tone drives in the Rydberg system allows the observation of spectra in high Floquet modes but without the added complication of overall spectral shifts.
Our work enhances our understanding of the behavior of the system in an amplitude modulated field; in general the symmetric dual-tone dressing may be described as a 100% sine-wave amplitude modulated field. Amplitude modulation has been gainfully demonstrated for Rydberg-based antennas, where studies of bandwidth and sensitivity are at the forefront of research in the field. The investigations performed in this paper allow us to place on firmer footing our understanding of the response to amplitude modulation in the system.
Our investigations of these complex spectra will also be important as Rydberg atom-based electrometry transitions to real-world applications, where simple AT spectra may be significantly distorted and complicated by spurious tones that may be present, for example in the case of electromagnetic signal jamming. A deeper understanding of the response of Rydberg systems to multiple tones with variable detunings and powers will be necessary in order to unravel complex real-world spectra and to obtain meaningful results.
## Acknowledgements
The authors would like to express their gratitude for conversations on further extensions of the theoretical model with R. M. Potvliege, E. L. Shirley, and S. Eckel.
## Appendix A Fine structure and magnetic sublevels
An examination of the data in Fig. 3 and Fig. 5 shows some spectral structure that is not accounted for in our
two-level model. In this appendix we present extensions of the two-level analysis that include the fine-structure splitting of the two Rydberg states, and also the magnetic sublevels of the fine structure states.
### 4-level model: Fine structure of Rydberg levels
We show in Fig. 6 (a) the data of Fig. 3 (a1), emphasizing spectral features that are not reproduced in the two-level model. In Fig. 6 (b) we show results from a four-state computation that includes the states \(61D_{5/2}\), \(61D_{3/2}\), \(62P_{3/2}\), and \(62P_{1/2}\) (see Appx. A.3). The results reproduce some of the finer spectral structure that we observe in the data:
First, the green arrows in the figure highlight the appearance of the \(61D_{3/2}\) fine-state to the left of the zero-detuning coupling laser resonance (\(\delta_{c}=0\)) in the EIT spectra. This feature indicates an initial population in the \(61D_{3/2}\) state that does not appear to participate significantly in the dynamics. We likewise note that the \(61D_{5/2}\) population does not show significant mixing into this state.
Second, the blue arrows in the figure show an apparent splitting of the spectral line due to an avoided crossing with the Floquet quasienergies associated with the fine structure. A similar avoided crossing appears on the left of the central mode.
Third, the black arrow highlights a spectral feature due to the fine-structure that persists and appears more clearly for the slightly higher Rabi frequency of Figure 3 (b1): this feature is quite pronounced above the \(N=2\) mode, and is absent above the \(N=-2\) mode.
Fourth and lastly, an interesting feature that is not reproduced by our model is seen at quasienergies of \(\approx-75\) MHz in Figure 3 (b1). This is likely due to mixing from the hyperfine levels on the optical transition, which is expected to appear at this energy when Doppler mismatch is accounted for.
### 16-level model: Magnetic sublevels of Rydberg fine structure
The two-state and four-state models implicitly assume linear optical and RF polarizations. For imperfect RF polarization, faint additional lines appear in the experimental data. We confirm the locations of some of these additional features with a sixteen-level Hamiltonian that includes the magnetic sublevels \(m_{J}\) of the Rydberg fine-structure (see Appx. A.3). In Fig. 7 (b) we show results from a sixteen-state computation that includes the \(m_{J}\) sublevels of the states \(61D_{5/2}\), \(61D_{3/2}\), \(62P_{3/2}\), and \(62P_{1/2}\). We highlight relevant features of our spectra that are reproduced in this extended model, below.
First, the green arrows in the figure highlight the \(61D_{3/2}\) fine-state in the EIT spectra.
Second, the red arrows indicate the contamination of the Floquet quasienergy levels with \(|m_{J}|>1/2\) sublevels of the Rydberg states, that are not accounted for in the two-state and 4-state models.
Third, the complex spectra in the vicinity of the black arrows are also polarization effects that are only reproduced in the 16-state model. The mixing between the \(61D_{5/2}\) and \(61D_{3/2}\) states appears to take place via different \(m_{J}\) sublevels, as indicated by the overlap of colours
Figure 6: The 4-state model: (a) The data of Fig. 3 (a1) (symmetric, power-balanced fields as the detuning is scanned) shows structure that is not accounted for in the two-level model. (b) Including the fine states in the model reproduces these features. Here for visibility we represent the Floquet mode occupations as the widths of the spectral lines. The overlap of the Floquet quasienergies with the zero Floquet mode of \(61D_{5/2}\) is shown in blue, while the overlap with the zero Floquet mode of \(61D_{3/2}\) is shown in green. We scale the initial state occupations to reflect the relative peak heights of the \(61D_{3/2}\) and \(61D_{5/2}\) states in order to facilitate a comparison.
in the numerical Floquet spectra in the vicinity of these features.
We note that the inclusion of the fine states and the \(m_{J}\) states are necessary for explaining the slight asymmetry of the spectra in comparing, for instance, Fig. 5 (a1) and Fig. 5 (g1). As an additional point of interest, we note that the magnetic sublevels are not usually resolved in the standard Autler-Townes EIT spectra that we use for electrometry, and their behaviour in different experimental configurations contributes to our understanding of the Rydberg atomic system. A careful analysis of the relative strengths of the features that correspond to the different \(m_{J}\) sublevels could be used to analyse the polarization of RF fields in the spirit of previously investigated vector electrometry schemes [30].
### Details of theoretical modeling
Here we give further parameters used in computing the four-level and sixteen-level spectra, computed using the ARC package [24].
We show the numerical values used in the calculations: energy offsets in Tab. 1 and \(m_{J}\)-resolved dipole strengths in Tab. 2, for each available polarization.
Our Rydberg EIT spectra reflect the overlap of the Floquet modes with the bare Rydberg state \(61D_{5/2}\), and the state \(61D_{3/2}\), which are both detected in our EIT scans.
|
2308.06894 | When Provenance Aids and Complicates Reproducibility Judgments | It is well-established that the provenance of a scientific result is
important, sometimes more important than the actual result. For computational
analyses that involve visualization, this provenance information may contain
the steps involved in generating visualizations from raw data. Specifically,
data provenance tracks the lineage of data and process provenance tracks the
steps executed. In this paper, we argue that the utility of computational
provenance may not be as clear-cut as we might like. One common use case for
provenance is that the information can be used to reproduce the original
result. However, in visualization, the goal is often to communicate results to
a user or viewer, and thus the insights obtained are ultimately most important.
Viewers can miss important changes or react to unimportant ones. Here,
interaction provenance, which tracks a user's actions with a visualization, or
insight provenance, which tracks the decision-making process, can help capture
what happened but don't remove the issues. In this paper, we present scenarios
where provenance impacts reproducibility in different ways. We also explore how
provenance and visualizations can be better related. | David Koop | 2023-08-14T02:24:22Z | http://arxiv.org/abs/2308.06894v1 | # When Provenance Aids and Complicates Reproducibility Judgments
###### Abstract
It is well-established that the provenance of a scientific result is important, sometimes more important than the actual result. For computational analyses that involve visualization, this provenance information may contain the steps involved in generating visualizations from raw data. Specifically, data provenance tracks the lineage of data and process provenance tracks the steps executed. In this paper, we argue that the utility of computational provenance may not be as clear-cut as we might like. One common use case for provenance is that the information can be used to reproduce the original result. However, in visualization, the goal is often to communicate results to a user or viewer, and thus the insights obtained are ultimately most important. Viewers can miss important changes or react to unimportant ones. Here, interaction provenance, which tracks a user's actions with a visualization, or insight provenance, which tracks the decision-making process, can help capture what happened but don't remove the issues. In this paper, we present scenarios where provenance impacts reproducibility in different ways. We also explore how provenance and visualizations can be better related.
Provenance, Reproducibility.
## 1 Introduction
In science, the reproducibility of published work is important. There have been many important studies and high-profile perspectives written about the reproducibility crisis where important results cannot be reproduced later on [1, 17, 25]. At the heart of reproducibility is a judgment about whether two results are indeed the same. However, what is used to make this judgment can differ based on the type of result and the person making the judgment. Sometimes this judgment is aided by the artifacts that are provided with the result including the data, code, and documentation. Provenance, a record of how the original result was obtained, is an important piece of information that can inform such judgments. In this paper, we argue that while provenance generally has a positive impact on the reproducibility of results, in some cases, it can complicate matters.
Specifically, provenance can sometimes help users detect changes, but it can also lack particular details or add inconsequential ones, which can make a judgment about whether a result was reproduced more difficult. For example, Fig. 1a shows a visualization that was published along with the code and data to reproduce the visualization. However, even with the same version of the plotting library, another user may generate a visualization with a different appearance using the same code (Fig. 1b). In this case, the user had set their default configuration to a different font, colormap, and axis constraints, and the provenance did not capture the original defaults. Here, the difference is generally _cosmetic_ as the changes may not affect the insights obtained from the visualization. In Fig. 1c, a casual glance at the visualization would lead one to conclude that the results are the same, despite one data item being removed (circled in red). Here, provenance _should_ capture the changed data source and alert a user even if the change is hard to see. Depending on the detail of the provenance, a slight change to the process used in creating the visualization (e.g. using an alias of the original plotting command) may encourage a user to conclude that because the process was different, the results were not reproduced, despite the visualizations being exactly the same. Finally, Fig. 1d matches Fig. 1a exactly, but it may have been generated by different code or only arrived at after some user actions. When visualizations are interactive, the initial view and its configuration may not match views explored in another session. Provenance can capture these interactions, but in general, the assessment of the reproducibility of a result is not impacted by such exploration. However, because the insights obtained may differ, perhaps this should be a consideration.
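To make the Fig. 1a/1b scenario concrete, consider the following sketch (our own hypothetical illustration, not the code behind the actual figure), assuming matplotlib: the plotting function itself is unchanged, but uncaptured user defaults alter the rendered output.

```python
# Hypothetical sketch: identical plotting code rendered under two different
# sets of uncaptured defaults (font, colormap, axis limits), as in Fig. 1a vs. 1b.
import matplotlib
import matplotlib.pyplot as plt

def make_plot(data):
    fig, ax = plt.subplots()
    # no colormap is specified, so it is taken from the rcParams defaults
    ax.scatter(range(len(data)), data, c=data)
    return fig

data = [3, 1, 4, 1, 5, 9, 2, 6]
fig_a = make_plot(data)  # rendered with library defaults

# another user's configuration, invisible to code-and-data provenance
matplotlib.rcParams.update({
    "font.family": "serif",
    "image.cmap": "plasma",
    "axes.autolimit_mode": "round_numbers",
})
fig_b = make_plot(data)  # same code and data, different appearance
```

Unless the provenance records the full default configuration at render time, the two sessions are indistinguishable from the captured code and data alone.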
Because data is the input for visualization, any modifications to the data captured through provenance can be applied to understand how the resulting visualization changed. In many systems, data and visualizations are linked; changes to the data trigger updated visualizations [20, 23]. There has also been work to make sure that interactions on data spreadsheets [2] and visual interfaces [26, 31] are written to provenance and/or code. This might also function in reverse, where changes in the data trigger meaningful cues in the visualizations. There is existing work to highlight interaction histories in the visualizations themselves [10], and these can be tied to provenance information as well [6]. We believe these approaches can be enhanced, however, to better draw attention to factors of reproducibility in visualizations, data, and insights. With more fine-grained provenance of data, we might not only detect when a visualization changes but also encode this in the visualizations themselves. Similarly, when interactions on visualizations affect the perception of data items, pushing this provenance back to the data may allow us to compute exactly which data is affected.
We argue that while provenance usually aids in reproducibility,
we need to be careful about how the two are connected. First, two identical visualizations need not have the same provenance. Second, two identical provenance traces can be associated with visualizations that are different. Finally, even when visualizations are identical (different), viewers can arrive at different (identical) conclusions. Therefore, we should be prudent in using provenance to evaluate reproducibility.
## 2 Provenance in Computations and Visualizations
Provenance has received considerable attention in the past couple of decades and has a vast array of uses [16]. Provenance has also been refined to be meaningful in particular domains, including in databases [3, 7] and visualization [27, 32]. Computational provenance focuses on the data lineage and processes involved in a result [11, 28] while insight provenance focuses on the rationale for decisions [13]. The specialization of provenance has followed core questions in the respective fields and allowed the development of different techniques. We consider four classes of provenance:
* _Data provenance_ concerns the lineage of data, including the inputs that contributed to a particular data item. In databases, fine-grained provenance traces individual tuples through relational operations [3].
* _Process provenance_ concerns the steps involved in obtaining a result, including visualizations. This can be specified by code or a workflow and captured as an execution log [11].
* _Interaction provenance_ captures the steps a user makes as they interact with a representation of data [26, 27]. Note that this need not be limited to visualizations as state can also be captured (e.g. [5]).
* _Insight provenance_ traces the reasons why particular conclusions were reached, going beyond what the computer does to understand how people made a decision [13].
Data and process provenance cover a wide domain and are often independent of visualization, while interaction and insight provenance are often more closely associated with visualization. However, all of these classes can be tied to both data and visualizations. Filtering in a visualization can be characterized as a transformation of the data and represented as data provenance [31]. Similarly, interactions with a spreadsheet may also be important details to capture to understand how data is manipulated [2].
In the database community, where data is composed of tuples, different types of provenance have been distinguished, including why, how, and where [7]. These relate to the tuples that contribute to a resulting tuple (why), detail about the operations involved in the result (how), and the original locations of the resulting data (where). A number of variations on this theme of questions have been adopted to characterize provenance more generally [16] and in visualization [27, 32]. These characterizations are important because they point out the diversity of provenance information and how the differences facilitate use. Importantly, the use of provenance extends well beyond simply capturing the past or reproducing a past result. Provenance can be used to build models of user activity, suggest potential actions, and understand user intent [32].
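As a toy illustration of why-provenance (the data and function names here are our own, not from any particular system), each output tuple of a selection can carry the identifiers of the input tuples that justify it:

```python
# Hypothetical sketch: why-provenance for a relational selection, attached
# to each output tuple as the set of contributing input ids.
rows = [{"id": 1, "x": 3}, {"id": 2, "x": 7}, {"id": 3, "x": 5}]

def select(rows, pred):
    # each surviving tuple records the input id(s) that produced it
    return [dict(r, why={r["id"]}) for r in rows if pred(r)]

result = select(rows, lambda r: r["x"] > 4)
# result == [{'id': 2, 'x': 7, 'why': {2}}, {'id': 3, 'x': 5, 'why': {3}}]
```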
## 3 Reproducibility
The National Academy of Science's report on reproducibility states that a result is _reproducible_ if one obtains "consistent computational results using the same input data, computational steps, methods, code, and conditions of analysis" [24]. Note that it focuses on the result because the computational steps are assumed to be the same. In practice, this may be difficult to guarantee due to differences in hardware or configuration. While containers and virtual machines can mitigate many of the issues, this often comes at the expense of being able to reuse published results in standard work environments. In visualization, the task of reproducing a result is complicated by the different types of results spanning a wide variety of areas [9].
We focus on the reproducibility of visualizations as artifacts, a combination of the system that displays the visualization and the encoding itself. From one perspective, reproducibility is the same whether the result is data displayed as a table of statistics or as a visualization. For tables, we care about whether the values are exactly the same or, in the case of floating-point values, within some tolerance. For visualizations, we might similarly check whether the underlying graphical marks (e.g. pixels or vector elements) match. In many cases, such judgments can be automated. We can check that the tables do indeed match up without painstakingly comparing them manually. We can do the same for visual elements, but note that just as statistics may mask important patterns in the data, visualizations can also mask differences.
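Such automation might look like the following sketch (our own illustration, assuming NumPy and Pillow are available; the tolerance and file names are placeholders):

```python
# Hypothetical sketch: judge reproducibility of a static visualization by
# comparing rendered pixels within a small tolerance (e.g. for anti-aliasing).
import numpy as np
from PIL import Image

def images_match(path_a, path_b, tol=2):
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    if a.shape != b.shape:
        return False
    return int(np.abs(a - b).max()) <= tol

# images_match("fig_a.png", "fig_b.png") -> True only for near-identical renders
```

As noted above, passing such a check does not rule out masked differences, and failing it may reflect only cosmetic changes.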
We can also distinguish between the reproducibility of a static visualization, generally an image, versus an interactive visualization, that usually lives in the context of a visualization system or framework. For static visualizations, there is less involved in comparing visualizations because the elements are fixed. Most of the work here is focused on the pipeline that generates the final visualization. For interactive visualizations, the user can perform a number of operations that change the appearance of the visualization over time. Importantly, these actions can result in different views but _different sequences_ of actions can produce the same view (e.g. Fig. 1d). Certain actions may be reordered without any impact to the final view, and totally different actions may result in the same final view. To borrow from the language of database provenance, the why provenance may be the same, but the how provenance differs.
## 4 Impacts of Provenance on Reproducibility
We believe that understanding how provenance impacts reproducibility in visualization is important in understanding how to best capture and use provenance. In this section, we present some questions and ideas related to this interplay:
* When does visualization fail, and how can provenance help?
* Can we use the underlying data and its provenance to characterize similarities or differences in visualizations?
* Is it possible to ignore certain cosmetic differences between visualizations when evaluating their similarity?
* How detailed must captured provenance be to ensure reproducibility?
* How should we integrate interaction provenance with data and process provenance?
* Can we integrate provenance into visualizations?
_Provenance Importance._ At the end of the day, the goal of visualization is not to generate particular pixels on the screen but rather to communicate information to a viewer. A visualization can fail in this respect when the viewer comes to conclusions that are incorrect. Sometimes, this can be due to the visualization literacy of the viewer [4], but it can also be due to issues with the visualizations themselves. The types of problems highlighted in the algebraic treatment of visualization [19] and their characterization as visualization mirages [22] highlight a step beyond standard judgments in reproducibility. McNutt et al. note that a better understanding of the input data and analytical process can help viewers address such mirages [22], hinting at the importance of going beyond the final result and examining the provenance. Especially when a user did not generate the original visualization, understanding the process can help inform the conclusions that should or should not be made.
_Examining Underlying Data._ In some cases, we might ask whether the reproducibility of a visualization should be characterized by the underlying data _instead of_ the visualization itself. Indeed, much work has focused on extracting the underlying data from visualizations in order to allow for further analyses or re-visualization (e.g. [14, 21, 18]). With the process provenance that captures the data transformation steps from the initial data to the data used for visualization, these potentially error-prone methods are unnecessary. Fig. 1c shows that detecting a change in the visualization may be simplified by looking at the input data. But how should we deal with data that is occluded? Suppose the point that was removed was not visible. The visualization will not change at all. Even when the underlying data is the same, as in Fig. 1b, since the viewer is seeing the visualization and not the data, it seems like only relying on the data produces an incomplete assessment.
_Cosmetic Differences._ In some cases, _cosmetic differences_ like the font used or the presence or absence of grid lines may be viewed as inconsequential in analyzing the reproducibility of a visualization. A pixel-by-pixel matching of outputs should probably be viewed as overkill in evaluating reproducibility. At the same time, changes like the colormap or symbol alphabets can affect perception, and given the effectiveness of particular encodings, we cannot discount all such differences.
_Provenance Granularity._ In some cases where provenance complicates an assessment of reproducibility, the issue may not be that having provenance is problematic but rather that the implementations of provenance capture are often incomplete or too coarse. Capturing every detail may allow us to detect when a visualization differs, but it may not help us explain why it differs. At the same time, the overhead in storing such fine-grained detail may be prohibitive and impact the latency of the visualization. Furthermore, even if fine-grained provenance is captured, it often requires significant work to distill it to an understandable form. Grammars of interaction may help here [12]. Solutions to chunk [15], query [29], and display [30] the provenance help, but in practice many tools select a coarser provenance, knowing that every last detail cannot be captured.
_Relating Interaction and Process Provenance._ In the context of interactive visualizations, the ability to translate between interactions on the visualization and transformations of the data provides an important path both for capturing the provenance of interactions and for comparing their reproducibility (i.e. if the actions are replayed [8, 5]). Work that allows actions to be translated from visualizations to code or data provenance can be helpful in capturing this type of information [26, 31]. At the same time, certain changes or navigation operations may not have encodings in the data.
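One way to realize this translation, sketched below with entirely hypothetical names, is to log each interaction together with the equivalent data transformation and the identifiers of the affected rows:

```python
# Hypothetical sketch: a brush interaction logged as both an interaction event
# and the equivalent filter over the underlying data.
provenance_log = []

def brush(data, xmin, xmax):
    kept = [d for d in data if xmin <= d["x"] <= xmax]
    provenance_log.append({
        "interaction": ("brush", xmin, xmax),        # interaction provenance
        "data_op": f"filter {xmin} <= x <= {xmax}",  # process/data provenance
        "kept_ids": [d["id"] for d in kept],         # which rows the view shows
    })
    return kept

view = brush([{"id": i, "x": i} for i in range(10)], 3, 7)
```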
_Making Provenance Visible._ There are also opportunities to display provenance near or in the visualizations themselves. These range from graphs encoding the steps followed with different glyphs [29] to screenshots displaying the state of the visualizations [5]. Showing users what has been viewed affects their explorations [10], and in-visualization provenance views represent methods to help users better understand their work [6]. Such solutions complicate both the visual encoding, and potentially the reproducibility, as the encoded provenance of interaction changes over time.
## 5 Conclusion
There are a number of open questions surrounding how provenance relates to reproducibility, and the judgments surrounding the reproducibility of a visualization are both enhanced and complicated by this information. Checking if data was reproduced can be cleaner because we can ignore presentation characteristics, but because visualizations are often used as the core artifact for conclusions, we cannot totally ignore encoding characteristics. In some cases, reproducibility is aided by greater detail in provenance and stronger links from user interactions to provenance. One opportunity to better combine provenance and visualization is to visualize the provenance. In the future, we should seek to better connect captured provenance to visualization, being mindful that provenance often does not guarantee reproducibility.
|
2306.10582 | Machine Learning and Hamilton-Jacobi-Bellman Equation for Optimal
Decumulation: a Comparison Study | We propose a novel data-driven neural network (NN) optimization framework for
solving an optimal stochastic control problem under stochastic constraints.
Customized activation functions for the output layers of the NN are applied,
which permits training via standard unconstrained optimization. The optimal
solution yields a multi-period asset allocation and decumulation strategy for a
holder of a defined contribution (DC) pension plan. The objective function of
the optimal control problem is based on expected wealth withdrawn (EW) and
expected shortfall (ES) that directly targets left-tail risk. The stochastic
bound constraints enforce a guaranteed minimum withdrawal each year. We
demonstrate that the data-driven approach is capable of learning a near-optimal
solution by benchmarking it against the numerical results from a
Hamilton-Jacobi-Bellman (HJB) Partial Differential Equation (PDE) computational
framework. | Marc Chen, Mohammad Shirazi, Peter A. Forsyth, Yuying Li | 2023-06-18T15:20:05Z | http://arxiv.org/abs/2306.10582v1 | # Machine Learning and Hamilton-Jacobi-Bellman Equation for Optimal Decumulation: a Comparison Study
###### Abstract
We propose a novel data-driven neural network (NN) optimization framework for solving an optimal stochastic control problem under stochastic constraints. Customized activation functions for the output layers of the NN are applied, which permits training via standard unconstrained optimization. The optimal solution yields a multi-period asset allocation and decumulation strategy for a holder of a defined contribution (DC) pension plan. The objective function of the optimal control problem is based on expected wealth withdrawn (EW) and expected shortfall (ES) that directly targets left-tail risk. The stochastic bound constraints enforce a guaranteed minimum withdrawal each year. We demonstrate that the data-driven approach is capable of learning a near-optimal solution by benchmarking it against the numerical results from a Hamilton-Jacobi-Bellman (HJB) Partial Differential Equation (PDE) computational framework.
**Keywords:** Portfolio decumulation, neural network, stochastic optimal control
**JEL codes:** G11, G22
**AMS codes:** 93E20, 91G, 68T07, 65N06, 35Q93
## 1 Introduction
Access to traditional defined benefit (DB) pension plans continues to disappear for employees. In 2022, only 15% of private sector workers in the United States had access to a defined benefit plan, while 66% had access to a defined contribution (DC) plan (U.S. Bureau of Labor Statistics, 2022). In other countries, DB plans have become a thing of the past.
Defined contribution plans leave the burden of creating an investment and withdrawal strategy to the individual investor, which Nobel Laureate William Sharpe referred to as "the nastiest, hardest problem in finance" (Ritholz, 2017). Indeed, a review of the literature on decumulation strategies (Bernhardt and Donnelly, 2018; MacDonald et al., 2013) shows that balancing all of retirees' concerns with a single strategy is exceedingly difficult. To address these concerns and find
an optimal balance between maximizing withdrawals and minimizing the risk of depletion, while guaranteeing a minimum withdrawal, the approach in Forsyth (2022) determines a decumulation and allocation strategy for a standard 30-year investment horizon by formulating it as a problem in optimal stochastic control. Numerical solutions are obtained using dynamic programming, which results in a Hamilton-Jacobi-Bellman (HJB) Partial Differential Equation (PDE).
The HJB PDE framework developed in Forsyth (2022) maximizes expected withdrawals and minimizes the risk of running out of savings, measured by the left-tail in the terminal wealth distribution. Since maximizing withdrawals and minimizing risk are conflicting measures, we use a scalarization technique to compute Pareto optimal points. A constant lower bound is imposed on the withdrawal, providing a guaranteed income. An upper bound on withdrawal is also imposed, which can be viewed as the target withdrawal. The constraints of no shorting and no leverage are imposed on the investment allocation.
The solution to this constrained stochastic optimal control problem yields a dynamic stochastic strategy, naturally aligning with retirees' concerns and objectives. Note that cash flows are not mortality weighted, consistent with Bengen (1994). This can be justified on the basis of _planning to live, not planning to die_ as discussed in Pfau (2018).
Our dynamic strategy can be contrasted to traditional strategies such as the _Bengen Rule_ (4% Rule), which recommends withdrawing a constant 4% of initial capital each year (adjusted for inflation) and investing equal amounts into stocks and bonds (Bengen, 1994). Initially proposed in 1994, Scott et al. (2009) found the 4% Rule to still be a popular strategy 14 years later, and the near-universal recommendation of the top brokerage and retirement planning groups. Recently there has been acknowledgment in the asset management industry that the 4% Rule is sub-optimal, but wealth managers still recommend variations of the same constant withdrawal principle (Williams and Kawashima, 2023). The strategy proposed by Forsyth (2022) is shown to be far more efficient than the Bengen 4% Rule. Of course, the PDE solution in Forsyth (2022) is restricted to low dimensions (i.e. a small number of stochastic factors).
In order to remedy some of the deficiencies of PDE methods (such as in Forsyth (2022)), we propose a neural network (NN) based framework without using dynamic programming. In contrast to the PDE solution approach, our proposed NN approach has the following advantages:
1. It is data-driven and does not depend on a parametric model. This makes the framework versatile in selecting training data, and less susceptible to model misspecification.
2. The control is learned directly, thereby exploiting the low dimensionality of the control (van Staden et al., 2023). This technique thus avoids dynamic programming and the associated error propagation. The NN approach can also be applied to higher dimensional problems, such as those with a large number of assets.
3. If the control is a continuous function of time, the control approximated by the NN framework will reflect this property. If the control is discontinuous,1 the NN seems to produce a smooth, but quite accurate, approximation.2
Footnote 1: _Bang-bang_ controls, frequently encountered in optimal control, are discontinuous as a function of time.
Footnote 2: For a possible explanation of this, see Ismailov (2022).
Since the NN only generates an approximate solution to the complex stochastic optimal control problem, it is essential to assess its accuracy and robustness. Rarely is the quality of an NN solution assessed rigorously, since an accurate solution to the optimal control problem is often not available. In this paper, we compare the NN solution to the decumulation problem against the ground-truth results from the provably convergent HJB PDE method.
We have previously seen such a comparison in different applications, see, e.g., Lauriere et al. (2021) for a comparison study on a fishing control problem. As machine learning and artificial intelligence based methods continue to proliferate in finance and investment management, it is crucial to demonstrate that these methods are reliable and explainable (Boukherouaa et al., 2021). We believe that our proposed framework and test results make a step forward in demonstrating deep learning's potential for stochastic control problems in finance.
To summarize, the main contributions of this paper are as follows:
* Proposing an NN framework with suitable activation functions for decumulation and allocation controls, which yields an approximate solution to the constrained stochastic optimal decumulation problem in Forsyth (2022) by solving a standard unconstrained optimization problem;
* Demonstrating that the NN solution achieves very high accuracy in terms of the efficient frontier and the decumulation control when compared to the solution from the provably convergent HJB PDE method;
* Illustrating that, with a suitably small regularization parameter, the NN allocation strategy can differ significantly from the PDE allocation strategy in the region of high wealth and near the terminal time, while the relevant performance statistics remain unaffected. This is due to the fact that the problem is ill-posed in these regions of the state space unless we add a small regularization term;
* Testing the NN solution's robustness on out-of-sample and out-of-distribution data, as well as its versatility in using different datasets for training.
While other neural network and deep learning methods for optimal stochastic control problems have been proposed before, they differ significantly from our approach in their architecture, taking a _stacked_ neural network approach as in Buehler et al. (2019); Han and E (2016); Tsang and Wong (2020) or a hybrid dynamic programming and reinforcement learning approach (Hure et al., 2021). On the other hand, our framework uses the same two neural networks at all rebalancing times in the investment scenario. Since our NNs take time as an input, the solution will be continuous in time if the control is continuous. Note that the idea of using time as an input to the NN was also suggested in Lauriere et al. (2021). According to the taxonomy of sequential decision problems proposed in Powell (2021), our approach would most closely be described as Policy Function Approximation (PFA).
With the exception of Lauriere et al. (2021), previous papers do not provide a benchmark for numerical methods, as we do in this work. Our results show that our proposed NN method is able to approximate the numerical results in Forsyth (2022) with high accuracy. Especially notable, and somewhat unexpected, is that the _bang-bang_ control3 for the withdrawal is reproduced very closely with the NN method.
Footnote 3: In optimal stochastic control, a bang-bang control is a discontinuous function of the state.
## 2 Problem Formulation
### Overview
The investment scenario described in Forsyth (2022) concerns an investor with a portfolio wealth of a specified size, upon retirement. The investment horizon is fixed with a finite number of equally
spaced rebalancing times (usually annually). At each rebalancing time, the investor first chooses how much to withdraw from the portfolio and then how to allocate the remaining wealth. The investor must withdraw an amount within a specified range. The wealth in this portfolio can be allocated to any mix of two given assets, with no shorting or leverage. The assets the investor can access are a broad stock index fund and a constant maturity bond index fund.
In the time that elapses between re-balancing times, the portfolio's wealth will change according to the dynamics of the underlying assets. If the wealth of the portfolio goes below zero (due to minimum withdrawals), the portfolio is liquidated, trading ceases, debt accumulates at the borrowing rate, and withdrawals are restricted to the minimum amount. At the end of the time horizon, a final withdrawal is made and the portfolio is liquidated, yielding the terminal wealth.
We assume here that the investor has other assets, such as real estate, which are non-fungible with investment assets. These other assets can be regarded as a hedge of last resort, which can be used to fund any accumulated debt (Pfeiffer et al., 2013). This is not a novel assumption and is in line with the mental bucketing idea proposed by Shefrin and Thaler (1988). The use of this assumption within literature targeting similar problems is also common (see Forsyth et al. (2022)). Of course, the objective of the optimal control is to make running out of savings an unlikely event.
The investor's goal then is to maximize the weighted sum of total withdrawals and the mean of the worst 5% of the outcomes (in terms of terminal wealth). We term this tail risk measure Expected Shortfall (ES) at the 5% level. In this section, this optimization problem will be described with the mathematical details common to both the HJB and NN methods.
### Stochastic Process Model
Let \(S_{t}\) and \(B_{t}\) represent the real (i.e. inflation-adjusted) _amounts_ invested in the stock index and a constant maturity bond index, respectively. These assets are modeled with correlated jump diffusion models, in line with MacMinn et al. (2014). These parametric stochastic differential equations (SDEs) allow us to model non-normal asset returns. The SDEs are used in solving the HJB PDE, and generating training data with Monte Carlo simulations in the proposed NN framework. For the remainder of this paper, we refer to simulated data using these models as _synthetic_ data.
When a jump occurs, \(S_{t}=\xi^{s}S_{t^{-}}\), where \(\xi^{s}\) is a random number representing the jump multiplier and \(S_{t^{-}}=S(t-\epsilon),\epsilon\to 0^{+}\) (i.e., \(t^{-}\) is the instant of time before \(t\)). We assume that \(\log(\xi^{s})\) follows a double exponential distribution (Kou, 2002; Kou and Wang, 2004). The jump is either upward or downward, with probabilities \(u^{s}\) and \(1-u^{s}\) respectively. The density function for \(y=\log(\xi^{s})\) is
\[f^{s}(y)=u^{s}\eta_{1}^{s}e^{-\eta_{1}^{s}y}\mathbf{1}_{y\geq 0}+(1-u^{s}) \eta_{2}^{s}e^{\eta_{2}^{s}y}\mathbf{1}_{y<0}. \tag{2.1}\]
We also define
\[\gamma_{\xi}^{s} = E[\xi^{s}-1]=\frac{u^{s}\eta_{1}^{s}}{\eta_{1}^{s}-1}+\frac{(1-u ^{s})\eta_{2}^{s}}{\eta_{2}^{s}+1}-1. \tag{2.2}\]
The starting point for building the jump diffusion model is a standard geometric Brownian motion, with drift rate \(\mu^{s}\) and volatility \(\sigma^{s}\). A third term is added to represent the effect of jumps, and a compensator is added to the drift term to preserve the expected drift rate. For stocks, this gives the following stochastic differential equation (SDE) that describes how \(S_{t}\) (inflation adjusted) evolves in the absence of a control:
\[\frac{dS_{t}}{S_{t^{-}}} = \left(\mu^{s}-\lambda_{\xi}^{s}\gamma_{\xi}^{s}\right)\,dt+\sigma^{ s}\,dZ^{s}+d\left(\sum_{i=1}^{\pi_{t}^{s}}(\xi_{i}^{s}-1)\right), \tag{2.3}\]
where \(dZ^{s}\) is the increment of a Wiener process, \(\pi_{t}^{s}\) is a Poisson process with positive intensity parameter \(\lambda_{\xi}^{s}\), and \(\xi_{i}^{s}\)\(\forall i\) are i.i.d. positive random variables having distribution (2.1). Moreover, \(\xi_{i}^{s}\), \(\pi_{t}^{s}\), and \(Z^{s}\) are assumed to all be mutually independent.
As is common in the practitioner literature, we directly model the returns of a constant maturity (inflation adjusted) bond index by a jump diffusion process (Lin et al., 2015; MacMinn et al., 2014). Let the amount in the constant maturity bond index be \(B_{t^{-}}=B(t-\epsilon),\epsilon\to 0^{+}\). In the absence of a control, \(B_{t}\) evolves as
\[\frac{dB_{t}}{B_{t^{-}}} = \left(\mu^{b}-\lambda_{\xi}^{b}\gamma_{\xi}^{b}+\mu_{c}^{b} \mathbf{1}_{\{B_{t^{-}}<0\}}\right)\,dt+\sigma^{b}\,dZ^{b}+d\left(\sum_{i=1}^{ \pi_{t}^{b}}(\xi_{i}^{b}-1)\right), \tag{2.4}\]
where the terms in Equation (2.4) are defined analogously to Equation (2.3). In particular, \(\pi_{t}^{b}\) is a Poisson process with positive intensity parameter \(\lambda_{\xi}^{b}\), \(\gamma_{\xi}^{b}=E[\xi^{b}-1]\), and \(y=\log(\xi^{b})\) has the same distribution as in equation (2.1) (denoted by \(f^{b}(y)\)) with distinct parameters, \(u^{b}\), \(\eta_{1}^{b}\), and \(\eta_{2}^{b}\). Note that \(\xi_{i}^{b}\), \(\pi_{t}^{b}\), and \(Z^{b}\) are assumed to all be mutually independent, as in the stock SDE. The term \(\mu_{c}^{b}\mathbf{1}_{\{B_{t^{-}}<0\}}\) represents the extra cost of borrowing (a spread).
The correlation between the two assets' diffusion processes is \(\rho_{sb}\), giving us \(dZ^{s}\cdot dZ^{b}=\rho_{sb}\ dt\). The jump processes are assumed to be independent. For further details concerning the justification of this market model, refer to Forsyth (2022).
We define the investor's total wealth at time \(t\) as
\[\mbox{Total wealth}\ \equiv W_{t}=S_{t}+B_{t}. \tag{2.5}\]
Barring insolvency, shorting stock and using leverage (i.e., borrowing) are not permitted, a realistic constraint in the context of DC retirement plans. Furthermore, if the wealth ever goes below zero, due to the guaranteed withdrawals, the portfolio is liquidated, trading ceases, and debt accumulates at the borrowing rate. We emphasize that we are assuming that the retiree has other assets (i.e., residential real estate) which can be used to fund any accumulated debt. In practice, this could be done using a reverse mortgage (Pfeiffer et al., 2013).
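To make the synthetic-data generation concrete, the following sketch simulates paths of the stock process (2.3); the parameter values are placeholders rather than the calibrated values of Table E.1, and the bond process (2.4) would be simulated analogously with correlated Brownian increments.

```python
# Minimal sketch of simulating the Kou-type jump diffusion (2.3).
# Placeholder parameters; not the calibrated values from Table E.1.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, lam = 0.08, 0.15, 0.3       # drift, diffusion vol, jump intensity
u, eta1, eta2 = 0.3, 4.0, 5.0          # up-jump probability, exponential rates
gamma = u * eta1 / (eta1 - 1) + (1 - u) * eta2 / (eta2 + 1) - 1  # E[xi - 1], (2.2)

def simulate_stock(S0=100.0, T=1.0, steps=252):
    dt = T / steps
    S = np.empty(steps + 1)
    S[0] = S0
    for i in range(steps):
        # total double-exponential log-jump over this step, per density (2.1)
        y = sum(rng.exponential(1.0 / eta1) if rng.random() < u
                else -rng.exponential(1.0 / eta2)
                for _ in range(rng.poisson(lam * dt)))
        dZ = rng.normal(0.0, np.sqrt(dt))
        # exact log-step for the diffusion part, with the jump compensator
        S[i + 1] = S[i] * np.exp((mu - lam * gamma - 0.5 * sigma**2) * dt
                                 + sigma * dZ + y)
    return S

path = simulate_stock()
```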
### Notational Conventions
We define the finite set of discrete withdrawal/rebalancing times \(\mathcal{T}\),
\[\mathcal{T}=\{t_{0}=0<t_{1}<t_{2}<\ldots<t_{M}=T\}. \tag{2.6}\]
The beginning of the investment period is \(t_{0}=0\). We assume the rebalancing times are evenly spaced, so that \(t_{i}-t_{i-1}=\Delta t=T/M\) is constant. To avoid subscript clutter in the following, we will occasionally use the notation \(S_{t}\equiv S(t),B_{t}\equiv B(t)\) and \(W_{t}\equiv W(t)\). At each rebalancing time, \(t_{i}\in\mathcal{T}\), the investor first withdraws an amount of cash \(q_{i}\) from the portfolio and then rebalances the portfolio. At time \(T\), there is one final withdrawal, \(q_{T}\), and then the portfolio is liquidated. We assume no taxes are incurred on rebalancing, which is reasonable since retirement accounts are typically tax-advantaged. In addition, since trading is infrequent, we assume transaction costs to be negligible (Dang and Forsyth, 2014). Given an arbitrary time-dependent function, \(f(t)\), we will use the shorthand
\[f(t_{i}^{+})\equiv\lim_{\epsilon\to 0^{+}}f(t_{i}+\epsilon)\,\ \ \ \ \ \ \ \ \ f(t_{i}^{-})\equiv\lim_{\epsilon\to 0^{+}}f(t_{i}- \epsilon)\ . \tag{2.7}\]
The multidimensional controlled underlying process is denoted by \(X\left(t\right)=\left(S\left(t\right),B\left(t\right)\right)\), \(t\in[0,T]\). The realized state of the system is denoted by \(x=(s,b)\).
At the beginning of each rebalancing time \(t_{i}\), the investor withdraws the amount \(q_{i}(\cdot)\), determined by the control at time \(t_{i}\); that is, \(q_{i}(\cdot)=q_{i}(X(t_{i}^{-}))=q(X(t_{i}^{-}),t_{i})\). This control is used to evolve the investment portfolio from \(W_{t}^{-}\) to \(W_{t}^{+}\)
\[W(t_{i}^{+})=W(t_{i}^{-})-q_{i}\,\ \ \ \ \ t_{i}\in\mathcal{T}. \tag{2.8}\]
Formally, both withdrawal and allocation controls depend on the state of the portfolio before withdrawal, \(X(t_{i}^{-})\), but it will be computationally convenient to consider the allocation control as a function of the state after withdrawal since the portfolio allocation is rebalanced after the withdrawal has occurred. Hence, the allocation control at time \(t_{i}\) is \(p_{i}(\cdot)=p_{i}(X(t_{i}^{+}))=p(X(t_{i}^{+}),t_{i})\).
\[p_{i}(X(t_{i}^{+}))=p(X(t_{i}^{+}),t_{i})=\frac{S(t_{i}^{+})}{S(t_{i}^{+})+B( t_{i}^{+})}. \tag{2.9}\]
As formulated, the controls depend on wealth only (see Forsyth (2022) for a proof, assuming no transaction costs). Therefore, we make another notational adjustment for the sake of simplicity and consider \(q_{i}(\cdot)\) to be a function of wealth before withdrawal, \(W_{i}^{-}\), and \(p_{i}(\cdot)\) to be a function of wealth after withdrawal, \(W_{i}^{+}\).
We assume instantaneous rebalancing, which means there are no changes in asset prices in the interval (\(t_{i}^{-}\),\(t_{i}^{+}\)). A control at time \(t_{i}\) is therefore described by a pair \((q_{i}(\cdot),p_{i}(\cdot))\in\mathcal{Z}(W_{i}^{-},W_{i}^{+},t_{i})\), where \(\mathcal{Z}(W_{i}^{-},W_{i}^{+},t_{i})\) represents the set of admissible control values at \(t_{i}\). The constraints on the allocation control are no shorting and no leverage (assuming solvency). There are minimum and maximum values for the withdrawal. When wealth goes below zero due to withdrawals (\(W_{i}^{+}<0\)), trading ceases, debt accumulates at the borrowing rate, and withdrawals are restricted to the minimum. Stock assets are liquidated at the end of the investment period. We can mathematically state these constraints by imposing suitable bounds on the value of the controls as follows:
\[\mathcal{Z}_{q}(W_{i}^{-}\text{,}t_{i}) = \begin{cases}[q_{\min},q_{\max}]\ ;\ t_{i}\in\mathcal{T}\ ;\ W_{i}^{-}>q_{\max}\\ [q_{\min},W_{i}^{-}]\ ;\ t_{i}\in\mathcal{T}\ ;\ q_{\min}<W_{i}^{-}<q_{\max}\\ \{q_{\min}\}\ ;\ t_{i}\in\mathcal{T}\ ;\ W_{i}^{-}<q_{\min}\end{cases}\, \tag{2.10}\] \[\mathcal{Z}_{p}(W_{i}^{+}\text{,}t_{i}) = \begin{cases}[0\text{,}1]&W_{i}^{+}>0\ ;\ t_{i}\in\mathcal{T}\ ;\ t_{i}\neq t_{M}\\ \{0\}&W_{i}^{+}\leq 0\ ;\ t_{i}\in\mathcal{T}\ ;\ t_{i}\neq t_{M}\\ \{0\}&t_{i}=T\end{cases}\,\] (2.11) \[\mathcal{Z}(W_{i}^{-}\text{,}W_{i}^{+}\text{,}t_{i}) = \mathcal{Z}_{q}(W_{i}^{-}\text{,}t_{i})\times\mathcal{Z}_{p}(W_{i}^ {+}\text{,}t_{i}). \tag{2.12}\]
At each rebalancing time, we seek the optimal control for all possible combinations of \((S(t),B(t))\) having the same total wealth (Forsyth, 2022). Hence, the controls for both withdrawal and allocation
are formally a function of wealth and time before withdrawal (\(W_{i}^{-}\),\(t_{i}\)), but for implementation purposes it will be helpful to write the allocation as a function of wealth and time after withdrawal (\(W_{i}^{+}\),\(t_{i}\)). The admissible control set \(\mathcal{A}\) can be written as
\[\mathcal{A}=\left\{(q_{i},p_{i})_{0\leq i\leq M}:(q_{i},p_{i})\in \mathcal{Z}(W_{i}^{-},W_{i}^{+}\text{,}t_{i})\right\}\,. \tag{2.13}\]
An admissible control \(\mathcal{P}\in\mathcal{A}\), can be written as
\[\mathcal{P}=\left\{(q_{i}(\cdot),p_{i}(\cdot))\ :\ i=0,\ldots,M\right\}\,. \tag{2.14}\]
It will sometimes be necessary to refer to the tail of the control sequence at \([t_{n},t_{n+1},\ldots,t_{M}]\), which we define as
\[\mathcal{P}_{n}=\left\{(q_{n}(\cdot),p_{n}(\cdot)),\ldots,(q_{M}(\cdot),p_{M}(\cdot))\right\}\,. \tag{2.15}\]
The essence of the problem, for both the HJB and NN methods outlined in this paper, will be to find an optimal control \(\mathcal{P}^{*}\).
### Risk: Expected Shortfall
Let \(g(W_{T})\) be the probability density of terminal wealth \(W_{T}\) at \(t=T\). Then suppose
\[\int_{-\infty}^{W_{\alpha}^{*}}g(W_{T})\ dW_{T}=\alpha\, \tag{2.16}\]
i.e., \(Pr[W_{T}<W_{\alpha}^{*}]=\alpha\), and \(W_{\alpha}^{*}\) is the Value at risk (VAR) at the level \(\alpha\). We then define the Expected Shortfall (ES) as the mean of the worst \(\alpha\) fraction of the terminal wealth. Mathematically,
\[\text{ES}_{\alpha}=\frac{\int_{-\infty}^{W_{\alpha}^{*}}W_{T}\ g (W_{T})\ dW_{T}}{\alpha}. \tag{2.17}\]
As formulated, a higher ES is more desirable than a smaller ES (Equation (2.17) is formulated in terms of final wealth, not losses). It will be convenient to use the alternate definition of ES suggested by Rockafellar and Uryasev (2000),
\[\text{ES}_{\alpha} = \sup_{W^{*}}E\bigg{[}W^{*}+\frac{1}{\alpha}\min(W_{T}-W^{*},0) \bigg{]}. \tag{2.18}\]
Under a control \(\mathcal{P}\), and initial state \(X_{0}\), this becomes:
\[\text{ES}_{\alpha}(X_{0}^{-},t_{0}^{-}) = \sup_{W^{*}}E_{\mathcal{P}}^{X_{0}^{-},t_{0}^{-}}\bigg{[}W^{*}+ \frac{1}{\alpha}\min(W_{T}-W^{*},0)\bigg{]}. \tag{2.19}\]
The candidate values of \(W^{*}\) can be taken from the set of possible values of \(W_{T}\). It is important to note here that we define \(\text{ES}_{\alpha}(X_{0}^{-}\),\(t_{0}^{-}\)) which is the value of \(\text{ES}_{\alpha}\) as seen at \(t_{0}^{-}\). Hence, \(W^{*}\) is fixed throughout the investment horizon. In fact, we are considering the induced time consistent strategy, as opposed to the time inconsistent version of an expected shortfall policy (Forsyth, 2020; Strub et al., 2019). This issue is addressed in more detail in Appendix A.
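On simulated terminal wealth samples, both definitions can be checked directly; the sketch below (our own code, with stand-in sample data) computes (2.17) by sorting and the Rockafellar-Uryasev form (2.18) by maximizing over candidate \(W^{*}\) values drawn from the samples.

```python
# Minimal sketch: estimate ES_alpha from terminal-wealth samples via (2.17)
# and via the Rockafellar-Uryasev form (2.18). Sample data is a stand-in.
import numpy as np

def expected_shortfall(WT, alpha=0.05):
    WT = np.sort(np.asarray(WT, dtype=float))
    k = max(1, int(np.ceil(alpha * len(WT))))
    direct = WT[:k].mean()             # mean of the worst alpha fraction, (2.17)
    # sup over candidate W*; the optimum lies near the alpha-quantile (the VAR)
    candidates = WT[: 2 * k]
    ru = max(w + np.minimum(WT - w, 0.0).mean() / alpha for w in candidates)
    return direct, ru

WT = np.random.default_rng(1).normal(800.0, 300.0, 10_000)
print(expected_shortfall(WT))          # the two estimates nearly agree
```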
### Reward: Expected Total Withdrawals
We use expected total withdrawals as a measure of reward. Mathematically, we define expected withdrawals (EW) as
\[\text{EW}(X_{0}^{-},t_{0}^{-})=E_{\mathcal{P}}^{X_{0}^{-},t_{0}^{-}} \bigg{[}\sum_{i=0}^{M}q_{i}\bigg{]}. \tag{2.20}\]
**Remark 2.1** (No discounting, no mortality weighting).: _Note that we do not discount the future cash flows in Equation (2.20). We remind the reader that all quantities are assumed real (i.e. inflation-adjusted), so that we are effectively assuming a real discount rate of zero, which is a conservative assumption. This is also consistent with the approach used in the classical work of Bengen (1994). In addition, we do not mortality weight the cash flows, which is also consistent with Bengen (1994). See Pfau (2018) for a discussion of this approach (i.e. _plan to live, not plan to die_)._
### Defining a Common Objective Function
In this section, we describe the common objective function used by both the HJB method and the NN method.
Expected Withdrawals (EW) and Expected Shortfall (ES) are conflicting measures. We use a scalarization method to determine Pareto optimal points for this multi-objective problem. For a given \(\kappa\), we seek the optimal control \(\mathcal{P}_{0}\) such that the following is maximized,
\[\text{EW}(X_{0}^{-},t_{0}^{-})+\kappa\text{ES}_{\alpha}(X_{0}^{-}, t_{0}^{-}). \tag{2.21}\]
We define (2.21) as the pre-commitment EW-ES problem (\(PCEE_{t_{0}}(\kappa)\)) and write the problem formally as
\[\left(\text{$PCEE_{t_{0}}(\kappa)$}\right):\] \[J\left(s,b,t_{0}^{-}\right)=\sup_{\mathcal{P}_{0}\in\mathcal{A}} \sup_{W^{*}}\Biggl{\{}E_{\mathcal{P}_{0}}^{X_{0}^{-},t_{0}^{-}}\Bigg{[}\ \sum_{i=0}^{M}q_{i}\ +\ \kappa\bigg{(}W^{*}+\frac{1}{\alpha}\min(W_{T}-W^{*},0)\bigg{)}\overbrace{+ \epsilon W_{T}}^{ stabilization}\] \[\bigg{|}X(t_{0}^{-})=\left(s,b\right)\Bigg{]}\Biggr{\}}\] \[\text{subject to }\begin{cases}(S_{t},B_{t})\text{ follow processes (\ref{eq:1}) and (\ref{eq:2});}&t\notin\mathcal{T}\\ W_{i}^{+}=S_{i}^{-}+B_{i}^{-}-q_{i}\,;\ X_{i}^{+}=(S_{i}^{+},B_{i}^{+})\\ S_{i}^{+}=p_{i}(\cdot)W_{i}^{+}\,;\ B_{i}^{+}=(1-p_{i}(\cdot))W_{i}^{+}\\ (q_{i}(\cdot),p_{i}(\cdot))\in\mathcal{Z}(W_{i}^{-},W_{i}^{+},t_{i})\\ i=0,\ldots,M\ ;\ t_{i}\in\mathcal{T}\end{cases}. \tag{2.22}\]
The \(\epsilon W_{T}\) stabilization term serves to avoid ill-posedness in the problem when \(W_{t}\gg W^{*}\), \(t\to T\), and has little effect on optimal (ES, EW) or other summary statistics when \(|\epsilon|\ll 1\). Further details about this stabilization term and its effects on both the HJB and NN framework will be discussed in Section 6. The objective function in (2.22) serves as the basis for the value function in the HJB framework and the loss function for the NN method.
**Remark 2.2** (Induced time consistent policy).: _Note that a strategy based on \((\text{PCEE}_{t_{0}}(\kappa))\) is formally a pre-commitment strategy (i.e., not time consistent). However, we will assume that the retiree actually follows the induced time consistent strategy (Forsyth, 2020; 2022; Strub et al., 2019). This control is identical to the pre-commitment control at time zero. See Appendix A for more discussion of this subtle point. In the following, we will refer to the strategy determined by (2.22) as the EW-ES optimal control, with the understanding that this refers to the induced time consistent control at any time \(t_{i}>t_{0}\)._
## 3 HJB Dynamic Programming Optimization Framework
The HJB framework uses dynamic programming, creating sub-problems from each time step in the problem and moving backward in time. For the convenience of the reader, we will summarize the algorithm in Forsyth (2022) here.
### Deriving Auxiliary Function from \(\text{PCEE}_{t_{0}}(\kappa)\)
The HJB framework begins with defining auxiliary functions based on the objective function (2.22) and the underlying stochastic processes. An equivalent problem is then formulated, which will then be solved to find the optimal value function.
We begin by interchanging the \(\sup_{\mathcal{P}_{0}}\) and \(\sup_{W^{*}}\) operators. This will serve as the starting point for the HJB solution
\[J\left(s,\!b,t_{0}^{-}\right) = \sup_{W^{*}}\,\sup_{\mathcal{P}_{0}\in\mathcal{A}}\Biggl{\{}E_{ \mathcal{P}_{0}}^{X_{0}^{-},t_{0}^{-}}\Bigg{[}\ \sum_{i=0}^{M}q_{i}\ +\ \kappa\bigg{(}W^{*}+\frac{1}{\alpha}\min(W_{T}-W^{*},0)\bigg{)} \tag{3.1}\] \[\ \ \ \ \ +\epsilon W_{T}\bigg{|}X(t_{0}^{-})=(s,\!b)\ \bigg{]}\Biggr{\}}\ \.\]
The auxiliary function which needs to be computed in the dynamic programming framework at each time \(t_{n}\) will have an associated strategy for any \(t_{n}>0\) that is equivalent to the solution of \(\text{PCEE}_{t_{0}}\left(\kappa\right)\) for a fixed \(W^{*}\). For a full discussion of pre-commitment and time-consistent ES strategies, we refer the reader to Forsyth (2020), which also includes a proof with similar steps of how the following auxiliary function is derived from (3.1). Including \(W^{*}\) in the state space gives us the expanded state space \(\hat{X}=(s,\!b,\!W^{*})\). The auxiliary function \(V(s,b,W^{*},t)\), with \((s,b,W^{*},t)\in\Omega=[0,\infty)\times(-\infty,+\infty)\times(-\infty,+\infty)\times[0,\infty)\), is defined as,
\[V(s,b,W^{*},t_{n}^{-}) = \sup_{\mathcal{P}_{n}\in\mathcal{A}_{n}}\Biggl{\{}E_{\mathcal{P} _{n}}^{X_{n}^{-},t_{n}^{-}}\Bigg{[}\sum_{i=n}^{M}q_{i}+\kappa\bigg{(}W^{*}+ \frac{1}{\alpha}\min((W_{T}-W^{*}),\!0)\bigg{)}\] \[\ \ \ \ \ +\epsilon W_{T}\bigg{|}\hat{X}(t_{n}^{-})=(s,\!b,W^{*}) \bigg{]}\Biggr{\}}\.\]
\[\text{subject to}\qquad\begin{cases}(S_{t},B_{t})\ \text{follow processes}\ (\ref{eq:1})\ \text{and}\ (\ref{eq:2});\ \ t\notin\mathcal{T}\\ W_{i}^{+}=S_{i}^{-}+B_{i}^{-}-q_{i}\ ;\ \hat{X}_{i}^{+}=(S_{i}^{+},B_{i}^{+},W^{*}) \\ S_{i}^{+}=p_{i}(\cdot)W_{i}^{+}\ ;\ B_{i}^{+}=(1-p_{i}(\cdot))W_{i}^{+}\\ (q_{i}(\cdot),p_{i}(\cdot))\in\mathcal{Z}(W_{i}^{-},W_{i}^{+},t_{i})\\ i=n,\ldots,M\ ;\ t_{i}\in\mathcal{T}\end{cases}. \tag{3.2}\]
### Applying Dynamic Programming at Rebalancing Times
The principle of dynamic programming is applied at each \(t_{n}\in\mathcal{T}\) on (3.2). As usual, the optimal control needs to be computed in reverse time order. We split the \(\sup_{\mathcal{P}_{n}}\) operator into \(\sup_{q\in\mathcal{Z}_{q}}\sup_{p\in\mathcal{Z}_{p}(w^{-}-q,t)}\).
\[V(s\text{,}b\text{,}W^{*},t_{n}^{-}) = \sup_{q\in\mathcal{Z}_{q}}\ \ \sup_{p\in\mathcal{Z}_{p}(w^{-}-q,t)}\biggl{\{}q+ \biggl{[}V((w^{-}-q)p,(w^{-}-q)(1-p),W^{*},t_{n}^{+})\biggr{]}\biggr{\}} \tag{3.3}\] \[= \sup_{q\in\mathcal{Z}_{q}}\biggl{\{}q+\biggl{[}\sup_{p\in \mathcal{Z}_{p}(w^{-}-q,t)}V((w^{-}-q)p,(w^{-}-q)(1-p),W^{*},t_{n}^{+})\biggr{]} \biggr{\}}\] \[w^{-}=s+b\.\]
Let \(\overline{V}\) denote the upper semi-continuous envelope of \(V\), which will have already been computed as the algorithm progresses backward through time. The optimal allocation control \(p_{n}\)(\(w\),\(W^{*}\)) at time \(t_{n}\) is determined from
\[p_{n}(w,W^{*}) = \left\{\begin{array}{ll}\underset{p^{\prime}\in[0,1]}{\arg\max }\overline{V}(wp^{\prime},w(1-p^{\prime}),W^{*},t_{n}^{+}),&w>0\ ;\ t_{n}\neq t_{M}\\ 0,&w\leq 0\ \ \text{or}\ t_{n}=t_{M}\end{array}\right.. \tag{3.4}\]
The control \(q\) is then determined from
\[q_{n}\text{(}w\text{,}W^{*}) = \underset{q^{\prime}\in\mathcal{Z}_{q}}{\arg\max}\biggl{\{}q^{ \prime}+\overline{V}((w-q^{\prime})p_{n}(w-q^{\prime},W^{*}),(w-q^{\prime})(1- p_{n}(w-q^{\prime},W^{*})),W^{*},t_{n}^{+})\biggr{\}}. \tag{3.5}\]
Using these controls for \(t_{n}\), the solution is then advanced backwards across time from \(t_{n}^{+}\) to \(t_{n}^{-}\) by
\[V(s,b,W^{*},t_{n}^{-}) = q_{n}(w^{-},W^{*})+\overline{V}(w^{+}p_{n}(w^{+},W^{*}),w^{+}(1-p_{n}(w^{+},W^{*})),W^{*},t_{n}^{+})\ ,\] \[\qquad w^{-}=s+b\ ;\ w^{+}=s+b-q_{n}(w^{-},W^{*})\ . \tag{3.6}\]
At \(t=T\), we have the terminal condition
\[V(s\text{,}b\text{,}W^{*}\text{,}T^{+}) = \kappa\biggl{(}W^{*}+\frac{\min((s+b-W^{*}),0)}{\alpha}\biggr{)}. \tag{3.7}\]
### Conditional Expectations between Rebalancing Times
For \(t\in(t_{n-1}\text{,}t_{n})\), there are no cash flows, discounting (all quantities are inflation-adjusted), or controls applied. Hence the tower property gives, for \(0<h<(t_{n}-t_{n-1})\),
\[V(s,b,W^{*},t) = E\biggl{[}V(S(t+h),B(t+h),W^{*},t+h)\big{|}S(t)=s,B(t)=b\biggr{]}\ ;\ t\in(t_{n-1},t_{n}-h)\ . \tag{3.8}\]
To find this conditional expectation based on parametric models of the stock and bond processes, Ito's Lemma for jump processes (Tankov and Cont, 2009) is first applied using Equations (2.3) and (2.4). For details of the resulting partial integro differential equation (PIDE), refer to Forsyth (2022) and Appendix B. In computational practice, the resulting PIDE is solved using Fourier methods discussed in Forsyth and Labahn (2019).
### Equivalence with \(\text{{PCEE}}_{t_{0}}(\kappa)\)
Proceeding backward in time, the auxiliary function \(V(s,\!b,W^{*},\!t_{0}^{-})\) is determined at time zero. Problem \(\text{{PCEE}}_{t_{0}}\left(\kappa\right)\) is then solved using a final optimization step
\[J(s,\!b,\!t_{0}^{-})=\sup_{W^{\prime}}V(s,\!b,\!W^{\prime},\!t_{0}^{-}). \tag{3.9}\]
Notice that \(V(s,\!b,\!W^{\prime},\!t_{0}^{-})\) denotes the auxiliary function for the beginning of the investment period, and represents the last step (going backward) in solving the dynamic programming formulation. To obtain this, we begin with Equation (3.7) and recursively work backwards in time; then we obtain Equation (2.22) by interchanging \(\sup_{W^{\prime}}\sup_{\mathcal{P}}\) in the final step.
This formulation (3.2-3.8) is equivalent to problem \(PCEE_{t_{0}}(\kappa)\). For a summary of computational details, refer to Appendix C or see Forsyth (2022).
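For intuition, the backward recursion (3.3)-(3.7) can be caricatured on a wealth grid as below. This sketch is ours, not the paper's convergent scheme: it fixes a single \(W^{*}\) (omitting the outer sup in (3.9)), replaces the Fourier/PIDE advancement of Section 3.3 with a crude one-period Monte Carlo expectation over stand-in lognormal returns, and simplifies the insolvency and terminal-step details.

```python
# Caricature of the backward HJB recursion on a wealth grid. Stand-in returns
# replace the PIDE step; a single fixed W* replaces the sup in (3.9).
import numpy as np

rng = np.random.default_rng(0)
M, q_min, q_max = 30, 35.0, 60.0
alpha, kappa, eps, W_star = 0.05, 1.0, 1e-4, 500.0
W_grid = np.linspace(-500.0, 2500.0, 301)
p_grid = np.linspace(0.0, 1.0, 6)              # candidate stock fractions
Rs = np.exp(rng.normal(0.08, 0.16, 500))       # stand-in one-year stock returns
Rb = np.exp(rng.normal(0.01, 0.05, 500))       # stand-in one-year bond returns

def terminal(w):                               # condition (3.7) plus stabilization
    return kappa * (W_star + np.minimum(w - W_star, 0.0) / alpha) + eps * w

def withdrawals(w):                            # admissible set (2.10), discretized
    return np.array([q_min]) if w <= q_min else np.linspace(q_min, min(w, q_max), 6)

V = None
for n in range(M, -1, -1):                     # backward over t_M, ..., t_0
    V_new = np.empty_like(W_grid)
    for k, w in enumerate(W_grid):
        vals = []
        for q in withdrawals(w):
            wp = w - q
            if n == M:                          # final withdrawal, then liquidation
                vals.append(q + terminal(wp))
            elif wp <= 0:                       # insolvent: no trading (bond rate proxy)
                vals.append(q + np.interp(wp * Rb, W_grid, V).mean())
            else:                               # optimize the allocation, as in (3.4)
                for p in p_grid:
                    Wn = wp * (p * Rs + (1 - p) * Rb)
                    vals.append(q + np.interp(Wn, W_grid, V).mean())
        V_new[k] = max(vals)
    V = V_new

print("J(w0 = 1000, t0) approx:", np.interp(1000.0, W_grid, V))
```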
## 4 Neural Network Formulation
As an alternative to the HJB framework, we develop a neural network framework to solve the stochastic optimal control problem (2.22), which has the following characteristics:
1. The NN framework is data driven and does not require a parametric model to be specified. This avoids explicitly postulating parametric stochastic processes and the estimation of associated parameters. In addition, this allows us to add auxiliary market signals/variables (although we do not exploit this idea in this work).
2. The NN framework avoids the computation of high-dimensional conditional expectations by solving for the control at all times directly from a single standard unconstrained optimization, instead of using dynamic programming (see van Staden et al. (2023) for a discussion of this). Since the control is low-dimensional, the framework can exploit this to avoid the _curse of dimensionality_ by solving for the control directly, instead of via value iteration such as in the HJB dynamic programming method (van Staden et al., 2023). Such an approach also avoids backward error propagation through rebalancing times.
3. If the optimal control is a continuous function of time and state, the control approximated by the NN will reflect this property. If the optimal control is discontinuous, the NN approximation produces a smooth approximation. While not required by the original problem formulation in (2.22), this continuity property likely leads to practical benefits for an investment policy.
4. The NN method is further scalable in the sense that it could be easily adapted to problems with longer horizons or higher rebalancing frequency without significantly increasing the computational complexity of the problem. This is in contrast to existing approaches using a stacked neural network approach (Tsang and Wong, 2020).
We now formally describe the proposed NN framework and demonstrate the aforementioned properties. We approximate the control in \(\mathcal{P}\) directly by using feed-forward, fully-connected neural networks. Given parameters \(\boldsymbol{\theta}_{p}\) and \(\boldsymbol{\theta}_{q}\), i.e. NN weights and biases, \(\hat{p}(W(t_{i}),t_{i},\boldsymbol{\theta}_{p})\) and \(\hat{q}(W(t_{i}),t_{i},\boldsymbol{\theta}_{q})\) approximate the controls \(p_{i}\) and \(q_{i}\) respectively,
\[\hat{q}(W_{i}^{-},t_{i}^{-},\boldsymbol{\theta}_{q})\simeq q_{i} (W_{i}^{-})\ ;\ i=0,\ldots,M\] \[\hat{p}(W_{i}^{+},t_{i}^{+},\boldsymbol{\theta}_{p})\simeq p_{i} (W_{i}^{+})\ ;\ i=0,\ldots,M-1\] \[\hat{\mathcal{P}}=\{(\hat{q}(\cdot),\hat{p}(\cdot))\}\simeq \mathcal{P}\]
The functions \(\hat{p}\) and \(\hat{q}\) take time as one of the inputs, and therefore we can use just two NN functions to approximate control \(\mathcal{P}\) across time instead of defining a NN at each rebalancing time. In this section, we discuss how we solve problem (2.22) using this approximation and then provide a description of the NN architecture that is used. We discuss the precise formulation used by the NN, including activation functions that encode the stochastic constraints.
### Neural Network Optimization for \(\text{{PCEE}}_{t_{0}}(\kappa)\)
We begin by describing the NN optimization problem based on the stochastic optimal control problem (2.22). We first recall that, in the formulation in Section 3, controls \(q_{i}\) and \(p_{i}\) are functions of wealth only. Our goal is to choose NN weights \(\boldsymbol{\theta}_{p}\) and \(\boldsymbol{\theta}_{q}\) by solving (2.22), with \(\hat{q}(W_{i}^{-}\),\(t_{i}^{-},\boldsymbol{\theta}_{q})\) and \(\hat{p}(W_{i}^{+}\),\(t_{i}^{+},\boldsymbol{\theta}_{p})\) approximating feasible controls \((q_{i},p_{i})\in\mathcal{Z}(W_{i}^{-},W_{i}^{+}\),\(t_{i})\) for \(t_{i}\in\mathcal{T}\). For an arbitrary set of controls \(\hat{\mathcal{P}}\) and wealth level \(W^{*}\), we define the NN performance criteria \(V_{NN}\) as
\[V_{NN}(\hat{\mathcal{P}},W^{*},s,b,t_{0}^{-}) = E_{\hat{\mathcal{P}}_{0}^{-}}^{X_{0}^{-},t_{0}^{-}}\Bigg{[}\ \sum_{i=0}^{M}\hat{q}_{i}\ +\ \kappa\bigg{(}W^{*}+\frac{1}{\alpha}\min(W_{T}-W^{*},0)\bigg{)}\] \[\qquad+\epsilon W_{T}\bigg{|}X(t_{0}^{-})=(s,b)\ \Bigg{]}\.\] subject to \[\begin{cases}(S_{t},B_{t})\ \text{follow processes (\ref{eq:NN}) and (\ref{eq:NN})};\ \ t\notin\mathcal{T}\\ W_{i}^{+}=S_{i}^{-}+B_{i}^{-}-q_{i};\ X_{i}^{+}=(S_{i}^{+},B_{i}^{+})\\ S_{i}^{+}=\hat{p}_{i}(\cdot)W_{i}^{+}\ ;\ B_{i}^{+}=(1-\hat{p}_{i}(\cdot))W_{i}^{+}\\ (\hat{q}_{i}(\cdot),\hat{p}_{i}(\cdot))\in\mathcal{Z}(W_{i}^{-},W_{i}^{+},t_{ i})\\ i=0,\ldots,M\ ;\ t_{i}\in\mathcal{T}\end{cases} \tag{4.1}\]
The optimal value function \(J_{NN}\) (at \(t_{0}^{-}\)) is then given by
\[J_{NN}(s,b,t_{0}^{-})=\sup_{W^{*}}\sup_{\hat{p}\in\mathcal{A}}\ V_{NN}(\hat{ \mathcal{P}},W^{*},s,b,t_{0}^{-}). \tag{4.2}\]
Next we describe the structure of the neural networks and feasibility encoding.
### Neural Network Framework
Consider two fully-connected feed-forward NNs, with \(\hat{p}\) and \(\hat{q}\) determined by parameter vectors \(\boldsymbol{\theta}_{p}\in\mathbb{R}^{\nu_{p}}\) and \(\boldsymbol{\theta}_{q}\in\mathbb{R}^{\nu_{q}}\) (representing NN weights and biases), respectively. The two NNs can differ in the choice of activation functions and in the number of hidden layers and nodes per layer. Each NN takes input of the same form (\(W(t_{i})\),\(t_{i}\)), but the withdrawal NN \(\hat{q}\) takes the state variable observed before withdrawal, \((W(t_{i}^{-}),t_{i})\), and the allocation NN \(\hat{p}\) takes the state variable observed after withdrawal, \((W(t_{i}^{+}),t_{i})\).
In order for the NN to generate a feasible control as specified in (4.4), we use a modified sigmoid activation function to scale the output from the withdrawal NN \(\hat{q}\) according to the \(PCEE_{t_{0}}(\kappa)\) problem's constraints on the withdrawal amount \(q_{i}\), as given in Equation (2.10). This ultimately allows us to perform unconstrained optimization on the NN training parameters.
Specifically, assuming \(x\in[0,1]\), the function \(f(x):=a+(b-a)x\) scales the output to be in the range \([a,b]\). We restrict the withdrawal \(\hat{q}\) to \([q_{\min},q_{\max}]\). We note that the permitted withdrawal range depends on the wealth \(W_{i}^{-}\); see (2.10). Define the range of permitted withdrawal as follows,
\[\text{range}=\begin{cases}q_{\max}-q_{\min}\ ;\ \text{if}\ W_{i}^{-}>q_{\max}\\ W_{i}^{-}-q_{\min}\ ;\ \text{if}\ q_{\min}<W_{i}^{-}<q_{\max}\\ 0\ ;\ \text{if}\ W_{i}^{-}\leq q_{\min}\end{cases}\]
* The activation function for the withdrawal output is different from the activation function for allocation. Control \(\hat{q}\) uses a modified sigmoid function, which is chosen to transform its output according to (2.10). Control \(\hat{p}\) uses a softmax activation which ensures that its output gives only positive weights for each portfolio asset that sum to one, as specified in (2.11). By constraining the NN output this way through proposed activation functions, we can use unconstrained optimization to train the NN.
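A minimal sketch of these output activations, written in PyTorch (our choice of library; the bounds and tensor names are placeholders):

```python
# Minimal sketch of the feasibility-encoding activations: a scaled sigmoid for
# the withdrawal, per (2.10), and a softmax over assets for the allocation (2.11).
import torch

q_min, q_max = 35.0, 60.0  # illustrative withdrawal bounds

def withdrawal_activation(z_q, W_minus):
    # the permitted range collapses to zero when W- <= q_min, per the cases above
    upper = torch.clamp(torch.minimum(W_minus, torch.tensor(q_max)), min=q_min)
    return q_min + (upper - q_min) * torch.sigmoid(z_q)

def allocation_activation(z_p):
    # nonnegative asset weights summing to one
    return torch.softmax(z_p, dim=-1)

q = withdrawal_activation(torch.tensor([0.3]), torch.tensor([1000.0]))
p = allocation_activation(torch.tensor([[0.2, -0.1]]))  # [stock, bond] weights
```

Because both maps are smooth and always produce feasible values, gradients pass through them and the training problem remains unconstrained.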
### NN Estimate of the Optimal Control
Now we describe the training optimization problem for the proposed data-driven NN framework, which is agnostic to the underlying data generation process. We assume that a set of asset return trajectories is available, which is used to approximate the expectation in (4.1) for any given control. For NN training, we approximate the expectation in (4.1) based on a finite number of samples as follows:
\[\begin{split}&\tilde{V}_{NN}(\boldsymbol{\theta}_{q},\boldsymbol{ \theta}_{p},W^{*},s,b,t_{0}^{-})=\\ &\frac{1}{N}\sum_{j=1}^{N}\Biggl{[}\ \sum_{i=0}^{M}\hat{q}((W_{i})^{j},t_{i} ;\boldsymbol{\theta}_{q})\ +\ \kappa\biggl{(}W^{*}+\frac{1}{\alpha}\min((W_{T})^{j}-W^{*},0)\biggr{)}+\epsilon (W_{T})^{j}\biggl{|}X(t_{0}^{-})=(s,\!b)\ \Biggr{]}\\ &\text{subject to}\ \begin{cases}((S_{t})^{j},(B_{t})^{j})\text{ drawn from the }j^{th}\text{ sample of returns};\ \ t\notin\mathcal{T}\\ (W_{i}^{+})^{j}=(S_{i}^{-})^{j}+(B_{i}^{-})^{j}-\hat{q}\left((W_{t_{i}}^{-}) ^{j},t_{i},\boldsymbol{\theta_{q}}\right)\ ;\ (X_{i}^{+})^{j}=(S_{i}^{+},B_{i}^{+})^{j}\\ (S_{i}^{+})^{j}=\hat{p}\left((W_{i}^{+})^{j},t_{i},\boldsymbol{\theta_{p}} \right)\ (W_{i}^{+})^{j}\ ;\ (B_{i}^{+})^{j}=(1-\hat{p}\left((W_{i}^{+})^{j},t_{i}, \boldsymbol{\theta_{p}}\right))\ (W_{i}^{+})^{j}\\ (\hat{q}_{i}(\cdot),\hat{p}_{i}(\cdot))\in\mathcal{Z}\left((W_{i}^{-})^{j},(W _{i}^{+})^{j},t_{i}\right)\\ i=0,\ldots,M\ ;\ t_{i}\in\mathcal{T}\end{cases},\end{split} \tag{4.4}\]
where the superscript \(j\) represents the \(j^{\text{th}}\) path of joint asset returns and \(N\) is the total number of sampled paths. For subsequent benchmark comparison, we generate price paths using processes (2.3) and (2.4). We are, however, agnostic as to the method used to generate these paths. We assume that the random sample paths are independent, but that correlations can exist between returns of different assets. In addition, correlation between the returns of different time periods can also be represented, e.g., block bootstrap resampling is designed to capture autocorrelation in the time series data.

Figure 4.1: Illustration of the NN framework as per Section 4.2. Additional technical details can be found in Appendix D.
The optimal parameters obtained by training the neural network are used to generate the control functions \(\hat{q}^{*}(\cdot):=\hat{q}(\cdot;\mathbf{\theta}_{\mathbf{q}}^{*})\) and \(\hat{p}^{*}(\cdot):=\hat{p}(\cdot;\mathbf{\theta}_{\mathbf{p}}^{*})\), respectively. With these functions, we can evaluate the performance of the generated control on testing data sets that are out-of-sample or out-of-distribution. We present the detailed results of such tests in Section 7.
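As an illustration, the sample average in (4.4) can be written as a training loss in a few lines; the sketch below assumes `q_paths` holds the withdrawals along each path, `w_T` the terminal wealths, and `w_star` the auxiliary variable \(W^{*}\) trained jointly. All names are hypothetical.

```python
import torch

def ew_es_loss(q_paths, w_T, w_star, kappa, alpha=0.05, eps=1e-6):
    """Negated sample average of the objective in (4.4), suitable for SGD.
    q_paths: (N, M+1) withdrawals per path; w_T: (N,) terminal wealths;
    w_star: scalar nn.Parameter playing the role of W*."""
    ew = q_paths.sum(dim=1).mean()                              # E[sum_i q_i]
    es = w_star + torch.clamp(w_T - w_star, max=0.0).mean() / alpha
    return -(ew + kappa * es + eps * w_T.mean())                # maximize value
```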
## 5 Data
For the computational study in this paper, we use data from the Center for Research in Security Prices (CRSP) on a monthly basis over the 1926:1-2019:12 period.4 The specific indices used are the CRSP 10-year U.S. Treasury index for the bond asset5 and the CRSP value-weighted total return index for the stock asset6. These indices are in nominal terms, so we adjust them for inflation using the U.S. CPI index, also supplied by CRSP. We use real indices since investors funding retirement spending should focus on real (not nominal) wealth goals.
Footnote 4: More specifically, results presented here were calculated based on data from Historical Indexes, ©2020 Center for Research in Security Prices (CRSP), The University of Chicago Booth School of Business. Wharton Research Data Services was used in preparing this article. This service and the data available thereon constitute valuable intellectual property and trade secrets of WRDS and/or its third-party suppliers.
Footnote 5: The 10-year Treasury index was calculated using monthly returns from CRSP dating back to 1941. The data for 1926-1941 were interpolated from annual returns in Homer and Sylla (2005). The bond index is constructed by (i) purchasing a 10-year Treasury at the start of each month, (ii) collecting interest during the month and (iii) selling the Treasury at the end of the month.
Footnote 6: The stock index includes all distributions for all domestic stocks trading on major U.S. exchanges.
We use the above market data in two different ways in subsequent investigations:
1. _Stochastic model calibration:_ Any data set referred to in this paper as _synthetic data_ is generated by parametric stochastic models (SDEs) (as described in Section 2.2), whose parameters are calibrated to the CRSP data by using the threshold technique (Cont and Mancini, 2011; Dang and Forsyth, 2016; Mancini, 2009). The data is inflation-adjusted so that all parameters reflect real returns. Table E.1 shows the results of calibrating the models to the historical data. The correlation \(\rho_{sb}\) is computed by removing any returns which occur at times corresponding to jumps in either series. See Dang and Forsyth (2016) for details of the technique for detecting jumps.
2. _Bootstrap resampling:_ Any data set referred to in this paper as _historical data_ is generated by using the stationary block bootstrap method (Dichtl et al., 2016; Patton et al., 2009; Politis and Romano, 1994; Politis and White, 2004) to resample the historical CRSP data set. This method involves repeatedly drawing randomly sampled blocks of random size, with replacement, from the original data set. The block size follows a geometric distribution with a specified expected block size. To preserve correlation between asset returns, we use a paired sampling approach to simultaneously draw returns from both time series. This, in effect, shuffles the original data and can be repeated to obtain however many resampled paths one desires. Since the order of returns in the sequence is unchanged within the sampled block, this method accounts for some possible serial correlation in market data. Detailed pseudo-code for this method of block bootstrap resampling is given in Forsyth and Vetzal (2019).
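A minimal NumPy sketch of this paired stationary block bootstrap, with circular wrap-around at the end of the sample, might look as follows (function and argument names are illustrative):

```python
import numpy as np

def stationary_block_bootstrap(returns, n_paths, path_len, exp_block, rng=None):
    """Paired stationary block bootstrap (Politis and Romano, 1994).
    returns: (T, n_assets) historical joint returns; whole rows are resampled
    so cross-asset correlation is preserved."""
    if rng is None:
        rng = np.random.default_rng()
    T, n_assets = returns.shape
    paths = np.empty((n_paths, path_len, n_assets))
    for p in range(n_paths):
        i = 0
        while i < path_len:
            start = rng.integers(T)                                   # uniform block start
            blk = min(rng.geometric(1.0 / exp_block), path_len - i)   # mean = exp_block
            idx = (start + np.arange(blk)) % T                        # circular wrap-around
            paths[p, i:i + blk] = returns[idx]
            i += blk
    return paths
```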
We note that block resampling is commonly used by practitioners and academics (see for example Anarkulova et al. (2022); Cogneau and Zakamouline (2013); Dichtl et al. (2016); Scott and Cavaglia (2017); Simonian and Martirosyan (2022)). Block bootstrap resampling will be used to carry out robustness checks in Section 7. Note that for any realistic number of samples and expected block size, the probability of repeating a resampled path is negligible (Ni et al., 2022).
One important parameter for the block resampling method is the expected block size. Forsyth (2022) determines that a reasonable expected block size for paired resampling is about three months. The algorithm presented in Patton et al. (2009) is used to determine the optimal expected block size for the bond and stock returns separately; see Table F.1. Subsequently, we will also test the sensitivity of the results to a range of block sizes from 1 to 12 months in numerical experiments.
To train the neural networks, we require that the number of sampled paths, \(N\), be sufficiently large to fully represent the underlying market dynamics. Accordingly, we first generate training data through Monte Carlo simulations of the parametric models described in (2.3) and (2.4). We emphasize, however, that the proposed data-driven NN framework only requires return trajectories of the underlying assets. In later sections, we present results from NNs trained on non-parametrically generated data, e.g., resampled historical data. We also demonstrate the NN framework's robustness on test data.
## 6 Computational Results
We now present and compare the performance of the optimal controls from the HJB PDE and NN methods, respectively, on synthetic data, with investment specifications given in Table 6.1. Each strategy's performance is measured with respect to the objective function in (2.22), which is a weighted reward (EW) and risk (ES) measure. To trace out an efficient frontier in the (EW,ES) plane, we vary \(\kappa\) (the curve represents the (EW,ES) performance at a set of Pareto optimal points).
We first present strategies computed from the HJB framework described in Section 3. We verify that the numerical solutions are sufficiently accurate, so that this solution can be regarded as ground truth. We then present results computed using the NN framework of Section 4, and demonstrate the accuracy of the NN results by comparing to the ground truth computed from the HJB equation. We carry out further analysis by selecting an _interesting_ point on the (EW,ES) efficient frontier, in particular \(\kappa=1.0\), to study in greater detail. The point \(\kappa=1.0\) is at the _knee_ of the efficient frontier, which makes it desirable in terms of risk-reward tradeoff (picking the exact \(\kappa\) will be a matter of investor preference, however). This notion of the knee point is loosely based on the concept of a _compromise solution_ of multi-objective optimization problems, which selects the point on the efficient frontier with the minimum distance to an unattainable ideal point (Marler and Arora, 2004). For this knee point of \(\kappa=1.0\), we analyze the controls and wealth outcomes under both frameworks. We also discuss some key differences between the HJB and NN frameworks' results and their implications.
### Strategies Computed from HJB Equation
We carry out a convergence test for the HJB framework by tracing the efficient frontier (i.e. varying the scalarization parameter \(\kappa\)) for solutions of varying refinement levels (i.e. number of grid points in the (\(s\),\(b\)) directions). Figure 6.1 shows these efficient frontiers. As the efficient frontiers from various grid sizes all practically overlap each other, this demonstrates convergence of solutions computed
from solving HJB equations. Table G.1 shows a convergence test for a single point on the frontier. The convergence is roughly first-order (for the value function). This convergence test justifies the use of the HJB framework results as a ground-truth.
**Remark 6.1** (Effect of Stabilization Term \(\epsilon W_{T}\)).: _Recall the stabilization term, \(\epsilon W_{T}\), introduced in (2.22). We now provide motivation for its inclusion, and observe its effect on the control \(\hat{\mathcal{P}}\). When \(W_{t}\gg W^{*}\) and \(t\to T\), the control will only weakly affect the objective function. This is because, in this situation, \(Pr[W_{T}<W^{*}]\simeq 0\) and thus the allocation control will have little effect on the ES term in the objective (recall that \(W^{*}\) is held constant for the induced time consistent strategy, see Appendix A). In addition, the withdrawal is capped at \(q_{\max}\) for very high values of \(W_{t}\), so the withdrawal control does not depend on \(W_{t}\) in this case either. The stabilization term can be used to alleviate ill-posedness of the problem in this region._
In Figure 6.2, we present the heat map of the allocation control computed from the HJB framework. Subplot (a) presents the allocation control heat map for a small positive stabilization parameter \(\epsilon=10^{-6}\), while Subplot (b) presents the allocation control heat map for \(\epsilon=-10^{-6}\). In the ill-posed region (the top right region of the heat maps), the presence of \(\epsilon W_{T}\), with \(\epsilon=10^{-6}\), forces the control to invest \(100\%\) in stocks to generate high terminal wealth. Conversely, changing the stabilization parameter to be negative (\(\epsilon=-10^{-6}\)) forces the control to invest completely in bonds.

\begin{table}
\begin{tabular}{l c} \hline \hline Investment horizon \(T\) (years) & 30 \\ Equity market index & CRSP Cap-weighted index (real) \\ Bond index & 10-year Treasury (US) (real) \\ Initial portfolio value \(W_{0}\) & 1000 \\ Cash withdrawal times & \(t=0\),\(1,\ldots,30\) \\ Withdrawal range & \([35,60]\) \\ Equity fraction range & [0,1] \\ Borrowing spread \(\mu_{c}^{b}\) & 0.0 \\ Rebalancing interval (years) & 1 \\ Market parameters & See Appendix E \\ \hline \hline \end{tabular}
\end{table}
Table 6.1: Problem setup and input data. Monetary units: thousands of dollars.

Figure 6.1: EW-ES frontier, computed from problem (2.22). Note: Scenario in Table 6.1. Comparison of HJB solution performance with varying grid sizes. HJB solution performance computed on \(2.56\times 10^{6}\) observations of synthetic data. Parameters for synthetic data based on cap-weighted real CRSP, real 10 year treasuries (see Table E.1). \(q_{min}=35,q_{\max}=60\). \(\epsilon=10^{-6}\). Units: thousands of dollars.
We observe that the control behaves differently only at high levels of wealth as \(t\to T\) in both cases. The 5th and 50th percentiles of the control on the synthetic data set behave similarly in both the positive and negative \(\epsilon\) cases. The 95th percentile curve tends towards higher wealth during later phases of the investment period when \(\epsilon\) is positive (Figure 6.2(a)), whereas the curve tends downward when \(\epsilon\) is negative (Figure 6.2(b)). When the magnitude of \(\epsilon\) is sufficiently small, the inclusion of \(\epsilon W_{T}\) in the objective function does not change the summary statistics (to four decimal places when \(|\epsilon|=10^{-6}\)). While the choice of a negative or positive \(\epsilon\) of small magnitude can lead to different allocation controls at high wealth levels near the end of the time horizon, the choice makes little difference from the perspective of the problem \(PCEE_{t_{0}}(\kappa)\). If the investor reaches very high wealth near \(T\), the choice between \(100\%\) stocks and \(100\%\) bonds does not matter, as the investor always ends with \(W_{T}\gg W^{*}\). Our experiments show that the control \(q\) is unaffected when the magnitude of \(\epsilon\) is small, and continues to call for maximum withdrawals at high levels of wealth as \(t\to T\), just as described in Remark 6.1.
Comparing the optimal withdrawal strategy determined by solving stochastic optimal control problem (2.22) with a fixed withdrawal strategy (both strategies with dynamic asset allocation), Forsyth (2022) finds that the stochastic optimal strategy (4.4) is much more efficient in withdrawing cash over the investment horizon. Accepting a very small amount of additional risk, the retiree can dramatically increase total withdrawals. For a more detailed discussion of the optimal control, we refer the reader to Forsyth (2022).
### Accuracy of Strategy Computed from NN framework
We compute the NN control following the framework discussed in Section 4. We compare the efficient frontiers obtained from the HJB equation solution and the NN solution. From Figure 6.3, the NN control efficient frontier is almost indistinguishable from the HJB control efficient frontier.
Figure 6.2: Effect of \(\epsilon\): fraction in stocks computed from the problem (2.22). Note: investment setup is as in Table 6.1. HJB solution performance computed on \(2.56\times 10^{6}\) observations of synthetic data. Parameters for synthetic data based on cap-weighted real CRSP, real 10 year treasuries (see Table E.1). \(q_{min}=35,q_{\max}=60\), \(\kappa=1.0\). \(W^{*}=58.0\) for PIDE results. (a) \(\epsilon=10^{-6}\). (b) \(\epsilon=-10^{-6}\). Units: thousands of dollars.
Detailed summary statistics for each computed point on the frontier can be found in Appendix H.2, and a comparison of objective function values, for the NN and HJB control at each frontier point, can be found in Appendix H.3. For most points on the frontier, the difference in objective function values, from NN and HJB, is less than \(0.1\%\). This demonstrates that the accuracy of the NN framework approximation of the ground-truth solution is more than adequate, considering that the difference between the NN solution and the PDE solution is about the same as the estimated PDE error (see Table G.1).
We now further analyze the control \(\hat{\mathcal{P}}\) produced by the NN framework for \(\kappa=1\). Comparing Figure 6.4(b) with Figure 6.4(d), we observe that the withdrawal control \(\hat{q}\) produced by the NN is practically identical to the withdrawal control produced by the HJB framework. However, there are differences in the allocation control heat maps. The NN heat map for the allocation control \(p\) (Figure 6.4(a)) appears most similar to the HJB allocation heat map for negative \(\epsilon\) (Figure 6.2(b)), but it is clear that the NN allocation heat map differs significantly from the HJB heat map for positive \(\epsilon\) (Figure 6.2(a)) at high levels of wealth as \(t\to T\). The NN allocation control behaves differently from the HJB controls in this region, choosing a mix of stocks and bonds instead of a \(100\%\) allocation to a single asset. Noting that this difference occurs only at higher levels of wealth near \(T\), we see that the 5th percentile and the median wealth curves are indistinguishable. The NN control's 95th percentile curve, however, is different, and indeed lies between the 95th percentile curves from the negative and positive versions of the HJB-generated control.
Drawing from this, we attribute the NN framework's inability to fully replicate the HJB control to the ill-posedness of the optimal control problem in the (top-right) region of high wealth levels near \(T\). The small value of \(\epsilon\) means that the stabilization term contributes a very small fraction of the objective function value and thus has a very small gradient, relative to the first two terms in the objective function. Since we use stochastic gradient descent for optimization, we see a very small impact of \(\epsilon\). Moreover, the data for high levels of wealth as \(t\to T\) is very sparse, so the effect of the small gradient is further reduced. As a result, the NN appears to smoothly extrapolate in this region and therefore avoids investment into a single asset. Recall that in Section 6.1, we stated that the choice of sign of \(\epsilon\), with small \(\epsilon\), in the stabilization term is somewhat arbitrary and does not affect summary statistics. Therefore, we see that the controls produced by the two methods only differ in irrelevant aspects, at least based on the EW and ES reward-risk considerations.

Figure 6.3: Comparison of EW-ES frontier for the Neural Network (NN) and Hamilton-Jacobi-Bellman (HJB) Partial Differential Equation (PDE) methods, computed from the problem (2.22). Note: investment setup in Table 6.1. HJB solution performance computed on \(2.56\times 10^{6}\) observations of synthetic data. Parameters for synthetic data based on cap-weighted real CRSP, real 10 year treasuries (see Table E.1). Control computed from the NN model, trained on \(2.56\times 10^{6}\) observations of synthetic data. \(q_{min}=35,q_{\max}=60\). \(\epsilon=10^{-6}\). Units: thousands of dollars. Labels on nodes indicate \(\kappa\) parameter.
It is interesting to observe that the proposed neural network framework is able to produce the _bang-bang_ withdrawal control computed in Forsyth (2022), especially since we are using the continuous function \(\hat{q}\) as an approximation.7 A _bang-bang_ control switches abruptly, as shown here: the optimal strategy is to withdraw the minimum if the wealth is below a threshold, or else withdraw the maximum. As expected, the control threshold decreases as we move forward in time. We can see that the NN and HJB withdrawal controls behave very similarly at the 95th, 50th, and 5th percentiles of wealth (Figures 6.5(c) and 6.5(f)). Essentially, the optimal strategy withdraws at either \(q_{\max}\) or \(q_{\min}\), with a very small transition zone. This is in line with our expectations. By withdrawing less and investing more initially, the individual decreases the chance of running out of savings.

Figure 6.4: Heat map of controls: fraction in stocks and withdrawals, computed from the problem (2.22). Note: problem setup described in Table 6.1. HJB solution performance computed on \(2.56\times 10^{6}\) observations of synthetic data. Parameters for synthetic data based on cap-weighted real CRSP, real 10 year treasuries (see Table E.1). NN model trained on \(2.56\times 10^{6}\) observations of synthetic data. \(q_{min}=35,q_{\max}=60\), \(\kappa=1.0\). \(W^{*}=59.1\) for NN results. \(W^{*}=58.0\) for the HJB results. \(\epsilon=10^{-6}\). Normalized withdrawal \((q-q_{\min})/(q_{\max}-q_{\min})\). Units: thousands of dollars.

Figure 6.5: Scenario in Table 6.1. NN and HJB controls computed from the problem (2.22). Parameters based on the real CRSP index, and real 10-year treasuries (see Table E.1). NN model trained on \(2.56\times 10^{5}\) observations of synthetic data. HJB framework results from \(2.56\times 10^{6}\) observations of synthetic data. \(q_{min}=35,q_{\max}=60\), \(\kappa=1.0\). \(W^{*}=59.1\) for NN results. \(W^{*}=58.0\) for HJB results. Units: thousands of dollars.
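Schematically, the near bang-bang rule just discussed reduces to a threshold policy; the snippet below is only a caricature of the learned NN output, with a hypothetical time-dependent wealth threshold.

```python
def bang_bang_withdrawal(w_minus, wealth_threshold, q_min=35.0, q_max=60.0):
    """Caricature of the learned rule: withdraw the minimum below a
    (time-dependent) wealth threshold, the maximum above it."""
    return q_max if w_minus >= wealth_threshold else q_min
```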
We also note that the NN allocation control exhibits a small spread between the 5th and 95th percentiles of the fraction in stocks (Figure 6.5(a)). In fact, the maximum stock allocation for the 95th percentile never exceeds 40%, indicating that this is a stable low-risk strategy which, as we shall see, outperforms the Bengen (1994) strategy.
## 7 Model Robustness
A common pitfall of neural networks is over-fitting to the training data. Neural networks that are over-fitted do not have the ability to generalize to previously unseen data. Since future asset return paths cannot be predicted, it is important to ascertain that the computed strategy is not overfitted to the training data and can perform well on unseen return paths. In this section, we demonstrate the robustness of the NN model's generated controls.
We conduct three types of robustness tests: (i) out-of-sample testing, (ii) out-of-distribution testing, and (iii) control sensitivity to training distribution.
### Out-of-sample testing
Out-of-sample tests involve testing model performance on an unseen data set sampled from the same distribution. In our case, this means training the NN on one set of SDE paths sampled from the parametric model, and testing on another set of paths generated using a different random seed. We present the efficient frontier generated by computed controls on this new data set in Figure 7.1, which shows almost unchanged performance on the out-of-sample test set.
### Out-of-distribution testing
Out-of-distribution testing involves evaluating the performance of the computed control on an entirely new data set sampled from a different distribution. Specifically, test data is not generated from the parametric model used to produce training data, but is instead bootstrap resampled from historical market returns via the method described in Section 5. We vary the expected block sizes to generate multiple testing data sets of \(2.56\times 10^{5}\) paths each.

Figure 7.1: Out-of-sample test. EW-ES frontiers, computed from the problem (2.22). Note: Scenario in Table 6.1. Comparison of NN training performance results vs. out-of-sample test. Both training and testing data are \(2.56\times 10^{5}\) observations of synthetic data, generated with a different random seed. Parameters for synthetic data based on cap-weighted real CRSP, real 10 year treasuries (see Table E.1). \(q_{min}=35,q_{\max}=60\). \(\epsilon=10^{-6}\). Units: thousands of dollars. Labels on nodes indicate \(\kappa\) parameter values.
In Figure 7.2, we see that for each block size tested, the efficient frontiers are fairly close, indicating that the controls are relatively robust. Note that the efficient frontiers for test performance in the historical market with expected block sizes of 1 and 3 months lie slightly above the synthetic market frontier. We conjecture that this may be due to more pessimistic tail events in the synthetic market.
The out-of-sample and out-of-distribution tests verify that the neural network is not over-fitting to the training data, and is generating an effective strategy, at least based on our block resampling data.
### Control sensitivity to training distribution
To test the NN framework's adaptability to other training data sets, we train the NN framework on historical data (with expected block sizes of both 3 months and 12 months) and then test the resulting control on synthetic data. In Figure 7.3, we compare the training performance and the test performance. The EW-ES frontiers for the test results on the synthetic data are very close to the results on the bootstrapped market data (the training data set). This shows the NN framework's ability to learn from alternative data sets, with the added advantage of not being reliant on a parametric model, which is prone to miscalibration. Figure 7.3 also shows that, in all cases, in the synthetic or historical market, the EW-ES control significantly outperforms the Bengen _4% Rule_8(Bengen, 1994).
Figure 7.2: _Out-of-distribution test. EW-ES frontiers of controls generated by NN model trained on \(2.56\times 10^{5}\) observations of synthetic data, tested on \(2.56\times 10^{5}\) observations of historical data with varying expected block sizes. Computed from the problem (2.22). Note: Setup as in Table 6.1. Parameters based on real CRSP index and real 10-year U.S. Treasuries (see Table E.1). Historical data in range 1926:1-2019:12. Units: thousands of dollars. \(q_{min}=35;q_{max}=60\). Simulated training data refers to Monte Carlo simulations using the SDEs (2.3) and (2.4)._
## 8 Conclusion
In this paper, we proposed a novel neural network (NN) architecture to efficiently and accurately compute the optimal decumulation strategy for retirees with DC pension plans. The stochastically constrained optimal control problem is solved with a single standard unconstrained optimization, without using dynamic programming.
We began by highlighting the increasing prevalence of DC pension plans over traditional DB pension plans, and outlining the critical decumulation problem that faces DC plan investors. There is an extensive literature on devising strategies for this problem. In particular, we examine a Hamilton-Jacobi-Bellman (HJB) Partial Differential Equation (PDE) based approach that can be shown to converge to an optimal solution for a dynamic withdrawal/allocation strategy. This provides an attractive balance of risk management and withdrawal efficiency for retirees. In this paper, we build upon this approach by developing a new, more versatile framework using NNs to solve the decumulation problem.
We conduct computational investigations to demonstrate the accuracy and robustness of the proposed NN solution, utilizing the unique opportunity to compare NN solutions with the HJB results as a ground truth. Particularly noteworthy is that the continuous function approximation from the NN framework is able to approximate a bang-bang control with high accuracy. We extend our experiments to establish the robustness of our approach, testing the NN control's performance on both synthetic and historical data sets.
We demonstrate that the solution produced by the proposed NN framework accurately approximates the ground truth solution. We also note the following advantages of the proposed NN framework:
1. The NN method is data driven, and does not require postulating and calibrating a parametric model for market processes.
2. The NN method directly estimates the low dimensional control by solving a single unconstrained optimization problem, avoiding the problems associated with dynamic programming methods, which require estimating high dimensional conditional expectations (see van Staden et al. (2023)).
3. The NN formulation maintains its simple structure (discussed in Section 4.2), and is immediately extendable to problems with more frequent rebalancing and/or withdrawal events. In fact, the problem presented in (2.22) requires each control NN to have only two hidden layers for 30 rebalancing and withdrawal periods.
4. The approximated control maintains continuity in time and/or space, provided it exists, or otherwise provides a smooth approximation. Continuity of the allocation control \(p\) is an important practical consideration for any investment policy.

Figure 7.3: Training on historical data. EW-ES frontiers of controls generated by NN model trained on \(2.56\times 10^{5}\) observations of historical data with expected block sizes of a) 3 months and b) 12 months, each tested on \(2.56\times 10^{5}\) observations of synthetic data. Parameters based on real CRSP index and real 10-year U.S. Treasuries (see Table E.1). Historical data in range 1926:1-2019:12. Units: thousands of dollars. \(q_{min}=35;q_{max}=60\). The Bengen (1994) results are based on bootstrap resampling of the historical data. Labels on nodes indicate \(\kappa\) parameter values. Simulated testing data refers to Monte Carlo simulations using the SDEs (2.3) and (2.4). \(\epsilon=+10^{-6}\).
Due to the ill-posedness of the stochastic optimal control problem in the region of high wealth near the end of the decumulation horizon, we observe that the NN allocation can appear to be very different from the HJB PDE solution. We note, however, that both strategies yield indistinguishable performance when assessed with the expected withdrawal and ES reward-risk criteria. In other words, these differences hardly affect the objective function value, a weighted reward and risk value. In the region of high wealth near the end of the time horizon, the retiree is free to choose whether to invest 100% in stocks or 100% in bonds, since this has a negligible effect on the objective function value (or reward-risk consideration).9
Footnote 9: This can be termed the _Warren Buffett_ effect. Buffett is the fifth richest human being in the world. He is 92 years old. Buffett can choose any allocation strategy, and will never run out of cash.
To conclude, the advantages of the NN framework make it a more versatile method, compared to the solution of the HJB PDE. We expect that the NN approach can handle problems of higher complexity, e.g., involving a higher number of assets. In addition, the NN method can be applied to other proposed formulations for the retirement planning problem (for example, see Forsyth et al. (2022)). We leave the extension of this methodology to future work.
## 9 Acknowledgements
Forsyth's work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant RGPIN-2017-03760. Li's work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant RGPIN-2020-04331. The authors are grateful to P. van Staden for supplying the initial software library for NN control problems.
## 10 Conflicts of interest
The authors have no conflicts of interest to report.
## Appendix A Induced Time Consistent Policy
In this section of the appendix, we review the concept of time consistency and relate its relevance to the \(PCEE_{t_{0}}(\kappa)\) problem, (2.22).
Consider the optimal control \(\mathcal{P}^{*}\) for problem (2.22),
\[(\mathcal{P}^{*})^{t_{0}}(X(t_{i}^{-}),t_{i})\ ;\ i=0,\ldots,M\.\] (A.1)
Equation (A.1) can be interpreted as the optimal control for any time \(t_{i}\geq t_{0}\), as a function of the state variables \(X(t)\), as computed at \(t_{0}\).
Now consider if we were to solve the problem (2.22) starting at a later time \(t_{k},k>0\). This optimal control starting at \(t_{k}\) is denoted by:
\[(\mathcal{P}^{*})^{t_{k}}(X(t_{i}^{-}),t_{i})\ ;\ i=k,\ldots,M\.\] (A.2)
In general, the solution of (2.22) computed at \(t_{k}\) is not equivalent to the solution computed at \(t_{0}\):
\[(\mathcal{P}^{*})^{t_{k}}(X(t_{i}^{-}),t_{i})\neq(\mathcal{P}^{*})^{t_{0}}(X(t _{i}^{-}),t_{i})\ ;\ i\geq k>0.\] (A.3)
This non-equivalence makes problem (2.22) _time inconsistent_, implying that the investor will have the incentive to deviate from the control computed at time \(t_{0}\) at later times. This type of control is considered a _pre-commitment_ control since the investor would need to commit to following the strategy at all times following \(t_{0}\). Some authors describe pre-commitment controls as non-implementable because of the incentive to deviate.
In our case, however, the pre-commitment control from (2.22) can be shown to be identical to the time consistent control for an alternative version of the objective function. By holding \(W^{*}\) fixed at the optimal value (at time zero), we can define the time consistent equivalent problem (TCEQ). Noting that the inner supremum in (2.22) is a continuous function of \(W^{*}\), we define the optimal value of \(W^{*}\) as
\[\mathcal{W}^{*}(s,\!b) = \underset{W^{*}}{\arg\max}\bigg{\{}\sup_{\mathcal{P}_{0}\in \mathcal{A}}\bigg{\{}E_{\mathcal{P}_{0}}^{X_{0}^{-},t_{0}^{-}}\bigg{[}\ \sum_{i=0}^{M}q_{i}\ +\ \kappa\bigg{(}W^{*}+\frac{1}{\alpha}\min(W_{T}-W^{*},0)\bigg{)}\bigg{|}X(t_{0} ^{-})=(s,\!b)\ \bigg{]}\bigg{\}}\bigg{\}}\.\] (A.4)
With a given initial wealth of \(W_{0}^{-}\), this gives the following result from Forsyth (2020):
**Proposition A.1** (Pre-commitment strategy equivalence to a time consistent policy for an alternative objective function).: _The pre-commitment EW-ES strategy found by solving \(J\left(s,\!b,t_{0}^{-}\right)\) from (2.22), with fixed \(W^{*}=\mathcal{W}^{*}\) from Equation A.4, is identical to the time consistent strategy for the equivalent problem TCEQ (which has fixed \(\mathcal{W}^{*}(0,\!W_{0}^{-})\)), with the following value function:_
\[(\textit{TCEQ}_{t_{n}}\left(\kappa/\alpha\right)):\] \[\tilde{J}\left(s,\!b,t_{n}^{-}\right) = \underset{\mathcal{P}_{n}\in\mathcal{A}}{\sup}\Bigg{\{}E_{ \mathcal{P}_{n}}^{X_{n}^{-},t_{n}^{-}}\bigg{[}\ \sum_{i=n}^{M}q_{i}\ +\ \frac{\kappa}{\alpha}\min(W_{T}-\mathcal{W}^{*}(0,W_{0}^{-}),\!0)\bigg{|}X(t_{ n}^{-})=(s,\!b)\ \bigg{]}\bigg{\}}.\] (A.5)
Proof.: This follows similar steps as in Forsyth (2020), proof of Proposition (6.2).
With fixed \(W^{*}\), \(\textit{TCEQ}_{t_{n}}(\kappa/\alpha)\) is based on a target-based shortfall as its measure of risk, which is trivially time consistent. \(W^{*}\) has the convenient interpretation of a disaster level of final wealth, as specified at time zero. Since the optimal controls for \(PCEE_{t_{0}}(\kappa)\) and \(\textit{TCEQ}_{t_{n}}(\kappa/\alpha)\) are identical, we regard \(\textit{TCEQ}_{t_{n}}(\kappa/\alpha)\) as the EW-ES induced time consistent strategy (Strub et al., 2019), which
is implementable since the investor will have no incentive to deviate from a strategy computed at \(t_{0}\) at later times.
For further discussion concerning the relationship between pre-commitment, time consistent, and induced time consistent strategies, we refer the reader to Bjork et al. (2021); Bjork and Murgoci (2010, 2014); Forsyth (2020); Strub et al. (2019); Vigna (2014, 2022).
## Appendix B PIDE Between Rebalancing Times
Applying Ito's Lemma for jump processes (Tankov and Cont, 2009), using Equations (2.3) and (2.4) in Equation (3.8) gives
\[V_{t} +\frac{(\sigma^{s})^{2}s^{2}}{2}V_{ss}+(\mu^{s}-\lambda_{\xi}^{s }\gamma_{\xi}^{s})sV_{s}+\lambda_{\xi}^{s}\int_{-\infty}^{+\infty}V(e^{y}s,b,t )f^{s}(y)\ dy+\frac{(\sigma^{b})^{2}b^{2}}{2}V_{bb}\] \[+(\mu^{b}+\mu_{c}^{b}\mathbf{1}_{\{b<0\}}-\lambda_{\xi}^{b} \gamma_{\xi}^{b})bV_{b}+\lambda_{\xi}^{b}\int_{-\infty}^{+\infty}V(s,e^{y}b,t )f^{b}(y)\ dy-(\lambda_{\xi}^{s}+\lambda_{\xi}^{b})V+\rho_{sb}\sigma^{s} \sigma^{b}sbV_{sb}=0\,\] \[s\geq 0\.\] (B.1)
where the density functions \(f^{s}(y),f^{b}(y)\) are as given in equation (2.1).
## Appendix C Computational Details: Hamilton-Jacobi-Bellman (HJB) PDE Framework
For a detailed description of the numerical algorithm used to solve the HJB equation framework described in Section 3, we refer the reader to Forsyth (2022). We summarize the method here.
First, we solve the auxiliary problem (3.2), with fixed values of \(W^{*}\), \(\kappa\) and \(\alpha\). The state space in \(s>0\) and \(b>0\) is discretized using evenly spaced nodes in log space, creating a grid to represent cases where wealth is positive. A separate grid is created in a similar fashion to represent cases where wealth is negative. The Fourier methods discussed in Forsyth and Labahn (2019) are used to solve the PIDE representing market dynamics between rebalancing times. Both controls for withdrawal and allocation are discretized using equally spaced grids. The optimization problem (3.4) is solved first for the allocation control by exhaustive search, storing the optimal control for each discretized wealth node. The withdrawal control in (3.5) can then be solved in a similar fashion, using the previously stored allocation control to evaluate the right-hand side of (3.5). Linear interpolation is used where necessary. The stored controls are used to advance the solution in (3.7).
Since the numerical method just described assumes a constant \(W^{*}\), an outer optimization step to find the optimal \(W^{*}\) (candidate Value-at-Risk) is necessary. Given an approximate solution to (3.2) at \(t=0\), the full solution to \(PCEE_{t_{0}}(\kappa)\) (2.22) is determined using Equation (3.9). A coarse grid is used at first for an exhaustive search. This is then used as the starting point for a one-dimensional optimization algorithm on finer grids.
## Appendix D Computational Details: NN Framework
### NN Optimization
The NN framework, as described in Section 4 and illustrated in Figure 4.1, was implemented using the PyTorch library (Paszke et al., 2019). The withdrawal network \(\hat{q}\), and allocation network \(\hat{p}\) were both implemented with 2 hidden layers of 10 nodes each, with biases. Stochastic Gradient Descent
(Ruder, 2016) was used in conjunction with the Adaptive Momentum optimization algorithm to train the NN framework (Kingma and Ba, 2014). The NN parameters and auxiliary training parameter \(W^{*}\) were trained with different initial learning rates. The same decay parameters and learning rate schedule were used. Weight decay (\(\ell_{2}\) penalty) was also employed to make training more stable. The training loop utilizes the auto-differentiation capabilities of the PyTorch library. Hyper-parameters used for NN training in this paper's experiments are given in Table D.1.
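For illustration, the Table D.1 settings could be wired up in PyTorch roughly as follows, reusing the hypothetical network sketches from Section 4; the parameter grouping and scheduler are assumptions consistent with the table, not the exact training code.

```python
import torch

q_net, p_net = WithdrawalNet(), AllocationNet()   # hypothetical sketches from Section 4
w_star = torch.nn.Parameter(torch.tensor(50.0))   # auxiliary W*, trained jointly
n_itn = 50_000
opt = torch.optim.Adam(
    [{"params": list(q_net.parameters()) + list(p_net.parameters()), "lr": 0.05},
     {"params": [w_star], "lr": 0.04}],
    betas=(0.9, 0.998), weight_decay=1e-4)
sched = torch.optim.lr_scheduler.MultiStepLR(
    opt, milestones=[int(0.70 * n_itn), int(0.97 * n_itn)], gamma=0.20)
```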
### Transfer learning between different \(\kappa\) points
For high values of \(\kappa\), the objective function is weighted more towards optimizing ES (lower risk). In these cases, optimal controls are more difficult to compute. This is because the ES measure used (CVaR) is only affected by the sample paths below the \(5^{th}\) percentile of terminal wealth, which are quite sparse. To overcome these training difficulties, we employ transfer learning (Tan et al., 2018) to improve training for the more difficult points on the efficient frontier. We begin training the model for the lowest \(\kappa\) from a random initialization ('cold start'), and then initialize the model for each increasing \(\kappa\) with the model for the previous \(\kappa\). Through numerical experiments, we found this method made training far more stable and less likely to terminate in local minima for higher values of \(\kappa\).
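A sketch of this warm-starting scheme, with hypothetical helpers `make_model`, `train` and `evaluate_ew_es`, and the \(\kappa\) grid of Appendix H:

```python
# Warm-starting along the frontier: train the smallest kappa from scratch,
# then initialize each larger kappa from the previous solution.
state, frontier = None, []
for kappa in [0.05, 0.2, 0.5, 1.0, 1.5, 3.0, 5.0, 50.0]:
    model = make_model()                  # fresh q-hat/p-hat networks plus W*
    if state is not None:
        model.load_state_dict(state)      # transfer from the previous kappa
    train(model, kappa)                   # unconstrained SGD as in Appendix D.1
    state = model.state_dict()
    frontier.append(evaluate_ew_es(model))
```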
### Running minimum tracking
The training loop tracks the running minimum of the loss function value as training progresses and, at the end of the specified number of training epochs, selects the model that achieved the optimal loss function value on the entire training dataset.
\begin{table}
\begin{tabular}{l c} \hline \hline NN framework hyper-parameter & Value \\ \hline Hidden layers per network & 2 \\ \# of nodes per hidden layer & 10 \\ Nodes have biases & True \\ \# of iterations (\#itn) & 50,000 \\ SGD mini-batch size & 1,000 \\ \# of training paths & \(2.56\times 10^{5}\) \\ Optimizer & Adaptive Momentum \\ Initial Adam learning rate for \((\mathbf{\theta}_{q},\mathbf{\theta}_{p})\) & 0.05 \\ Initial Adam learning rate for \(W^{*}\) & 0.04 \\ Adam learning rate decay schedule & \([0.70\times\#itn,0.97\times\#itn]\), \(\gamma=0.20\) \\ Adam \(\beta_{1}\) & 0.9 \\ Adam \(\beta_{2}\) & 0.998 \\ Adam weight decay (\(\ell_{2}\) Penalty) & 0.0001 \\ Transfer Learning between \(\kappa\) points & True \\ Take running minimum as result & True \\ \hline \hline \end{tabular}
\end{table}
Table D.1: Hyper-parameters used in training the NN framework for numerical experiments presented in this paper.
### Standardization
To improve learning for the neural network, we normalize the input wealth using means and standard deviations of wealth samples from a reference strategy. We use the constant withdrawal and allocation strategy defined in Forsyth (2022) as the reference strategy, with \(2.56\times 10^{5}\) simulated paths. Let \(W_{t}^{b}\) denote the wealth vector at time \(t\) based on these simulations, and let \(\bar{W}_{t}^{b}\) and \(\sigma(W_{t}^{b})\) denote the associated average wealth and standard deviation. We then normalize the feature input to the neural network in the following way:
\[\tilde{W}_{t}=\frac{W_{t}-\bar{W}_{t}^{b}}{\sigma(W_{t}^{b})}\]
For the purpose of training the neural network, the values \(\bar{W}_{t}^{b}\) and \(\sigma(W_{t}^{b})\) are just constants, and we can use any reasonable values. This input feature normalization is done for both withdrawal and allocation NNs.
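A minimal sketch of this standardization, assuming a hypothetical helper `simulate_reference_strategy` that produces wealth paths under the constant reference strategy:

```python
import numpy as np

# ref_wealth: (N, M+1) wealth paths under the constant reference strategy.
ref_wealth = simulate_reference_strategy(n_paths=256_000)  # hypothetical helper
w_mean = ref_wealth.mean(axis=0)          # \bar W_t^b, one constant per time step
w_std = ref_wealth.std(axis=0)            # sigma(W_t^b)

def standardize(w, t_idx):
    """Feature fed to both NNs in place of raw wealth."""
    return (w - w_mean[t_idx]) / w_std[t_idx]
```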
In Section 7, we show in out-of-sample and out-of-distribution tests that \(\bar{W}_{t}^{b}\) and \(\sigma(W_{t}^{b})\) do not need to be related to the testing data, as long as they are reasonable values. In Section 4, when referring to \(W\) as part of the input to the NN functions \(\hat{q}\) and \(\hat{p}\), we use the standardized \(\tilde{W}\) for computation.
## Appendix E Model Calibrated from Market Data
Table E.1 shows the calibrated model parameters for processes (2.3) and (2.4), from Forsyth (2022) using market data described in SS5.
## Appendix F Optimal expected block sizes: bootstrap resampling
Table F.1 shows our estimates of the optimal block size using the algorithm in Patton et al. (2009); Politis and White (2004) using market data described in SS5.
## Appendix G Convergence Test: HJB Equation
Table G.1 shows a detailed convergence test for a single point on the (EW, ES) frontier, using the PIDE method. The controls are computed using the HJB PDE, and stored. The stored controls are then used in Monte Carlo simulations, which are used to verify the PDE solution, and also generate various statistics of interest.
\begin{table}
\begin{tabular}{c c c c c c c c} \multicolumn{8}{c}{Calibrated Model Parameters} \\ \hline CRSP & \(\mu^{s}\) & \(\sigma^{s}\) & \(\lambda^{s}\) & \(u^{s}\) & \(\eta_{1}^{s}\) & \(\eta_{2}^{s}\) & \(\rho_{sb}\) \\ \hline & 0.0877 & 0.1459 & 0.3191 & 0.2333 & 4.3608 & 5.504 & 0.04554 \\ \hline \hline
10-year Treasury & \(\mu^{b}\) & \(\sigma^{b}\) & \(\lambda^{b}\) & \(u^{b}\) & \(\eta_{1}^{b}\) & \(\eta_{2}^{b}\) & \(\rho_{sb}\) \\ \hline & 0.0239 & 0.0538 & 0.3830 & 0.6111 & 16.19 & 17.27 & 0.04554 \\ \hline \hline \end{tabular}
\end{table}
Table E.1: Estimated annualized parameters for double exponential jump diffusion model. Value-weighted CRSP index, 10-year US treasury index deflated by the CPI. Sample period 1926:1 to 2019:12.
## Appendix H Detailed efficient frontier comparisons
Table H.1 shows the detailed efficient frontier, computed using the HJB equation method, using the \(2048\times 2048\) grid. Table H.2 shows the efficient frontier computed from the NN framework. This should be compared to Table H.1. Table H.3 compares the objective function values, at various points on the efficient frontier, for the HJB and NN frameworks.
\begin{table}
\begin{tabular}{c c c c} \multicolumn{3}{c}{Detailed Efficient Frontier: NN Framework} \\ \hline \(\kappa\) & ES (5\%) & \(E[\sum_{i}q_{i}]/(M+1)\) & \(Median[W_{T}]\) \\ \hline
0.05 & -599.81 & 57.15 & 106.23 \\
0.2 & -333.01 & 56.14 & 78.59 \\
0.5 & -160.14 & 54.40 & 105.05 \\
1 & -43.02 & 51.95 & 227.79 \\
1.5 & -8.57 & 50.62 & 302.17 \\
3 & 16.01 & 48.99 & 374.43 \\
5 & 23.20 & 48.13 & 425.13 \\
50 & 29.88 & 45.72 & 493.41 \\ \(\infty\) & 29.90 & 35.00 & 947.60 \\ \hline \end{tabular}
\end{table}
Table H.2: Synthetic market results for NN framework optimal strategies. Gives the detailed results used to construct the NN efficient frontier in Figure 6.3. Assumes the scenario given in Table 6.1. Stock index: real capitalization weighted CRSP stocks; bond index: ten year treasuries. Parameters from Table E.1. Units: thousands of dollars. Training performance statistics based on \(2.56\times 10^{5}\) Monte Carlo simulation runs. Control is computed using the algorithm in Section 4. \(q_{\min}=35.0\), \(q_{\max}=60\). \((M+1)\) is the number of withdrawals. \(M\) is the number of rebalancing dates. \(\epsilon=10^{-6}\).
\begin{table}
\begin{tabular}{c c c c} \multicolumn{3}{c}{Detailed Efficient Frontier: HJB Framework} \\ \hline \(\kappa\) & ES (5\%) & \(E[\sum_{i}q_{i}]/(M+1)\) & \(Median[W_{T}]\) \\ \hline
0.05 & -596.00 & 57.14 & 124.36 \\
0.2 & -334.29 & 56.17 & 92.99 \\
0.5 & -148.99 & 54.25 & 111.20 \\
1.0 & -42.62 & 51.97 & 227.84 \\
1.5 & -8.05 & 50.63 & 298.20 \\
3.0 & 17.42 & 48.95 & 380.36 \\
5.0 & 24.09 & 48.12 & 414.60 \\
50.0 & 30.60 & 45.70 & 519.03 \\ \(\infty\) & 31.00 & 35.00 & 1003.47 \\ \hline \end{tabular}
\end{table}
Table H.1: Synthetic market results for HJB framework optimal strategies. Gives the detailed results used to construct the HJB efficient frontier in Figure 6.3. Assumes the scenario given in Table 6.1. Stock index: real capitalization weighted CRSP stocks; bond index: ten year treasuries. Parameters from Table E.1. Units: thousands of dollars. Statistics based on \(2.56\times 10^{6}\) Monte Carlo simulation runs. Control is computed using the Algorithm in Section 3 (\(2048\times 2048\) grid), stored, and then used in the Monte Carlo simulations. \(q_{\min}=35.0\), \(q_{\max}=60\). \((M+1)\) is the number of withdrawals. \(M\) is the number of rebalancing dates. \(\epsilon=10^{-6}\).
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(\kappa\) & HJB equation & NN & \% difference \\ \hline
0.05 & 1741.54 & 1741.71 & 0.01\% \\
0.2 & 1674.41 & 1673.81 & -0.04\% \\
0.5 & 1607.26 & 1606.44 & -0.05\% \\
1 & 1568.45 & 1567.34 & -0.07\% \\
1.5 & 1557.46 & 1556.22 & -0.08\% \\
3 & 1569.71 & 1566.86 & -0.18\% \\
5 & 1612.16 & 1607.86 & -0.27\% \\
50 & 2946.70 & 2911.10 & -1.21\% \\ \hline \hline \end{tabular}
\end{table}
Table H.3: Objective function value comparison for the HJB equation and NN framework model results on a range of \(\kappa\) values. Objective function values for both frameworks computed according to \(PCEE_{t0}(\kappa)\) (higher is better). Assumes the scenario given in Table 6.1. Stock index: real capitalization weighted CRSP stocks; bond index: ten year treasuries. Parameters from Table E.1. HJB solution statistics based on \(2.56\times 10^{6}\) Monte Carlo simulation runs. HJB control is computed using the Algorithm in Section 3 (\(2048\times 2048\) grid), stored, and then used in the Monte Carlo simulations. NN training performance statistics based on \(2.56\times 10^{5}\) Monte Carlo simulation runs. Control is computed using the NN framework in Section 4. \(q_{\min}=35.0\), \(q_{\max}=60\). \((M+1)\) is the number of withdrawals. \(M\) is the number of rebalancing dates. \(\epsilon=10^{-6}\).
2303.15224 | Open the box of digital neuromorphic processor: Towards effective
algorithm-hardware co-design | Sparse and event-driven spiking neural network (SNN) algorithms are the ideal
candidate solution for energy-efficient edge computing. Yet, with the growing
complexity of SNN algorithms, it isn't easy to properly benchmark and optimize
their computational cost without hardware in the loop. Although digital
neuromorphic processors have been widely adopted to benchmark SNN algorithms,
their black-box nature is problematic for algorithm-hardware co-optimization.
In this work, we open the black box of the digital neuromorphic processor for
algorithm designers by presenting the neuron processing instruction set and
detailed energy consumption of the SENeCA neuromorphic architecture. For
convenient benchmarking and optimization, we provide the energy cost of the
essential neuromorphic components in SENeCA, including neuron models and
learning rules. Moreover, we exploit the SENeCA's hierarchical memory and
exhibit an advantage over existing neuromorphic processors. We show the energy
efficiency of SNN algorithms for video processing and online learning, and
demonstrate the potential of our work for optimizing algorithm designs.
Overall, we present a practical approach to enable algorithm designers to
accurately benchmark SNN algorithms and pave the way towards effective
algorithm-hardware co-design. | Guangzhi Tang, Ali Safa, Kevin Shidqi, Paul Detterer, Stefano Traferro, Mario Konijnenburg, Manolis Sifalakis, Gert-Jan van Schaik, Amirreza Yousefzadeh | 2023-03-27T14:03:11Z | http://arxiv.org/abs/2303.15224v1 | # Open the box of digital neuromorphic processor: Towards effective algorithm-hardware co-design
###### Abstract
Sparse and event-driven spiking neural network (SNN) algorithms are the ideal candidate solution for energy-efficient edge computing. Yet, with the growing complexity of SNN algorithms, it isn't easy to properly benchmark and optimize their computational cost without hardware in the loop. Although digital neuromorphic processors have been widely adopted to benchmark SNN algorithms, their black-box nature is problematic for algorithm-hardware co-optimization. In this work, we open the black box of the digital neuromorphic processor for algorithm designers by presenting the neuron processing instruction set and detailed energy consumption of the SENeCA neuromorphic architecture. For convenient benchmarking and optimization, we provide the energy cost of the essential neuromorphic components in SENeCA, including neuron models and learning rules. Moreover, we exploit the SENeCA's hierarchical memory and exhibit an advantage over existing neuromorphic processors. We show the energy efficiency of SNN algorithms for video processing and online learning, and demonstrate the potential of our work for optimizing algorithm designs. Overall, we present a practical approach to enable algorithm designers to accurately benchmark SNN algorithms and pave the way towards effective algorithm-hardware co-design.
Footnote 0: 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
## I Introduction
Energy-efficient computations are essential for edge applications that operate with limited energy resources. Brain-inspired spiking neural networks (SNNs) have the potential to reduce energy costs by introducing sparse and event-driven computation [1], making them ideal candidate solutions for the edge. However, the low-power assumption of the SNN algorithms is not always valid if computational costs are not properly benchmarked. Many works use the sparsity of synaptic operations to demonstrate efficiency [2, 3, 4], disregarding additional expenses introduced by hardware primitives like memory access or instruction operation. Since SNN algorithms require dedicated hardware, namely the neuromorphic processor, algorithm designs based on inaccurate hardware assumptions can fail to realize potential advantages. Therefore, there is a need for effective algorithm-hardware co-design to truly realize the promised benefits of neuromorphic computing.
Digital neuromorphic processors provide the opportunity to benchmark the energy efficiency of SNNs [5, 6, 7, 8, 9]. However, these processors behave like a black box for algorithm designers. First, their bottom-up designs support restricted predefined computational elements and leave limited space for co-optimizing new algorithms with the hardware. Second, the coarse benchmarking results from the hardware do not provide precise insight into the design of the SNN algorithm to locate potential optimizations. Although there are neuromorphic processors developed using co-design approaches [10, 11, 12, 13], they are mainly confined to a specific SNN algorithm and are hard to use by algorithm designers without a sufficient hardware background. Therefore, algorithm designers need a flexible neuromorphic processor design with transparent and customizable internal operations.
In this work, we precisely detail the neuron processing instruction set of SENeCA [14], our scalable and flexible digital neuromorphic architecture, to help algorithm designers conveniently benchmark and optimize the cost of their novel SNN algorithms. To demonstrate the potential of SENeCA on algorithm-hardware co-design, we show three levels of abstraction to benchmark costs for SNN algorithms. The main contributions of this paper are the following:
1. We conduct circuit-level energy measurements on neuron processing instructions in SENeCA (Section II). This will enable the algorithm design community to accurately estimate the energy cost of their novel SNN algorithms without running them on the actual hardware.
2. We provide essential neuromorphic components (neuron models, learning rules, and hierarchical memory exploitation) constructed using SENeCA instructions, together with their energy costs (Section III). This layer of abstraction will enable algorithm designers to quickly estimate hardware overheads of typical SNN topologies without resorting to low-level instruction.
3. To clearly verify the usefulness of our contributions, we illustrate how our framework can be utilized to compute the energy efficiency of different SNN algorithms targeting video processing and online learning (Section IV), based on the energy costs provided in this work.
## II Neuron Processing on Neuromorphic Processor
The SENeCA neuromorphic architecture performs event-driven computation with time-multiplexed Neuron Processing Elements (NPEs) emulating numerous neurons per core (Figure 1). To provide sufficient flexibility, SENeCA embeds a RISC-V controller that enables customizable processing pipelines, rich NPE instructions for versatile computations, and hierarchical memories to optimize the deployment and processing of networks. When a new event enters the core, the RISC-V is interrupted from sleep, preprocesses the event, writes information into the NPEs, and activates the neuron processing before returning to sleep. After events are captured from the NPEs, they interrupt the RISC-V from sleep again and are communicated to other cores via the NoC.

Fig. 1: The pipeline of a SENeCA neuromorphic core (left), the interconnected mesh architecture via NoC (right), and the hierarchical memory consisting of register files (orange), local SRAM memories (green) and large shared memories (gray).
### _NPE and Neuron Processing Instruction Set_
NPE is the central neuron processing unit in the SENeCA core, which accelerates a rich neuron processing instruction set (Table I). Each instruction is executed in one cycle (pipelined, 2ns per cycle) and operates in BrainFloat 16 (BF16) format [15]. SNN algorithms can be built from different sequential executions of the instructions, namely micro-kernels. These micro-kernels are stored in the register-files of the loop buffer and sent to the NPEs during runtime. For efficient time-multiplexing, the loop buffer executes micro-kernels in a "for-loop" fashion on the NPEs and incrementally calculates Data-Memory addresses. This design gives a much lower cost than using the more flexible instruction memory (Table II). Depending on the event type, the RISC-V controller selects which micro-kernel to process on the NPEs. Neuron processing operates with hierarchical memory, including register-files, local data memory, and external shared memory if the model cannot fit locally. To introduce intra-core parallelism, the NPEs in the SENeCA core form a SIMD (single instruction, multiple data) architecture [16] that accesses data through a wide data memory port in parallel. The NPE also supports quantized integer data types (Int4 and Int8) to reduce energy costs (see Section III-E). When events are generated, the event capture unit converts them to the Address Event Representation (AER) form [17] before sending them to the RISC-V and NoC. The present version of the SENeCA core has 8 NPEs and 64 registers per NPE. These numbers are parameterized and can be fine-tuned before synthesis.
### _Circuit-level Energy Measurements_
We report the average consumption of the NPE instructions in Table I. The pre-silicon energy number includes the power consumption of all the modules needed to execute the instruction (e.g., address calculations in the loop buffer, access to instruction memory, etc.). The results are measured by running each instruction 8k times with random data using Cadence JOULES (time-based mode), an RTL-level power measurement tool (within 15% of signoff power) [18], with the GF-22nm FDX technology node1. The leakage power for the core is around 30\(\mu\)W (0.06pJ in a 2ns clock cycle). For clarity, we report the memory and NoC information in Table II. Since a typical SNN has significantly more synaptic operations than events, the computational cost for synaptic operations (done in NPEs) largely dominates the event pre-processing (RISC-V) and communication (NoC) costs. Therefore, in this paper, for simplicity and due to limited space, we safely ignore the RISC-V and NoC costs.
Footnote 1: In typical corner (0.8v and 25C, no back-biasing)
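As a usage illustration, an algorithm designer can estimate a micro-kernel's energy directly from Table I; in the sketch below the per-instruction energies are taken from the table, while the example kernel composition is purely illustrative.

```python
# Per-instruction NPE energies in pJ, from Table I; EVC_spike is event capture
# plus the extra cost when an event is actually generated (0.5 + 1.1 pJ).
NPE_ENERGY_PJ = {"ADD": 1.4, "SUB": 1.4, "MUL": 1.4, "GTH": 1.2,
                 "MLD": 3.7, "MST": 3.9, "EVC": 0.5, "EVC_spike": 1.6}

def kernel_energy(instructions, repeats=1):
    """Energy of one micro-kernel executed `repeats` times (time-multiplexed neurons)."""
    return repeats * sum(NPE_ENERGY_PJ[i] for i in instructions)

# Example: a synaptic-integration kernel -> load state, multiply-accumulate, store.
print(kernel_energy(["MLD", "MUL", "ADD", "MST"], repeats=1024), "pJ")
```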
## III Essential Neuromorphic Components
Direct optimization of complex algorithms at the instruction level is difficult. A level of abstraction for essential components of the SNN algorithm can significantly simplify benchmarking and optimization. Here, we present neuron models and learning rules constructed from NPE instructions and compute their cost using circuit-level power measurements (see Table III). Furthermore, we exploit the hierarchical memory in SENeCA and compare the costs of synaptic operations when using quantized integer weights and multi-event processing.
### _Integrate and Fire Neuron_
Integrate and Fire (IF) neurons are widely used for SNN processing [20, 21, 22]. Here, we define an IF neuron as:
\[\begin{split} v_{i}[k]\leftarrow v_{i}[k-1]\times(1-s_{out,i}[k-1 ])+\Sigma_{j}w_{ij}\times s_{in,j}[k]\\ s_{out,i}[k]\gets H(v_{i}[k]-v_{th})\end{split} \tag{1}\]
where \(k\) is the time step, \(v_{i}\) is the state of neuron \(i\), \(s_{in,j}\) is the input spike from neuron \(j\), \(s_{out,i}\) is the output spike of neuron \(i\), \(w_{ij}\) is the weight, \(v_{th}\) is the voltage threshold and \(H\) is the Heaviside function. The first micro-kernel in Component 1 integrates spikes instantly, and the second micro-kernel generates spikes at the end of each time step.
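A dense NumPy sketch of one IF time step may help fix ideas; this is our illustrative rewrite of Eq. (1), not the SENeCA micro-kernel itself:

```python
import numpy as np

def if_step(v, s_out_prev, w, s_in, v_th):
    """One time step of the IF neuron of Eq. (1)."""
    v = v * (1.0 - s_out_prev) + w @ s_in   # reset spiked neurons, integrate
    s_out = (v >= v_th).astype(v.dtype)     # Heaviside H(v - v_th)
    return v, s_out
```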
### _Sigma Delta Neuron_
Sigma Delta (SD) neurons sparsify deep neural networks (DNNs) by communicating temporal activation differences through events [23]. First, the sigma integrates events as:
\[z_{i}[k]\gets z_{i}[k-1]+\Sigma_{j}w_{ij}\times o_{in,j}[k] \tag{2}\]
where \(z_{i}\) is the sigma state of neuron \(i\) and \(o_{in,j}\) is the input event from neuron \(j\). Then, the delta generates events as:
\[o_{out,i}[k]\gets round(\frac{f(z_{i}[k])}{q})\times q-round(\frac{f(z_{i }[k-1])}{q})\times q \tag{3}\]
where \(o_{out,i}\) is the output event of neuron \(i\), \(f\) is a non-linear activation function (e.g. \(ReLU\)), the \(round\) function rounds a number to an integer and \(q\) is the scaling factor. The quantization can significantly increase the sparsity of the events [24]. The first micro-kernel in Component 2 integrates events instantly and the second operates at a flexible frequency while maintaining equivalence to the trained DNN [25].
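The following NumPy sketch (ours, taking \(f=ReLU\)) shows one sigma-delta step per Eqs. (2)-(3); the quantization by \(q\) is what creates the event sparsity mentioned above:

```python
import numpy as np

def sigma_delta_step(z, y_prev, w, o_in, q):
    """One step of Eqs. (2)-(3) with f = ReLU (illustrative sketch)."""
    z = z + w @ o_in                           # sigma: integrate input events
    y = np.round(np.maximum(z, 0.0) / q) * q   # quantized activation
    o_out = y - y_prev                         # delta: emit only differences
    return z, y, o_out
```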
\begin{table}
\begin{tabular}{|c|c|c|} \hline & Register-File (NPE) & SRAM (Inst/Data Mem) \\ \hline Size & \(64W\times 16b\) & \(8KW\times 32b\) (\(2Mb\)) \\ \hline Energy (fJ/b) & 12.0 & 200 \\ \hline \hline & NoC event & HBM (Shared Mem) [19] \\ \hline Size & \(32b\) & \(32b\) (multi-Gb) \\ \hline Energy (fJ/b) & 65.62 & 7000 \\ \hline \end{tabular}
\end{table} TABLE II: Memory Size and Energy Consumption
\begin{table}
\begin{tabular}{|c|c|c|} \hline Instruction & Description & Energy (pJ) \\ \hline ADD/SUB/MUL/DIV & Arithmetic ops. & 1.4 \\ & 2\(\times\)Int8 Arithmetic ops. & 1.2 \\ \hline GTH/MAX/MIN & Compare ops. & 1.2 \\ EQL/ABS & & 1.1 \\ \hline AND/OR/XOR & Bit-wise ops. & 1.1 \\ SHL/SHR & & 1.2 \\ \hline I2F & Data type conv. & 1.1 \\ RND & & 1.4 \\ \hline EVC & Event Capture & 0.5 \\ & + if generates event & + 1.1 \\ \hline MLD & Data Mem Load/Store & 3.7 \\ MST & & 3.9 \\ \hline RISC-V & Per Instruction & 11.6 \\ pre/post Process & + Data mem access & +10.0 \\ \hline \end{tabular}
\end{table} TABLE I: Neuron Processing Energy Consumption
### _Hebbian Learning_
Hebbian learning and its variants are bio-inspired unsupervised learning rules that have been extensively used to train shallow SNNs [26]. In contrast to backprop-based learning, Hebbian learning schemes do not suffer from update locking and weight transport problems [10], making them better suited for low-complexity on-chip learning [27]. Given a layer of spiking neurons with fully-connected connections, the Hebbian learning rule modifies the weight as follows [28]:
\[w_{ij}[k]\gets w_{ij}[k-1]+\eta\times\text{trace}\{s_{out,i}\}[k]\times \text{trace}\{s_{in,j}\}[k] \tag{4}\]
where \(\eta\) is the learning rate and \(\text{trace}\{.\}\) is an estimator of the local spiking rate via low-pass filtering:
\[\text{trace}\{s\}[k]\leftarrow\beta\times\text{trace}\{s\}[k-1]+(1-\beta) \times s[k] \tag{5}\]
where \(\beta\) is the decay constant. The micro-kernels in Component 3 update the SNN weights at the end of each time step.
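As a minimal sketch (ours, dense NumPy rather than event-driven micro-kernels), one learning step combining Eqs. (4) and (5) reads:

```python
import numpy as np

def hebbian_step(w, tr_out, tr_in, s_out, s_in, eta, beta):
    """Low-pass spike traces (Eq. (5)) followed by the Hebbian update (Eq. (4))."""
    tr_out = beta * tr_out + (1.0 - beta) * s_out
    tr_in = beta * tr_in + (1.0 - beta) * s_in
    w = w + eta * np.outer(tr_out, tr_in)   # w_ij += eta * tr_out_i * tr_in_j
    return w, tr_out, tr_in
```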
### _Gradient-based Online Learning (e-prop)_
Gradient-based online learning performs end-to-end learning in SNN by estimating gradients using only local information [29, 30, 31, 32]. Here, we show the e-prop learning [29] in SENeCA as an example. First, the eligibility trace \(e_{ij}\) combines pre- and post-synaptic activities:
\[e_{ij}[k]\gets e_{ij}[k-1]+h(v_{i}[k])\times\text{trace}\{s_{in,j}\}[k] \tag{6}\]
where \(h\) is the surrogate gradient function. The weight updates when there are error events from the supervised signal:
\[\triangle w_{ij}=-\eta\times e_{ij}\times\Sigma_{k}b_{ik}\times y_{k} \tag{7}\]
where \(b_{ik}\) is the feedback weight and \(y_{k}\) is the error event from the output layer. We implemented the learning rule using four SENeCA micro-kernels, with the first micro-kernel in Component 4 updating at every time step, using a rectangular function for \(h\) as introduced in [33], and the second micro-kernel in Component 4 updating when a supervised signal is available. Additionally, we use micro-kernel 2 in Component 3 to compute \(\text{trace}\{s_{in,j}\}\) and micro-kernel 1 in Component 2 to compute \(\Sigma_{k}b_{ik}\times y_{k}\).
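A compact sketch of the two e-prop updates (ours; the rectangular surrogate below is one standard choice following [33]):

```python
import numpy as np

def eprop_step(e, v, tr_in, b, y, eta, h):
    """Eligibility traces (Eq. (6)) and weight update on an error event (Eq. (7))."""
    e = e + np.outer(h(v), tr_in)       # e_ij += h(v_i) * trace{s_in,j}
    dw = -eta * e * (b @ y)[:, None]    # dw_ij = -eta * e_ij * sum_k b_ik * y_k
    return e, dw

v_th = 1.0
h = lambda v: (np.abs(v - v_th) < 0.5).astype(float)   # rectangular surrogate
```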
### _Efficient Synaptic Operation with Hierarchical Memory_
The measurement results show that memory accesses dominate the total energy consumption for neuron processing. The hierarchical memory architecture in SENeCA allows for data reuse in the NPE register-files, thereby reducing the more expensive SRAM accesses. This reduction is achieved using quantized weights and multi-event processing. Using quantized weights (4-bit or 8-bit) reduces the number of SRAM reads per weight. However, there is an overhead, as the weight needs to be converted into BF16 using the I2F instruction for computation. As another example of data reuse in the NPEs, processing multiple events in one iteration also reduces the SRAM accesses. The neuron state becomes stationary on the NPEs, avoiding frequent accesses to the states from the SRAM. Using fully integer operations on INT4 weights and INT8 states further reduces memory accesses, and thereby decreases the energy cost.
```
MLD(R0, ADD1, 1)   // load weight w_ij
MLD(R1, ADD2, 1)   // load trace{s_in,j}
MUL(R1, R1, R2)    // trace{s_out,i} x trace{s_in,j}, with R2 <- trace{s_out,i}
MUL(R1, R1, R3)    // eta x R1, with R3 <- eta
ADD(R0, R0, R1)    // update weight
MST(ADD1, R0, 1)   // store R0 in w_ij
```
Component 3, Micro-kernel 1: Hebbian Weight Update, see Eq. (4). (Companion listings: Micro-kernel 2: SpikeTrace Update, see Eq. (5); Listing 1: Integrate and Fire Neuron; Listing 2: Sigma Delta Neuron, with Micro-kernel 1: Sigma Integration, see Eq. (2).)
Table IV shows the average energy cost per IF neuron synaptic operation (i.e., spike integration) when using the integer weights with one and four event processing. By exploiting hierarchical memory, SNN algorithms in SENeCA can potentially achieve lower energy costs compared to existing digital neuromorphic processors without hierarchical memory [5, 6, 7] (see Table IV bottom row).
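The component costs in Table III are consistent with simple sums of the instruction costs in Table I. The script below is ours: the Hebbian sequence is the one listed above, while the decomposition of IF spike integration into load-weight, load-state, accumulate and store-state is our assumption, since the exact sequence is not spelled out in the text.

```python
# Per-instruction energies from Table I (pJ).
E = {"MLD": 3.7, "MST": 3.9, "ADD": 1.4, "MUL": 1.4}

if_mk1 = ["MLD", "MLD", "ADD", "MST"]                    # assumed sequence
hebbian_mk1 = ["MLD", "MLD", "MUL", "MUL", "ADD", "MST"]  # as listed above

print(round(sum(E[op] for op in if_mk1), 1))       # 12.7 pJ (Tables III and IV)
print(round(sum(E[op] for op in hebbian_mk1), 1))  # 15.5 pJ (Table III, Hebbian)
```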
## IV Application-Level Benchmarking
To illustrate how the cost of neuromorphic components reported in Table III can be used to reliably estimate the energy cost of solving downstream tasks and optimize the algorithm based on application needs, we perform an application-level benchmarking of SNN algorithms for video processing and online Hebbian learning.
### _Sigma Delta Network for Video Processing_
Sigma Delta networks can result in more than 90% synaptic operation sparsity when performing video-based human action recognition without sacrificing accuracy [24]. Here, we compute the energy cost of employing SD neurons in ResNet-50 [34] and MobileNet [35] on SENeCA, for efficient video processing using the UCF-101 human action recognition dataset [36]. Using Table III, we calculate the average energy cost of the networks per frame, as shown in Table V, by counting the number of synaptic operations (sigma) and neuron output evaluations (delta). Compared to the estimated energy cost in [24], the precise instruction-level results given here more accurately reflect the actual energy cost of hardware processing. Although a single delta operation requires more instructions than a sigma operation, Table V shows that sigma operations cost much more energy than delta operations, due to the difference in execution dimensionality. Therefore, an algorithm designer can reduce the number of events using a complex delta unit with only negligible energy overheads.
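Conversely, the per-frame figures of Table V can be combined with the per-operation costs of Table III (14.1 pJ per sigma operation, 19.7 pJ per delta evaluation) to back out the implied operation counts; the script below is our back-of-envelope estimate, not data reported by the benchmark:

```python
SIGMA_PJ, DELTA_PJ = 14.1, 19.7   # Table III costs for the SD neuron
for name, sigma_uj, delta_uj in [("ResNet-50", 3504.0, 181.2),
                                 ("MobileNet", 1792.3, 104.4)]:
    n_sigma = sigma_uj * 1e6 / SIGMA_PJ   # convert uJ to pJ, divide by cost
    n_delta = delta_uj * 1e6 / DELTA_PJ
    print(f"{name}: ~{n_sigma:.2e} sigma ops, ~{n_delta:.2e} delta ops per frame")
```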
### _Unsupervised Hebbian Learning for Digit Classification_
To demonstrate the cost of online learning, we consider a canonical digit classification task [37] with a dataset composed of \(1797\) instances of \(8\times 8\) grayscale images normalized between \(0\) and \(1\)[38]. We flatten each image to a \(64\)-dimension grayscale vector and encode each entry of the vector into a 100 time-step spike train using Poisson spike encoding.
In SENeCA, we implement a modified version of the SNN architecture proposed in [26], where we use the IF neuron model of Section III-A and the Hebbian learning model of Section III-C to replace the leaky IF neurons and the Hebbian-like STDP rule used in [26]. We can then estimate the energy consumption of the network as a function of the number of output neurons \(M\) and the input dimension \(N\), by counting the number of instructions executed for each completed SNN execution step (i.e., forward propagation of the input spikes, recurrent propagation of the output spikes, feedback propagation of the output spikes and all Hebbian learning mechanisms in [26]). Then, the energy consumption is found by relating the number of occurrences of each instruction to the energy measurements provided in Tables I and III. In the illustrative case of the canonical digit classification dataset [37], the input data dimension is \(N=64\) and the output dimension \(M\) can be arbitrarily chosen, leading to a trade-off between energy consumption and classification accuracy, as shown in Figure 2. The ability to accurately generate this trade-off gives algorithm designers the opportunity to _optimize_ the network size based on the needs of the application _without_ the overhead of hardware deployment.
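For reference, Poisson (rate) encoding of an image can be sketched as follows (our minimal version, treating each normalized pixel value as a per-time-step spike probability):

```python
import numpy as np

def poisson_encode(image, n_steps=100, seed=0):
    """Encode a [0,1] grayscale image into n_steps binary spike vectors."""
    x = image.reshape(-1)                     # flatten 8x8 image to 64 dims
    rng = np.random.default_rng(seed)
    return (rng.random((n_steps, x.size)) < x).astype(np.uint8)

spikes = poisson_encode(np.random.default_rng(1).random((8, 8)))
print(spikes.shape)   # (100, 64)
```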
## V Conclusion
This paper presents a practical approach to properly benchmark SNN algorithms using the neuron processing instruction set of the SENeCA neuromorphic architecture. We strongly believe that the instructions and micro-kernels provided here, together with their precise energy measurements, will allow a reliable estimate of energy consumption at algorithm design time. We hope that this work will greatly help algorithm designers to conveniently benchmark the hardware costs of their various SNN algorithms, and will enable further optimization of these costs via effective algorithm-hardware co-design.
## Acknowledgment
This work is partially funded by research and innovation projects ANDANTE (ECSEL JU under grant agreement No876925), DAIS (KDT JU under grant agreement No101007273) and MemScale (Horizon EU under grant agreement 871371). The JU receives support from the European Union's Horizon 2020 research and innovation programme and Sweden, Spain, Portugal, Belgium, Germany, Slovenia, Czech Republic, Netherlands, Denmark, Norway and Turkey.
\begin{table}
\begin{tabular}{|l|c|c|} \hline Network Model & Sigma Energy (\(\mu\)J) & Delta Energy (\(\mu\)J) \\ \hline ResNet-50 & 3504.0 & 181.2 \\ \hline MobileNet & 1792.3 & 104.4 \\ \hline \end{tabular}
\end{table} TABLE V: Energy consumption per frame for SD networks
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Weight, _1 Event_ & BF16 & Int8 & Int4 & Int4\({}^{2}\) \\ \hline Energy (pJ) & 12.7 & 11.95 & 11.03 & 5.63 \\ \hline \hline Weight, _4 Events_ & BF16 & Int8 & Int4 & Int4\({}^{2}\) \\ \hline Energy (pJ) & 7.0 & 6.25 & 5.33 & 2.78 \\ \hline \hline
**Hardware** & Loihi [6] & TrueNorth [5] & NeuronFlow [7] & \\ \hline Energy (pJ)\({}^{3}\) & 23 & 2.5 & 20 & \\ \hline \end{tabular}
\end{table} TABLE IV: Energy per Synaptic Operation in SENeCA
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Component & Micro-kernel & Energy (pJ) & Frequency \\ \hline IF Neuron & 1 & 12.7 & event \\ & 2 & 13.2 & time step \\ \hline SD Neuron & 1 & 14.1 & event \\ & 2 & 19.7 & flexible \\ \hline Hebbian & 1 & 15.5 & time step \\ Learning & 2 & 15.5 & time step \\ \hline Gradient & 1 & 22.9 & time step \\ Learning & 2 & 19.2 & flexible \\ \hline \end{tabular}
\end{table} TABLE III: Neuromorphic Components Energy Consumption
Fig. 2: Accuracy vs. energy consumption of SNN-Hebbian learning execution (one time-step), with \(M\) output neurons.
|
2301.05896 | Geometric embedding for regularity structures | In this paper, we show how one can view certain models in regularity
structures as some form of geometric rough paths. This is performed by
identifying the deformed Butcher-Connes-Kreimer Hopf algebra with a quotient of
the shuffle Hopf algebra which is the structure underlying the definition of a
geometric rough path. This provides an extension of the isomorphism between the
Butcher-Connes-Kreimer Hopf algebra and the shuffle Hopf algebra. This new
algebraic result relies strongly on the deformation formalism and the post-Lie
structures introduced recently in the context of regularity structures. | Yvain Bruned, Foivos Katsetsiadis | 2023-01-14T11:28:52Z | http://arxiv.org/abs/2301.05896v3 | # Geometric embedding for regularity structures
###### Abstract
In this paper, we show how one can view certain models in regularity structures as some form of geometric rough paths. This is performed by identifying the deformed Butcher-Connes-Kreimer Hopf algebra with a quotient of the shuffle Hopf algebra which is the structure underlying the definition of a geometric rough path. This provides an extension of the isomorphism between the Butcher-Connes-Kreimer Hopf algebra and the shuffle Hopf algebra. This new algebraic result relies strongly on the deformation formalism and the post-Lie structures introduced recently in the context of regularity structures.
###### Contents
* 1 Introduction
* 2 From non-geometric to geometric rough paths
* 3 An isomorphism for the deformed Grossman-Larson Hopf algebra
* 4 Extension of the Chapoton-Foissy isomorphism
* 5 Applications in regularity structures
## 1 Introduction
In this work we attempt to construct a correspondence between models in regularity structures [30] and geometric rough paths from classical rough path theory [36, 28]. Results of this kind have already been obtained in the case of branched rough paths which are another type of rough paths defined on trees (see [29]) instead of words. An approach given in [2] constructs a bijection between the two spaces \(\mathbf{BRP}^{\gamma}\) of branched rough paths and \(\mathbf{ARP}^{\gamma}\) of anisotropic rough paths. The main idea is to use an algebraic result from [23, 17] that directly relates the underlying Hopf algebras. Inspired by this approach, we endeavour to show that
certain Hopf algebras appearing in the context of regularity structures also relate to Hopf algebras with simpler presentation such as quotients of the tensor Hopf algebra (\(T(\mathscr{B}),\otimes,\Delta_{\shuffle}\)) that appears in the context of geometric rough paths. The search for combinatorial structures other than the decorated trees introduced in [30, 8] for SPDEs has been developed in the recent multi-index formalism in [33, 39]. The main difference with our approach is that in their context one can hope at best for a post-Lie morphism between decorated trees and multi-indices, whereas we obtain in this work an isomorphism. This duality between trees and words for coding expansions has been considered in numerical analysis (see [38, 37]). We also expect this work to have an impact in the context of low regularity integrators in [15] for dispersive PDEs where similar decorated trees are used.
Our approach further relies on an indispensable algebraic tool, which is the notion of a post-Lie algebra. In [12], it has been shown that certain Hopf algebras appearing in the context of regularity structures can be built directly from certain pre-Lie algebraic structures -a special case of post-Lie algebras- that are simpler to describe. This is accomplished via means of a recursive construction of the product by Guin and Oudom, first appearing in [26, 27]. This fact can reveal important information about the Hopf algebras involved. Given a pre-Lie algebra (\(E,\curvearrowright\)) the Guin-Oudom procedure constructs a product on the symmetric space over the underlying vector space \(E\). It also endows the space with the shuffle coproduct \(\Delta_{\shuffle}\) thus turning it into a Hopf algebra, which is isomorphic to the universal enveloping algebra of the commutator Lie algebra \(E_{Lie}\) associated to \(E\).
It was already known that the graded dual of the Butcher-Connes-Kreimer Hopf algebra \(\mathcal{U}_{\text{\tiny BCK}}\)[16, 19, 20], which is the Grossman-Larson Hopf algebra \(\mathcal{U}_{\text{\tiny GL}}\)[25], can be generated in this manner by the free pre-Lie algebra over a set of generators which can be described as the linear span of trees endowed with the grafting product. In the work of [12] it is proven that the graded dual of a deformed version of the Butcher-Connes-Kreimer Hopf algebra is also generated in this manner by a deformed version of the grafting product. This deformed version of the grafting product is then shown to be isomorphic to the original grafting product in the category of pre-Lie algebras via an isomorphism \(\Theta\). This is illustrated below via the following diagram:
(1.1)
where \(\widehat{\curvearrowright}\) is the deformed grafting obtained from \(\curvearrowright\) by \(\Theta\). The coproducts \(\Delta_{\text{\tiny BCK}}\) and \(\Delta_{\text{\tiny DBCK}}\) are respectively the Butcher-Connes-Kreimer and the deformed Butcher-Connes-Kreimer coproducts. The products \(\star\) and \(\tilde{\star}\) are the Grossman-Larson and deformed Grossman-Larson products. The isomorphism \(\Phi\) between these two products is obtained by applying the Guin-Oudom functor to \(\Theta\) (see Theorem 3.17).
Furthermore, the work of [11] completes this programme, in the sense that it provides post-Lie algebras that generate the graded duals of the Hopf algebras \(\mathcal{H}_{2}\) used for the recentering in singular SPDEs (see [30, 8, 12]). Again, for each Hopf algebra one has an original and a deformed version, and these are correspondingly proven to be generated by a post-Lie product or a deformed version thereof. This can be summarised in the following diagram:
\[\widehat{\curvearrowright}^{\perp\text{Lie}}\xrightarrow{\text{Guin-Oudom}} \star_{2}\xleftarrow{\text{Dual}}\Delta_{2} \tag{1.2}\]
We have added the notation \(\perp\text{Lie}\) to stress the fact that one starts with a Lie algebra and therefore the previous deformed grafting product \(\widehat{\curvearrowright}\) is extended to new objects. The Guin-Oudom procedure used is the one for post-Lie algebras developed in [21]. The map \(\Delta_{2}\) is the coproduct for \(\mathcal{H}_{2}\).
The map \(\Phi\) allows us to say that the deformed Butcher-Connes-Kreimer Hopf algebra is isomorphic to the tensor Hopf algebra (see Theorem 3.19). Indeed, the basis \(B\) given by the Chapoton-Foissy isomorphism \(\Psi_{\text{CF}}\) is transported via \(\Phi\) in the sense that one has:
\[\Psi_{\text{CF}}:\tau_{1}\star\ldots\star\tau_{r}\mapsto\tau_{1}\otimes\ldots \otimes\tau_{r},\quad\tau_{i}\in B.\]
Then, the new isomorphism \(\Psi_{\Phi}\) is given by
\[\Psi_{\Phi}:\Phi(\tau_{1})\;\tilde{\star}\ldots\tilde{\star}\;\Phi(\tau_{r}) \mapsto\Phi(\tau_{1})\otimes\ldots\otimes\Phi(\tau_{r})\]
where \(\Phi(\tau_{1})\otimes\ldots\otimes\Phi(\tau_{r})\in T(\Phi(\mathcal{B}))\) and \(\mathcal{B}\) is the linear span of \(B\). This gives a clear description of the basis that can be used in the context of the deformed Butcher-Connes-Kreimer Hopf algebra. We also know that the elements of \(\mathcal{B}\) are linear combinations of planted trees of the form \(\mathcal{J}_{a}(\tau)\) that are primitive elements for the Butcher-Connes-Kreimer coproduct. Here, \(\tau\) denotes a linear combination of decorated trees and \(\mathcal{J}_{a}\) corresponds to the grafting of these trees onto a new root via an edge decorated by \(a\).
This result does cover the Hopf algebras used in [15] but not the one at play in the context of regularity structures. Indeed, not only are planted trees used for describing solutions of singular SPDEs but so are classical monomials \(X_{i}\). In the expansion, these objects are associated to some operators that do not commute, motivating the introduction of a natural Lie bracket. The grafting product has to be compatible with this underlying Lie bracket and that is encapsulated in the form of a post-Lie product recently introduced for SPDEs in [11]. Therefore, the Lie-algebraic structure has to be taken into account when one extends the isomorphism introduced by Chapoton and Foissy. In our main result (see Theorem 4.12) the alphabet \(A\) is given by the \(X_{i}\) and the \(\Phi(\mathcal{J}_{a}(\tau))\in\Phi(\mathcal{B})\). We denote by \(W\) the words on this alphabet. The space \(\tilde{W}\) is given as the quotient of \(W\) by the Hopf ideal \(\mathcal{F}\) generated by the elements
\[\{X_{i}\otimes\Phi(\mathcal{J}_{a}(\tau))-\Phi(\mathcal{J}_{a}(\tau))\otimes X _{i}-\uparrow^{i}\Phi(\mathcal{J}_{a}(\tau))-\Phi(\mathcal{J}_{a-e_{i}}(\tau))\}\]
where \(\mathcal{J}_{a}(\tau)\in B\) and \(\uparrow^{i}\) corresponds to changing a node decoration by adding \(e_{i}\) to it. The \(e_{i}\) are the canonical basis vectors of \(\mathbb{N}^{d+1}\). Then, there exists a Hopf algebra isomorphism \(\Psi\) between decorated trees and \(\tilde{W}\) equipped with \(\star_{2}\). The map \(\Psi\) is given by
\[\Psi:\prod_{i=1}^{n}\mathcal{J}_{a_{i}}(\tau_{i})X^{k}\to\Psi_{\Phi}(\prod_{i=1}^{n}\mathcal{J}_{a_{i}}(\tau_{i}))\otimes_{j=0}^{d}\otimes_{i=1}^{k_{j}}X_{j}.\]
where \(k=(k_{0},...,k_{d})\in\mathbb{N}^{d+1}\), \(X^{k}=\prod_{j=0}^{d}X_{j}^{k_{j}}\) and \(\prod_{i=1}^{n}\mathcal{J}_{a_{i}}(\tau_{i})X^{k}\) is written in a fixed order: two planted trees commute, but \(X_{i}\) and a planted tree do not, which is encoded in the Lie bracket. The product \(\prod_{i=1}^{n}\) is commutative.
Let us comment on the main algebraic result of this paper. We know from the Milnor-Moore theorem that a Hopf algebra is the universal enveloping algebra of its primitive elements, which is defined as a quotient of a tensor algebra. The difference is that the space \(W\) appearing here is much smaller than the space of primitive elements: it is the image under an isomorphism of a basis of primitive elements for the Butcher-Connes-Kreimer Hopf algebra. The other elements are the \(X_{i}\). The quotient is taken with respect to a Lie bracket between the \(X_{i}\) and the planted trees, which is much smaller than the one used in the Milnor-Moore theorem. The main point to check is that this Lie bracket preserves the basis of Chapoton and Foissy, which is the subject of Proposition 4.11. The construction features a very general mechanism that can be reproduced in other contexts:
* Deformation with the help of an isomorphism that transports the structure.
* Post-Lie structures: one adds new elements \(X_{i}\) that do not commute with the previous space. They introduce a natural Lie algebra. This new Lie algebra is used in the quotient of \(W\).
We obtain a better understanding of the recentering Hopf algebra that can potentially provide new results in the theory of regularity structures and the analysis of (S)PDEs.
The paper will be structured as follows: In Section 2 we give an outline of the results in [2], where the authors construct a correspondence between geometric rough paths and branched rough paths. This is accomplished using a result of Chapoton and Foissy that shows that the Grossman-Larson Hopf algebra is in fact isomorphic to the tensor Hopf algebra. As rough paths may be seen as parametrized families of characters of these Hopf algebras, one can use composition with the Hopf algebra isomorphism to directly traverse across structures (see Theorem 2.9). In this section, we also present the generalised version of the classical Butcher-Connes-Kreimer Hopf algebra \(\mathcal{H}_{\text{\tiny BCK}}\) that can accommodate decorations on the edges and vertices of the forests. We also present the generalized version of the Grossman-Larson Hopf algebra \(\mathcal{H}_{\text{\tiny GL}}\) which again comes with decorations on the edges and vertices and is dual to \(\mathcal{H}_{\text{\tiny BCK}}\). We also present the Chapoton-Foissy isomorphism in this context (see Theorem 2.8) that we will use in the sequel.
In Section 3, we present the grafting and deformed grafting pre-Lie products and explain how they generate the Grossman-Larson product (see Theorem 3.14) and a
deformed version thereof (see Theorem 3.15), via a construction given by Guin and Oudom. We then present the isomorphism \(\Theta\) appearing in [12] between the grafting and deformed grafting pre-Lie algebras. We use this to prove that this deformed structure is isomorphic as a Hopf algebra to the original Grossman-Larson Hopf algebra. Hence, by virtue of the result of Chapoton and Foissy, it is also isomorphic to the tensor Hopf algebra (see Theorem 3.19).
In Section 4 we present the post-Lie algebraic formalism and the generalisation of the Guin-Oudom construction in that context. We recall the results in [11] and make essential use of them to prove our main result. The post-Lie algebraic formalism allows for a precise encoding of the action of the deformed grafting product alongside its interaction with the \(\uparrow^{i}\) operators. Transferring this data onto the universal enveloping algebra via the construction in [21] allows for a finer analysis of the \(\mathcal{H}_{2}\) Hopf algebra and the relation it bears to the simpler deformed Grossman-Larson Hopf algebra. This information, together with the isomorphism \(\Psi_{\Phi}\), ultimately leads to the main result, Theorem 4.12, which is an isomorphism between the \(\mathcal{H}_{2}\) Hopf algebra and an appropriate quotient of the shuffle Hopf algebra.
Finally, in Section 5, we explore some applications in the context of regularity structures. We show how, using this isomorphism, one may move from one encoding to another. This is done by composing the elements of the structure group with the isomorphism and obtaining a new structure group acting on the relevant quotient of the tensor Hopf algebra. This is similar in spirit to the approach in [2]. The main algebraic result of the section is Theorem 5.1, which identifies \(\mathcal{T}_{+}\), the vector space used for the structure group, as isomorphic to a Hopf subalgebra of the quotient of the shuffle Hopf algebra described before. Then, we propose an attempt to rewrite Theorem 2.9 in the context of regularity structures (see Theorem 5.2).
## 2 From non-geometric to geometric rough paths
In this section, we present the state of the art in the context of moving from branched to geometric rough paths. We formulate the results using the notion of anisotropic and branched \(\gamma\)-rough paths, with the corresponding spaces denoted by \(\mathbf{ARP}^{\gamma}\) and \(\mathbf{BRP}^{\gamma}\). We shall give the definitions of the relevant Hopf algebras and then proceed to outline the results in [32], [2] and [40]. Our approach in the next sections is a generalization of the approach presented in [2].
Before moving further, we introduce some notations. We shall denote by \(T^{V}_{E}\) the set of all rooted trees with vertices decorated by \(V\) and edges decorated by \(E\) and by \(F^{V}_{E}\) the set of forests which consists of monomials over \(T^{V}_{E}\). We then denote by \(\mathcal{T}^{V}_{E}\) the formal linear span of \(T^{V}_{E}\). We also denote by \(\mathcal{F}^{V}_{E}\) the forest algebra, which consists of all polynomials over \(T^{V}_{E}\). It is the free commutative algebra over the vector space \(\mathcal{T}^{V}_{E}\).
Furthermore, we denote by \(P^{V}_{E}\) the set of planted trees and by \(\mathcal{P}^{V}_{E}\) their linear span. A planted tree is of the form \(\mathcal{J}_{a}(\tau)\) where \(\tau\in\mathcal{F}^{V}_{E}\) and \(\mathcal{J}_{a}(\tau)\) denotes the grafting of the tree \(\tau\) onto a new root with no decoration via an edge decorated by \(a\in E\). We also use \(N_{\tau}\), \(E_{\tau}\) and \(L_{\tau}\) for the set of vertices, edges and leaves of a tree
\(\tau\in T_{E}^{V}\). When we equip these structures with different products, we will use the same notation to refer to the underlying vector spaces.
We will use **Vec** for the category of vector spaces and **Alg** and **CAlg** for the category of algebras and commutative algebras respectively. We shall use \(S:\textbf{Vec}\rightarrow\textbf{CAlg}\) for the symmetric algebra functor taking a vector space \(V\) to the free commutative algebra over \(V\). Similarly, we use \(T:\textbf{Vec}\rightarrow\textbf{Alg}\) for the tensor algebra functor taking a vector space \(V\) to the free associative algebra over \(V\).
We also define an admissible cut of a tree to be any selection of edges that are pairwise incomparable with respect to the partial ordering induced by the tree. If \(h\in F_{E}^{V}\) then we use \(\text{Adm}(h)\) to denote the set of admissible cuts of \(h\).
**Definition 2.1**: We define the Butcher-Connes-Kreimer coproduct \(\Delta_{\text{BCK}}\) on the symmetric algebra \(S(\mathcal{P}_{E}^{V})\) by setting, for \(h\in\mathcal{P}_{E}^{V}\):
\[\Delta_{\text{BCK}}(h):=\sum_{C\in\text{Adm}(h)}R^{C}(h)\otimes\tilde{P}^{C}(h) \tag{2.1}\]
Here, we have used \(\tilde{P}^{C}(h)\) to denote the pruned forest that is formed by collecting all the edges at or above the cut, including the ones upon which the cut was performed, so that the edges that were attached to the same node in \(h\) are part of the same tree. The term \(R^{C}(h)\) corresponds to the "trunk", that is the subforest formed by the edges not lying above the ones upon which the cut was performed. In the case of decorated trees with no decorations on the edges, we consider the classical Butcher-Connes-Kreimer coproduct given by:
\[\hat{\Delta}_{\text{BCK}}(h):=\sum_{C\in\text{Adm}(h)}R^{C}(h)\otimes P^{C}(h) \tag{2.2}\]
where this time, we do not keep the edges in the cut \(C\) with \(P^{C}(h)\).
The Butcher-Connes-Kreimer Hopf algebra \(\mathcal{H}_{\text{\tiny BCK}}\) is the graded bialgebra with underlying algebraic structure given by the natural symmetric product on \(S(\mathcal{P}_{E}^{V})\) and coalgebra structure given by \(\Delta_{\text{\tiny BCK}}\). The grading is defined to be the number of edges. As a graded connected bialgebra, it is also a Hopf algebra. We denote the usual Butcher-Connes-Kreimer Hopf algebra by \(\hat{\mathcal{H}}_{\text{\tiny BCK}}\), given by \(S(\mathcal{T}^{V})\) equipped with the forest product and the coproduct \(\hat{\Delta}_{\text{\tiny BCK}}\). The space \(\mathcal{T}^{V}\) is the linear span of decorated trees with decorations only on the vertices. For this Hopf algebra, the grading is the number of nodes.
**Remark 2.2**: The coproduct \(\hat{\Delta}_{\text{\tiny BCK}}\) is used for branched rough paths. In the context of SPDEs with several equations, one has to keep track of the various operators needed for a given iterated integral. This is performed by decorations on the edges. Let us mention that a variant of the Butcher-Connes-Kreimer coproduct has been used in the context of Volterra-type rough paths in [10], where the edges cut are also kept in order to keep track of the fact that they were attached to the same node. This is crucial for proving a generalized Chen's relation involving a convolution-type product.
**Remark 2.3**: We shall frequently use Sweedler's notation and will write \(\Delta_{\text{\tiny{BCK}}}(h)=\sum_{(h)}h^{(1)}\otimes h^{(2)}\) to denote the sum ranging over the expansion of the coproduct \(\Delta_{\text{\tiny{BCK}}}\). We will also frequently use Sweedler's notation for the other coproducts that will be introduced.
One can provide a recursive formula for the Butcher-Connes-Kreimer coproduct \(\Delta_{\text{\tiny{BCK}}}\):
\[\begin{split}\Delta_{\text{\tiny{BCK}}}\mathbf{1}&=\mathbf{1}\otimes\mathbf{1}\\ \Delta_{\text{\tiny{BCK}}}\mathcal{J}_{a}(X^{k}\tau)&=\left(\text{id}\otimes\mathcal{J}_{a}(X^{k}\cdot)\right)\Delta_{\text{\tiny{BCK}}}\tau+\mathcal{J}_{a}(X^{k}\tau)\otimes\mathbf{1},\end{split} \tag{2.3}\]
and it extends multiplicatively to the product of \(S(\mathcal{P}_{E}^{V})\). Here, \(\tau\) belongs to \(S(\mathcal{P}_{E}^{V})\). The map \(\mathcal{J}_{a}(X^{k}\cdot)\) is an operator from \(S(\mathcal{P}_{E}^{V})\) into \(S(\mathcal{P}_{E}^{V})\) that grafts the decorated tree \(X^{k}\tau\) onto a new root via an edge decorated by \(a\). From this coproduct, one has an associative product denoted as the Grossman-Larson product \(\star\) defined as:
\[\sigma\star\tau:=(\sigma\otimes\tau)\Delta_{\text{\tiny{BCK}}}\]
where we use the identification of \(\tau\in S(\mathcal{P}_{E}^{V})\) with the linear functional on \(S(\mathcal{P}_{E}^{V})\) such that \(\langle\tau,\bar{\tau}\rangle=S(\tau)\) if \(\tau=\bar{\tau}\) and zero otherwise. The coefficient \(S(\tau)\) corresponds to the internal symmetry factor of the forest \(\tau\). It is given by \(\prod_{i}|\text{Aut}(\tau_{i})|\) where the \(\tau_{i}\) are the trees of \(\tau\) and \(|\text{Aut}(\tau_{i})|\) is the number of automorphisms preserving \(\tau_{i}\). We define a second coproduct on \(S(\mathcal{P}_{E}^{V})\) as
\[\Delta\tau=\tau\otimes\mathbf{1}+\mathbf{1}\otimes\tau\]
and then extends multiplicatively for the symmetric product of \(S(\mathcal{P}_{E}^{V})\).
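For illustration, here is a small worked example (ours) of the recursion (2.3). For the single-edge planted tree \(\mathcal{J}_{a}(\mathbf{1})\) one finds
\[\Delta_{\text{\tiny BCK}}\mathcal{J}_{a}(\mathbf{1})=\mathcal{J}_{a}(\mathbf{1})\otimes\mathbf{1}+\mathbf{1}\otimes\mathcal{J}_{a}(\mathbf{1}),\]
so single-edge planted trees are primitive, while for the ladder tree with edge \(b\) below edge \(a\),
\[\Delta_{\text{\tiny BCK}}\mathcal{J}_{b}(\mathcal{J}_{a}(\mathbf{1}))=\mathcal{J}_{b}(\mathcal{J}_{a}(\mathbf{1}))\otimes\mathbf{1}+\mathbf{1}\otimes\mathcal{J}_{b}(\mathcal{J}_{a}(\mathbf{1}))+\mathcal{J}_{a}(\mathbf{1})\otimes\mathcal{J}_{b}(\mathbf{1}),\]
where the mixed term comes from the admissible cut of the edge decorated by \(a\), the cut edge being kept as in Definition 2.1.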
For the rest of the section, we set \(V=\{0,...,d\}\). We denote by \(\mathcal{T}\) the set of decorated trees whose nodes are decorated by \(V\) and whose edges are not decorated. The set \(\mathcal{T}_{N}\) consists of the decorated trees of \(\mathcal{T}\) with at most \(N\) nodes. We also set \(\mathcal{F}\) to be the forests formed of decorated trees in \(\mathcal{T}\) and \(\mathcal{F}_{N}\) the forests with at most \(N\) nodes. We denote by \(\mathcal{G}\) (resp. \(\mathcal{G}_{N}\)) the set of characters from the Butcher-Connes-Kreimer Hopf algebra \(\mathcal{H}=\hat{\mathcal{H}}_{\text{\tiny BCK}}\) (resp. \(\mathcal{H}_{N}=\langle\mathcal{F}_{N}\rangle\)) into \(\mathbb{R}\). These are algebra morphisms, and they form a group with respect to the convolution product \(\star_{0}\)
\[X\star_{0}Y:=(X\otimes Y)\hat{\Delta}_{\text{\tiny{BCK}}} \tag{2.4}\]
The unit for the convolution product is the co-unit \(\mathbf{1}^{*}\) which is non-zero only on the empty tree.
**Definition 2.4**: Let \(\gamma\in\,]0,1[\), we define a branched \(\gamma\)-rough path as a map \(X:[0,1]^{2}\to\mathcal{G}\) such that \(X_{tt}=\mathbf{1}^{*}\) and such that Chen's relation is satisfied:
\[X_{su}\star_{0}X_{ut}=X_{st},\qquad s,u,t\in[0,1], \tag{2.5}\]
and the analytical condition
\[|\langle X_{st},\tau\rangle|\lesssim|t-s|^{\gamma|\tau|}, \tag{2.6}\]
for every \(\tau\) that does not contain the zero decoration on the nodes. The term \(|\tau|\) corresponds to the number of nodes. Otherwise, we have
\[\sup_{0\leq s,t\leq 1}\frac{\langle X_{st},\tau\rangle}{|t-s|^{(1-\gamma)|\tau|_ {0}+\gamma|\tau|}}<\infty, \tag{2.7}\]
where \(|\tau|_{0}\) counts the number of times the decoration \(0\) appears in \(\tau\). In the sequel, we will consider the biggest \(N\in\mathbb{N}\) such that \(\gamma N\leq 1\). The branched \(\gamma\)-rough paths are taking values in \(\mathcal{G}_{N}\). We denote this space by \(\mathbf{BRP}^{\gamma}\).
We also introduce the shuffle Hopf algebra. Given an alphabet \(A\), we consider the linear span of the words on this alphabet, denoted by \(T(A)\). We set \(\varepsilon\) as the empty word. The product on \(T(A)\) is the shuffle product defined by
\[\varepsilon\shuffle v=v\shuffle\varepsilon=v,\qquad au\shuffle bv=a(u\shuffle bv)+b(au\shuffle v)\]
for all \(u,v\in T(A)\) and \(a,b\in A\). We first define the shuffle coproduct \(\Delta_{\shuffle}\), which is dual to the shuffle product, given for every \(a\in A\) by
\[\Delta_{\shuffle}a=a\otimes\varepsilon+\varepsilon\otimes a\]
and then extended multiplicatively with respect to the tensor product \(\otimes\). The coproduct \(\bar{\Delta}:T(A)\to T(A)\otimes T(A)\) is the deconcatenation of words:
\[\bar{\Delta}(a_{1}\cdots a_{n})=a_{1}\cdots a_{n}\otimes\varepsilon+ \varepsilon\otimes a_{1}\cdots a_{n}+\sum_{k=1}^{n-1}a_{1}\cdots a_{k}\otimes a _{k+1}\cdots a_{n}.\]
Equipped with this product and coproduct \(T(A)\) is a Hopf algebra. The grading of \(T(A)\) is given by the length of words \(\ell(a_{1}\cdots a_{n})=n\). We denote by \(\mathcal{G}_{A}\) the group of characters associated to \(T(A)\) and by \(*\) the convolution product.
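The shuffle product and the deconcatenation coproduct are easy to implement; the short Python sketch below (ours, with words as tuples of letters) enumerates all shuffles with multiplicity and all deconcatenation splittings:

```python
def shuffle(u, v):
    """All shuffles of the words u and v, with multiplicity."""
    if not u:
        return [v]
    if not v:
        return [u]
    return [(u[0],) + w for w in shuffle(u[1:], v)] + \
           [(v[0],) + w for w in shuffle(u, v[1:])]

def deconcatenate(w):
    """All splittings w = w1 ++ w2, i.e. the terms of the coproduct."""
    return [(w[:k], w[k:]) for k in range(len(w) + 1)]

print(shuffle(("a",), ("b", "c")))
# [('a', 'b', 'c'), ('b', 'a', 'c'), ('b', 'c', 'a')]
```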
**Definition 2.5**: An anisotropic \(\gamma\)-rough path, with \(\gamma=(\gamma_{a},\,a\in A)\), \(0<\gamma_{a}<1\), is a map \(X:[0,1]^{2}\to\mathcal{G}_{A}\) such that \(X_{tt}=\varepsilon^{*}\) where \(\varepsilon^{*}\) is the counit. It satisfies
\[X_{su}*X_{ut}=X_{st},\qquad|\langle X_{st},v\rangle|\lesssim|t-s|^{\hat{ \gamma}\omega(v)}\]
for all \((s,u,t)\in[0,1]^{3}\) and words \(v\). Here, \(\hat{\gamma}=\text{min}_{a\in A}\,\gamma_{a}\) and for a word \(v=a_{1}\cdots a_{k}\) of length \(k\) we define
\[\omega(v)=\frac{\gamma_{a_{1}}+\ldots+\gamma_{a_{k}}}{\hat{\gamma}}=\frac{1}{ \hat{\gamma}}\sum_{a\in A}n_{a}(v)\gamma_{a} \tag{2.8}\]
where \(n_{a}(v)\) is the number of times the letter \(a\) appears in \(v\). The space of anisotropic \(\gamma\)-rough paths is denoted by \(\mathbf{ARP}^{\gamma}\). When the \(\gamma_{a}\) are all equal to a fixed \(\gamma\), one recovers the classical geometric rough paths.
As for branched rough paths, we perform a truncation and consider paths taking values in \(\mathcal{G}_{\mathcal{T}_{N},N}\). Elements of \(\mathcal{G}_{\mathcal{T}_{N},N}\) are characters over \(T_{N}(\mathcal{T}_{N})\), the linear span of the words \(v=\tau_{1}\otimes\ldots\otimes\tau_{n}\) built on the alphabet \(\mathcal{T}_{N}\) such that \(\sum_{i=1}^{n}|\tau_{i}|\leq N\).
The first approach to moving from trees into words is given by the Hairer-Kelly map \(\Psi_{\text{\tiny HK}}\) in the context of geometric rough paths in [32]. This map, first introduced in [32], is given in [4, Def. 4, Sec. 6] as the unique Hopf algebra morphism from \(\mathcal{H}\) to the shuffle Hopf algebra \((T(\mathcal{T}),\shuffle)\) obeying:
\[\Psi_{\mbox{\tiny HK}}=(\Psi_{\mbox{\tiny HK}}\otimes P_{\mbox{\tiny 1 }})\hat{\Delta}_{\mbox{\tiny BCK}}\]
where \(P_{\mathbf{1}}:=\mathrm{id}-\mathbf{1}\,\mathbf{1}^{*}\) is the augmentation projector. The following theorem, given in [40], established a correspondence between anisotropic rough paths and branched rough paths:
**Theorem 2.6**: _Let \(X\) be a branched \(\gamma\)-rough path. There exists an anisotropic geometric rough path \(\bar{X}\) indexed by words on the alphabet \(\mathcal{T}_{N}\), \(N=\lfloor 1/\gamma\rfloor\), with exponents \((\gamma_{\tau},\tau\in\mathcal{T}_{N})\), and such that \(\langle X,\tau\rangle=\langle\bar{X},\Psi_{\text{\tiny HK}}(\tau)\rangle\)._
**Remark 2.7**: The previous theorem relies on the Lyons-Victoir extension theorem given in [35] which is not canonical. The authors in [40] identified a transitive free action of the additive group \(\mathcal{G}^{\gamma}\) on **BRP\({}^{\gamma}\)**. The abelian group \(\mathcal{G}^{\gamma}\) is given by
\[\mathcal{G}^{\gamma}:=\{(g^{\tau})_{\tau\in\mathcal{T}_{N}}:\,g_{0}^{\tau}=0,\,g^{\tau}\in C^{\gamma_{\tau}}([0,1]),\,\forall\,\tau\in\mathcal{T},\,|\tau|\leq N\}.\]
Explicit expressions for \(g\) have been given in [14] for the BPHZ renormalisation at the level of rough paths introduced in [6]. Parametrisation in the context of regularity structures has been considered in [1].
Lastly, the approach most relevant to this work, given in [2], constructs a bijection between the two spaces \(\mathbf{BRP}^{\gamma}\) and \(\mathbf{ARP}^{\gamma}\). The main idea is to use an algebraic result from [23, 17]. We denote by \(\hat{\mathcal{H}}_{\text{\tiny GL}}\) (resp. \(\mathcal{H}_{\text{\tiny GL}}\)) the Grossman-Larson Hopf algebra defined on \(S(\mathcal{T}^{V})\) (resp. \(S(\mathcal{P}_{E}^{V})\)) equipped with the product \(\star_{0}\) (resp. \(\star\)) and the coproduct \(\Delta\). We recall this result of Chapoton and Foissy:
**Theorem 2.8**: _There exists a subspace \(\hat{\mathcal{B}}=\langle\tau_{1},\tau_{2},...\rangle\) of \(\mathcal{T}^{V}\) (resp. \(\mathcal{B}\) of \(\mathcal{P}_{E}^{V}\)) such that \(\hat{\mathcal{H}}_{\text{\tiny GL}}\) (resp. \(\mathcal{H}_{\text{\tiny GL}}\)) is isomorphic as a Hopf algebra to the tensor Hopf algebra \((T(\hat{\mathcal{B}}),\otimes,\Delta_{\shuffle})\) (resp. \((T(\mathcal{B}),\otimes,\Delta_{\shuffle})\)) which consists of the linear span of the set of words from the alphabet \(\hat{\mathcal{B}}\) (resp. \(\mathcal{B}\)), endowed with the tensor product and the shuffle coproduct._
We provide an outline here of the construction. First, one proves the existence of a set \(B=\{\tau_{1},\tau_{2},...\}\) that consists of a basis of primitive elements of the Hopf algebra \(\mathcal{H}_{\text{\tiny BCK}}\) belonging to \(\mathcal{P}_{E}^{V}\) such that every \(\tau\in\mathcal{H}_{\text{\tiny GL}}\) has a unique representation of the form:
\[\tau=\sum_{B}\lambda_{R}\tau_{r_{1}}\star\ldots\star\tau_{r_{n}} \tag{2.9}\]
where the sum is performed over finitely many multi-indices \(R=(r_{1},\ldots,r_{n})\). Then, one can exhibit an isomorphism \(\Psi_{\mbox{\tiny CF}}\) between the two Hopf algebras \(\mathcal{H}_{\mbox{\tiny GL}}\) and \(T(\mathcal{B})\) where \(\mathcal{B}\) is the linear span of \(B\) as follows:
\[\Psi_{\mbox{\tiny CF}}:\tau_{1}\star\ldots\star\tau_{r}\mapsto\tau_{1}\otimes \ldots\otimes\tau_{r}\]
where \(\tau_{1}\otimes\ldots\otimes\tau_{r}\in T(\mathcal{B})\). This will be the isomorphism that we will use in the next section. We will obtain an isomorphism of the deformed Grossman-Larson Hopf algebra with the tensor Hopf algebra \((T(\mathcal{B}),\otimes,\Delta_{\shuffle})\).
In the context of rough paths, one uses the isomorphism \(\hat{\Psi}_{\mbox{\tiny CF}}\) between the two spaces \(\mathcal{H}_{N}^{*}\) and \(T_{N}(\hat{\mathcal{B}}_{N})\) based on the basis \(\hat{\mathcal{B}}_{N}\) (see [2, Lemma 4.2]):
\[\hat{\Psi}_{\mbox{\tiny CF}}:\tau_{1}\star_{0}\ldots\star_{0}\tau_{r}\mapsto \tau_{1}\otimes\ldots\otimes\tau_{r}\]
where \(\tau_{1}\otimes\ldots\otimes\tau_{n}\in T_{N}(\hat{\mathcal{B}}_{N})\). Here \(\hat{\mathcal{B}}_{N}\) are elements of \(\hat{\mathcal{B}}\) with at most \(N\) nodes. One has from [2]
**Theorem 2.9**: _Let \(X\in\mathbf{BRP}^{\gamma}\), then \(\hat{X}:=\hat{\Psi}_{\mbox{\tiny CF}}(X)\in\mathbf{ARP}^{\gamma}\)._
In [14], the action of the renormalisation on this construction has been described. The family of renormalisation maps considered consists of the BPHZ renormalisation maps \(M\) (inspired by the BPHZ renormalisation of Feynman diagrams [41, 31, 13], which was used in the context of regularity structures [8, 18]) whose adjoints \(M^{*}\) are morphisms for the product \(\star_{0}\):
\[M^{*}(\tau\star_{0}\sigma)=M^{*}\tau\star_{0}M^{*}\sigma.\]
Then, one is able to define a renormalisation map \(\hat{M}^{*}\) on \(T_{N}(\hat{\mathcal{B}}_{N})\) that commutes with the isomorphism \(\hat{\Psi}_{\mbox{\tiny CF}}\) (see [14, Theorem 4.7]):
\[\hat{M}^{*}\hat{X}=\hat{M}^{*}\hat{\Psi}_{\mbox{\tiny CF}}(X)=\hat{\Psi}_{ \mbox{\tiny CF}}(M^{*}X).\]
BPHZ renormalisation maps in the context of rough paths have been first considered in [6] with some examples provided in [5].
## 3 An isomorphism for the deformed Grossman-Larson Hopf algebra
In this section, we introduce pre-Lie and multi-pre-Lie algebras, with the main example being the grafting product for decorated trees and its deformations given in [3, 12]. Then, we apply the Guin-Oudom procedure [26, 27] to the grafting product to derive the Grossman-Larson Hopf algebra \(\mathcal{H}_{\text{\tiny GL}}\), the graded dual of the Butcher-Connes-Kreimer Hopf algebra \(\mathcal{H}_{\text{\tiny BCK}}\). Using the functor given by Guin-Oudom, we are able to lift the isomorphism \(\Theta\) introduced in [12] (see (3.9)) at the level of the grafting products to an isomorphism \(\Phi\) for the deformed Grossman-Larson Hopf algebras (see Theorem 3.17). This allows us to state our main result, Theorem 3.19, which translates the Chapoton-Foissy isomorphism into the context of the deformed Grossman-Larson Hopf algebra: one just applies the isomorphism \(\Phi\) to the basis previously obtained in Theorem 2.8.
We begin this section by giving the definition of a pre-Lie algebra.
**Definition 3.1**: A pre-Lie algebra is an algebra (\(\mathscr{P}\), \(\curvearrowleft\)) over a field \(\mathbf{k}\) of characteristic \(0\), whose product satisfies the following relation for every \(x,y,z\in\mathscr{P}\)
\[x\curvearrowleft(y\curvearrowleft z)-(x\curvearrowleft y)\curvearrowleft z=y\curvearrowleft(x\curvearrowleft z)-(y\curvearrowleft x)\curvearrowleft z.\]
**Remark 3.2**: Note that every associative algebra is a pre-Lie algebra as in this case the associator vanishes and the left- and right-hand sides above are both equal to zero.
A pre-Lie algebra gives rise to a Lie algebra:
**Proposition 3.3**: _If \((E,\curvearrowleft)\) is a pre-Lie algebra, then the commutator \([x,y]=x\curvearrowleft y-y\curvearrowleft x\) is a Lie bracket._
**Remark 3.4**: Here is an equivalent definition: An algebra \((E,\curvearrowleft)\) over a field \(\mathbf{k}\) of characteristic \(0\), whose commutator is a Lie bracket and left multiplication by \(\curvearrowleft\) gives a representation of the commutator Lie algebra.
**Remark 3.5**: Not every Lie algebra comes from an associative algebra (with the commutator as its Lie bracket). For example, free Lie algebras do not arise from any associative algebra. They do, however, arise from free pre-Lie algebras, see [17]. It is interesting to study the implications of a Lie algebra \(L\) arising from a pre-Lie structure. An explicit recursive procedure, given by Guin and Oudom (see Theorem 3.13), for constructing an associative product on the symmetric space over \(L\) is one implication. The free cocommutative coalgebra endowed with this associative product turns out to be a Hopf algebra that is isomorphic to the universal enveloping algebra of \(L\). This can be seen as exploiting the pre-Lie structure to obtain extra information about the universal envelope of \(L\).
We also give the definition of a multi-pre-Lie algebra first introduced in [3]. Although a seemingly richer structure, all the information can be condensed into a single pre-Lie algebra. It is nonetheless a useful notion when describing certain families of products.
**Definition 3.6**: A multi-pre-Lie algebra indexed by a set \(E\) is a vector space \(\mathscr{P}\) over a field \(\mathbf{k}\) of characteristic \(0\), endowed with a family (\(\curvearrowleft^{\alpha}\))\({}_{\alpha\in E}\) of bilinear products such that for every \(x,y,z\in\mathscr{P}\)
\[x\curvearrowleft^{a}(y\curvearrowleft^{b}z)-(x\curvearrowleft^{a}y)\curvearrowleft^{b}z=y\curvearrowleft^{b}(x\curvearrowleft^{a}z)-(y\curvearrowleft^{b}x)\curvearrowleft^{a}z.\]
As it is shown below (see [24]), one can summarise all the data of a multi-pre-Lie algebra into a single pre-Lie algebra.
**Lemma 3.7**: _If \(\mathscr{P}\) is a multi-pre-Lie algebra over a field \(\mathbf{k}\) of characteristic \(0\) and indexed by a set \(E\), then \(\mathscr{P}\otimes\mathbf{k}E\) is a pre-Lie algebra when endowed with the product_
\[(x\otimes a)\curvearrowleft(y\otimes b)=(x\curvearrowleft^{a}y)\otimes b\]
_for any \(a,b\in E\) and for any \(x,y,z\in\mathscr{P}\)._
**Example 1**.: A family of pre-Lie products on \(\mathscr{T}_{E}^{V}\) is given by grafting by means of decorated edges, namely:
\[\sigma\curvearrowright^{a}\tau:=\sum_{v\in N_{\tau}}\sigma\curvearrowright_{v} ^{a}\tau, \tag{3.1}\]
where \(\sigma\) and \(\tau\) are two decorated rooted trees and where \(\sigma\curvearrowright_{v}^{a}\tau\) is obtained by grafting the tree \(\sigma\) on the tree \(\tau\) at vertex \(v\) by means of a new edge decorated by \(a\in E\).
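The pre-Lie identity for the grafting product can be checked mechanically on small trees. The Python sketch below (ours; it drops all decorations and encodes a rooted tree as the sorted tuple of its subtrees) verifies that the associator of \(\curvearrowright\) is symmetric in its first two arguments:

```python
# Rooted trees as sorted tuples of subtrees; () is the single vertex.
def grafts(sigma, tau):
    """All trees obtained by grafting sigma onto one vertex of tau
    (Eq. (3.1), undecorated case), with multiplicities."""
    out = [tuple(sorted(tau + (sigma,)))]               # graft at the root
    for i, child in enumerate(tau):
        for g in grafts(sigma, child):
            out.append(tuple(sorted(tau[:i] + (g,) + tau[i + 1:])))
    return out

def pre_lie(x, y):
    """Bilinear extension of grafting to combinations {tree: coeff}."""
    out = {}
    for s, cs in x.items():
        for t, ct in y.items():
            for r in grafts(s, t):
                out[r] = out.get(r, 0) + cs * ct
    return out

def associator(x, y, z):
    a, b = pre_lie(x, pre_lie(y, z)), pre_lie(pre_lie(x, y), z)
    keys = set(a) | set(b)
    return {t: c for t in keys if (c := a.get(t, 0) - b.get(t, 0)) != 0}

dot, cherry = {(): 1}, {((), ()): 1}
assert associator(dot, cherry, cherry) == associator(cherry, dot, cherry)
```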
Another example is a deformed version of the above family of grafting products.
**Example 2**.: We suppose here that the vertices are decorated by elements of a monoid \(\Omega\), and we take \(\Omega=\mathbb{N}^{d+1}\), endowed with componentwise addition. A grading is given by
\[|\textbf{n}|_{\mathfrak{s}}:=s_{0}n_{0}+\dots+s_{d}n_{d}\]
where \(\mathfrak{s}:=(s_{0},\dots,s_{d})\in\mathbb{N}_{>0}^{d+1}\) is fixed. We suppose that \(V=S\times\mathbb{N}^{d+1}\) and \(E=S^{\prime}\times\mathbb{N}^{d+1}\) where \(S\) and \(S^{\prime}\) are two finite sets. Then \(\Omega\) acts freely on both \(E\) and \(V\) in a graded way. We denote by \(+\) the addition in \(\Omega\) as well as both actions of \(\Omega\) on \(E\) and \(V\). A family of deformed grafting products on \(\mathscr{T}_{E}^{V}\) is defined as follows:
\[\sigma\widehat{\curvearrowright^{a}}\tau:=\sum_{v\in N_{\tau}}\sum_{\ell\in\mathbb{N}^{d+1}}\binom{\mathbf{n}_{v}}{\ell}\sigma\curvearrowright_{v}^{a-\ell}(\uparrow_{v}^{-\ell}\tau). \tag{3.2}\]
Here \(\mathbf{n}_{v}\in\mathbb{N}^{d+1}\) denotes the second component of the decoration at the vertex \(v\). The generic term is self-explanatory if there exists a (unique) pair \((b,\alpha)\in E\times V\) such that \(a=\ell+b\) and \(\mathbf{n}_{v}=\ell+\alpha\). It vanishes by convention if this condition is not satisfied. The operators \(\uparrow_{v}^{\omega}\) act by adding \(\omega\) to the decoration \(\mathbf{n}_{v}\). We define the _grading_ of a tree in \(\mathscr{T}_{E}^{V}\) by the sum of the gradings of its edges given by \(|\cdot|_{\text{grad}}\):
\[|\tau|_{\text{grad}}:=\sum_{e\in E_{\tau}}|\mathfrak{e}(e)|_{\mathfrak{s}}. \tag{3.3}\]
where \(\mathfrak{e}(e)\) is the decoration of the edge \(e\). Then, \(\widehat{\curvearrowright^{a}}\) is a deformation of \(\curvearrowright^{a}\) in the sense that:
\[\sigma\widehat{\curvearrowright^{a}}\tau=\sigma\curvearrowright^{a}\tau+ \text{ lower grading terms}.\]
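As a worked example (ours) of formula (3.2): if \(\tau=\bullet^{e_{i}}\) is a single vertex with decoration \(\mathbf{n}_{v}=e_{i}\), then \(\binom{e_{i}}{\ell}\) vanishes unless \(\ell\in\{0,e_{i}\}\), and the deformed grafting reduces to
\[\sigma\widehat{\curvearrowright^{a}}\bullet^{e_{i}}=\sigma\curvearrowright^{a}\bullet^{e_{i}}+\sigma\curvearrowright^{a-e_{i}}\bullet^{0},\]
where the second term has strictly lower grading and vanishes by convention when \(a-e_{i}\) does not belong to \(E\).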
We now proceed to give the definitions of pre-Lie products that will be of interest to us. We first define the grafting product, a pre-Lie product on planted trees with edge and vertex decorations that gives rise to the free multi-pre-Lie algebra equipped with a family of pre-Lie products over a prescribed set of generators. We will then define a deformed version of this grafting product that is of greater interest to us, which turns out to be isomorphic to the original grafting product.
**Definition 3.8** (Grafting product): The grafting product on the space \(\mathcal{P}_{E}^{V}\) is defined as follows on planted trees and is then extended by linearity:

\[\mathcal{J}_{a}(\sigma)\curvearrowright\mathcal{J}_{b}(\tau)=\mathcal{J}_{b}(\sigma\curvearrowright^{a}\tau) \tag{3.4}\]

for any \(a,b\in E\) and any \(\sigma,\tau\in\mathcal{T}_{E}^{V}\).
**Definition 3.9** (Deformed grafting product): The deformed grafting product on the space \(\mathcal{G}_{E}^{V}\) is defined as follows for planted trees and is then extended by linearity:
\[\mathcal{F}_{a}(\sigma)\widehat{\curvearrowright}\mathcal{F}_{b}(\tau)= \mathcal{F}_{b}(\sigma\widehat{\curvearrowright}^{a}\tau) \tag{3.5}\]
for any \(a,b\in E\) and any \(\tau,\sigma\in\mathcal{G}_{E}^{V}\).
**Remark 3.10**: The grafting and deformed grafting products are clearly pre-Lie products under the identification of the space \(\mathcal{G}_{E}^{V}\) with \(\mathscr{T}_{E}^{V}\otimes\mathbf{k}E\) obtained by identifying an element \(\mathcal{F}_{a}(\tau)\) with \(\tau\otimes a\).
We will now introduce a deformed version of the original Butcher-Connes-Kreimer coproduct, which we call the deformed Butcher-Connes-Kreimer (DBCK) coproduct and denote by \(\Delta_{\mbox{\tiny{DBCK}}}\).
**Definition 3.11**: We suppose that the set \(V\) of vertex decorations coincides with the commutative monoid \(\Omega\). Then, the deformed Butcher-Connes-Kreimer coproduct is defined by the maps \(\Delta_{\mbox{\tiny{DBCK}}}:S(\mathcal{G}_{E}^{V})\to S(\mathcal{G}_{E}^{V}) \otimes S(\mathcal{G}_{E}^{V})\) and \(\bar{\Delta}_{\mbox{\tiny{DBCK}}}:\mathcal{G}_{E}^{V}\to S(\mathcal{G}_{E}^{V })\otimes\mathcal{G}_{E}^{V}\) defined recursively by:
\[\Delta_{\mbox{\tiny{DBCK}}}\mathcal{F}_{a}(\tau) =(\mbox{id}\otimes\mathcal{F}_{a})\bar{\Delta}_{\mbox{\tiny{DBCK }}}\tau+\mathcal{F}_{a}(\tau)\otimes\mathbf{1},\quad\bar{\Delta}_{\mbox{\tiny {DBCK}}}X^{k}=\mathbf{1}\otimes X^{k} \tag{3.6}\] \[\bar{\Delta}_{\mbox{\tiny{DBCK}}}\mathcal{F}_{a}(\tau) =(\mbox{id}\otimes\mathcal{F}_{a})\bar{\Delta}_{\mbox{\tiny{DBCK }}}\tau+\sum_{\ell\in\mathbb{N}^{d+1}}\frac{1}{\ell!}\mathcal{F}_{a+\ell}( \tau)\otimes X^{\ell}.\]
The map \(\Delta_{\mbox{\tiny{DBCK}}}\) is extended using the product of \(S(\mathcal{G}_{E}^{V})\). We use the tree product for extending the map \(\bar{\Delta}_{\mbox{\tiny{DBCK}}}\). Here the tree product is the merging-root product: given two decorated trees, their tree product is the decorated tree obtained by identifying the roots of the two trees and summing the decorations of the previous roots on the new root. The infinite sum over \(\ell\) makes sense via a bigrading introduced in [8, Section 2.3].
**Definition 3.12**: The Deformed Butcher-Connes-Kreimer (DBCK) Hopf algebra \(\mathcal{H}_{\mbox{\tiny{DBCK}}}\) is the graded bialgebra on \(\mathcal{F}_{E}^{V}=S(\mathcal{G}_{E}^{V})\) equipped with the forest product (i.e. the product of the symmetric algebra) and the \(\Delta_{\mbox{\tiny{DBCK}}}\) coproduct. As a graded, connected bialgebra, it is also a Hopf algebra.
For a Hopf algebra \(\mathcal{H}\), we shall denote its space of primitive elements by \(Prim(\mathcal{H})\). Note that \(Prim(\mathcal{H})\) is a linear subspace of \(\mathcal{H}\); equipped with the commutator Lie bracket \([h_{1},h_{2}]=h_{1}h_{2}-h_{2}h_{1}\), it is a Lie algebra.
When \(\mathcal{H}\) is a cocommutative graded connected Hopf algebra with finite-dimensional graded components, the Milnor-Moore theorem tells us that \(\mathcal{H}\cong U(Prim(\mathcal{H}))\), i.e. that \(\mathcal{H}\) is isomorphic as a Hopf algebra to the universal enveloping algebra over its primitives. When \(\mathcal{H}\) is cofree-cocommutative and right-sided, the primitive elements of \(\mathcal{H}\) admit a finer structure, that of a pre-Lie algebra. An explicit description of any Hopf algebra obeying these conditions by means of its underlying pre-Lie algebra is given by the Guin-Oudom procedure [26, 27], which gives a recursive construction of the algebra's associative product on the symmetric algebra \(S(Prim(\mathcal{H}))\) over the primitives:
**Theorem 3.13** (Guin-Oudom): _Let \((\mathcal{P},\curvearrowright)\) be a pre-Lie algebra and let \(S(\mathcal{P})\) denote the symmetric space over the underlying vector space. For every \(u,v,w\in S(\mathcal{P})\), \(x,y\in\mathcal{P}\) we start by defining a product \(\bullet\) on \(S(\mathcal{P})\) as follows:_
\[\begin{split}\mathbf{1}\bullet w&=w,\quad u \bullet\mathbf{1}=\mathbf{1}^{\star}(u),\\ w\bullet uv&=\sum_{(w)}(w^{(1)}\bullet u)(w^{(2) }\bullet v),\\ xv\bullet y&=x\ \curvearrowright(v\bullet y)-(x\ \curvearrowright v)\bullet y\end{split} \tag{3.7}\]
_where \(\mathbf{1}^{\star}\) stands for the counit, the summation over \((w)\) is shorthand for summing over the terms of the expansion for the shuffle coproduct \(\Delta_{\shuffle}\) and where \(\curvearrowright\) is extended to \(\mathcal{P}\otimes S(\mathcal{P})\) in the following way:_
\[x\ \curvearrowright x_{1}\ldots x_{k}=\sum_{i=1}^{k}x_{1}\ldots(x\ \curvearrowright x_{i})\ldots x_{k},\]
_with \(x_{i}\in\mathcal{P}\). We now define the associative product \(\star\) as follows:_
\[w\star v=\sum_{(w)}\left((w^{(1)}\bullet v)w^{(2)}\right) \tag{3.8}\]
_Then, the associative product \(\star\) on \(S(\mathcal{P})\) is such that the Hopf algebra \((S(\mathcal{P}),\star,\Delta_{\shuffle})\) is isomorphic to the universal enveloping algebra \(U(\mathcal{P})\) of the Lie algebra associated to \(\mathcal{P}\), equipped with its standard Hopf-algebraic structure. Furthermore, the induced mapping from the category \(\mathbf{PreLie}\) to the category \(\mathbf{Hopf}\) of Hopf algebras is a functor. A morphism \(\varphi\) in \(\mathbf{PreLie}\) is mapped to \(S(\varphi)\) where \(S\) is the symmetric space functor._
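In lowest degrees the recursion (3.7)-(3.8) can be unwound by hand: for \(x,y\in\mathcal{P}\), taking \(v=\mathbf{1}\) in the last line of (3.7) gives \(x\bullet y=x\curvearrowright y\) (with the convention \(x\curvearrowright\mathbf{1}=0\)), and since \(\Delta_{\shuffle}x=x\otimes\mathbf{1}+\mathbf{1}\otimes x\) one finds
\[x\star y=xy+x\curvearrowright y,\]
the classical formula expressing the Grossman-Larson product as the symmetric product corrected by the pre-Lie product.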
Given a pre-Lie algebra \(\mathcal{P}\), the Guin-Oudom procedure thus imposes a Hopf algebra structure on \(S(\mathcal{P})\) by using the pre-Lie product to obtain an associative product on \(S(\mathcal{P})\); the result is isomorphic to \(U(\mathcal{P})\). In fact, one obtains a functor from the category **PreLie** to the category **Hopf** of Hopf algebras, see the proof of Proposition 3.1 in [27]. Furthermore, Loday and Ronco prove in [34] that this mapping is an equivalence of categories from **PreLie** to a certain category **CHA** of cofree-cocommutative right-sided combinatorial Hopf algebras. For example, under this correspondence, the free pre-Lie algebra on one generator gives rise to the Grossman-Larson Hopf algebra.
**Theorem 3.14**: _The grafting pre-Lie algebra with product \(\curvearrowright\) and edge decorations from the set \(E\) is the free pre-Lie algebra over \(E\). Furthermore the product obtained by the Guin-Oudom construction above is the Grossman-Larson product and therefore \(\mathcal{H}_{\text{GL}}=(S(\mathcal{P}_{E}^{V}),\star,\Delta_{\shuffle})\) is isomorphic to the universal enveloping algebra \(U(\mathfrak{h})\), where \(\mathfrak{h}\) is the Lie algebra induced by the grafting product \(\curvearrowright\)._
We also have analogous results for the deformed grafting product:
**Theorem 3.15**: _The Hopf algebra \(\mathcal{H}_{\text{DGL}}=(S(\mathcal{P}_{E}^{V}),\tilde{\star},\Delta_{\shuffle})\), where \(\tilde{\star}\) is obtained from \(\widehat{\curvearrowright}\) via the Guin-Oudom construction given above, is isomorphic to the universal enveloping algebra \(U(\mathfrak{g})\), where \(\mathfrak{g}\) is the commutator Lie algebra induced by \(\widehat{\curvearrowright}\)._
We shall call \(\mathcal{H}_{\text{DGL}}\) the deformed Grossman-Larson Hopf algebra. The name is justified by the following theorem [12, Theorem 3.4]:
**Theorem 3.16**: _The product \(\tilde{\star}\) is dual to the deformed Butcher-Connes-Kreimer coproduct \(\Delta_{\text{DBCK}}\)._
In [12, Theorem 2.7], the authors prove that there exists an isomorphism \(\Theta\) between the pre-Lie algebra \(E_{\text{GL}}\) associated with the Grossman-Larson Hopf algebra and the pre-Lie algebra \(E_{\text{DGL}}\) associated with the deformed Grossman-Larson Hopf algebra. One can describe recursively the isomorphism \(\Theta\) by
\[\begin{split}\Theta\big(\mathcal{J}_{a}(X^{k})\big)&=\mathcal{J}_{a}(X^{k})\\ \Theta\Big(\mathcal{J}_{a}\big(X^{k}\prod_{i=1}^{n}\mathcal{J}_{a_{i}}(\tau_{i})\big)\Big)&=\Big(\prod_{i=1}^{n}\Theta(\mathcal{J}_{a_{i}}(\tau_{i}))\Big)\,\widehat{\curvearrowright}\,\mathcal{J}_{a}(X^{k}).\end{split} \tag{3.9}\]
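As a sanity check of (3.9), consider the planted tree \(\mathcal{J}_{a}(X^{k}\mathcal{J}_{b}(X^{m}))\), where \(X^{k}\) stands for the single vertex with polynomial decoration \(k\). Combining (3.9) with (3.5) and (3.2), one computes
\[\Theta\big(\mathcal{J}_{a}(X^{k}\mathcal{J}_{b}(X^{m}))\big)=\mathcal{J}_{b}(X^{m})\,\widehat{\curvearrowright}\,\mathcal{J}_{a}(X^{k})=\sum_{\ell\in\mathbb{N}^{d+1}}\binom{k}{\ell}\,\mathcal{J}_{a}\big(X^{k-\ell}\mathcal{J}_{b-\ell}(X^{m})\big),\]
so the leading term \(\ell=0\) reproduces the tree itself and the deformation contributes only terms of strictly lower grading.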
Using the isomorphism \(\Theta\), together with Theorem 3.13, we can now prove:
**Theorem 3.17**: _The deformed Grossman-Larson Hopf algebra \(\mathcal{H}_{\text{DGL}}\) is isomorphic as a Hopf algebra to the original Grossman-Larson Hopf algebra \(\mathcal{H}_{\text{GL}}\)._
_Proof._ Let \(\Theta\) denote the isomorphism between the pre-Lie Algebras associated to \(\mathcal{H}_{\text{GL}}\) and \(\mathcal{H}_{\text{DGL}}\). We let \(G:\textbf{PreLie}\rightarrow\textbf{Hopf}\) denote the Guin-Oudom functor. Then, \(\Phi:=G(\Theta)\) is an isomorphism between \(\mathcal{H}_{\text{GL}}\) and \(\mathcal{H}_{\text{DGL}}\) in the category of Hopf algebras. \(\square\)
Then, our previously defined isomorphism \(\Phi:\mathcal{H}_{\text{GL}}\rightarrow\mathcal{H}_{\text{DGL}}\) induces a linear isomorphism of the space of primitive elements for \(\mathcal{H}_{\text{BCK}}\), seen as a subspace of \(\mathcal{H}_{\text{GL}}\), onto its image. This allows us to prove the following proposition:
**Proposition 3.18**: _Let \(\sigma\in\mathscr{H}_{\text{DGL}}\). Then, there exists a subspace \(\tilde{\mathscr{B}}=\langle\sigma_{1},\sigma_{2},...\rangle\subseteq\mathscr{H}_ {\text{DGL}}\) such that_
\[\sigma=\sum_{R}\lambda_{R}\,\sigma_{r_{1}}\tilde{\star}...\tilde{\star}\, \sigma_{r_{n}}\]
_and this decomposition is unique._
_Proof._ We pick the unique \(\tau\) such that \(\sigma=\Phi(\tau)\). We know that
\[\tau=\sum_{R}\lambda_{R}\tau_{r_{1}}\star...\star\tau_{r_{n}}\]
for some Butcher-Connes-Kreimer primitive elements \(\tau_{r_{i}}\) belonging to \(B\). Hence,
\[\sigma=\Phi(\tau)=\sum_{R}\lambda_{R}\Phi(\tau_{r_{1}})\,\tilde{\star}...\tilde {\star}\,\Phi(\tau_{r_{n}})\]
We then pick \(\sigma_{r_{i}}=\Phi(\tau_{r_{i}})\). Taking \(\tilde{\mathscr{B}}=\Phi(\mathscr{B})\) completes the proof.
By the previous proposition, given the basis \(B\), one can use the basis \(\Phi(B)=\{\Phi(\tau_{1}),\Phi(\tau_{2}),...\}\) for \(\tilde{\star}\). One can then exhibit an isomorphism \(\Psi_{\Phi}\) between the two spaces \(S(\mathscr{P}_{E}^{V})\) and \(T(\Phi(\mathscr{B}))\), built from the basis \(\Phi(\mathscr{B})\):
\[\Psi_{\Phi}:\Phi(\tau_{1})\,\tilde{\star}\ldots\tilde{\star}\,\Phi(\tau_{r}) \mapsto\Phi(\tau_{1})\otimes\ldots\otimes\Phi(\tau_{r})\]
where \(\Phi(\tau_{1})\otimes\ldots\otimes\Phi(\tau_{n})\in T(\Phi(\mathscr{B}))\). As a corollary, by composition of isomorphisms, we obtain:
**Theorem 3.19**: _There exists a subspace \(\mathscr{B}=\langle\tau_{1},\tau_{2},...\rangle\) of \(\mathscr{P}_{E}^{V}\) such that \(\mathscr{H}_{\text{DGL}}\) is isomorphic as a Hopf algebra to the tensor Hopf algebra \((T(\mathscr{B}),\otimes,\Delta_{\shuffle})\) endowed with the tensor product and the shuffle coproduct._
## 4 Extension of the Chapoton-Foissy isomorphism
In this section, we prove our main result, Theorem 4.12, which asserts that the \(\mathscr{H}_{2}\) Hopf algebra, used in the context of regularity structures for recentering the ensuing Taylor-type expansions around different points, is actually isomorphic to a simple quotient of the tensor Hopf algebra. This quotient comes from the Lie bracket between planted trees and extra elements \(X_{i}\) which are parts of \(\mathscr{H}_{2}\) but were absent in the previous section. The main difficulty is to check that the basis given by Theorem 3.19 is stable under this quotient. This is proved in Proposition 4.11 and relies on properties of the two derivations \(\uparrow^{i}\) and \(\mathscr{D}^{i}\). They commute with the isomorphism \(\Psi\), as shown in Proposition 4.8 and leave invariant the primitives of the Butcher-Connes-Kreimer Hopf algebra, see Corollary 4.10. The formulation of the main result and its proof rely strongly on the post-Lie algebras introduced in [11]
for describing \(\mathcal{H}_{2}\). Finally, we give a non-trivial extension of the Chapoton-Foissy isomorphism. This is a consequence of having better understood the two main algebraic components at play in the context of regularity structures currently used, which are the deformation and the post-Lie structure.
We shall begin by introducing the concept of a post-Lie algebra, which generalizes that of a pre-Lie algebra. We also describe the recursive construction of an associative product on the universal envelope of a post-Lie algebra that directly generalizes the construction of Guin and Oudom. It was first introduced in [21].
**Definition 4.1**: A post-Lie algebra is a Lie algebra (\(\mathfrak{g}\), \([.,.]\)) equipped with a bilinear product \(\triangleright\) satisfying the following identities:
\[\begin{split} x\triangleright[y,z]&=[x \triangleright y,z]+[y,x\triangleright z]\\ [x,y]\triangleright z&=a_{\triangleright}(x,y,z)-a_{ \triangleright}(y,x,z)\end{split} \tag{4.1}\]
with \(x,y,z\in\mathfrak{g}\), where the associator \(a_{\triangleright}(x,y,z)\) is given by:
\[a_{\triangleright}(x,y,z)=x\triangleright(y\triangleright z)-(x\triangleright y) \triangleright z.\]
When (\(\mathfrak{g}\), \([.,.]\)) is the abelian Lie algebra, we obtain the notion of a pre-Lie algebra. One can define a new Lie bracket \([[.,.]]\) given by:
\[[[x,y]]=[x,y]+x\triangleright y-y\triangleright x. \tag{4.2}\]
The post-Lie product \(\triangleright\) can be extended to a product on the universal enveloping algebra \(U(\mathfrak{g})\) by first defining it on \(\mathfrak{g}\otimes U(\mathfrak{g})\):
\[x\triangleright\mathbf{1}=0,\quad x\triangleright y_{1}...y_{n}=\sum_{i=1}^{n}y_{1}...(x\triangleright y_{i})...y_{n}.\]
and then extending it to \(U(\mathfrak{g})\otimes U(\mathfrak{g})\) by defining:
\[\begin{split}\mathbf{1}\triangleright A&=A,\quad xA \triangleright y=x\triangleright(A\triangleright y)-(x\triangleright A) \triangleright y,\\ A\triangleright BC&=\sum_{(A)}(A^{(1)}\triangleright B )(A^{(2)}\triangleright C).\end{split}\]
where \(A,B,C\in U(\mathfrak{g})\) and \(x,y\in\mathfrak{g}\). Here, the summation over \((A)\) corresponds to the shuffle coproduct. One then defines an associative product \(*\) on \(U(\mathfrak{g})\), the universal enveloping algebra of \(\mathfrak{g}\):
\[A*B=\sum_{(A)}A^{(1)}(A^{(2)}\triangleright B). \tag{4.3}\]
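In lowest degree the construction mirrors the pre-Lie case: for \(x,y\in\mathfrak{g}\), the coproduct \(\Delta x=x\otimes\mathbf{1}+\mathbf{1}\otimes x\) in (4.3) yields
\[x*y=xy+x\triangleright y,\qquad x*y-y*x=[x,y]+x\triangleright y-y\triangleright x=[[x,y]],\]
so the commutator of \(*\) restricted to \(\mathfrak{g}\) is exactly the bracket \([[.,.]]\) of (4.2), consistent with the theorem below.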
Then, a result generalizing that of Guin and Oudom allows us to exploit the underlying post-Lie structure on \(\mathfrak{g}\) in order to gain additional insight into the structure of \(U(\mathfrak{g})\). This is formalised in the following theorem:
**Theorem 4.2**: _The Hopf algebra \((U(\mathfrak{g}),\ast,\Delta)\) is isomorphic to the enveloping algebra \(U(\bar{\mathfrak{g}})\) where \(\bar{\mathfrak{g}}\) is the Lie algebra equipped with the Lie bracket \([[.,.]]\)._
This result has been used in [11], in the context of regularity structures, in order to show that the \(\star_{2}\) product, dual to the \(\Delta_{2}\) coproduct appearing in [12] and introduced in [30, 8], comes directly from a post-Lie product by applying the above procedure. Below, we briefly recall this. We define the following spaces:
\[\mathscr{V} =\Big{\langle}\{\mathcal{F}_{a}(\tau),\,a\in\mathbb{N}^{d+1},\, \tau\in\mathscr{F}_{E}^{V}\}\cup\{X_{i}\}_{i=0,...,d}\Big{\rangle}_{\mathbb{R}},\] \[\mathscr{\widehat{V}} =\Big{\langle}\mathcal{F}_{a}(\tau),\,a\in\mathbb{N}^{d+1},\, \tau\in\mathscr{F}_{E}^{V}\Big{\rangle}_{\mathbb{R}}.\]
We denote by \(\uparrow_{v}^{k}\) the operator acting on decorated trees by adding \(k\) to the decoration of the node \(v\). We then define, for a tree \(\tau\in\mathscr{F}_{E}^{V}\) the operator \(\uparrow^{i}\) as follows:
\[\uparrow^{i}\tau=\sum_{v\in N_{\tau}}\uparrow_{v}^{e_{i}}\tau.\]
This operator acts as a derivation on the multi-pre-Lie algebra of grafting products in the sense that:
\[\uparrow^{i}(\sigma\curvearrowright^{a}\tau)=(\uparrow^{i}\sigma)\curvearrowright ^{a}\tau+\sigma\curvearrowright^{a}(\uparrow^{i}\tau). \tag{4.4}\]
The derivation property (4.4) is not preserved under the deformation. One has the following identity similar to [11, Proposition 4.4].
\[\uparrow^{i}(\sigma\widehat{\curvearrowright^{a}}\tau)=(\uparrow^{i}\sigma) \widehat{\curvearrowright^{a}}\tau+\sigma\widehat{\curvearrowright^{a}}( \uparrow^{i}\tau)-\sigma\widehat{\curvearrowright^{a-e_{i}}}\tau, \tag{4.5}\]
for all decorated trees \(\sigma,\tau\) and \(a\in\mathbb{N}^{d+1}\), \(i\in\{0,...,d\}\). Looking at the above formula, one observes that the pair of operators \(\tau\mapsto\sigma\widehat{\curvearrowright^{a}}\tau\) and \(\tau\mapsto\uparrow^{i}\tau\) does not satisfy the commutation relation satisfied by the operators \(\tau\mapsto\sigma\curvearrowright^{a}\tau\) and \(\tau\mapsto\uparrow^{i}\tau\). The non-commutation relation (4.5) motivates the introduction of a Lie bracket, together with a product that is a derivation for that bracket, encoding these relations in the form of a post-Lie algebra. We begin by introducing a product \(\widehat{\triangleright}\) on \(\mathscr{V}\):
\[X_{i}\widehat{\triangleright}\,\mathcal{F}_{a}(\tau) =\mathcal{F}_{a}(\uparrow^{i}\tau),\quad\mathcal{F}_{a}(\tau)\widehat{\triangleright}\,X_{i}=X_{i}\widehat{\triangleright}\,X_{j}=0 \tag{4.6}\] \[\mathcal{F}_{a}(\sigma)\widehat{\triangleright}\,\mathcal{F}_{b}(\tau) =\mathcal{F}_{a}(\sigma)\,\widehat{\curvearrowright}\,\mathcal{F}_{b}(\tau).\]
In the sequel, we will use the notation \(\uparrow^{i}\mathcal{F}_{a}(\tau)\) for \(\mathcal{F}_{a}(\uparrow^{i}\tau)\). We now proceed to define the appropriate Lie bracket, motivated by (4.5):
**Definition 4.3**: We define the Lie bracket on \(\mathscr{V}\) as \([x,y]_{0}=0\) for \(x,y\in\mathscr{\widehat{V}}\), \([x,y]_{0}=0\) for \(x,y\in\langle\,X_{i}\,\rangle_{\mathbb{R}}\) and as
\[[\mathcal{F}_{a}(\tau),X_{i}]_{0}=\mathcal{F}_{a-e_{i}}(\tau) \tag{4.7}\]
With these definitions at hand, we have the following theorem (see [11, Theorem 4.4]):
**Theorem 4.4**: _The triple \((\mathcal{V},[.,.]_{0},\widehat{\triangleright})\) is a post-Lie algebra._
The bracket induced by the post-Lie algebra encodes all the (non-)commutativity relations between operators acting on decorated trees. However, most of these actually commute with one another, forming a pre-Lie algebra that lives inside the Lie algebra \(\widehat{\mathscr{V}}\). The extra post-Lie structure allows one to, roughly speaking, split the bracket into a commutative and a non-commutative part. Hence the non-commutativity relations are actually encoded more succinctly by the Lie bracket \([.,.]_{0}\).
We denote by \(U(\mathcal{V}_{0})\) the enveloping algebra for the Lie bracket \([.,.]_{0}\) and by \(U(\mathcal{V})\) the enveloping algebra for the Lie bracket \([[.,.]]\). We also let \(*\) be the product obtained by the generalization of the Guin-Oudom procedure given in (4.3). As a direct application of Theorem 4.2, one gets
**Theorem 4.5**: _The Hopf algebra \(U(\mathcal{V})\) is isomorphic to the Hopf algebra \((U(\mathcal{V}_{0}),*,\Delta)\)._
Then, the main result of [11] is
**Theorem 4.6**: _The Hopf algebra \((U(\mathcal{V}_{0}),*,\Delta)\) is isomorphic to the Hopf algebra \(\mathcal{H}_{2}=(\mathcal{G}_{E}^{V},\star_{2},\Delta)\) as presented in [12]._
**Remark 4.7**: An explicit formula for the \(\star_{2}\) product for \(\sigma=X^{k}\prod_{i\in I}\mathcal{F}_{a_{i}}(\sigma_{i})\) and \(\tau\in\mathcal{G}_{E}^{V}\) is given by
\[\mathcal{F}_{b}(\sigma\star_{2}\tau):=\tilde{\uparrow}_{N_{\tau}}^{k}\left(\prod_{i\in I}\mathcal{F}_{a_{i}}(\sigma_{i})\,\widehat{\curvearrowright}\,\mathcal{F}_{b}(\tau)\right),\quad\tilde{\uparrow}_{N_{\tau}}^{k}=\sum_{k=\sum_{v\in N_{\tau}}k_{v}}\,\prod_{v\in N_{\tau}}\uparrow_{v}^{k_{v}}\]
We shall now decompose trees of the form \(X^{k}\prod_{i}\mathcal{F}_{a_{i}}(\tau_{i})\) with decoration \(k\) at the root. So far, we have been successful in doing this for trees with no root decoration. For these terms, we will need to utilize the underlying post-Lie structure and the fact that \(X_{i}\) does not commute with any term of the form \(\mathcal{F}_{a}(\tau)\). Instead one has:
\[X_{i}\star_{2}\mathcal{F}_{a}(\tau)-\mathcal{F}_{a}(\tau)\star_{2}X_{i}=\uparrow^{i}\mathcal{F}_{a}(\tau)-\mathcal{F}_{a-e_{i}}(\tau)\]
where \(\star_{2}\) is the product constructed from the post-Lie product. The restriction of this product on the space spanned by planted trees coincides with \(\tilde{\star}\). What we obtain will then be an isomorphism with a space of words quotiented by the following relation:
\[X_{i}\otimes\mathcal{F}_{a}(\tau)-\mathcal{F}_{a}(\tau)\otimes X_{i}=\uparrow^{i}\mathcal{F}_{a}(\tau)-\mathcal{F}_{a-e_{i}}(\tau)\]
where now the trees with a single node are treated as letters. Let us explain how this works for a decorated tree of the form \(X^{k}\prod_{i=1}^{n}\mathcal{J}_{a_{i}}(\tau_{i})\) when one wants to decompose the following terms:
\[X^{k}\prod_{i=1}^{n}\mathcal{J}_{a_{i}}(\tau_{i})\star_{2}\tau.\]
We begin by making the following remarks that shall prove useful in what follows:
* By choosing a suitable ordering in the Poincaré-Birkhoff-Witt theorem, we see that the set of elements of the form \[\prod_{i=1}^{n}\mathcal{J}_{a_{i}}(\tau_{i})X^{k},\] where the forest \(\mathcal{J}_{a_{1}}(\tau_{1})\cdots\mathcal{J}_{a_{n}}(\tau_{n})\) ranges over all forests of planted trees and \(k\in\mathbb{N}^{d+1}\), is a basis for \(U(\mathcal{V}_{0})\).
* The operator \(\tau\mapsto\prod_{i=1}^{n}\mathcal{J}_{a_{i}}(\tau_{i})X^{k}\star_{2}\tau\) is equal to the operator \(\tau\mapsto\prod_{i=1}^{n}\mathcal{J}_{a_{i}}(\tau_{i})\star_{2}X^{k}\star_{2}\tau\).

We introduce a second derivation \(\mathscr{D}^{i}\) defined by \[\mathscr{D}^{i}\tau=\sum_{e\in E_{\tau}}\mathscr{D}^{i}_{e}\tau\] where \(\mathscr{D}^{i}_{e}\) adds \(-e_{i}\) to the decoration of the edge \(e\) if possible; otherwise, it is equal to zero.
**Proposition 4.8**: _For every \(\tau\in\mathcal{P}^{V}_{E}\), one has:_
\[\uparrow^{i}\Phi(\tau)=\Phi(\uparrow^{i}\tau)-\Phi(\mathscr{D}^{i}\tau)\]
Proof: We first consider a decorated tree \(\tau\) of the form
\[\tau=\tau_{1}\curvearrowright^{b}\mathcal{J}_{a}(\tau_{2})\]
then one has
\[\begin{split}\uparrow^{i}\Phi(\tau)=&\uparrow^{i} \left(\Phi(\tau_{1})\,\widehat{\curvearrowright^{b}}\,\Phi(\mathcal{J}_{a}( \tau_{2}))\right)\\ =&\left(\uparrow^{i}\Phi(\tau_{1})\right)\,\widehat {\curvearrowright^{b}}\,\Phi(\mathcal{J}_{a}(\tau_{2}))+\Phi(\tau_{1})\, \widehat{\curvearrowright^{b}}\left(\uparrow^{i}\Phi(\mathcal{J}_{a}(\tau_{2 }))\right)\\ -&\Phi(\tau_{1})\,\widehat{\curvearrowright^{b-e_{ i}}}\,\Phi(\mathcal{J}_{a}(\tau_{2}))\end{split}\]
where we have used (4.5). Then, one can apply an induction hypothesis on \(\mathcal{J}_{b}(\tau_{1})\) and \(\mathcal{J}_{a}(\tau_{2})\) and one gets
\[\uparrow^{i}\Phi(\mathcal{J}_{b}(\tau_{1}))=\Phi(\mathcal{J}_{b}(\uparrow^{i}\tau_{1}))-\Phi(\mathcal{J}_{b}(\mathscr{D}^{i}\tau_{1}))\]
\[\uparrow^{i}\Phi(\mathcal{J}_{a}(\tau_{2}))=\Phi(\mathcal{J}_{a}(\uparrow^{i}\tau_{2}))-\Phi(\mathcal{J}_{a}(\mathscr{D}^{i}\tau_{2}))\]
Then, one observes that
\[\Phi(\mathcal{J}_{a}(\mathscr{D}^{i}\tau)) =\Phi(\tau_{1}\curvearrowright^{b-e_{i}}\mathcal{J}_{a}(\tau_{2}))+\Phi(\mathscr{D}^{i}\tau_{1}\curvearrowright^{b}\mathcal{J}_{a}(\tau_{2}))+\Phi(\tau_{1}\curvearrowright^{b}\mathcal{J}_{a}(\mathscr{D}^{i}\tau_{2}))\] \[\Phi(\mathcal{J}_{a}(\uparrow^{i}\tau)) =\Phi(\uparrow^{i}\tau_{1}\curvearrowright^{b}\mathcal{J}_{a}(\tau_{2}))+\Phi(\tau_{1}\curvearrowright^{b}\mathcal{J}_{a}(\uparrow^{i}\tau_{2}))\]
We conclude by using again the morphism property of \(\Phi\) that gives us for example:
\[\Phi(\uparrow^{i}\tau_{1}\curvearrowright^{b}\mathcal{J}_{a}(\tau_{2}))=\Phi(\uparrow^{i}\mathcal{J}_{b}(\tau_{1}))\,\widehat{\curvearrowright}\,\Phi(\mathcal{J}_{a}(\tau_{2}))\]
and the fact that \(\tau\) is generated by the family \((\widehat{\curvearrowright^{b}})_{b}\).
**Proposition 4.9**: _One has the following commutation identities:_
\[\Delta_{\mbox{\tiny BCK}}\uparrow^{i} =\big(\uparrow^{i}\otimes\mathbf{1}\big)\Delta_{\mbox{\tiny BCK}}+\big(\mathbf{1}\otimes\uparrow^{i}\big)\Delta_{\mbox{\tiny BCK}}\] \[\Delta_{\mbox{\tiny BCK}}\mathscr{D}^{i} =\big(\mathscr{D}^{i}\otimes\mathbf{1}\big)\Delta_{\mbox{\tiny BCK}}+\big(\mathbf{1}\otimes\mathscr{D}^{i}\big)\Delta_{\mbox{\tiny BCK}}\]
_with the convention that \(\mathscr{D}^{i}\mathbf{1}=\uparrow^{i}\mathbf{1}=0\)._
_Proof._ This is just a consequence of the fact that \(\uparrow^{i}\) and \(\mathscr{D}^{i}\) are derivations for \(\curvearrowright^{a}\) and therefore for the Grossman-Larson product \(\star\). By passing to the dual, one gets the desired identities.
**Corollary 4.10**: _The set of primitive elements of \(\mathcal{H}_{\mbox{\tiny BCK}}\) is stable under the action of the derivations \(\uparrow^{i}\) as well as the derivations \(\mathscr{D}^{i}\)._
_Proof._ Let \(\tau\) be a primitive element; one has
\[\Delta_{\mbox{\tiny BCK}}\uparrow^{i}\tau=\big{(}\uparrow^{i}\otimes\mathbf{1 }\big{)}\Delta_{\mbox{\tiny BCK}}\tau+\big{(}\mathbf{1}\otimes\uparrow^{i} \big{)}\Delta_{\mbox{\tiny BCK}}\tau\]
where we have used Proposition 4.9. Then, from the primitiveness of \(\tau\)
\[\Delta_{\mbox{\tiny BCK}}\tau=\tau\otimes\mathbf{1}+\mathbf{1}\otimes\tau\]
which, using the fact that \(\uparrow^{i}\mathbf{1}=0\), yields:
\[\Delta_{\mbox{\tiny BCK}}\uparrow^{i}\tau=\uparrow^{i}\tau\otimes\mathbf{1}+ \mathbf{1}\otimes\uparrow^{i}\tau\]
The proof works the same way for \(\mathscr{D}^{i}\).
**Proposition 4.11**: _If \(\sigma=\Phi(\tau)\) for some primitive element \(\tau\) with respect to the \(\Delta_{\mbox{\tiny BCK}}\) coproduct, then \(\uparrow^{i}\sigma\) and \(\mathscr{D}^{i}\sigma\) are also in the image of \(Prim(\mathcal{H}_{\mbox{\tiny BCK}})\) under \(\Phi\)._
_Proof._ This is a consequence of Proposition 4.8 and Corollary 4.10.
We can now state and prove our main result:
**Theorem 4.12**: _We equip \(\mathcal{F}_{E}^{V}\) with two products: \(\tilde{\star}\), the product dual to the deformed Butcher-Connes-Kreimer coproduct, and \(\star_{2}\), the product of \(\mathcal{H}_{2}\). We let \(W\) be the linear span of the words over the alphabet \(A\) whose letters are the \(X_{i}\) and the \(\Phi(\mathcal{F}_{a}(\tau))\), where \(\mathcal{F}_{a}(\tau)\) is a primitive element for \(\Delta_{\text{\tiny BCK}}\) belonging to \(B\). We define \(\tilde{W}\) as the quotient of \(W\) by the Hopf ideal \(\mathcal{J}\) generated by the elements_
\[\{X_{i}\otimes\Phi(\mathcal{F}_{a}(\tau))-\Phi(\mathcal{F}_{a}(\tau))\otimes X_{i}-\uparrow^{i}\Phi(\mathcal{F}_{a}(\tau))+\Phi(\mathcal{F}_{a-e_{i}}(\tau))\}\]
_where \(\mathcal{F}_{a}(\tau)\in B\). Then, there exists a Hopf algebra isomorphism \(\Psi\) between \(\mathcal{F}_{E}^{V}\) and \(\tilde{W}\) equipped with \(\star_{2}\). The map \(\Psi\) is given by_
\[\Psi:\ \prod_{i=1}^{n}\mathcal{F}_{a_{i}}(\tau_{i})\,X^{k}\ \mapsto\ \Psi\Big(\prod_{i=1}^{n}\mathcal{F}_{a_{i}}(\tau_{i})\Big)\otimes\bigotimes_{j=0}^{d}X_{j}^{\otimes k_{j}}.\]
Proof.: We first apply the decomposition of Proposition 3.18 to \(\sigma=\prod_{i=1}^{n}\mathcal{F}_{a_{i}}(\tau_{i})\), writing
\[\prod_{i=1}^{n}\mathcal{F}_{a_{i}}(\tau_{i})=\sum_{R}\lambda_{R}\,\sigma_{r_{1 }}\star_{2}...\star_{2}\sigma_{r_{n}}\]
with \(\sigma_{r_{i}}\in A\). We then map \(\prod_{i=1}^{n}\mathcal{F}_{a_{i}}(\tau_{i})X^{k}\) as follows:
\[X^{k}\prod_{i=1}^{n}\mathcal{F}_{a_{i}}(\tau_{i})\mapsto\sum_{R}\lambda_{R} \sigma_{r_{1}}\otimes...\otimes\sigma_{r_{n}}\otimes X_{0}^{\otimes k_{0}} \otimes...\otimes X_{d}^{\otimes k_{d}}\]
By virtue of Proposition 4.11, this gives an isomorphism onto the Hopf algebra \(\tilde{W}\). Indeed, given a letter \(\Phi(\mathcal{F}_{a}(\tau))\), one has that \(\uparrow^{i}\Phi(\mathcal{F}_{a}(\tau))\) and \(\Phi(\mathcal{F}_{a-e_{i}}(\tau))\) are linear combinations of letters of \(W\).
## 5 Applications in regularity structures
In this section we restrict ourselves to the setting specific to the theory of regularity structures, namely the structures first appearing in the works [30, 8]. For an introduction to the theory see [22, 9, 7]. This involves considering a Hopf subalgebra of the \(\mathcal{H}_{2}\) Hopf algebra which consists of trees with branches of positive degree. We shall use the theorem proved in the previous section to embed it into the tensor Hopf algebra. This allows for an encoding, in the form of words, of the iterated integrals appearing when solving the equations. We begin by defining the space:
\[T_{+}:=\{X^{k}\prod_{i=1}^{n}\mathcal{F}_{a_{i}}(\tau_{i})\mid\alpha(\mathcal{F}_{a_{i}}(\tau_{i}))>0,\,\tau_{i}\in\mathscr{T}_{E}^{V}\}\]
We also define \(\mathcal{T}_{+}\) to be the linear span of \(T_{+}\). Here, \(\alpha\) is a degree map assigning a number to each decorated tree. It corresponds to a notion of regularity of the stochastic integral associated to the decorated tree. It takes into account the decorations on the edges, which can encode both distributional noises and convolutions with kernels that provide a smoothing effect via Schauder estimates. We refrain from giving a precise definition, which can be found in many works [30, 8].
For each subcritical singular SPDE, one constructs a Hopf subalgebra \(\mathcal{T}_{R}^{+}\) of \(\mathcal{T}_{+}\) by attaching a generating rule \(R\) to the nonlinearity \(F\) of the equation. The rule induces a recursive procedure that generates the entire Hopf subalgebra \(\mathcal{T}_{R}^{+}\). This procedure may be thought of as a formal Picard iteration. The resulting Hopf subalgebra is then used to describe the regularity structure for the given equation. In the next theorem, we denote by \(\cdot\) the product on \(\mathcal{T}_{+}\).
**Theorem 5.1**.: _The Hopf algebra \((\mathcal{T}_{+},\star_{2},\Delta_{\shuffle})\), which is the graded dual of \((\mathcal{T}_{+},\cdot,\Delta_{2})\), is isomorphic to a Hopf subalgebra of \(T(A)/\mathcal{J}\)._
Proof.: By Theorem 4.12, we have an isomorphism \(\Psi:\mathcal{H}_{2}\to T(A)/\mathcal{J}\). By restricting \(\Psi\) to \(\mathcal{T}_{+}\) we obtain a Hopf algebra isomorphism of \(\mathcal{T}_{+}\) onto its image.
Let us explain how this algebraic result allows one to interpret regularity structures as a kind of geometric rough path. Solutions \(u\) of subcritical singular stochastic partial differential equations (SPDEs) are locally described by
\[u(y)-u(x)=\sum_{\tau\in\mathcal{T}_{R}}u_{\tau}(x)(\Pi_{x}\tau)(y),\quad(\Pi_{ x}\tau)(y)\lesssim|y-x|^{\alpha(\tau)}\]
where \(x,y\in\mathbb{R}^{d+1}\) and the \((\Pi_{x}\tau)(y)\) are stochastic iterated integrals recentered around the point \(x\), whose behaviour close to \(x\) is governed by the degree of the given decorated tree. The \(u_{\tau}(x)\) play the role of generalized derivatives. Then, the theory of regularity structures provides a reexpansion map \(\Gamma_{xy}\) that allows us to move the recentering:
\[\Pi_{y}=\Pi_{x}\Gamma_{xy}.\]
The collection of these two maps \((\Pi_{x},\Gamma_{xy})\) is what is referred to as a model [30, Def. 3.1]. One important algebraic construction is to represent \(\Gamma_{xy}\) via a character \(\gamma_{xy}:\mathcal{T}_{+}\to\mathbb{R}\) multiplicative for the tree product. This description is given via a co-action \(\Delta:\mathcal{G}\,\to\mathcal{G}\,\otimes\mathcal{T}_{+}\)
\[\Gamma_{xy}=(\text{id}\otimes\gamma_{xy})\Delta,\quad|\gamma_{xy}(\tau)| \lesssim|y-x|^{\alpha(\tau)}. \tag{5.1}\]
The character \(\gamma_{xy}\) can be viewed as an extension of branched rough paths to the multidimensional case, as \(x,y\in\mathbb{R}^{d+1}\). Moreover, it satisfies a Chen-type relation:
\[\gamma_{xy}\star_{2}\gamma_{yz}=\gamma_{xz}\]
We denote the space of such maps by \(\mathbf{TM}^{\alpha}\), called \(\alpha\)-Tree-indexed Models. Maps \(\gamma_{xy}\) defined as characters on \(\Psi(\mathcal{T}_{+})\) are \(\alpha\)-Geometric Models, denoted by \(\mathbf{GM}^{\alpha}\). They satisfy the following properties:
\[\gamma_{xy}\otimes\gamma_{yz}=\gamma_{xz},\quad|\gamma_{xy}(\Psi(\tau))| \lesssim|y-x|^{\alpha(\tau)}. \tag{5.2}\]
We could have used the terminology of anisotropic rough paths but the characters are defined on a quotient of a tensor Hopf algebra and not the tensor Hopf algebra itself. One can rephrase our main algebraic result as:
**Theorem 5.2**: _Let \(X\in\mathbf{TM}^{\alpha}\), then \(\hat{X}:=\Psi(X)\in\mathbf{GM}^{\alpha}\)._
The analytical bounds are easily satisfied by realising that:
\[\langle\Psi(X)_{xy},\Psi(\tau)\rangle=\langle X_{xy},\tau\rangle.\]
The algebraic identities, such as Chen's relation, are preserved by the map \(\Psi\).
As in [14], one can investigate the action of the renormalisation on this construction by looking at maps \(M\) that are morphisms for the product \(\star_{2}\), namely BPHZ renormalisation maps. One of the main issues is that \(\mathcal{T}_{+}\) may not be stable under \(M\), due to the constraint that the degree be positive. Extended decorations on trees have been introduced in [8] in order to guarantee that \(M\) is degree preserving. This property implies that \(\mathcal{T}_{+}\) is invariant under \(M\). One can check that \(M\) commutes with \(\Phi\), and then it is possible to find a map \(\tilde{M}\) defined on \(T(A)/\mathcal{J}\) that commutes with \(\Psi\):
\[\tilde{M}\Psi=\Psi M.\]
This is the analogue of Theorem 4.7 in [14].
|
2303.02758 | WADER at SemEval-2023 Task 9: A Weak-labelling framework for Data
augmentation in tExt Regression Tasks | Intimacy is an essential element of human relationships and language is a
crucial means of conveying it. Textual intimacy analysis can reveal social
norms in different contexts and serve as a benchmark for testing computational
models' ability to understand social information. In this paper, we propose a
novel weak-labeling strategy for data augmentation in text regression tasks
called WADER. WADER uses data augmentation to address the problems of data
imbalance and data scarcity and provides a method for data augmentation in
cross-lingual, zero-shot tasks. We benchmark the performance of
State-of-the-Art pre-trained multilingual language models using WADER and
analyze the use of sampling techniques to mitigate bias in data and optimally
select augmentation candidates. Our results show that WADER outperforms the
baseline model and provides a direction for mitigating data imbalance and
scarcity in text regression tasks. | Manan Suri, Aaryak Garg, Divya Chaudhary, Ian Gorton, Bijendra Kumar | 2023-03-05T19:45:42Z | http://arxiv.org/abs/2303.02758v1 | WADER at SemEval-2023 Task 9: A Weak-labelling framework for Data augmentation in tExt Regression Tasks
###### Abstract
Intimacy is an essential element of human relationships and language is a crucial means of conveying it. Textual intimacy analysis can reveal social norms in different contexts and serve as a benchmark for testing computational models' ability to understand social information. In this paper, we propose a novel weak-labeling strategy for data augmentation in text regression tasks called WADER. WADER uses data augmentation to address the problems of data imbalance and data scarcity and provides a method for data augmentation in cross-lingual, zero-shot tasks. We benchmark the performance of State-of-the-Art pre-trained multilingual language models using WADER and analyze the use of sampling techniques to mitigate bias in data and optimally select augmentation candidates. Our results show that WADER outperforms the baseline model and provides a direction for mitigating data imbalance and scarcity in text regression tasks.
## 1 Introduction
Intimacy is considered a fundamental element of human relationships, as recognized by several scholars [1, 1, 2]. Research indicates that intimacy can be modeled computationally and that textual intimacy is a crucial aspect of language [2]. Analyzing textual intimacy can reveal social norms in various contexts and serve as a benchmark to test computational models' ability to understand social information [2, 3]. Moreover, intimacy plays a critical role in human development and well-being [1, 15], and language is an essential means of conveying it in a social context. Individuals negotiate intimacy in language to fulfill fundamental and strategic needs while respecting social norms. Task 9 of SemEval 2023 [2] aims to quantify intimacy in a multilingual context, with evaluation on tweets from 10 languages. The training corpus for the task consists of tweets in English, Spanish, Italian, Portuguese, French, and Chinese. The testing corpus additionally contains tweets from Hindi, Arabic, Dutch and Korean.
The novelty of our strategy, WADER (Weak-labeling strategy for Data augmentation in tExt Regression Tasks) is the use of data augmentation to A) solve the problem of an imbalance distribution of data, B) augment data for a cross-lingual zero-shot set-up. WADER uses the distribution to selectively sample texts with lower representation in the label distribution, uses translation to augment sentences and validates the augmentations against a baseline model, using a distribution based sampling approach. We finetune State-of-the-Art pre-trained language models including XLM RoBERTa [1] and XLNET [20]. Real world datasets are plagued by the problems of data imbalance and data scarcity, and WADER provides a direction for mitigating these problems for text regression tasks. WADER ranks 32nd overall across languages, 34th on seen languages and 29th on unseen languages. Our code has been released on GitHub. 1
Footnote 1: Code will be released on paper acceptance.
The main contributions of this paper are as follows:
1. Provide a data augmentation framework specific to text regression.
2. Provide a method for data augmentation in cross-lingual, zero-shot tasks.
3. Benchmark performance of pre-trained language models.
4. Analysis of use of sampling techniques to mitigate bias in data and optimally select augmentation candidates.
The paper is organized as follows: Section 2 provides background information on the research, including a review of relevant literature, details about the task at hand, and information on the data used. Section 3 presents an overview of our approach, followed by a discussion of the experimental set-up in Section 4. The results of our study are analyzed in Section 5, and the paper concludes with a summary of findings and future directions for research in Section 6.
## 2 Background
### Past Work
Data imbalance and scarcity are problems that are rampant in real world datasets. Owing to the high cost of obtaining large amounts of data and expert annotations, a wealth of research has been done to support limited-data settings. Data augmentation for text is broadly done in two ways: conditional data augmentation, which involves data augmentation conditioned by the target label, and unconditional data augmentation, which involves working with the corpus features only Bayer et al. (2021); Liu et al. (2020). Conditional data augmentation is usually done by deep generative models and pre-trained language models such as BART Lewis et al. (2019), CBERT Wu et al. (2018) and GPT2 Radford et al. (2019). Common ways to perform unconditional data augmentation are lexical substitution and back translation. Wei and Zou (2019) introduce several lexical techniques to augment textual data, including synonym replacement, random insertion, random swap and random deletion. However, these methods suffer from a lack of sufficient diversity and often produce sentences that are not coherent. Back-translation especially has received widespread attention, because progress in machine translation has made back-translation an efficient way to generate diverse sentences in the dataset without compromising coherence or semantic quality. Common translation tools used are seq2seq-based models, NMT and transformers. Different techniques exist for text classification and NER tasks, but to the best of our knowledge our work is unique in the text regression domain.
Weak supervision of text labeling during data augmentation is an example of Semi-Supervised Learning (SSL) methods. The main idea of these methods is to regularize the learning process by training a network with the given data, using the network to label unlabelled data and finally using both the true-labeled and weak-labeled data points to train the final model.
### Task Description
SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis Pei et al. (2022) is a task that deals with detecting intimacy in 10 languages. This task is co-organized by University of Michigan and Snap Inc. Intimacy is a fundamental aspect of human relationships, and studying intimacy in a textual context has many potential applications in the field of computational linguistics. The training data is available in 6 languages: English, Spanish, Italian, Portuguese, French, and Chinese. The evaluation is done on the given training languages, as well as 4 unseen languages: Hindi, Arabic, Dutch and Korean.
The metric of evaluation for the task is Pearson's R. Pearson's R, \(r\) is expressed as follows for two variables \(x\) and \(y\):
\[r=\frac{\sum\left(x_{i}-\bar{x}\right)\left(y_{i}-\bar{y}\right)}{\sqrt{\sum \left(x_{i}-\bar{x}\right)^{2}\sum\left(y_{i}-\bar{y}\right)^{2}}} \tag{1}\]
The correlation coefficient \(r\) ranges from -1 to 1, with an absolute value of 1 indicating a perfect linear relationship between the two variables. In such a case, all data points lie on a line that can be represented by a linear equation. The sign of the correlation coefficient is determined by the regression slope, with a value of +1 indicating that all data points lie on a line where \(y\) increases as \(x\) increases, and a value of -1 indicating the opposite. A correlation coefficient of 0 implies that there is no linear dependency between the two variables.
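For concreteness, a minimal illustration of this metric in code (using SciPy; the arrays are toy values, not from the dataset):

```python
import numpy as np
from scipy.stats import pearsonr

gold = np.array([1.0, 2.5, 3.0, 4.2])  # toy gold intimacy scores
pred = np.array([1.2, 2.4, 3.5, 4.0])  # toy model predictions

r, _ = pearsonr(gold, pred)  # Pearson's R, in [-1, 1]
print(round(r, 3))
```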
### Data Description
The dataset for the task is the MINT- Multilingual INTimacy analysis Pei et al. (2022) dataset. The training set contains sentences in 6 languages: Chinese, English, French, Portuguese, Spanish and Italian. The dataset has 9491 tweets. Distribution of sentences in different languages is given in Table 1.
75% of the samples in the dataset have a label less than or equal to 2.667.
The testing set additionally contains 4 unseen languages, Hindi, Korean, Arabic and Dutch.
## 3 Methodology
### Data Augmentation
As noted in section 2.3, the data is highly imbalanced for the given labels. Moreover, since the task has 4 unseen languages, there is an additional need for data augmentation. WADER performs data augmentation using the framework described in Fig 2. The steps followed are described as follows:
**Distribution based Sampling:** Since the distribution of labels is skewed and not all labels need augmentation, we perform distribution-based sampling to select candidate tweets for data augmentation. We fix a threshold \(p\) and sample all tweets with labels above the given threshold; we take the value of \(p\) as 3.2.
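A minimal sketch of this step, assuming the training data lives in a pandas DataFrame with a `label` column (column names are illustrative, not from our released code):

```python
import pandas as pd

P_THRESHOLD = 3.2  # the label threshold p described above

def sample_candidates(df: pd.DataFrame, p: float = P_THRESHOLD) -> pd.DataFrame:
    """Select under-represented, high-intimacy tweets as augmentation candidates."""
    return df[df["label"] > p].copy()
```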
**Translation:** Data is augmented through translation and back translation. The translation scheme is described in Fig 3.
For an unseen language \(L_{unseen}\), the sampled sentences from every seen language \(L_{i}\), \(i\in L\), are translated to the target language \(L_{unseen}\), and the translated sentences are appended to the set \(T_{unseen}\).

For a seen language \(L_{i}\), its sentences are translated to all other seen languages \(L_{k}\), \(k\in L,\,k\neq i\). The translated tweets are appended to their respective translated sets \(T_{k}\), and they are also translated back to the source language \(L_{i}\) and appended to the translated set \(T_{i}\).
Our final translated set has 49774 sentences.
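The scheme in Fig 3 can be sketched as follows; the `translate` helper is a stand-in for the Google Translate API call and is illustrative rather than our actual implementation:

```python
SEEN = ["en", "zh", "fr", "it", "es", "pt"]
UNSEEN = ["hi", "ar", "nl", "ko"]

def translate(text: str, src: str, tgt: str) -> str:
    raise NotImplementedError  # stand-in for a machine translation API call

def augment(candidates):
    """candidates: list of (text, label, source_language) triples."""
    translated = {lang: [] for lang in SEEN + UNSEEN}
    for text, label, src in candidates:
        # Unseen languages receive direct translations from the seen languages.
        for tgt in UNSEEN:
            translated[tgt].append((translate(text, src, tgt), label))
        # Seen languages: translate to every other seen language, then back-translate.
        for tgt in (l for l in SEEN if l != src):
            fwd = translate(text, src, tgt)
            translated[tgt].append((fwd, label))
            translated[src].append((translate(fwd, tgt, src), label))
    return translated
```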
**Label Validation:** We train a baseline model by finetuning a pretrained language model on the gold labelled data. This model is then used to infer on the concatenated translated corpus of seen and unseen languages.
**Difference Based Sampling:** We take the absolute difference between the predicted and pre-assigned values (the label derived from before a sentence was translated). We use this as a metric for the quality of translations and pick appropriate thresholds to select sentences.
\begin{table}
\begin{tabular}{l r r r r r r} \hline
**Language** & **Count** & **Mean** & **Std. Dev.** & **25th \%ile** & **50th \%ile** & **75th \%ile** \\ \hline
**English** & 1587 & 1.89 & 0.877273 & 1.2 & 1.6 & 2.4 \\
**Chinese** & 1596 & 2.27 & 0.93851 & 1.5 & 2 & 2.8 \\
**French** & 1588 & 2.06 & 0.886265 & 1.34 & 2 & 2.6 \\
**Italian** & 1532 & 1.94 & 0.835105 & 1.25 & 1.8 & 2.425 \\
**Spanish** & 1592 & 2.21 & 0.941339 & 1.4 & 2 & 2.8 \\
**Portuguese** & 1596 & 2.16 & 0.872903 & 1.4 & 2 & 2.8 \\
**Overall** & 9491 & 2.09 & 0.903512 & 1.4 & 2 & 2.67 \\ \hline \end{tabular}
\end{table}
Table 1: Description of the training set.
Figure 1: Frequency plots for different languages in the training set.
Table 2 shows an analysis of the distribution of differences. The mean difference is 0.62, which reflects below-average translation quality, since the resolution of labels in the dataset is 0.1. However, 75% of sentences have differences \(\leq\) 0.86, which means that coarse-grained labels (differences of 1) are correctly assigned in most cases.
We define \(\beta\) as the parameter which represents the difference threshold. We pick difference values of \(\beta\) as 0.1, 0.2 and 0.3 in our experiments. Table 2 shows the count of sentences in each of these thresholds.
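A sketch of the resulting filter (`baseline_predict` stands in for inference with the baseline model finetuned on gold data; names are illustrative):

```python
def difference_sample(translated, baseline_predict, beta: float):
    """Keep translations whose weak label survives validation within tolerance beta."""
    kept = []
    for text, derived_label in translated:   # label carried over from before translation
        predicted = baseline_predict(text)   # score from the gold-only baseline model
        if abs(predicted - derived_label) <= beta:
            kept.append((text, derived_label))
    return kept
```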
### Finetuning Pre-trained Language Models
Finetuning pretrained language models has become a popular approach for natural language processing tasks in recent years. Transformer-based Vaswani et al. (2017) pretrained language models such as BERT Devlin et al. (2018), GPT-2 Radford et al. (2019), and RoBERTa Liu et al. (2019) are trained on massive amounts of text data, which allows them to capture complex linguistic patterns and structures. Finetuning involves taking a pretrained language model and further training it on a specific downstream task, such as sentiment analysis or question answering. This approach has been shown to achieve state-of-the-art performance on a wide range of natural language processing tasks, with significantly less data and computation needed compared to training a model from scratch. Finetuning also allows for the transfer of knowledge learned from a large, diverse set of data to a smaller, more specific task, making it a powerful technique for natural language processing research.
The pre-training models used in our system include:
**XLM RoBERTa:** XLM-RoBERTa Conneau et al. (2019) is a variation of the RoBERTa model that has been designed to handle multilingual natural language processing tasks. This model is pre-trained on a massive dataset of 2.5 terabytes of CommonCrawl data filtered for 100 different languages. By training on such a large and diverse dataset, XLM-RoBERTa is able to capture the linguistic nuances and patterns that are unique to different languages. The architecture of XLM-RoBERTa is based on the highly successful BERT model, but with key modifications to hyperparameters such as larger mini-batches and learning rates, allowing it to handle the additional complexity of multilingual data. XLM-RoBERTa has shown impressive results across a range of multilingual natural language processing tasks, demonstrating the power of pre-training on large, diverse datasets for building highly effective models. We use implementation of the XLMR model from HuggingFace.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Parameters** & **count** & **mean** & **std** & **min** & **25\%** & **50\%** & **75\%** & **max** \\ \hline
**Value** & 49774 & 0.62 & 0.51 & 0 & 0.23 & 0.47 & 0.86 & 3.550781 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Analysis of the translated sentence set, specifically the difference during validation.

Figure 2: Data Augmentation Flowchart

**XLNET:** XLNet Yang et al. (2019) is a state-of-the-art natural language processing model that extends the Transformer-XL architecture and uses an innovative pre-training method. Unlike BERT, which corrupts input with masks and neglects dependencies between masked positions, XLNet is able to learn bidirectional contexts by maximizing the expected likelihood over all possible permutations of the factorization order. This allows XLNet to capture complex linguistic patterns and dependencies in the input sequence. XLNet also integrates ideas from Transformer-XL, which is currently the most advanced autoregressive model in use. With its autoregressive formulation, XLNet is able to overcome the limitations of BERT and achieve even better performance on a range of natural language processing tasks.
For finetuning the pre-trained models, we add a single linear layer on top of the embedding of the classification token \(<\)s\(>\) for XLM-RoBERTa and [cls] for XLNet. Since this is a text regression task and the scores lie in a limited range, we apply a clamp function as the final activation, which clamps the scores to the range \([1,5]\). Fig 4 is a representation of our finetuning procedure.
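A minimal PyTorch sketch of this head (following the single-linear-layer description above; the backbone name and sizes are the obvious choices, but this is a paraphrase of the setup, not our released code):

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class IntimacyRegressor(nn.Module):
    def __init__(self, backbone: str = "xlm-roberta-base", hidden: int = 768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        self.head = nn.Linear(hidden, 1)  # linear layer over the classification-token embedding

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]            # <s> / [cls] token embedding
        score = self.head(cls).squeeze(-1)
        return torch.clamp(score, min=1.0, max=5.0)  # clamp scores to [1, 5]
```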
### Ensembling
We evaluate results on the test set using 6 models: XLM RoBERTa and XLNET, each trained on augmented sets with difference sampling parameter \(\beta=0.1,0.2,0.3\).
We choose 6 ensembles. The configurations of ensembles are defined in Table 4.
Ensembling is done by taking the mean prediction of all the ensembled models.
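In code, this reduces to averaging the member predictions (illustrative):

```python
import numpy as np

def ensemble_predict(member_predictions):
    """Mean prediction over the ensembled models; one score array per member."""
    return np.mean(np.stack(member_predictions, axis=0), axis=0)
```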
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Ensemble**} & \multicolumn{3}{c|}{**XLM RoBERTa**} & \multicolumn{3}{c|}{**XLNET**} \\ \cline{2-7} & **0.1** & **0.2** & **0.3** & **0.1** & **0.2** & **0.3** \\ \hline
**Ensemble 1** & & & & & & & \\ \hline
**Ensemble 2** & & & & & & & \\ \hline
**Ensemble 3** & & & & & & & \\ \hline
**Ensemble 4** & & & & & & & \\ \hline
**Ensemble 5** & & & & & & & \\ \hline
**Ensemble 6** & & & & & & & \\ \hline \end{tabular}
\end{table}
Table 4: The configurations of the different chosen ensembles that we experimented with. The different choices are motivated by A) Model choice, B) Threshold of difference sampling \(\beta\).
Figure 4: Fine-tuning Architecture
Figure 3: Translation scheme
## 4 Experimental Setup
We use the original test and train set. Further, we take 15% of the train set, sampled randomly from each language as our validation set.
We build our models using open-source implementations of XLM-RoBERTa and XLNET available on HuggingFace. We use xlm-roberta-base2 and xlnet-base-cased3. We use Adam [10] as our optimiser. The size of the embeddings is \(D=768\) and the size of the linear layer is \(D/2\times 1\). The batch size is taken as 8 and the learning rate is 4e-5. We train the models for 2 epochs. Experiments are performed on Google Colab cloud GPU. The Google Translate API has been used to perform translations. These hyperparameters are common to all system settings, including our two baselines: 1) XLM RoBERTa finetuned on only Gold data, 2) XLNET finetuned on only Gold data.
Footnote 2: [https://huggingface.co/xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
Footnote 3: [https://huggingface.co/xlnet-base-cased](https://huggingface.co/xlnet-base-cased)
The final submission is reported on Ensemble 6, configured as per the description in Section 3.3.
## 5 Results and Discussion
Table 5 represents the scores achieved by our system in different experimental settings. The final submission for the competition is denoted by Ensemble 6. Table 6 shows our rank under different categories of the shared task.
As we can observe from Table 5, WADER improves on the existing transformer baselines for all categories except one, where it ties.
### Comparison of Pre-trained Language Models:
We observe a general trend that XLM RoBERTa performs better than XLNET on multilingual baselines in our experiments. This is demonstrated by the fact that the XLNET baseline outperforms XLM RoBERTa only on English. For all other languages, there is a significant margin between the performance of XLNET and XLM RoBERTa. For Hindi and Korean, which have non-Latin scripts, the performance of XLNET is even worse, with a negative R coefficient. This demonstrates the importance of multilingual pretraining.
### Comparison of Difference Sampling Threshold \(\beta\):
While lower values of \(\beta(=0.1)\) give more accurately labelled sets, we observe that moderate values of \(\beta(=0.2,0.3)\) outperform them. This is because moderate values of \(\beta\) allow for larger training corpora, which positively affects the performance of the models. Moreover, moderate values of \(\beta\) include a larger number of low-quality translations, due to the higher allowed difference. We hypothesize that this has a regularising effect by providing the model with diversity in the training set, preventing it from overfitting on the training corpus.
### Discussion on Performance
We rank 32 overall, 34 on seen languages and 29 on unseen languages. The lower performance of our model can be understood by the following factors:
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline
**Team** & **Overall** & **Seen Languages** & **Unseen Languages** & **English** & **Spanish** & **Portuguese** & **Italian** & **French** & **Chinese** & **Hindi** & **Dutch** & **Korean** & **Arabic** \\ \hline
**WADER** & 32 & 34 & 29 & 34 & 32 & 36 & 35 & 34 & 34 & 40 & 30 & 15 & 35 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Rank achieved by our system in the shared task.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline
**System** & **Overall** & **Seen Langs** & **Unseen Langs** & **English** & **Spanish** & **Portuguese** & **Italian** & **French** & **Chinese** & **Hindi** & **Dutch** & **Korean** & **Arabic** \\ \hline
**Baseline-XLM RoBERTa** & 0.52 & 0.65 & 0.35 & 0.60 & **0.69** & 0.60 & 0.64 & 0.60 & 0.70 & 0.19 & 0.59 & 0.37 & 0.42 \\
**0.1-XLM RoBERTa** & 0.52 & 0.66 & 0.34 & 0.61 & 0.66 & 0.60 & 0.67 & 0.63 & 0.72 & 0.19 & 0.59 & 0.35 & 0.48 \\
**0.2-XLM RoBERTa** & 0.52 & 0.67 & 0.33 & 0.63 & 0.66 & **0.61** & 0.67 & **0.64** & **0.72** & 0.19 & 0.60 & **0.38** & 0.49 \\
**0.3-XLM RoBERTa** & 0.53 & 0.66 & 0.35 & 0.63 & 0.67 & 0.60 & 0.67 & **0.64** & **0.72** & **0.20** & **0.61** & 0.43 & 0.50 \\
**Baseline-XLNET** & 0.38 & 0.51 & 0.22 & 0.62 & 0.61 & 0.42 & 0.47 & 0.47 & 0.24 & -0.08 & 0.37 & -0.03 & 0.05 \\
**0.1-XLNET** & 0.41 & 0.52 & 0.26 & **0.64** & 0.61 & 0.47 & 0.49 & 0.49 & 0.20 & -0.08 & 0.41 & -0.03 & 0.14 \\
**0.2-XLNET** & 0.41 & 0.51 & 0.29 & 0.61 & 0.58 & 0.43 & 0.50 & 0.51 & 0.24 & -0.05 & 0.45 & 0.08 & 0.22 \\
**0.3-XLNET** & 0.42 & 0.52 & 0.29 & 0.61 & 0.63 & 0.46 & 0.53 & 0.50 & 0.19 & -0.06 & 0.44 & 0.16 & 0.19 \\
**Ensemble-1** & 0.53 & 0.67 & 0.34 & 0.63 & 0.68 & **0.61** & **0.68** & 0.64 & 0.72 & 0.20 & 0.61 & 0.40 & **0.49** \\
**Ensemble-2** & 0.43 & 0.53 & 0.30 & 0.63 & 0.63 & 0.47 & 0.52 & 0.52 & 0.22 & -0.06 & 0.45 & 0.08 & 0.20 \\
**Ensemble-3** & 0.52 & 0.64 & 0.36 & **0.64** & 0.67 & 0.58 & 0.63 & 0.60 & 0.67 & 0.11 & 0.55 & 0.29 & 0.45 \\
**Ensemble-4** & 0.52 & 0.63 & 0.37 & 0.63 & 0.66 & 0.57 & 0.62 & 0.60 & 0.67 & 0.11 & 0.56 & 0.34 & 0.48 \\
**Ensemble-5** & 0.52 & 0.64 & **0.38** & **0.64** & **0.69** & 0.57 & 0.64 & 0.61 & 0.64 & 0.11 & 0.57 & 0.41 & 0.47 \\
**Ensemble-6** & **0.53** & **0.65** & 0.37 & **0.64** & 0.68 & 0.58 & 0.64 & 0.61 & 0.67 & 0.11 & 0.57 & 0.36 & 0.48 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Pearson’s R score of different system settings on the test set. \(\beta\)-Model represents the Model finetuned on Gold labels + the \(\beta\) difference set.
* **Translation Quality:** The quality of translation is a key driver in WADER's performance. Lower quality translations would produce augmentations with noisy and unreliable labels. Translation quality is often dependant on the pair of languages in question. For languages such with a non latin script such as Hindi, translations are often of a lower quality which is also reflected in the results.
* **Overfitting:** By translating the data, while we increase linguistic diversity, most sentences remain semantically similar, causing the model to overfit. This can further be seen in the fact that settings like 0.2- and 0.3-XLM RoBERTa (where we can expect higher diversity from gold sentences due to higher differences) give the best performance for many languages. Similarly, Ensemble 1, which preserves data quality while also reaping the regularising benefit of ensembling, performs quite well in the given setting. Another indication of overfitting is the better rank of our model on unseen languages.
* **Word Sensitivity:** For a task like intimacy detection, the specific vocabulary used is key to identifying the intimacy level. Translation can replace words with ones that do not carry the same degree of influence on textual intimacy.
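To make the augmentation mechanism these points analyze concrete, here is a minimal sketch of translation-based weak labeling; the `translate` callable is a placeholder for an arbitrary MT backend (an assumption, not WADER's actual implementation), and carrying the gold score over unchanged is precisely the weak-label assumption whose failure modes are discussed above.

```python
# Sketch of translation-based augmentation with weak labels.
# `translate` is a hypothetical MT backend (assumption); the gold
# intimacy score is copied to the translation unchanged, which is the
# weak-labeling assumption analyzed in the points above.

def augment_by_translation(examples, translate, target_langs):
    """examples: iterable of (text, lang, gold_score) tuples."""
    augmented = []
    for text, lang, score in examples:
        for tgt in target_langs:
            if tgt == lang:
                continue  # skip same-language "translation"
            augmented.append((translate(text, src=lang, tgt=tgt), tgt, score))
    return augmented
```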
## 6 Conclusion and Future Work
This paper proposes a novel data augmentation framework, WADER, for text regression tasks that uses weak-labeling strategies to address the problems of data imbalance and data scarcity. We also provide a method for data augmentation in cross-lingual, zero-shot tasks. Our approach uses sampling techniques to mitigate bias in the data and to optimally select augmentation candidates. We benchmarked the performance of the state-of-the-art pre-trained multilingual language model XLM-RoBERTa using WADER and achieved promising results. Our findings demonstrate the importance of data augmentation for mitigating data imbalance and scarcity in text regression tasks. This study's contributions provide a direction for future research in the field of computational linguistics and its applications to social information analysis.
|
2310.01369 | Dissecting Cosmological Filaments at High Redshifts: Emergence of
Spaghetti-type Flow Inside DM Haloes | We use high-resolution zoom-in simulations to study the fueling of the
central galaxies by gas accretion from cosmological filaments at high
redshifts, z>=2. Their parent haloes with similar DM masses of
log(M_vir/M})~11.65, have been chosen at z=6, 4, and 2, in high/low overdensity
environments, with the goal of comparing evolution within similar M at
different z, under dual action of cosmological accretion and galactic outflows
-- forming the circumgalactic medium (CGM). We focus on the filamentary and
diffuse gas accretion within few virial radii, R_vir, down to the central
galaxy. Using a hybrid d-web/entropy method we have mapped the gaseous
filaments, and invoking particle kinematics allowed us to separate inflows from
outflows, thus resolving thermodynamic and kinematic signatures of the CGM. We
find that (1) The CGM is multiphase and not in thermodynamic or dynamic
equilibrium; (2) accretion rates via individual filaments display a lower
accretion rate and densities at lower redshifts. The inflow velocities along
the filaments decrease with redshift, z~ 6-2, from 200-30 kms^-1 by a factor of
2; (3) Temperature within the filaments increases inside R_vir, faster at lower
redshifts, in tandem with decrease in the accretion rate; (4) The filaments
show a complex structure along their spines: a core radial flow surrounded by a
lower density envelope. The core exhibits an elevated density and lower
temperature, with no obvious metallicity gradient in the filament cross
sections; (5) The core also tends to separate the filament into different infall velocity
regions and density cores, thus producing a spaghetti-type flow; (6) Inside the
inner ~ 30\,h^-1 kpc, the filaments develop the Kelvin-Helmholtz instability
which ablates and dissolves them, and triggers turbulence along the filament
spine; (7) Finally, the galactic outflows affect mostly the inner ~ 0.5R_vir~
100 h^-1 kpc of the CGM. | Da Bi, Isaac Shlosman, Emilio Romano-Diaz | 2023-10-02T17:32:15Z | http://arxiv.org/abs/2310.01369v3 | # Dissecting Cosmological Filaments at High Redshifts:
###### Abstract
We use high-resolution zoom-in simulations to study the fueling of the central galaxies by gas accretion from cosmological filaments at high redshifts, \(z\gtrsim 2\). Their parent haloes with similar DM masses of \(\log\left(M_{\rm vir}/{\rm M}_{\odot}\right)\sim 11.65\pm 0.05\), have been chosen at \(z=6\), \(4\), and \(2\), in high/low overdensity environments, with the goal of comparing evolution within similar \(M_{\rm vir}\) at different \(z\), under dual action of cosmological accretion and galactic outflows -- forming the circumgalactic medium (CGM). We focus on the filamentary and diffuse gas accretion within few virial radii, \(R_{\rm vir}\), down to the central galaxy. Using a hybrid d-web/entropy method we have mapped the gaseous filaments, and invoking particle kinematics allowed us to separate inflows from outflows, thus resolving thermodynamic and kinematic signatures of the CGM. We find that (1) The CGM is multiphase and not in thermodynamic or dynamic equilibrium; (2) accretion rates and densities in individual filaments decrease at lower redshifts. The inflow velocities along the filaments decrease with redshift, \(z\sim 6-2\), from \(200-300\,{\rm km\,s^{-1}}\) by a factor of \(2\); (3) Temperature within the filaments increases inside \(R_{\rm vir}\), faster at lower redshifts, in tandem with decrease in the accretion rate; (4) The filaments show a complex structure along their spines: a core radial flow surrounded by a lower density envelope. The core exhibits an elevated density and lower temperature, with no obvious metallicity gradient in the filament cross sections; (5) The core also tends to separate the filament into different infall velocity regions and density cores, thus producing a spaghetti-type flow; (6) Inside the inner \(\sim 30\,h^{-1}\,{\rm kpc}\), the filaments develop the Kelvin-Helmholtz instability which ablates and dissolves them, and triggers turbulence along the filament spine; (7) Finally, the galactic outflows affect mostly the inner \(\sim 0.5R_{\rm vir}\sim 100h^{-1}{\rm kpc}\) of the CGM.
keywords: Methods: numerical -- galaxies: abundances -- galaxies: evolution -- galaxies: haloes -- galaxies: high-redshift -- galaxies: interactions
## 1 Introduction
Galaxies reside inside dark matter (DM) haloes which extend to about ten times the galaxy size. The DM haloes lie within the cosmological network formed by filaments, walls and their intersections, which, together with voids, form the so-called large-scale structure of the universe extensively studied over the last 3-4 decades (e.g., Bardeen et al., 1986; Geller and Huchra, 1989; Bond et al., 1996; Springel et al., 2006). Deep within these parent haloes, galactic morphology in the contemporary universe has been analyzed for the last century with great success, solidified by Hubble (1936; see reviews by Binney and Tremaine (2008); Mo et al. (2010); Kormendy (2013); Shlosman (2013)).
However, the gray zone of contemporary cosmology lies between these two extremes: how does the large-scale structure, the cosmic web, connect to the galactic structure? How do properties of the infalling material change within the halo, and to what extent does it preserve its identity on the way to the central galaxy? How does it feed galaxy growth and affect galactic morphology? And finally, how does this process evolve with cosmological time?
In this work, we attempt to answer more modest questions. We focus on comparing the evolution of baryonic filaments and diffuse accretion in the immediate vicinity of galaxies and their host DM haloes, focusing on the kinematic and thermodynamic properties of the gas. To distinguish the baryonic flows from the extensions of DM filaments inside the haloes, we denote them as _streamers_. For this purpose we use our high-resolution cosmological zoom-in simulations presented in Bi et al. (2022a,b).
Two primary modes of galaxy growth exist: mergers with other galaxies and smooth accretion of matter (e.g., Rees, 1977; Fall and Efstathiou, 1980). High-redshift galaxies exhibit very high star formation rates for extended periods of time which cannot be supported by galaxy merger events only (e.g., Chapman, et al., 2004; Keres et al., 2005; Genzel et al., 2006; Dekel and Birnboim, 2006; Bi et al., 2022a). Smooth gas accretion plays an important and even dominant role in the growth of these galaxies, affecting their morphology. In the
past decade or so, numerical modeling confirmed that gas accretion can dominate over galaxy mergers at high redshifts, becoming an important component of the hierarchical scenario of galaxy growth (e.g., Keres et al. 2005; Dekel et al. 2009; Devriendt et al. 2010; Romano-Diaz et al. 2014).
It is widely accepted that two complementary mechanisms of gas accretion can operate: the so-called cold and hot accretion modes (e.g., Keres et al. 2005; Dekel & Birnboim 2006). In the hot accretion, the incoming gas, which is supersonic and not virialized, will trigger a shock positioned around the halo virial radius, \(R_{\rm vir}\). The postshocked gas, which is heated up to the virial temperature, \(T_{\rm vir}\), forms a quasistatic envelope, starts to cool and is accreted radially thereafter, as it can potentially cool down over less than the Hubble time and hence contribute as well to the galaxy growth (e.g., Birnboim & Dekel 2003). On the other hand, the cold gas penetrates deep inside the halo and streams towards the central galaxy, without being shocked and heated up, allowing for an efficient delivery of a potentially starforming fuel to the central galaxy disk (Brooks et al. 2009). This can even happen when the hot shock is present (e.g., Dekel & Birnboim 2006; Dekel et al. 2009; Agertz et al. 2009). Indeed, theoretical work has shown that cold baryonic accretion, rather than hot accretion, dominates below a certain critical DM halo mass (e.g., Birnboim & Dekel 2003; Keres et al. 2005; Dekel & Birnboim 2006; Ocvirk et al. 2008; Dekel et al. 2009; Keres et al. 2009) and that the cold accretion rate declines steeply at lower redshifts (e.g., Keres et al. 2005; Dekel & Birnboim 2006; Romano-Diaz et al. 2017). Therefore, it is especially important to investigate the details of the cold accretion flows in the early universe.
Several numerical works focused on the cold flows and the baryon cycle of accretion versus outflow in galaxies. Some of them favored low numerical resolution in order to follow a statistical approach to galaxy growth from gas accretion (e.g., Keres et al. 2005; Ocvirk et al. 2008; Dekel et al. 2009), while others used "zoom-in" simulations to address the process on galactic scales at specific redshifts (e.g., Faucher-Giguere et al. 2011; Fumagalli et al. 2011; Kimm et al. 2011; Stewart et al. 2011; Goerdt et al. 2012; Shen et al. 2013; Romano-Diaz et al. 2017).
In this work, we focus on the baryonic accretion onto the central galaxy within its parent DM halo at the Cosmic Dawn, ending at three selected redshifts, \(z_{\rm f}=6\), 4, and 2. Using high-resolution zoom-in simulations (Bi et al. 2022a), we follow the galaxy evolution within similar mass DM haloes, \(\log(M_{\rm vir}/{\rm M}_{\odot})\sim 11.65\pm 0.05\), which are only a factor of 2 below those haloes which are the most efficient in producing stars (e.g., Behroozi et al. 2013).
The chosen mass range ensures that the selected haloes can create favorable conditions for both the hot and cold accretion flows, and are expected to form sufficiently massive galaxies to be properly resolved numerically in our simulations. In addition, these haloes are expected to contain \(L^{*}\) galaxies at their respective \(z_{\rm f}\). Hence, their comparison with present day galaxies is optimal. Our final redshifts, \(z_{\rm f}\), encompass the end of the reionization epoch, \(z\gtrsim 6\), and the subsequent time period of \(\sim 2.5\) Gyr, when the star formation in the universe peaks. Overall, galaxy evolution for \(z\sim 9-2\) is analyzed.
Our main goal is to study accretion of the cosmological gas from the large scale, i.e., from \(\sim 4R_{\rm vir}\), down to the central galaxies, where the gas settles in the circumgalactic space. We define this gas as the circumgalactic medium (CGM; e.g., Tumlinson et al. 2017, for a recent review). This basically corresponds to the gaseous component of the DM haloes (e.g., Sadoun et al. 2019), and we analyze its specific properties, i.e., the kinematics, chemical composition and thermodynamics. Within this spatial range, the gas transforms from the extragalactic medium to the starforming ISM. This gas is expected to supply the new material for star formation from the IGM, satellite galaxies, the hot and cold accretion phases, and from the recycled material injected by the disk stars and active galactic nuclei (AGN) -- as a result of the galactic feedback processes. This region also has the potential of hiding the missing baryons in the universe. Moreover, the CGM can be responsible for the quenching of star formation in galaxies (e.g., Davies et al. 2020).
Major efforts have been aimed at understanding the feedback of winds from massive stars, supernovae (SN), and AGN on galaxy evolution. However, the specific details of this feedback, its implementation, and its fine-tuning against observational and computational models are far from understood (e.g., Sadoun et al. 2016). In particular, the spatial extent of this feedback is still unclear -- is it limited to the inner DM haloes, to the halo virial radius, or does it extend beyond it?
This paper has been organized as follows. Section 2 describes the numerical issues and the methods used. Section 3 presents our results. This is followed by the discussion section and the summary.
## 2 Numerical modeling
### Simulation Suite
We analyze the zoom-in simulation suite presented in Bi et al. (2022a), which used the hybrid \(N\)-body/hydro code gizmo (Hopkins 2017) with the Lagrangian meshless finite mass (MFM) hydro solver. The full details of the simulations are given in Bi et al. (2022a). Here we only provide the highlights.
The simulations adopted the Planck Collaboration et al. (2016) \(\Lambda\)CDM concordance model, i.e. \(\Omega_{\rm m}=0.308\), \(\Omega_{\Lambda}=0.692\), \(\Omega_{\rm b}=0.048\), \(\sigma_{8}=0.82\), and \(n_{\rm s}=0.97\). The Hubble constant is taken as \(h=0.678\) in units of \(100\,{\rm km\,s^{-1}\,Mpc^{-1}}\).
Uni-grid, DM-only initial conditions (IC) were generated at redshift \(z=99\) by means of the music code (Hahn & Abel 2011) within a box of \(50\,h^{-1}\,{\rm Mpc}\), which were evolved until redshift 2. From this parent simulation, haloes of the same mass range were chosen at three different target redshifts, \(z_{\rm f}=6\), 4, and 2, to be re-simulated at much higher resolution. The selected DM haloes have been chosen to reside at low and high overdensities, \(\delta\sim 1\) and \(\delta\sim 3\), respectively (Bi et al. 2022a).
The individual zoom-in ICs, also generated with music, were composed of five nested levels of refinement on top of the base grid, i.e., from \(2^{7}\) to \(2^{12}\). Their DM-only versions were first evolved in order to check for (and to avoid) contamination from massive, lower-resolution particles at the highest resolution-level volume. After this, baryons were included at the highest level of refinement in the reconstruction of their respective ICs. All the details of the models are displayed in Table 1, as well as in Tables 1 and 2 of Bi et al. (2022a). The total number of selected haloes was 6, and in combination with the two different galactic wind feedback schemes (see section 2.3), the final simulation suite was composed of 12 models.
Within this setup, the effective number of particles (DM and baryons) in our simulations is \(2\times 4096^{3}\), leading to a mass resolution per particle of \(3.5\times 10^{4}\,{\rm M}_{\odot}\) for gas and (eventually) stars, and \(2.3\times 10^{5}\,{\rm M}_{\odot}\) for DM. The minimal adaptive gravitational softening (in comoving units) was 74 pc for gas and stars, and 118 pc for DM. This means that at the final redshifts, \(z_{\rm f}=6\), 4, and 2, the softening for stars in physical coordinates is 10.5 pc, 14.7 pc, and 24.6 pc, respectively.
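As a rough cross-check of these resolution numbers, the per-particle masses follow directly from the box size, the cosmology, and the effective particle count (a back-of-the-envelope sketch, not the actual IC generation; the critical-density constant and the gas/DM split of the cell mass are standard assumptions):

```python
import numpy as np

h, Om, Ob = 0.678, 0.308, 0.048      # cosmology quoted in the text
rho_crit = 2.775e11 * h**2           # critical density [M_sun / Mpc^3]
L = 50.0 / h                         # box side [Mpc]
N_eff = 4096**3                      # effective particle number per species

m_cell = Om * rho_crit * L**3 / N_eff   # total matter mass per element
m_gas = (Ob / Om) * m_cell              # ~3.5e4 M_sun, matching the text
m_dm = (1.0 - Ob / Om) * m_cell         # ~1.9e5 M_sun, close to the quoted value
print(f"m_gas = {m_gas:.2e} M_sun, m_dm = {m_dm:.2e} M_sun")
```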
For better angular momentum conservation and in order to resolve the Kelvin-Helmholtz instability which is expected to develop when
the cosmological filaments penetrate the DM halo, the MFM hydro solver was employed instead of the "traditional" SPH solver with an adaptive gravitational softening for the gas. The hybrid multiphase model was invoked for the ISM and star formation (Springel & Hernquist, 2003). In this, starforming particles contain the cold phase that forms stars, and the hot phase that results from the SN II heating. Metal enrichment is included: the metallicity increase in the starforming gas scales with the fraction of gas in the cold phase, the fraction of stars that turns into SN, and the metal yield per SN. A total of 11 metal species were followed in both gas and stars, including H, He, C, N, O, Ne, Mg, Si, S, Ca, and Fe. Metal diffusion is not implemented explicitly, but metals can be transported by winds (see section 2.3). The density threshold for star formation (SF) was set to \(n_{\rm crit}^{\rm SF}=4\,{\rm cm^{-3}}\). Simulations include the redshift-dependent cosmic UV background (Faucher-Giguere et al., 2009).
### Identification and Properties of DM haloes and Galaxies
DM haloes and their properties were identified by the group finder rockstar (Behroozi et al., 2013), with a Friends-of-Friends linking length of \(b=0.28\). The halo virial radius and the virial mass, \(R_{\rm vir}\) and \(M_{\rm vir}\), have been defined by \(R_{200}\) and \(M_{200}\) (e.g., Navarro et al., 1996). \(R_{200}\) is the radius within which the mean interior density is 200 times the critical density of the universe at that time.
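For reference, the \(R_{200}\) definition translates into a few lines of analysis code (a sketch assuming particle positions already centered on the halo and a known critical density at the snapshot redshift):

```python
import numpy as np

def r200_m200(masses, positions, rho_crit):
    """R_200 and M_200: mean enclosed density equals 200 * rho_crit."""
    r = np.linalg.norm(positions, axis=1)
    order = np.argsort(r)
    r_sorted = r[order]
    m_enc = np.cumsum(masses[order])                 # enclosed mass profile
    mean_rho = m_enc / (4.0 / 3.0 * np.pi * r_sorted**3)
    inside = np.nonzero(mean_rho >= 200.0 * rho_crit)[0]
    if inside.size == 0:
        return np.nan, np.nan
    return r_sorted[inside[-1]], m_enc[inside[-1]]
```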
Galaxies have been identified by the group-finding algorithm hop (Eisenstein & Hut, 1998), using the outer boundary threshold of baryonic density of \(10^{-2}\,n_{\rm crit}^{\rm SF}\), which ensured that both the host starforming gas and the lower density non-starforming gas are roughly bound to the galaxy (Romano-Diaz et al., 2014). This assures that the identified galaxies are not forced into a particular geometry. Note that with this definition, all the galaxies appear to be generally smaller than galaxies defined with \(0.1\,R_{\rm vir}\), which is typically used in the literature (e.g., Scannapieco et al., 2012; Marinacci et al., 2014).
### Galactic Wind Models
We have also made use of the two wind models of Bi et al. (2022) -- the Constant Wind (Springel & Hernquist, 2003) and the Variable Wind (Oppenheimer & Dave, 2006) (hereafter CW and VW, respectively). Both wind models implement the concept of "the decoupled particle wind": a wind particle decouples from the ambient gas particles, no longer interacts hydrodynamically, and moves ballistically. The decoupling stage lasts for the shorter of \(10^{6}\) yr or the time until the particle reaches a region (still within the galaxy) where the background gas density is lower by a factor of 10 than the critical density for star formation.
Both the CW and the VW have been assumed isotropic in orientation. For the CW model, the wind velocity is \(v_{\rm W}=484\,{\rm km\,s^{-1}}\), and the mass loading factor is \(\beta_{\rm w}\equiv\dot{M}_{\rm w}/\dot{M}_{\rm SF}=2\) (Sadoun et al., 2016). Here \(\dot{M}_{\rm w}\) is the mass loss by the wind, and \(\dot{M}_{\rm SF}\) is the (mass) SFR. For the VW model, the wind velocity scales with the physical escape velocity of the host halo. In this case, the mass loading factor has been calculated assuming the total wind energy, combining the energy-driven and momentum-driven wind regimes. In general, judged by the total kinetic energy of all the wind particles, the VW constitutes a stronger feedback, about 8 times stronger than the CW.
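Schematically, the wind kick can be summarized as below (a sketch, not the gizmo implementation; the VW scaling factor `kappa` and the deferred VW loading factor are placeholders for the energy/momentum-driven prescriptions of Oppenheimer & Dave 2006):

```python
import numpy as np

rng = np.random.default_rng(0)

def isotropic_direction():
    """Random unit vector: isotropic wind orientation for both models."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def wind_parameters(model, v_esc=None, kappa=1.0):
    """Return (wind speed [km/s], mass loading beta_w) for CW or VW."""
    if model == "CW":
        return 484.0, 2.0            # constant-wind values from the text
    if model == "VW":
        # VW: speed scales with the halo escape velocity; beta_w follows
        # from the assumed total wind energy (placeholder here).
        return kappa * v_esc, None
    raise ValueError(f"unknown wind model: {model}")
```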
### Cosmic Web Decomposition
The cosmic web can be divided into its various components, i.e., voids, sheets/walls, filaments, and clusters/knots (e.g., De Lapparent et al., 1986; Colless et al., 2003; Tegmark et al., 2004; Mehmet et al., 2014). A number of different methods have been attempted in the literature, in both numerical simulations and observational data, to separate the web into components (Aragon-Calvo et al., 2007a; Hahn et al., 2007a; Forero-Romero et al., 2009; Bond et al., 2010; Sousbie, 2011; Hoffman et al., 2012; Cautun et al., 2013). We have employed a hybrid scheme, involving the d-web, which is based on the orbital stability using the local tidal tensor usually calculated for the DM distribution (Hahn et al., 2007b), as well as two additional methods discussed below. Because we are mainly interested in the evolution of the baryonic component within DM haloes, this method has been applied to scales smaller than a few\(\times R_{\rm vir}\) and to the baryonic distribution, rather than to the DM one. In this way, we have been capable of following the baryonic streamers, i.e., cold streams, inside the virial radius (e.g., Dekel et al., 2009).
We briefly describe the d-web method. A particle \(i\) moving in a peculiar gravitational potential, \(\phi_{i}\), can be described by the following equation of motion in comoving coordinates,
\[\ddot{x}_{\rm i}=-\nabla\phi_{\rm i}\,, \tag{1}\]
where the dots represent derivatives with respect to time. After linearizing the equation of motion, the system can be re-written as
\[\ddot{x}_{\rm i}=-T_{\rm ij}(\bar{x})\,(x_{\rm j}-\bar{x}_{\rm j})\,, \tag{2}\]
where \(\bar{x}\) is the point around which the potential is expanded and \(T_{\rm ij}\) represents (in our case) the tidal tensor of the baryonic gravitational potential within the DM haloes, i.e., the Hessian1 matrix of \(\phi\). By analogy with the Zel'dovich approximation (Zel'dovich, 1970), we can classify the web elements on the basis of the eigenvalues \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) of the tidal tensor,
Footnote 1: The Hessian matrix or Hessian is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field.
* voids: \(\lambda_{1}<\lambda_{2}<\lambda_{3}<0\)
* sheets/walls: \(\lambda_{1}<\lambda_{2}<0<\lambda_{3}\)
* filaments: \(\lambda_{1}<0<\lambda_{2}<\lambda_{3}\)
* knots: \(0<\lambda_{1}<\lambda_{2}<\lambda_{3}\)
However, the particular details of this classification method, i.e., the extension and thickness of the identified structures, depend on the length scale of the potential field's grid resolution (a minimal numerical sketch of the classification is given below).
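A minimal sketch of this eigenvalue classification, assuming the baryonic potential \(\phi\) is given on a uniform grid with spacing `dx` (the zero threshold can be replaced by a small \(\lambda_{\rm th}\)):

```python
import numpy as np

def classify_web(phi, dx):
    """Classify each cell by the number of positive eigenvalues of the
    tidal tensor T_ij = d^2(phi)/dx_i dx_j:
    0 -> void, 1 -> sheet/wall, 2 -> filament, 3 -> knot."""
    grads = np.gradient(phi, dx)              # first derivatives [x, y, z]
    T = np.empty(phi.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i], dx)    # derivatives of d(phi)/dx_i
        for j in range(3):
            T[..., i, j] = second[j]
    lam = np.linalg.eigvalsh(T)               # eigenvalues, ascending
    return (lam > 0.0).sum(axis=-1)           # 0..3 per cell
```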
As our goal is to extend the filament identification to the baryonic component inside the virial radius, we complement the d-web method outside the virial radius with an entropy-based method inside that radius, as described below.
We define the entropy as \(K=T/\rho^{2/3}\). The virial entropy of the halo, \(K_{\rm vir}\), is defined by the virial temperature and the average gas density at the virial radius (e.g., Dekel et al., 2009). In this way, the penetrating cold (filamentary) baryon streams on this scale show a lower entropy than the virial entropy of the halo, \(K_{\rm vir}\).
We use the d-web outside the virial radius, where the potential has been smoothed by a \(10h^{-1}\) kpc Gaussian filter, in order to define the entropy cutoff there. The entropy cutoff has been defined as the maximal entropy which reproduces, outside the virial radius, filaments similar to those obtained from the d-web method. This cutoff is used inside the virial radius as well. We have tested this method, and it shows a smooth transition across the virial radius. Furthermore, we extended the baryonic d-web determination inside \(R_{\rm vir}\), down to \(50h^{-1}\)kpc, in tandem with the entropy method, for comparison. In addition, inside the virial radius, we apply the baryon kinematics
method to distinguish the accretion flow from the outflow, in the region where both coexist.
We have tested the above hybrid method and found that it successfully extrapolates the d-web/entropy elements down to \(\sim 10h^{-1}\) kpc, where the boundary of the galaxy lies.
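Put together, the selection inside \(R_{\rm vir}\) reduces to an entropy cut plus a kinematic cut (a sketch assuming per-particle arrays and positions/velocities relative to the halo center; `K_cut` is the cutoff calibrated against the d-web outside \(R_{\rm vir}\)):

```python
import numpy as np

def streamer_inflow_mask(T, rho, pos, vel, K_cut):
    """Low-entropy (K = T / rho^(2/3)) gas that is also inflowing."""
    K = T / rho ** (2.0 / 3.0)
    r = np.linalg.norm(pos, axis=1)
    v_r = np.einsum("ij,ij->i", vel, pos) / r   # radial velocity
    return (K < K_cut) & (v_r < 0.0)            # boolean particle mask
```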
## 3 Results
We present our results based on our full simulation suite, starting with the detection of cold baryonic streamers and diffuse accretion. Furthermore, we emphasize differences between the two galactic wind feedback models used in our simulations under identical initial conditions, the CW and VW galactic outflow models, and the environmental effects.
### Identifying the filamentary streamers
By using our hybrid d-web/entropy method (section 2.4), we have been able to separate the large scale flows (outside \(R_{\rm vir}\)) down to the central galaxy.
Figure 1 displays the mapped individual streamers within a box of side \(600h^{-1}\) kpc, as computed with the d-web algorithm. Only CW feedback models are shown because, at these scales, there is no observable difference between the two feedback schemes. All models are displayed at their final redshifts.
We follow the filamentary streamers and diffuse accretion flows from \(\sim 3R_{\rm vir}\) down to the inner \(\sim 10\,h^{-1}\) kpc from the halo centers. Our general implementation allows us to overcome the difficulty of following the accretion streams deep into the DM haloes and connecting them to galaxies. Furthermore, the hybrid d-web/entropy method, supplemented by the kinematics inside \(R_{\rm vir}\), allows us to separate the inflow from the outflow. The streamers have been concatenated to the innermost flow using the baryon kinematics (see the Appendix). We discuss the details of gas motions in the innermost regions and their penetration into the central galaxies in section 3.5.
We observe that, overall, the DM haloes are typically associated with two or three main cosmological filaments at \(z_{\rm f}\), as shown in Figure 1.
### Radial properties of filamentary streamers
Figure 2 shows the whole-sky Hierarchical Equal Area isoLatitude Pixelisation (HEALPix) projection maps of spherical shells of \(10h^{-1}\)kpc thickness at \(0.1R_{\rm vir}\), \(0.5R_{\rm vir}\) and \(R_{\rm vir}\). It reveals the streamers extending down from \(R_{\rm vir}\) to the haloes' central regions. Also shown are the walls (seen as the threads connecting the streamers) and the diffuse accretion flow. The color represents the radial influx of gas mass per solid angle. The sheets existing between the streamers at larger radii, especially detectable at \(z_{\rm f}=6\) and \(z_{\rm f}=4\), dissolve on the galactic scale, i.e., \(0.1R_{\rm vir}\).
The color palette in Figure 2 reveals the difference in the accretion rates between the filaments and walls. The former channel up to \(100\,M_{\odot}\,{\rm yr^{-1}\,rad^{-2}}\), while the latter carry less than \(40\,M_{\odot}\,{\rm yr^{-1}\,rad^{-2}}\). Furthermore, notice how the smooth accretion (not associated with filaments or walls) comes from all directions, although at much lower rates (a few \(M_{\odot}\,{\rm yr^{-1}\,rad^{-2}}\)).
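Maps of this kind can be built by binning the radial mass flux of shell particles into HEALPix pixels (a sketch using the healpy package; units are schematic and `nside` sets the angular resolution):

```python
import numpy as np
import healpy as hp

def radial_flux_map(pos, vel, mass, r0, dr, nside=64):
    """Radial mass flux per solid angle in a shell [r0 - dr/2, r0 + dr/2]."""
    r = np.linalg.norm(pos, axis=1)
    s = (r > r0 - dr / 2) & (r < r0 + dr / 2)
    v_r = np.einsum("ij,ij->i", vel[s], pos[s]) / r[s]
    theta = np.arccos(pos[s, 2] / r[s])
    phi = np.arctan2(pos[s, 1], pos[s, 0])
    pix = hp.ang2pix(nside, theta, phi)
    fluxmap = np.zeros(hp.nside2npix(nside))
    np.add.at(fluxmap, pix, mass[s] * v_r / dr)   # per-particle mass flux
    return fluxmap / hp.nside2pixarea(nside)      # per steradian
```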
Figure 3 reveals that the streamers differ in their density distribution at their final redshifts \(z_{\rm f}\). At lower redshifts, the gas density profiles are lower than those at higher \(z\), with the decline becoming more prominent between \(z_{\rm f}=4\) and 2, as this represents the longest time interval between different \(z_{\rm f}\). At \(z_{\rm f}=6\), their density ranges in the interval of \(\rho\sim 10^{-23}-10^{-25}\) g cm\({}^{-3}\) over the distance of \(400h^{-1}\)kpc. At \(z_{\rm f}=4\), this density is about a factor of 3 lower. At \(z_{\rm f}=2\), the density is even lower and ranges between \(\sim 10^{-25}-10^{-27}\) g cm\({}^{-3}\). These results are indicative of the overall decrease in the gas mass flux in the streamers with decreasing redshift. Notice, however, that no trend is observed between the high and low overdensity models in this Figure.
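Profiles of this type (medians with 20-80 percentile bands, as in Figure 3) can be computed per radial shell, e.g. (a sketch assuming \(1\,h^{-1}\) kpc shells as in the figure caption):

```python
import numpy as np

def shell_profile(r, q, edges):
    """Median and 20/80 percentiles of quantity q in radial shells."""
    med = np.full(len(edges) - 1, np.nan)
    p20 = np.full(len(edges) - 1, np.nan)
    p80 = np.full(len(edges) - 1, np.nan)
    for k, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sel = (r >= lo) & (r < hi)
        if sel.any():
            med[k] = np.median(q[sel])
            p20[k], p80[k] = np.percentile(q[sel], [20, 80])
    return med, p20, p80
```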
Figure 4 displays the radial distributions of temperature in the filamentary accreting gas (the diffuse accretion flow properties are discussed in the next section). We distinguish a different behavior inside \(R_{\rm vir}\)(\(\sim 200\,h^{-1}\)kpc) and outside it, with a transition region between \(0.7R_{\rm vir}-1.3R_{\rm vir}\). Some differences can also be seen between the CW and VW models. For \(z_{\rm f}=6\) models, the temperature is rising relatively mildly inside the virial radius. At \(z_{\rm f}=4\), this rise is much
\begin{table}
\begin{tabular}{c c c c c} \hline \(z_{\rm f}\) & Model Name & \(\log(M_{\rm vir}/{\rm M}_{\odot})\) & \(R_{\rm vir}\) [kpc] & \(\delta\) \\ \hline \hline
6 & Z6H & 11.7 & \(184h^{-1}\) & 3.04 \\ \hline
6 & Z6L & 11.7 & \(184h^{-1}\) & 1.60 \\ \hline \hline
4 & Z4H & 11.7 & \(184h^{-1}\) & 3.00 \\ \hline
4 & Z4L & 11.8 & \(185h^{-1}\) & 1.33 \\ \hline \hline
2 & Z2H & 11.8 & \(206h^{-1}\) & 2.80 \\ \hline
2 & Z2L & 11.8 & \(195h^{-1}\) & 1.47 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Table of the halo properties in the DM-only simulations. All values are given at the final redshifts \(z_{\rm f}=6\), 4, and 2. The columns correspond to (from left to right): the final redshift \(z_{\rm f}\); the model name (see definition in section 2.2); the virial mass of the DM halo, \(M_{\rm vir}\), at \(z_{\rm f}\); the halo virial radius (in comoving coordinates), \(R_{\rm vir}\); and \(\delta\), the local overdensity.
Figure 1: Structure classification of the DM-haloes environment using the d-web method applied to baryons in boxes of side \(1{,}200\,h^{-1}\) kpc (\(\sim\pm 3R_{\rm vir}\)). The color coding represents knots in blue, filaments (i.e., streamers) in red, walls in green, and voids in black. From left to right columns, haloes at their final redshifts \(z_{\rm f}=6\), 4, and 2 are shown. All the models correspond to the CW feedback, as they are similar to the VW models at these scales. The upper panels display haloes in the high overdensity environments and the lower panels show haloes in low overdensity environments (see Table 1). The arrowhead at the Z2L snapshot points to the filament #1, which is analyzed in detail in the subsequent sections.
steeper. Streamers at \(z_{\rm f}=2\) are hotter everywhere, and still display an additional rise towards the center. In the same way, the central temperature increases towards lower \(z_{\rm f}\).
We also observe a larger amount of hot gas inside \(R_{\rm vir}\) at lower redshifts and for lower overdensity models. At higher redshifts, the gas accumulates mostly at \(T\sim 10^{4}\,\)K. The virial temperature of high-redshift haloes is higher than for the low redshift ones, which is expected as these haloes are more centrally concentrated. Moreover, the flat part of the \(T\)-curves broadens to \(\sim{\rm few}\times 10^{5}\,\)K at lower redshifts.
Figure 5 exhibits a diverse radial profile behavior of the gas metallicity. While the filamentary streamers fit in the range of \((Z/Z_{\odot})\sim 10^{-4}-1\), the radial gradients sometimes exhibit opposing trends, i.e., the metallicity can increase or decrease with radius. This can be understood when the metal production occurs in small haloes embedded within the streamers at different radii. Note that the
Figure 3: Volume density radial profiles of gas representing the sum of all the streamer spines at \(z_{\rm f}=6\), \(4\), and \(2\), from left to right columns, respectively. The streamers have been divided into shells of \(1\,h^{-1}\,\)kpc thickness. The corresponding DM haloes can be identified from Figure 1. The VW feedback models are given by the red lines and the CW models by the blue lines, which represent the medians. The color shadows show the 20-80 percentile scale. The upper panels display haloes in the high overdensity environment and the lower panels show haloes in the low overdensity environment (see Table 1 for further details).
Figure 2: Inflowing streamers, diffuse accretion, and walls (seen as threads connecting the streamers) at three representative radii in a thick shell of \(10h^{-1}\,\)kpc, at \(0.1R_{\rm vir}\), \(0.5R_{\rm vir}\) and \(1R_{\rm vir}\) for haloes at \(z_{\rm f}=6\), \(z_{\rm f}=4\) and \(z_{\rm f}=2\). Shown as whole-sky HEALPix projection maps (see the text). The color represents radial mass influx of gas per solid angle. Thin walls are seen between the streamers at larger radii until dissolved on the galaxy scale of \(\sim 0.1R_{\rm vir}\), especially at \(z_{\rm f}=6\) and \(z_{\rm f}=4\).
Figure 4: The hex-binned gas temperature radial profiles of the filamentary streamers within \(2R_{\rm vir}\) at \(z_{\rm f}=6\), 4 and 2, with the color palette representing the gas mass in individual pixels. Note that the virial temperatures of these DM haloes are \(T\sim 1.3\times 10^{6}(1+z)\) K.
Figure 5: As in Figure 3, but for the mass weighted average gas metallicity radial profiles of filamentary streamers.
Figure 6: As in Figure 3, but for the filamentary gas inflow radial velocity profiles.
Figure 8: Whole-sky HEALPix projection maps (see text) of the inflowing streamers and diffuse accretion in 3\(h^{-1}\)kpc thick shells at 5, 10, 20, 30, 40 and 50\(h^{-1}\)kpc for haloes at \(z_{\rm f}=6\) (a) and \(z_{\rm f}=2\) (b) with both CW (left column) and VW (right column). The color bar represents the gas surface density.
Figure 7: Radial profiles of the mass accretion rates along the filamentary streamers (solid lines) and of the outflows along the same structures (dashed lines).
surrounding massive haloes have been removed, but the small ones and substructures persist in these Figures. Hence, the metal contamination arises from the outflows and the SN feedback. We also observe that the VW models appear to be less metal rich compared to the CW models, especially in the low overdensity models. This is the result of the VW models being more gas rich due to a reduced star formation everywhere (Bi et al., 2022).
Figure 6 provides the radial profiles of the inflow velocities along the filamentary streamers. The obvious trend is that these velocities increase at smaller radii, i.e., become more negative inside \(R_{\rm vir}\), but decline overall at lower redshift; around \(50\,h^{-1}\) kpc from the center they decrease sharply, which is true for all \(z_{\rm f}\). This is due to the interplay of this accreting gas with the gas content of the central galaxy. Inside \(\sim 100\,h^{-1}\) kpc, the difference between the CW and VW becomes large, which is probably related to stronger shocks in the VW models due to their larger gas fractions.
The innermost inflow velocity decreases with decreasing final redshift, \(z_{\rm f}\). This is most noticeable for the CW models rather than their VW counterparts, probably because of the different central gas densities between the two types of winds. At \(z_{\rm f}=6\), the inflow velocity in CW models ranges within \(v_{\rm r}\sim 100-300\) km s\({}^{-1}\). At \(z_{\rm f}=4\), this range becomes smaller, \(v_{\rm r}\sim 50-200\) km s\({}^{-1}\), while at \(z_{\rm f}=2\) it decreases to \(v_{\rm r}\sim 70-150\) km s\({}^{-1}\).
The mass accretion rate of low-temperature gas onto the halo provides an important reservoir of gas that can join the galaxy and contribute to its SFR. Figure 7 shows the radial profiles of the accretion rate from baryonic filaments, \(\dot{M}\), for all models at \(z_{\rm f}=6\), 4, and 2. The individual streamer contributions have been added up. The \(\dot{M}\) appears flat.
The observed, well-defined variations within a factor of 2 at smaller radii for \(z_{\rm f}=6\) galaxies and for high overdensity models at \(z_{\rm f}=4\) are associated with the presence of substructures. At \(z_{\rm f}=2\), the accretion rate in filaments is either flat or decreases at small radii for the high density haloes, with a slight increase for the low-density ones.
Figure 7 also shows that some of the material in the filamentary accretion misses the central galaxy and is subsequently diverted, becoming a filamentary outflow. This is detected in the analyzed velocity field (not shown here), and is typical for all the models. This outflow does not escape from the halo, reaching \(\lesssim 150h^{-1}\)kpc. The gas in these outflows increases its entropy and is converted into a diffuse virialized gas. In addition, this Figure includes the contribution from galactic outflows, i.e., CWs and VWs.
As expected, the accretion rates along the streamers decrease with decreasing \(z_{\rm f}\). For \(z_{\rm f}=6\) it lies within \(\dot{M}\sim 50-100\) M\({}_{\odot}\) yr\({}^{-1}\). For \(z_{\rm f}=4\), it ranges within \(\sim 10-100\) M\({}_{\odot}\) yr\({}^{-1}\), and for \(z_{\rm f}=2\), it ranges within \(\sim 5-10\) M\({}_{\odot}\) yr\({}^{-1}\). The spikes visible in this Figure are the result of substructure and should be ignored.
In order to visualize the accretion flow on scales smaller than \(50h^{-1}\)kpc, Figure 8 shows whole-sky HEALPix maps on scales of 5, 10, 20, 30, 40, and \(50h^{-1}\)kpc, using shells of \(3h^{-1}\)kpc thickness, colored by the gas surface density. In this Figure, we can detect the filamentary and diffuse accretion flows, but the walls observed in Figure 2 on larger scales have been washed out here. The innermost panels show the gas distribution within the galaxies (aligned along the equatorial planes) and their immediate surroundings. The presence and strength of the streamers decrease at smaller radii, down to \(R\sim 10h^{-1}\)kpc (typical radii of our galaxies), and also as a function of redshift. This behavior is independent of the wind mechanism employed. In this region, some streamers connect directly with the galaxy, but others might miss it, becoming outflows. Such interplay can be noticed in the intricate density pattern at the \(10h^{-1}\)kpc shell for all models, which is different from and almost uncorrelated with their external shells.
### Radial properties of diffuse accretion flow
Next, we turn to the radial properties of the diffuse accretion flow in the vicinity and inside the DM haloes, \(\lesssim 2R_{\rm vir}\). We follow their density, temperature, radial velocity, metallicity and diffuse accretion rate.
Figure 9 shows the density profile of the diffuse accreting gas. Its radial dependence is similar to that of the filamentary gas, but shifted down by a factor of \(\sim 5-20\). These profiles appear to be flat between \((1-2)R_{\rm vir}\) and scale as \(\rho\sim R^{-1}\) inside the virial radius. Also, the profiles become lower with decreasing \(z_{\rm f}\). We observe no differences in the density profiles with respect to either environment or wind model.
The temperature in the diffuse accretion gas (Figure 10) differs from that in the filaments (Figure 4). The lion's share of the gas mass in the filaments has a temperature of \(\sim 10^{4}\) K, except for \(z_{\rm f}=2\). The diffuse gas, however, has such a low temperature only outside the virial radius. At smaller radii, the majority of the gas lies at \(T\sim\) few \(\times 10^{5}-10^{7}\) K. The cold diffuse flow ceases to exist inside \(R_{\rm vir}\). At \(z_{\rm f}=2\), the amount of hot gas outside \(R_{\rm vir}\) is also increased. Some differences among the models at \(z_{\rm f}\) regarding their local overdensities can be noticed; in particular, the amount of diffuse gas around and beyond \(R_{\rm vir}\) is much larger for the high-density environment haloes (as expected) than for their low-density counterparts.
Figure 11 exhibits the mass-weighted metallicity radial profiles of the diffuse accreting gas. The profiles are relatively flat with significant variations related to the embedded substructures and small neighboring haloes. Simulations with the VW feedback exhibit a lower metallicity than those with CW feedback, especially outside the virial radius. This is mainly due to the overall lower SFR in these models when compared to the CW ones. Furthermore, this is quite similar when compared with the filamentary accretion flows (Figure 5). Both distributions display a substantial fraction of low metallicity gas, \(Z/Z_{\odot}\lesssim 10^{-3}\) and even pristine gas outside \(R_{\rm vir}\).
The radial inflow velocity of the diffuse accreting gas differs from the gas flow in the filaments (Figure 12). In the filaments, the gas accelerates towards the central galaxies rather strongly. In contrast, the diffuse accretion velocities are flat, something expected from its thermodynamic state and understood in conjunction with the gas temperature profiles: the gas accelerates towards the halo from the outside; inside, it starts to virialize and heat up, hence slowing down its radial inflow.
Next, we calculate the mass accretion rates, \(\dot{M}\), of the diffuse gas (Figure 13), and separate it from the outflows triggered by the feedback from the central galaxies. To distinguish the inflow and outflow from the virialized gas inside the DM haloes, we have counted only the inflow and outflow velocities which exceed the local gas velocity dispersions. The resulting mass accretion rates for the diffuse gas decrease as the gas approaches the central galaxies. They also decline with decreasing \(z_{\rm f}\), from \(\dot{M}\sim 10-40\) \(M_{\odot}\) yr\({}^{-1}\) at \(z_{\rm f}=6\) to \(\sim 5-10\) \(M_{\odot}\) yr\({}^{-1}\) at \(z_{\rm f}=2\). We do not see differences between the CW and VW models, except for the \(z_{\rm f}=2\) low overdensity Z2L model.
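A sketch of this shell-by-shell estimate with the local-dispersion cut (units schematic; `edges` are the shell boundaries, and the dispersion is estimated per shell):

```python
import numpy as np

def mdot_profiles(r, v_r, mass, edges):
    """Inflow/outflow rates per shell, counting only particles whose
    |v_r| exceeds the shell's radial velocity dispersion."""
    mdot_in, mdot_out = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        s = (r >= lo) & (r < hi)
        sigma = v_r[s].std() if s.any() else 0.0
        dr = hi - lo
        inflow = s & (v_r < -sigma)
        outflow = s & (v_r > sigma)
        mdot_in.append(-(mass[inflow] * v_r[inflow]).sum() / dr)
        mdot_out.append((mass[outflow] * v_r[outflow]).sum() / dr)
    return np.array(mdot_in), np.array(mdot_out)
```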
The outflows are clearly focused on the central galaxies and become more extended, i.e., reaching larger radii, at lower \(z_{\rm f}\). At \(z_{\rm f}=6\), the outflows reach \(\sim R_{\rm vir}\), at \(z_{\rm f}=4\), the VW outflow is reaching \(\sim 1.5R_{\rm vir}\), while at \(z_{\rm f}=2\), in the low-density environment haloes the CW reaches \(\sim 1.3R_{\rm vir}\) while VW extends to \(\sim 1.8R_{\rm vir}\). In general,
Figure 11: Mass-weighted gas metallicity radial profiles of diffuse gas. The VW models are given by the red lines, and CW models by the blue lines.
Figure 10: Hex-binned radial temperature profiles of the diffuse gas inside \(2R_{\rm vir}\), with the color palette representing the gas mass in individual pixels, at \(z_{\rm f}=6\), 4 and 2. Note that the virial temperatures of these DM haloes are \(T\sim 1.3\times 10^{6}(1+z)\) K.
Figure 9: Radial density profiles of diffuse gas. The VW (CW) models are given by the red (blue) lines. The color shadows represent the 20-80 percentile distributions.
the outflows seen at larger radii belong to small haloes located well beyond \(R_{\rm vir}\).
The outflow radial profiles decrease with radius. At the maximum, the outflows are \(\sim 5-10\,M_{\odot}\,{\rm yr}^{-1}\) at all \(z_{\rm f}\), but decline below \(0.1\,M_{\odot}\,{\rm yr}^{-1}\) at the maximal radii mentioned above. In all cases, we find that the outflow extends at least to the virial radius. Based on this result, we refer to this region, which is affected by both outflows and inflows and therefore has complicated kinematics and thermodynamic characteristics, as the CGM.
We find that the net accretion rates, filamentary plus diffuse, remain roughly constant with radius.
### Dissecting the streamers: emerging spaghetti-type flow
While analyzing the radial profiles of various thermodynamic properties of streamers at specific times is necessary for obtaining the global picture, following the time-dependent processes is required as well (e.g., Arzoumanian et al., 2011). As a next step, we investigate the radial motions within the streamers in the range of \(\sim 50-350\,h^{-1}\) kpc from the central galaxy, and determine their structure at different radii by dissecting them into slices of \(10\,h^{-1}\) kpc radial thickness. This is complemented by the next section (section 3.5), where we focus on the inner \(50\,h^{-1}\) kpc, where the inflow gradually dissolves, changing its kinematics before penetrating the central galaxy.
Analysis of the streamers radial profiles and their cross section structures at different radii is important for our understanding of their evolution within the DM haloes. It also reveals the fate of these streamers at radii comparable to galaxy sizes. As a prototype, we use the streamer #1 of the Z2L halo identified in Figure 1, and analyze it down to \(\sim 10\,h^{-1}\) kpc. For this purpose, we select the gas particles outside \(R_{\rm vir}\) at \(z\sim 2.4\) and trace them inwards until \(z_{\rm f}=2\). This streamer falls into the central galaxy being inclined with respect to the disk plane by an angle of \(\sim 30^{\circ}\).
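Tracing the selected gas between snapshots reduces to matching particle IDs (a sketch; snapshot I/O depends on the simulation format and is omitted):

```python
import numpy as np

def trace_selection(selected_ids, snapshot_ids):
    """Indices into a later snapshot of the particles selected earlier."""
    return np.nonzero(np.isin(snapshot_ids, selected_ids))[0]

# Usage: record the IDs of gas in streamer #1 outside R_vir at z ~ 2.4,
# then recover the same particles in every snapshot down to z_f = 2.
```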
Properties such as the volume density, temperature, metallicity and the radial velocity of this gas show an interesting structure, as seen in Figure 14 (upper plot). Most prominent is the inner, central region of the streamer, i.e., the radial spine, at almost all radii, where one can define the _core_ of the streamer for each of these variables. We also observe that the above physical properties correlate along the streamer. For example, the volume density and the temperature show a tight correlation. However, we do not see a gradient of metallicity between the core and the envelope of the streamer.
To verify this, we have dissected this streamer perpendicularly to its spine. Figure 15 provides a more revealing view to study the
Figure 12: Radial velocity profiles of diffuse gas. The VW (CW) models are given by the red (blue) lines. The color shadows show the 20-80 percentile scale.
Figure 13: Mass accretion rate profiles of diffuse gas (solid lines) overplotted onto the outflows (dashed lines) triggered by feedback from the central galaxies. The VW (CW) models are given by the solid red (blue) lines. For both inflow and outflow rates, we have used the radial velocities exceeding the local dispersion velocities in the gas.
gradients between the core and the envelope of the streamer, and their radial dependence. For the volume density, the cross section at the innermost radius \(R\sim 50\,h^{-1}\,\mathrm{kpc}\) (in the midst of the inner CGM) displays a high density core of \(\rho\sim 10^{-25.5}\,\mathrm{g\,cm^{-3}}\), while somewhat less dense cores appear at \(150\) (in the midst of the outer CGM), \(200\) (the virial radius), \(250\), and \(350\,h^{-1}\,\mathrm{kpc}\). The envelope density of the streamer drops by more than one order of magnitude compared to that at the inner boundary of \(R\sim 50\,h^{-1}\,\mathrm{kpc}\). The core density has a weaker dependence on radius than the envelope.
Small scale structures and substructures are typically embedded in the streamer. For example, the slice situated at \(R\sim 350\,h^{-1}\,\mathrm{kpc}\) of Figure 15 (i.e., the lowest frame) displays a structure embedded in the spine of the streamer. This trapping of the structure in the core of the streamer partly explains its very high density. The temperature minimum typically lies within the streamer's core; this is also the case for this snapshot due to the cold gas contained within the structure.
Most of the slices exhibit almost pristine metallicity, \((Z/Z_{\odot})\sim 10^{-4}\). But high-metallicity gas particles of \((Z/Z_{\odot})\sim 10^{-3}-10^{-1}\) are found scattered across the cross-section without a strong correlation with the density or temperature. This is in agreement with the metallicity radial profile in Figure 14. At the same time, the region surrounding the structure always has a high metallicity, and this is confirmed by the slice at \(R\sim 350\,h^{-1}\,\mathrm{kpc}\).
We have measured the radial infall velocity at the innermost radius of \(R=50\,h^{-1}\,\mathrm{kpc}\), where it reaches \(v_{\rm r}\sim 200-250\,\mathrm{km\,s^{-1}}\) for the core gas. But the infall velocity of the envelope drops to as low as \(v_{\rm r}\sim 25-50\,\mathrm{km\,s^{-1}}\). At larger radii, the infall velocity gradually decreases from the core to the envelope, from \(v_{\rm r}\sim 200\,\mathrm{km\,s^{-1}}\) to below \(v_{\rm r}\sim 50\,\mathrm{km\,s^{-1}}\). Such velocity gradients will induce shear between the core and the envelope, and trigger turbulence. Furthermore, the high density zones, i.e., the cores, tend to separate into different infall velocity regions with a spaghetti-like morphology. Both the developed turbulence and the spaghetti-type flow are characteristics of the dissolution of the filamentary flow and its gradual virialization.
The lower plot of Figure 14 displays the \(0.1R_{\rm vir}\) thick slice of the same halo, Z2L, with the superposed velocity field. The kinematics of the filamentary streamers is delineated here, supplemented by increasingly virialized gas around the central region. The major streamers can be seen partly hitting the central galaxy region and partly missing it, converting the inflow into outflow. The emergence of the central turbulent region can be observed as well. Clearly, the gas within the halo is not in hydrostatic or thermal equilibrium.
### Joining the central galaxy: dissolution of the accretion streamers
We proceed to address the evolution of the streamers in the innermost halo region of \(R\sim 10-50\,h^{-1}\,\mathrm{kpc}\), where it is more difficult to separate the streamers from the smooth accretion and where the kinematics becomes more complex -- thus characterizing the virialization process of the streamers. This region is the most interesting one because it borders with the central galaxy -- we call it the _inner_ CGM in contrast to the overall CGM which extends to the \(R_{\rm vir}\).
As stated earlier, in order to analyze the streamer's kinematic and thermodynamic properties, we identify their particles at larger radii and follow them individually to the center. For the sake of not overestimating the inflow rate within this region, we only consider particles belonging to the streamer or to the diffuse flow if their radial velocity exceeds the local dispersion velocity in the gas.
Figure 16 displays the fate of the streamer #1 within the halo Z2L discussed earlier (see also Figure 1), using two scales: a small scale of \(30\,h^{-1}\,\mathrm{kpc}\) (top panels) and a large scale representing the virial radius, \(\sim 200\,h^{-1}\,\mathrm{kpc}\) (bottom panels). On both scales the streamer and the flow streamlines are presented in two perpendicular projections with respect to the central galactic disk, face-on (left column) and edge-on (right column). We noted before that this streamer approaches the galaxy at an angle of \(\sim 30^{\circ}\) to the galactic equatorial plane. In fact, in all of our models, the streamers appear inclined to the galaxy planes -- we do not encounter orthogonal filaments or those approaching the galaxy in its midplane, as has been shown in previous simulations (e.g., Heller et al., 2007; Shlosman, 2013).
In the lower panel of Figure 16, the gaseous streamer can be seen maintaining the temperature around \(T\sim 10^{4}-10^{5}\,\mathrm{K}\) when falling through the virial radius, and it is heated up dramatically to \(T\sim 10^{6}\,\mathrm{K}\) at around \(R\sim 50\,h^{-1}\,\mathrm{kpc}\). The streamer has a thickness, i.e., diameter, of \(\sim 120\,h^{-1}\,\mathrm{kpc}\). This is much wider than the galaxy scale of \(\sim 20\,h^{-1}\,\mathrm{kpc}\). Most of the gas falls towards the center being aligned with the direction of the streamer until \(R\sim 30\,h^{-1}\,\mathrm{kpc}\)
Figure 14: _Upper plot_: the gas particles in the filamentary streamer #1 of the halo Z2L at \(z=2.1\) (see Figure 1), colored with density (top left), temperature (top right), metallicity (bottom left) and the infall velocity (bottom right). _Lower plot_: the CGM temperature in a slice with \(0.1R_{\rm vir}\) thickness, normalized by \(T_{\rm vir}\), from \(0.1R_{\rm vir}\) (the galaxy’s radius, inner circle) to \(R_{\rm vir}\) for the halo Z2L. The velocity field is traced by the blue streamlines.
where the gas streamlines become distorted and more turbulent (see also upper panels). Some of the infalling gas which missed the galaxy is converted into an outflow here, which extends to the _splashback_ radius, i.e., the first apocentric passage, which in most cases lies outside \(R_{\rm vir}\). Although originally defined with respect to the DM (e.g., Diemer & Kravtsov, 2014; Adhikari et al., 2014), it also has a baryonic counterpart, formed by the caustic generated by the piling up of the accreted material near apocenters.
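The first apocentric passage of a traced particle can be located directly from its radius history (a sketch; it assumes the time series starts during infall, so the first turnaround is the post-pericenter, splashback apocenter):

```python
import numpy as np

def first_apocenter(r_of_t):
    """Index of the first local maximum of the radius time series r(t)."""
    dr = np.diff(r_of_t)
    turn = np.nonzero((dr[:-1] > 0) & (dr[1:] <= 0))[0]
    return int(turn[0]) + 1 if turn.size else None
```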
Analysis of turbulent motion is inherently difficult. We follow the method outlined in Choi et al. (2013), which relies on the vorticity, \(\vec{w}=\nabla\times\vec{v}\), and its cross product with the velocity field, called the inertial vortex force. We have pixelized the filament image and plotted the vorticity field in Figure 17 at the galaxy (upper frames)
Figure 16: Face-on (left) and edge-on (right) maps of the gas stream in filament #1 of the halo Z2L at \(z_{\rm f}=2\) (see Figure 1), colored with the gas temperature. The upper panels represent a region of the \(60\,h^{-1}\) kpc side, and the lower panels of \(400\,h^{-1}\) kpc side. The dashed circles represent the virial radii. The central black contours in the upper boxes display the shape of the central galaxy, i.e., face-on and edge-on.
Figure 17: Face-on (left) and edge-on (right) vorticity maps of the gas particles in the filament #1 of the halo Z2L at \(z_{\rm f}=2\) (see Figure 1). The upper panels are shown in \(60\,h^{-1}\) kpc boxes and the lower panels in \(400\,h^{-1}\) kpc boxes. The dashed circles represent the virial radii. The central white contours in the upper frames display the shape of the central galaxy, i.e., face-on (left) and edge-on (right).
Figure 15: Representative slices of the gas in the filament #1 of halo Z2L (see Figure 1) at \(z_{\rm f}=2\). Shown are the gas density (top) and metallicity (bottom). From left to right the slices represent the cuts at distances \(R=50\) (the inner CGM), \(150\) (the outer CGM), \(200\), \(250\), and \(350\,h^{-1}\) kpc. Each slice has a \(10\,h^{-1}\) kpc radial thickness. Each frame has \(120\,h^{-1}\) kpc length on the side.
and the halo scales (lower frames) as specified above. On large scales and outside the streamer, the flow appears to be irrotational. Inside the streamer, the vorticity increases closer to the galaxy. This is partly due to the shear discussed above and the resulting Kelvin-Helmholtz instability, which triggers ablation of the streamer gas inside \(R\sim 30\)\(h^{-1}\) kpc. The ablated gas contributes to the turbulent medium surrounding the galaxy. Understanding and quantifying this ablation process is of prime importance for the evolution of streamers at smaller radii.
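The vorticity maps follow from finite differences of the pixelized velocity field (a sketch for a uniform 3D grid with spacing `dx`; the inertial vortex force is then the cross product of \(\vec{v}\) with this curl):

```python
import numpy as np

def vorticity(vx, vy, vz, dx):
    """Curl of a gridded velocity field, w = nabla x v (axes: x, y, z)."""
    wx = np.gradient(vz, dx, axis=1) - np.gradient(vy, dx, axis=2)
    wy = np.gradient(vx, dx, axis=2) - np.gradient(vz, dx, axis=0)
    wz = np.gradient(vy, dx, axis=0) - np.gradient(vx, dx, axis=1)
    return wx, wy, wz
```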
Figure 18 displays the growth rate of galaxies (dashed lines) in our sample overplotted on their SFRs (solid lines). For the \(z_{\rm f}=6\) models, the growth rate exceeds the corresponding SFR by about an order of magnitude. For the \(z_{\rm f}=4\) models, the difference between the growth rate and the SFR decreases towards \(z_{\rm f}\), independently of the environment and feedback. The decrease clearly results from the reduced growth rate, which depends on the net accretion rate.
## 4 Discussion
We used high-resolution zoom-in cosmological simulations to trace the structure and evolution of the cosmological gas streamers penetrating DM haloes until the galaxy boundaries at selected final redshifts of \(z_{\rm f}=6\), \(4\), and \(2\). All the haloes have been chosen to have similar masses of \(\log M_{\rm vir}/\mathrm{M}_{\odot}\sim 11.65\pm 0.05\) at their final redshifts, and evolving in high or low density environments, i.e., in different overdensities. Furthermore, the resulting central galaxies have been subjected to different types of feedback. We compared the thermodynamic and kinematic properties of streamers and diffuse accretion at these redshifts, starting outside the virial radii and down to the central galaxy regions. We analyze the dissolution process of streamers due to their interaction with the galactic environment extending to the halo virial radius and forming the CGM -- the gaseous counterpart of DM halo.
We start by summarizing our results and analyze them. Specifically, we find that,
* Using a hybrid d-web/entropy method applied to the gaseous filaments, i.e., streamers, allows us to map these streamers down to \(\sim 10\)\(h^{-1}\) kpc, i.e., down to the central galaxy scales. Applying this method in tandem with the gas kinematics provides an efficient way to separate the inflow from outflow and from the virialized gas within the parent DM haloes.
* Accretion rates decrease with decreasing final redshifts, whether they proceed via streamers or via diffuse accretion. The typical density of both types of accretion declines with redshift as well, i.e., from \(\rho\sim 10^{-24}-10^{-25}\) g cm\({}^{-3}\) at \(z_{\rm f}=6\) to \(\sim 3\times 10^{-26}-10^{-27}\) g cm\({}^{-3}\) at \(z_{\rm f}=2\). The temperature inside the streamers increases inside the virial radii, and faster at lower redshifts. However, when mass-weighted, it shows that the majority of the gas remains at low temperature,
Figure 19: Evolution of the CGM of the Z6LCW halo model. _Left_: the gas mass in the inner (red line) and outer (blue line) CGM, which envelops the central galaxy and extends to \(\sim R_{\rm vir}\). The inner CGM is defined within \(0.5R_{\rm vir}\), and the outer CGM lies in the range of \(0.5R_{\rm vir}-R_{\rm vir}\); _Center_: the median temperature evolution of the inner (red) and outer (blue) CGM. The shadows represent the 20-80 percentiles. _Right:_ average metallicity evolution of the inner (red) and outer (blue) CGM.
Figure 18: Evolution of the SFR (solid lines) and galaxy growth rates (dashed lines) in all models of our simulation suite. The CW and VW models are represented by blue and red lines, respectively. The galaxy growth rate (gas + stars) corresponds to the baryonic accretion rate minus the outflow rate. The galaxies residing in a high density environment are shown in the upper panels and those in a low density environment in the lower panels.
\(\sim 10^{4}\) K, at \(z_{\rm f}=6\) and \(4\); only at \(z_{\rm f}=2\) does it spread more equally in mass over \(T\sim 10^{4}-10^{6}\) K. The temperature of the diffuse mode of accretion is higher than in the streamers inside the haloes: while most of this gas is at \(\sim 10^{4}\) K outside the virial radius, inside it forms a bi-modal distribution with \(T\sim 10^{4.5-5}\) K and \(\sim 10^{5.5-7}\) K. Given the wide range of temperatures and velocities there, a wide range of ionization states is expected as well. The maximal radial velocities within the streamers decrease with decreasing redshift, from \(\sim 200-300\) km s\({}^{-1}\) at \(z_{\rm f}=6\) down to \(\sim 100-150\) km s\({}^{-1}\) at \(z_{\rm f}=2\). The infall velocities of the diffuse gas are smaller than those in the streamers.
* Outside \(\sim 2R_{\rm vir}\), both the streamers and the diffuse gas accretion have metallicities in the range \(Z/Z_{\odot}\sim 10^{-4}-10^{0}\), while inside the haloes the metallicity distributions are flat. The VW models appear less metal rich than the CW models, due to the larger gas fraction in VW galaxies and their lower SFRs. The radial inflow velocities inside the central \(\sim 100h^{-1}\)kpc are typically lower for VW models than for CW ones, possibly due to the stronger shocks in the former.
* Within the DM haloes, the streamers exhibit a core-envelope structure: a core radial flow surrounded by a lower density envelope. The core possesses an elevated density, a lower temperature, a metallicity similar to that of the envelope, but a higher inflow velocity. This velocity gradient between the core and its envelope induces shear. At smaller radii, inside \(\sim 50h^{-1}\) kpc, the core-envelope structure fades away and the filamentary flow splits, so that it can be described as a spaghetti-type flow. Within the central \(\sim 30\)\(h^{-1}\) kpc, the shear triggers the Kelvin-Helmholtz instability and turbulence, which contribute to the ablation of the streamers and dissolve them. We have quantified this turbulence and mapped it in two projections.
* Galactic outflows in all our models reach a characteristic distance of \(\sim 0.5R_{\rm vir}\sim 100h^{-1}\)kpc from the central galaxies. The maximal outflow rates, \(\sim\) few\(\times M_{\odot}\) yr\({}^{-1}\), are located close to those galaxies, which is less than 10% of the inflow rate at \(z_{\rm f}=6\) and 4. For the \(z_{\rm f}=2\) models, however, the accretion rates have declined while the maxima of the galactic outflows have increased, so they differ by a factor of \(\sim 2\) only. Moreover, the outflow rates decline substantially with distance from the galaxy, to \(\lesssim 0.1\)\(M_{\odot}\) yr\({}^{-1}\) at \(\sim R_{\rm vir}\).
* The entire DM halo region has complex kinematics involving filamentary and diffuse accretion, and galactic outflows. Some of the filamentary accretion is deflected and extends back to the splash radius, \(\sim R_{\rm vir}-1.5R_{\rm vir}\). Based on the modeled properties of such a multiphase CGM, one must conclude that it cannot be in thermal equilibrium, and that its dynamic and thermodynamic states are strongly time-dependent.
* We do not find any significant dependence of the filamentary and diffuse accretion on the environment in our models. This is probably related to the relatively small difference between the overdensities involved, \(\delta\sim 1.3\) versus \(\delta\sim 3\) (Table 1).
In the cold accretion model framework, the filamentary accreting gas can be kept at low temperature within the DM haloes, without being shocked around the virial radius, in contrast to the diffuse accretion in more massive haloes. We have confirmed that the streamer gas temperature remains low, \(\sim 10^{4}\) K, as it crosses the virial radius, which is substantially lower than the halo virial temperature of \(\sim 4-10\times 10^{6}\) K, depending on \(z_{\rm f}\). Inside the virial radius, the temperature of some of the gas starts to increase, reaching \(\sim 10^{5.5-6}\) K. The accreting gas is heated by the galactic winds, and is compressed and heated by the decay of turbulent motions and by small-scale shocks. However, the amount of hot gas is not large; the majority of the filamentary flow remains cold, as can be seen in Figure 4. Only at \(z_{\rm f}=2\) does the cold stream appear to be disrupted inside \(R_{\rm vir}\), with the filamentary gas distributed evenly over the range \(10^{4}-10^{6}\) K.
The properties of the diffuse gas differ substantially from those of the filamentary gas. Outside \(R_{\rm vir}\), most of the gas remains around \(10^{4}\) K, but inside the halo the gas heats up and its mass is distributed between \(10^{5}\) K and the virial temperature.
Figures 7 and 13 reveal that roughly equal mass accretion rates of filamentary and diffuse gas cross the virial radii. Deeper in the haloes, however, the diffuse accretion rates start to dominate, as the filamentary flow gradually dissolves. Therefore, the hot gas starts to dominate in mass in the inner halo.
Figure 20: Pressure--entropy phase diagram in \(0.1R_{\rm vir}\) thick shells in the inner (centered on \(0.25R_{\rm vir}\)) and outer (centered on \(0.75R_{\rm vir}\)) CGM of the three LCW haloes at \(z_{\rm f}=6\), 4 and 2. The dashed lines show lines of constant temperature (\(10^{3}-10^{7}\) K) and constant density (\(10^{-3}-10^{1}\) cm\({}^{-3}\)). Note that the color palette is given in \(M_{\odot}\).
Fielding et al. (2020) compared in detail the CGM properties in various numerical simulations at \(z=0\), invoking the pressure-entropy plane. Following their work, we have characterized the properties of the gas filling the CGM in our high resolution simulations in the pressure-entropy plane, separating the inner and outer CGM of the three LCW haloes (i.e., Z2LCW, Z4LCW and Z6LCW) at \(z_{\rm f}=6\), 4 and 2 (Figure 20). This gas excludes the ISM of the central galaxy as determined by HOP. Moreover, we limit our analysis to the CW models residing in the low overdensity regions.
We find that the two CGM regions differ substantially in their properties. Generally, we distinguish three regions which account for most of the gas mass in Figure 20. First, the top region, which extends from \(K/K_{\rm vir}\sim 10^{1}\) down to \(\sim 10^{-2}\). Second, the middle region of nearly isothermal gas at \(\sim 10^{4}\) K, at \(K/K_{\rm vir}\sim 10^{-2}-10^{-4.5}\). And third, the horizontally extended and, therefore, radially extended, low-entropy gas spanning a range in density at \(K/K_{\rm vir}\sim 10^{-4.5}-10^{-6}\).
We can reconstruct the locations and masses of this gas in these haloes. The total mass of the CGM is \(\sim 10^{10}\,M_{\odot}\) at \(z_{\rm f}=6\), \(\sim 3\times 10^{9}\,M_{\odot}\) at \(z_{\rm f}=4\), and \(\sim 2\times 10^{9}\,M_{\odot}\) at \(z_{\rm f}=2\). The first region defined above contains \(\sim 3\times 10^{9}\,M_{\odot}\) (inner CGM) and \(\sim 4\times 10^{8}\,M_{\odot}\) (outer CGM) of virialized hot diffuse gas at \(z_{\rm f}=6\). This amount decreases by a factor of 3 (inner CGM) and stays the same (outer CGM) at \(z_{\rm f}=4\), and remains unchanged at \(z_{\rm f}=2\).
The second region has \(\sim 4\times 10^{9}\,M_{\odot}\) (inner CGM) and \(\sim 2\times 10^{9}\,M_{\odot}\) (outer CGM) of the nearly isothermal gas at \(z_{\rm f}=6\). This gas is in thermal equilibrium with the cosmological UV background. Its amount decreases by a factor of 4 (inner CGM) and stays the same (outer CGM) at \(z_{\rm f}=4\). It further decreases by a factor of 2 (inner CGM) and by a factor of 10 (outer CGM) at \(z_{\rm f}=2\).
The third region includes the starforming gas locked in substructures. Its left boundary is set by our critical density for star formation, \(4\,{\rm cm}^{-3}\). It extends horizontally to high pressure, and this extension depends on the star formation recipe -- it can be seen in the \(\log T-\log\rho\) diagrams as well. Note also that because this gas is located in substructures, it should be normalized by the virial pressure of each object, but we ignore this change in normalization. The starforming gas amounts to \(\sim 5\times 10^{8}\,M_{\odot}\) (inner CGM) and \(\sim 4\times 10^{7}\,M_{\odot}\) (outer CGM) at \(z_{\rm f}=6\). This amount decreases by a factor of 10 (inner CGM) and by a factor of 100 (outer CGM) at \(z_{\rm f}=4\). It further decreases by a factor of 100 (inner CGM) and by another factor of 10 (outer CGM) at \(z_{\rm f}=2\).
When the starforming gas exists, it has the highest pressure of all three components discussed above. The high entropy gas, which is always present in the simulations, extends from low to intermediate pressure, while the isothermal gas exhibits the lowest pressure and intermediate entropy.
Overall, we observe similar structural details in the pressure-entropy plane, along with substantial differences between the gas properties in the inner and outer CGM, as well as redshift evolution in similar DM haloes. In the inner CGM, the amount of starforming gas decreases abruptly with \(z_{\rm f}\), and this gas is basically absent at \(z_{\rm f}=2\). The amount of \(10^{4}\) K gas decreases as well with decreasing redshift, but not as dramatically. In the outer CGM, the starforming gas disappears already by \(z_{\rm f}=4\), and the amount of \(10^{4}\) K gas decreases sharply as well, becoming negligible at \(z_{\rm f}=2\). In summary, the CGM analysis exposes its multiphase character and its departure from thermal and dynamic equilibrium.
Note that the masses quoted above represent the CGM in the specific, similar haloes used in our simulations. The median properties of the CGM, such as mass fractions, have been found to differ among numerical simulations due to the galactic feedback (e.g., Davies et al., 2020). Such differences in the CGM become explicit in our nearly identical haloes at three different redshifts due to the changing galactic feedback.
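As an illustration of how such a diagram can be binned, consider the minimal sketch below, which sums gas mass in hexagonal cells of the pressure-entropy plane. All arrays are synthetic placeholders (in practice the temperatures, densities, masses and the virial normalizations come from the simulation shells), so this is a sketch of the bookkeeping rather than our actual analysis pipeline.

```
import numpy as np
import matplotlib.pyplot as plt

# Synthetic placeholder gas cells for one 0.1 R_vir shell; in practice
# temp [K], dens [cm^-3] and mass [Msun] come from the simulation.
rng = np.random.default_rng(0)
temp = 10.0 ** rng.uniform(3.5, 6.5, 100_000)
dens = 10.0 ** rng.uniform(-5.0, 1.0, 100_000)
mass = np.full(100_000, 1.0e4)
K_vir, P_vir = 1.0e5, 1.0e4          # placeholder virial normalizations

K = temp / dens ** (2.0 / 3.0)       # entropy proxy, K ~ T / n^(2/3)
P = temp * dens                      # pressure proxy, P/k_B ~ n T

fig, ax = plt.subplots()
hb = ax.hexbin(np.log10(K / K_vir), np.log10(P / P_vir),
               C=mass, reduce_C_function=np.sum,
               gridsize=80, bins="log", mincnt=1)
ax.set_xlabel(r"$\log\,K/K_{\rm vir}$")
ax.set_ylabel(r"$\log\,P/P_{\rm vir}$")
fig.colorbar(hb, label=r"gas mass [$M_\odot$]")
plt.show()
```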
While the inflow velocity profiles along the streamers increase at smaller radii, within \(\sim 100\,h^{-1}\)kpc the trend is reversed. The maxima of this velocity decrease with lower \(z_{\rm f}\) (Figure 6). This decrease appears to correlate with the halo escape speed, which decreases with time because the concentration of similar haloes decreases with lower \(z_{\rm f}\). The velocity profiles of the diffuse accreting gas are flatter (Figure 12), but generally follow the same trend as the filamentary accretion.
Slicing the streamers perpendicular to their spines allows us to define the high density core and low density envelope region(s) in each slice, e.g., Figures 14 and 15. The high and low density regions in the streamers display velocity gradients, i.e., the core region separates higher from lower infall velocities. Such gradients are expected to contribute to the ablation of the streamer cores by the Kelvin-Helmholtz instability. Indeed, at smaller radii we observe the splitting of the streamer's core, and the streamer starts to resemble a spaghetti-type flow.
Along with the gas temperature and density radial profiles, we find substantial metallicity gradients on the halo scale, i.e., between the inner and outer CGM. On smaller scales, however, larger variations of metallicity prevail, forming pockets of high metallicity. When a (sub)structure is embedded in a streamer or in the diffuse accretion, its metallicity diffuses out, either by stellar feedback or by other processes, forming these pockets.
As one of the important issues in understanding the evolution of the filamentary inflow inside the virial radius, we have measured the ratio of cold-to-hot gas mass, \(f_{\rm ch}=M_{\rm cold}/M_{\rm hot}\), at \(R_{\rm vir}\), \(0.5R_{\rm vir}\) and \(0.1R_{\rm vir}\) (not shown here). We define the cold gas as having \(T<3\times 10^{4}\) K, and the hot gas as lying above this threshold. We find that \(f_{\rm ch}\) declines monotonically with lower \(z_{\rm f}\), from \(\sim 2-3\) at \(z_{\rm f}=6\) to \(\sim 0.25-0.5\) at \(z_{\rm f}=2\), i.e., by roughly an order of magnitude. The sharper decline happens inside \(0.5\,R_{\rm vir}\). Not accidentally, this decline coincides with the region where the galactic outflows are strong, i.e., inside \(100\,h^{-1}\)kpc. At all radii, \(f_{\rm ch}\) falls below unity at \(z_{\rm f}=2\), indicating a sharp decrease at lower redshifts. Note that our haloes have the same mass by design at all these final redshifts.
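The measurement itself reduces to mass sums over temperature-selected gas; a minimal numpy sketch is shown below, with synthetic placeholder arrays standing in for the simulation gas cells.

```
import numpy as np

def cold_to_hot_ratio(r, temp, mass, r_max, t_cut=3.0e4):
    """f_ch = M_cold / M_hot for gas inside radius r_max.

    r, temp, mass are per-cell arrays; gas with T < 3x10^4 K is 'cold'.
    """
    inside = r < r_max
    m_cold = mass[inside & (temp < t_cut)].sum()
    m_hot = mass[inside & (temp >= t_cut)].sum()
    return m_cold / m_hot

# Synthetic placeholder cells, purely illustrative.
rng = np.random.default_rng(1)
r = rng.uniform(0.0, 1.0, 50_000)            # radii in units of R_vir
temp = 10.0 ** rng.uniform(3.5, 6.5, 50_000)  # K
mass = np.ones(50_000)                        # arbitrary mass units
for shell in (1.0, 0.5, 0.1):                 # R_vir, 0.5 R_vir, 0.1 R_vir
    print(f"f_ch(<{shell} R_vir) = {cold_to_hot_ratio(r, temp, mass, shell):.2f}")
```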
Finally, we analyze the structure of the CGM which populates the entire DM haloes outside the central galaxies. As an example, we present the evolution of the CGM around the Z6LCW galaxy -- its mass growth, and its temperature and metallicity evolution (Figure 19). We have divided the CGM, which occupies the parent DM halo, into an inner and an outer CGM. Based on the cold-to-hot gas ratio \(f_{\rm ch}\) discussed above, the inner CGM can be defined inside the inner \(100\,h^{-1}\)kpc, and the outer CGM as the region between this radius and the virial radius. The inner CGM is also affected more strongly by the galactic feedback than the outer CGM. For the models presented here, we find that the CGM is not in equilibrium, whether kinematically, thermodynamically, chemically or temporally. It is constantly perturbed by the influx of mass, momentum and energy across \(R_{\rm vir}\) and from the central galaxy, and it forms a multi-phase structure. This conclusion agrees with observations of low-redshift galaxies (e.g., Tumlinson et al., 2017, and refs. therein).
Furthermore, the filamentary flow which does not impact the central galaxy expands to the baryonic backsplash radius, which appears to be small compared to the DM backsplash radius, and can therefore provide additional perturbations to the CGM, especially at \(z\gtrsim 1\), when the filamentary accretion is more significant (e.g., Figure 14, lower panel).
### Streamer properties in the innermost halo
Turning our attention to the innermost haloes, inside \(\sim 30\,h^{-1}\,\)kpc, we observe that the geometrical width of the streamers substantially exceeds the central galaxy cross sections. In all our models, the streamers are inclined to the equatorial plane of the central galaxy, with an inclination angle never close to \(90^{\circ}\) or \(0^{\circ}\). For example, Figure 16 shows an inclination angle of \(\sim 30^{\circ}\) for streamer #1 (see also Figure 1 for the identification of this filament).
Inside this region, the streamers are observed to dissolve gradually through the ablation process (e.g., Figure 16). This process is characterized by a growth of the turbulent layer associated with the streamer with decreasing radius. In order to quantify the turbulence, we have calculated the associated vorticity (see Section 3.5) and pixelized it (Figure 17). This shows that the dissolution is rapid and that the turbulence, which is minimal at larger radii, spreads sideways. A related question is to what extent this turbulent flow in the innermost halo region contributes to the turbulence in the underlying galactic disk (e.g., Choi et al. 2013).
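For reference, such a pixelized vorticity map can be built with finite differences on a slice; the sketch below uses a toy swirling velocity field purely as a stand-in for the simulation data.

```
import numpy as np

# Toy swirling velocity field on a 2D slice; the simulation slice data
# would replace vx, vy. Vorticity component: omega_z = dv_y/dx - dv_x/dy.
x = y = np.linspace(-60.0, 60.0, 256)                 # h^-1 kpc
X, Y = np.meshgrid(x, y, indexing="ij")
vx = -Y * np.exp(-(X**2 + Y**2) / 900.0)
vy = X * np.exp(-(X**2 + Y**2) / 900.0)

dvy_dx = np.gradient(vy, x, axis=0)
dvx_dy = np.gradient(vx, y, axis=1)
omega_z = np.abs(dvy_dx - dvx_dy)                     # pixelized |vorticity| map
print("peak |omega_z|:", omega_z.max())
```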
We can recognize a few types of interaction between the baryonic filamentary inflow and the galactic disk. First, some of the inflow impacts the galaxy directly, with either prograde or retrograde motion with respect to the rotation of the stellar/gaseous galactic disk. We expect the prograde encounter between the filamentary inflow and the galactic disk to be smoother than the retrograde one. Second, and more generally, most of the gas inflow misses a direct encounter with the disk and creates a 'tail' flow around it, which appears to be turbulent. This part of the inflow, which has missed the disk, contributes to the buildup of an extended and highly turbulent region around the disk, and its kinematics differs substantially from that of the filamentary or diffuse accretion at larger radii.
We observe that all the haloes in our simulations are typically connected by 2-3 filaments, and only in one case by 4 filaments. This is in agreement with other simulations (e.g., the IllustrisTNG simulation, Nelson et al. 2018). The reason for this is still not well understood.
A comparison between our profiles of thermodynamic variables provides information complementary to that presented in Fielding et al. (2020): our median profiles agree with theirs. But our hex-binned distributions of the temperatures in the filamentary and diffuse accretion separate their gas mass as a function of temperature. For example, the increasing mass accumulation around \(T\sim 10^{4}\,\)K in the inner halo is evident in the filamentary gas (Figure 4), but not in the diffuse accretion (Figure 10).
The difference in growth rate with redshift in our galaxies (Figure 18) is also mainly due to the time taken by the galaxies to reach their final masses, i.e., for a given mass range, galaxies at high \(z\) grow faster than galaxies at lower \(z\).
### From the virial radius to the central galaxy
Finally, we take a look at the big picture of accretion and outflows inside the parent haloes of the central galaxies and their immediate environment, i.e., within \(\sim 2R_{\rm vir}\). While the number of models presented here is limited compared to cosmological simulations in large computational boxes, the advantage of our zoom-in simulations is the higher resolution, which allows us to obtain a detailed picture of the kinematic and thermodynamic properties of these flows.
We find that the gaseous component of the DM haloes in our models has a complex structure, both thermodynamically and kinematically. Filamentary and diffuse inflows differ in density, temperature, metallicity, and in the associated velocity field. Moreover, the observed outflows have multiple origins. First come the galactic winds of various strengths. A second contribution comes from filamentary accretion which missed the central galaxy and has been diverted radially outwards. The cross sections of the streamers are typically much wider than those of the galaxies; much of the flow along the streamers therefore misses the galaxy, but turns around and can even interact with itself.
The outer boundary of this interaction defines the backsplash radius (e.g., Adhikari et al. 2014). The backsplash radius of the gas is expected to be smaller than that of the DM due to the shocks and associated dissipation. Using the logarithmic derivative of the density with respect to radius, we have checked the positions of the backsplash radii for our central galaxies. For \(z_{\rm f}=6\) and 4, we find a good convergence with \(R_{200}\), which lies close to \(R_{\rm vir}\). But for \(z_{\rm f}=2\), we find that the backsplash radius lies well outside \(R_{\rm vir}\), on average around \(\sim 1.7R_{\rm vir}\).
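In practice this amounts to locating the steepest logarithmic density slope; a minimal sketch with a synthetic profile is shown below (the Gaussian bump near \(1.7R_{\rm vir}\) mimics a backsplash pile-up and is purely illustrative).

```
import numpy as np

# Synthetic radial gas density profile with a pile-up near 1.7 R_vir
# standing in for the simulation profile (illustrative numbers only).
r = np.logspace(-1.0, 0.5, 400)                       # in units of R_vir
rho = 1.0 / (r * (1.0 + 5.0 * r) ** 2)                # smooth inflow profile
rho *= 1.0 + 0.3 * np.exp(-((r - 1.7) / 0.15) ** 2)   # backsplash bump

slope = np.gradient(np.log(rho), np.log(r))           # d ln(rho) / d ln(r)
print("steepest slope at r/R_vir =", r[np.argmin(slope)])
```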
We find that the properties of the filamentary accretion vary with distance to the central galaxy. We detect the ablation of the filamentary inflow, associated with the Kelvin-Helmholtz instability and the generation of turbulence around the dissolving filaments, leading to the spaghetti-type flow. Thus, a filament separates into a number of individual streamers which continue to be ablated and mixed with the CGM. Hence, we associate the CGM with the entire volume of the DM halo surrounding the central galaxy and define it as the transition region between the galactic ISM and the virial radius. This confirms that the CGM is highly inhomogeneous, i.e., multiphase, in its thermodynamic and kinematic properties.
We have also calculated the fraction of volume occupied by the walls shown in Figure 1, and found it to be roughly independent of the final redshift \(z_{\rm f}\). By definition, the sheets represent a one-dimensional stable manifold, while a filamentary streamer represents a two-dimensional stable manifold. Note that Hahn et al. (2007a) measured the evolution of this fraction and found that it increases until \(z\sim 2\) and decreases thereafter. There is, however, no contradiction with our results -- our redshift dependence is not an evolutionary one, but compares the environments of similar mass haloes at different _final_ redshifts. When we switch to the evolutionary redshift for each halo separately, we confirm the Hahn et al. result.
All the streamers exhibit metallicities in the range \(Z/Z_{\odot}\sim 10^{-3}-10^{-1}\), with a radial gradient between the outer and inner halo. Inside \(R_{\rm vir}\), the streamers have a mass-weighted metallicity of \(Z/Z_{\odot}\sim{\rm few}\times 10^{-2}-{\rm few}\times 10^{-1}\), which agrees well with observations in the optical--X-ray bands (e.g., Werk et al. 2014; Lehner et al. 2019). For the diffuse accretion, the metallicity is smaller by a factor of a few.
## 5 Conclusions
Based on a set of high-resolution zoom-in cosmological simulations, we investigate the baryonic filaments (streamers) and the diffuse accretion flows which channel the gas across the virial radii and down to the galactic regions, under the dual action of accretion and galactic outflows. We choose a set of DM haloes with similar masses of \(\log{(M_{\rm vir}/{\rm M}_{\odot})}\sim 11.65\) at three representative redshifts, \(z_{\rm f}=6\), 4 and 2, from Bi et al. (2022a). These haloes have been evolved in relatively low and high overdensities compared to the average density in the universe, and with two different stellar feedback prescriptions.
Using a hybrid d-web/entropy method supplemented by kinematics in the innermost regions, we have mapped the filamentary streamers and separated them from the galactic outflows and diffuse accretion flows. This allowed us to analyze the dynamic and thermodynamic properties of the CGM in nearly identical haloes at different redshifts.
We find that the CGM is highly inhomogeneous and multiphase, and not in thermodynamic or dynamic equilibrium.
We find that the accretion rates in filamentary streamers decrease with decreasing redshift, and that the inflow velocities along these filaments decrease by a factor of \(\sim 2\) with lower \(z\), again in similar haloes. The temperature inside the CGM increases at smaller radii, as well as with decreasing redshift.
The filamentary streamers display a core-envelope structure inside the virial radius -- a higher density, lower temperature core surrounded by a lower density, higher temperature envelope. The filament separates into a spaghetti-type flow. Inside the inner \(\sim 30\,h^{-1}\,\)kpc, we show that the filaments develop the Kelvin-Helmholtz instability, which triggers turbulence, ablating and dissolving them.
We find that the galactic outflows, in tandem with the diverted accretion flow, affect the accretion mostly within the inner CGM, at \(\sim 0.5R_{\rm vir}\sim 100h^{-1}\,\)kpc. Finally, we find that the thermodynamic properties of the CGM gas can be separated into three phases in the pressure-entropy plane (first used by Fielding et al. 2020): the high entropy, low-density hot gas; the isothermal gas at \(\sim 10^{4}\,\)K in equilibrium with the UV background; and the low entropy starforming gas.
## Acknowledgements
We thank Phil Hopkins for providing us with the latest version of the code. We are grateful to Alessandro Lupi for his help with GIZMO, and to Peter Behroozi for clarifications about ROCKSTAR. I.S. acknowledges insightful discussions with Nick Kaiser during his stay at KITP, and is grateful for generous support from the International Joint Research Promotion Program at Osaka University. I.S. acknowledges the hospitality of KITP where part of this research has been conducted. This work has been partially supported by the JSPS KAKENHI grant 16H02163, and by the NSF under Grant No. NSF PHY-1748958. The STScI is operated by AURA, Inc., under NASA contract NAS5-26555. E.R.D. acknowledges support of the Collaborative Research Center 956, subproject C4, funded by the Deutsche Forschungsgemeinschaft (DFG). Simulations have been performed using a generous allocation of computing time on the XSEDE machines under the NSF grant TG-AST190016, and on the University of Kentucky Lipscomb Computing Cluster. We are grateful for the help of Vikram Gazula at the Center for Computational Studies of the University of Kentucky.
## Data Availability
The data used for this paper will be made available upon reasonable request.
|
2308.03734 | Labeling without Seeing? Blind Annotation for Privacy-Preserving Entity
Resolution | The entity resolution problem requires finding pairs across datasets that
belong to different owners but refer to the same entity in the real world. To
train and evaluate solutions (either rule-based or machine-learning-based) to
the entity resolution problem, generating a ground truth dataset with entity
pairs or clusters is needed. However, such a data annotation process involves
humans as domain oracles to review the plaintext data for all candidate record
pairs from different parties, which inevitably infringes the privacy of data
owners, especially in privacy-sensitive cases like medical records. To the best
of our knowledge, there is no prior work on privacy-preserving ground truth
dataset generation, especially in the domain of entity resolution. We propose a
novel blind annotation protocol based on homomorphic encryption that allows
domain oracles to collaboratively label ground truths without sharing data in
plaintext with other parties. In addition, we design a domain-specific
easy-to-use language that hides the sophisticated underlying homomorphic
encryption layer. Rigorous proof of the privacy guarantee is provided and our
empirical experiments via an annotation simulator indicate the feasibility of
our privacy-preserving protocol (f-measure on average achieves more than 90\%
compared with the real ground truths). | Yixiang Yao, Weizhao Jin, Srivatsan Ravi | 2023-08-07T17:32:33Z | http://arxiv.org/abs/2308.03734v1 | # Labeling without Seeing? Blind Annotation for Privacy-Preserving Entity Resolution
###### Abstract.
The entity resolution problem requires finding pairs across datasets that belong to different owners but refer to the same entity in the real world. To train and evaluate solutions (either rule-based or machine-learning-based) to the entity resolution problem, generating a ground truth dataset with entity pairs or clusters is needed. However, such a data annotation process involves humans as domain oracles to review the plaintext data for all candidate record pairs from different parties, which inevitably infringes the privacy of data owners, especially in privacy-sensitive cases like medical records. To the best of our knowledge, there is no prior work on privacy-preserving ground truth dataset generation, especially in the domain of entity resolution. We propose a novel blind annotation protocol based on homomorphic encryption that allows domain oracles to collaboratively label ground truths without sharing data in plaintext with other parties. In addition, we design a domain-specific easy-to-use language that hides the sophisticated underlying homomorphic encryption layer. Rigorous proof of the privacy guarantee is provided and our empirical experiments via an annotation simulator indicate the feasibility of our privacy-preserving protocol (f-measure on average achieves more than 90% compared with the real ground truths).
entity resolution, ground truth labeling, privacy-enhancing technology, homomorphic encryption
the size of the required labels, are still annotated in plaintext (Kumar et al., 2017) and cannot be identified as _privacy-preserving annotation_.
Making annotation privacy-preserving is challenging because the domain oracles need to compare the data across parties side-by-side in some plaintext form. Previous annotation works and tools focus on aspects like better interaction experience, multi-user collaboration, high throughput, and labeling efficiency, but none of them take privacy protection into consideration, with one exception that uses differential privacy (Kumar et al., 2017). In contrast to the relatively well-researched privacy-preserving entity resolution solutions for the application phase (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017), there still exists a privacy protection void in the data annotation phase. Additionally, though automatic ground truth generation exists in some AI tasks (Kumar et al., 2017; Kumar et al., 2017), it is limited to certain domains (Kumar et al., 2017; Kumar et al., 2017) and it is almost implausible that such techniques can be universally generalized to privacy-preserving entity resolution.
Different from the aforementioned approach of using differential privacy to preserve anonymity, which merely hides individual identifiers but not the content, we propose a novel "blind annotation" approach to ground truth labeling, based on homomorphic encryption (HE), for privacy-preserving entity resolution, where the oracles of each party can inspect only their own data but are still able to work collaboratively. The conceptual model is shown in Figure 2, where data owners "blindly" annotate the ground truth set, which can later be used in the model training and evaluation phases of privacy-preserving entity resolution. No plaintext data of any party is shared with any other party, so the privacy of the data is guaranteed without loss of data utility.
We summarize our main contributions as follows:
* To the best of our knowledge, this is the first work to explore the potential of applying homomorphic encryption to privacy-preserving ground truth annotation.
* We propose a novel homomorphic-encryption-based privacy-preserving protocol that allows multiple parties to collaboratively produce entity annotations without sharing their data with any other parties in plaintext. The privacy properties are rigorously proved.
* We implement an annotation simulator and conduct empirical studies on heterogeneous real-world datasets to demonstrate the feasibility of our blind annotation solution.
## 2. Problem Definition
The **privacy-preserving entity resolution** (PPER) problem can be defined as a triple \(T=(D,M,E)\) where \(D=\{D_{1}\dots D_{n}\}\) is a set of \(n\) different datasets consisting of records \(r\), belonging to \(n\) different data owners \(P=\{P_{1}\dots P_{n}\}\) respectively. \(E\) denotes an encoding or encryption algorithm that keeps each \(r\) of each \(D\) in encoded or ciphertext form, that is, \(\llbracket r\rrbracket\in E(D)\), where \(\llbracket r\rrbracket\) explicitly denotes \(r\) in ciphertext. \(M\) is the match set containing record pairs between any two datasets amongst the \(n\) parties, so
\[M=\{(\llbracket r_{i}\rrbracket,\llbracket r_{j}\rrbracket)\mid r_{i}=r_{j};\ \llbracket r_{i}\rrbracket\in E(D_{k}),\llbracket r_{j}\rrbracket\in E(D_{m})\},\]
where \(r_{i}=r_{j}\) indicates that \(r_{i}\) and \(r_{j}\) refer to the same entity in the real world.
Finding the complete and precise \(M\) depends on a domain-specific algorithm or model, which relies on high-quality ground truth data for training, fine-tuning, or evaluation. The ground truth set \(G\) is a set of triples \((i,j,l)\) where \(i\) and \(j\) are the record ids of a record pair \(r_{i}\) and \(r_{j}\), and \(l\) is a Boolean label which indicates whether the record pair refers to the same entity. The process of constructing \(G\) is called **annotation**. Formally,
\[G=\{(i,j,l)\mid r_{i}\in X(D_{k}),r_{j}\in X(D_{m})\},\]
Figure 2. The conceptual model for privacy-preserving entity resolution (upper white dashed box) and blind annotation (lower grey solid box). In this paper, we focus on the annotation phase and utilize "blind annotation" to label ground truth for PPER with zero knowledge shared with other parties. The homomorphically encrypted records from each party are first sampled. Then, the data owners or their delegated oracles annotate the sampled records without looking at the other party's data, interacting instead through the blind annotation protocol. Finally, the ground truth, as a set of triplets \(G=\{(i,j,l)\}\) (\(i\) and \(j\) are record ids from the two parties and \(l\) is a label indicating whether \(r_{i}\) and \(r_{j}\) are a match), is formed and can be used for training and evaluation in any PPER task. Note that no record content \(r_{i}\) or \(r_{j}\) in plaintext is revealed throughout the annotation process. Solid lines denote data streams in plaintext and dashed lines denote data streams in ciphertext. \(D\) denotes a dataset, \(X\) denotes sampling, \(E\) denotes homomorphic encryption, and \(\llbracket r\rrbracket\) denotes record \(r\) in ciphertext.
where \(X\) is a sampling algorithm. The label \(l\) is determined by domain oracles from the content of \(r_{i}\) and \(r_{j}\).
Usually, \(G\) cannot be constructed individually by each party, because the domain oracles have to see \(r_{i}\) and \(r_{j}\) in the clear in order to determine whether two records are the same real-world entity based on the features the records have. Therefore, the owners of the records need to share sampled records in plaintext with other parties, which reveals the raw content of \(r_{i}\) and \(r_{j}\).
To prevent potential privacy leakage during this process, this paper focuses on creating ground truth data directly from \(\llbracket r_{i}\rrbracket\) and \(\llbracket r_{j}\rrbracket\) so that no plaintext from any party is revealed to any other party. We name this process **blind annotation**, that is,
\[G=\{(i,j,l)\mid\llbracket r_{i}\rrbracket\in X(E(D_{k})),\ \llbracket r_{j}\rrbracket\in X(E(D_{m}))\}.\]
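Once a blindly annotated \(G\) exists, its quality can be scored against a conventionally labeled reference, as we do later via the f-measure. A minimal sketch, assuming match pairs are represented as sets of id pairs:

```
def f_measure(pred, truth):
    """F1 of predicted match pairs against a reference ground truth.

    pred, truth: sets of (i, j) record-id pairs labeled as matches.
    """
    tp = len(pred & truth)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(truth)
    return 2 * precision * recall / (precision + recall)

# Toy example: the blind protocol recovered 2 of 3 true matches plus
# one spurious pair, giving F1 = 2/3.
truth = {(1, 7), (2, 9), (5, 3)}
pred = {(1, 7), (2, 9), (4, 4)}
print(f_measure(pred, truth))
```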
## 3. Preliminaries
### Related Works
As stated in a survey of privacy-preserving record linkage (Han et al., 2017), almost no open method is available for ground truth annotation in PPER due to privacy concerns, even though it is a time-consuming and error-prone process.
The only similar scenario we found that explores privacy-preserving annotation concerns active learning with differential privacy (Han et al., 2017). The general active learning method aims to learn the distribution of the data by letting the active learner select less training data that carries the highest information content. The data sampled by the learner needs to be annotated by domain oracles or crowd workers. The problem is that crowdsourcing the labels usually relies on an open call, and transmitting non-public data to crowd workers carries an inevitable privacy leakage risk, especially for identities. Since \(k\)-anonymity provides an insufficient privacy guarantee, the authors adopted a differential privacy algorithm (Zhou et al., 2017), which prevents a user from sustaining additional damage by including their data in a dataset. On binary classification tasks, this method achieves accuracy scores similar to its non-private counterpart, with a small performance hit but a strong privacy guarantee. Though Personally Identifiable Information (PII) is protected from disclosure, the content of the sampled data is still shown to the crowd workers as plaintext so that they can understand and annotate it. If the content of the non-public data itself must be kept private, such a method does not suffice.
Another interesting method to prepare data for machine learning tasks is to learn the distribution of the original data and generate synthetic data based on it (Kang et al., 2017). Even though this method might work for training statistical models and conducting analytical work, it is not feasible for entity resolution tasks: because the representation of a token/word is an important signal for determining the similarity between records, a tiny modification or substitution of the original representation could cause large judgment deviations for the annotators.
Yu (Yu et al., 2017) provided a method that forgoes ground truth and instead uses unsupervised heuristic measures based on a greedy matching approach to evaluate and optimize the hyper-parameters of entity resolution models. This method is based on the assumption that a matching record pair obtains the highest similarity score, over a set of heuristic measures, against all other candidate pairs. Using heuristics to estimate linkage quality is doable in certain scenarios; however, evaluating heuristics by heuristics is, in general, somewhat of a chicken-or-egg problem. Additionally, this method only works in general cases where the same entity looks similar: in some extreme conditions or hard record linking problems, the representations of the records differ even when they refer to the same entity.
In short, labeling ground truth in a privacy-preserving manner is hard, and no directly applicable prior work exists. Our approach achieves this goal without compromising privacy protection by employing homomorphic encryption.
### Homomorphic Encryption
_Homomorphic encryption_ allows computation to be performed over encrypted data while preserving the behavior of the function and the format of the encrypted data (Baherski et al., 2016; Dosovitski et al., 2017). For example, users outsourcing their data to a cloud computing platform would previously have had privacy concerns; with homomorphic encryption, the platform is able to perform the necessary computations on the encrypted data directly (Zhou et al., 2017). In general, an encryption scheme is said to be homomorphic if, for some operator \(\odot\) over plaintext (\(\odot_{\mathcal{M}}\)) and ciphertext (\(\odot_{\mathcal{C}}\)), the encryption function \(E\) satisfies
\[\forall m_{1},m_{2}\in\mathcal{M},\ E(m_{1}\odot_{\mathcal{M}}m_{2})\gets E (m_{1})\odot_{\mathcal{C}}E(m_{2}),\]
where \(\mathcal{M}\) denotes the message space in plaintext and \(\mathcal{C}\) the message space in ciphertext. \(\leftarrow\) declares that the computation is direct, without any decryption in the middle of the process. Therefore, if we let \(\odot\) be \(+\), computing \(m_{1}+m_{2}\) can be done by a computation unit that receives the original messages from the data owner in encrypted form, that is, \(E(m_{1})\) and \(E(m_{2})\), and is able to compute the addition of the two messages without decrypting them from ciphertext back to plaintext. The result, in ciphertext, is sent back to the data owner, who has the key to decrypt and see it.
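As a concrete toy instance of an additively homomorphic scheme, the sketch below implements textbook Paillier with deliberately tiny primes. It is for illustration only (not the scheme used in our protocol), and real deployments require moduli of 2048 bits or more.

```
import math
import random

def keygen():
    p, q = 1000003, 1000033                      # toy primes; far too small for real use
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    return (n, n + 1), lam                       # pk = (n, g), sk = lam

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(2, n)                   # gcd(r, n) == 1 w.h.p. here
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(pk, lam, c):
    n, _ = pk
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(n + 1, lam, n * n)), -1, n)   # modular inverse; Python >= 3.8
    return L(pow(c, lam, n * n)) * mu % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 7), encrypt(pk, 35)
# Additive homomorphism: the ciphertext *product* decrypts to the plaintext *sum*.
assert decrypt(pk, sk, c1 * c2 % (pk[0] ** 2)) == 42
```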
Homomorphic encryption schemes are normally categorized into three types according to the survey (Baherski et al., 2016): (1) _Partially Homomorphic Encryption_ (PHE) allows only one type of operation an unlimited number of times (i.e., no bound on the number of usages), e.g., Paillier (Pailier, 2017). (2) _Somewhat Homomorphic Encryption_ (SWHE) allows some types of operations a limited number of times, e.g., Boneh-Goh-Nissim (BGN) (Boh, 2016). (3) _Fully Homomorphic Encryption_ (FHE) allows an unlimited number of operations an unlimited number of times. Since Gentry realized the first FHE scheme (Gentry, 2017) based on lattices, many follow-up FHE schemes have been proposed to address its issues and have been optimized to be practical in real-world applications. Some schemes, including Brakerski-Gentry-Vaikuntanathan (BGV) (Brakerski et al., 2016), Brakerski-Fan-Vercauteren (BFV) (Brakerski et al., 2016; Dosovitski et al., 2017) and Cheon-Kim-Kim-Song (CKKS) (Dosovitski et al., 2017), support arithmetic operations, while some others (Brakerski et al., 2016), including FHEW (Brakerski et al., 2016), are capable of logical operations. Recent works extend the purely arithmetic schemes to run logical comparators without losing advantages such as Single Instruction Multiple Data (SIMD) (Kang et al., 2017).
Figure 3 demonstrates a typical public key homomorphic encryption scheme \(\varepsilon\), primarily characterized by four operations. _Key generation_ takes in a security parameter \(\lambda\) and outputs a public/secret key pair \((pk,sk)\). _Encryption_ encrypts a plaintext message \(m\) with \(pk\) and
returns ciphertext \(c\). _Decryption_, as its inverse, decrypts ciphertext \(c\) with \(sk\) into plaintext \(m\). _Evaluation_ evaluates \(f\) over a ciphertext tuple \((c_{1},c_{2},\dots,c_{k})\) with \(pk\) and returns \(c_{f}\), which is equivalent to \(f(m_{1},m_{2},\dots,m_{k})\) after decryption.
Clearly, the party that executes the homomorphic functions learns nothing about the data, and the party that provides the encrypted data is unaware of what functions have been executed. This property, informally named _blind evaluation_, is the basis of our protocol.
## 4. Blind Annotation Protocol
We propose a privacy-preserving annotation protocol that allows the ground truth to be annotated among multiple parties "blindly".
**Intuition:** The intuition is that, instead of putting a candidate record pair side-by-side for domain oracles to determine whether the two records are the same entity, the oracle on each side can extract the core features by looking only at its own record content in plaintext and summarize these features into a series of Boolean questions. If any record from the other parties satisfies all conditions defined by these questions, this record is highly likely to be a match for the record the questions were derived from. If the record content is encrypted and these Boolean functions can be blindly evaluated over the ciphertext of the record, then no plaintext record content is revealed to other parties.
**Protocol:** The protocol is elaborated in Figure 4. Succinctly, assume the ground truth construction is between parties \(P_{A}\) and \(P_{B}\), with party \(P_{C}\) acting as a coordinator for key management and result collection. \(P_{A}\) and \(P_{B}\) first send their dataset sizes to \(P_{C}\), and \(P_{C}\) randomly samples the record ids. Moreover, \(P_{C}\) prepares the public key pair \((pk,sk)\) for homomorphic encryption, sends the public key \(pk\) to \(P_{A}\) and \(P_{B}\), and keeps the secret key \(sk\) for decryption.
\(P_{A}\) and \(P_{B}\) then sample records \(r_{i}\) and \(r_{j}\) respectively according to the sampled record ids from \(P_{C}\), and prepare sets of Boolean logic-style questions \(Q_{i}\) and \(Q_{j}\) according to the features of the content of each record. Each question set \(Q\) is combined, with first-order logic, into a form that returns one Boolean result, and converted into a function that is homomorphically executable. The record data in the clear is homomorphically encrypted with \(pk\) at \(P_{A}\) and \(P_{B}\) respectively, and the ciphertexts \(\llbracket r_{i}\rrbracket\) and \(\llbracket r_{j}\rrbracket\) are shared with the other party.
Once \(P_{A}\) receives \(\llbracket r_{j}\rrbracket\), it evaluates \(Q_{i}\) over it homomorphically, that is, \(\texttt{HE}.\texttt{Eval}_{\varepsilon}(Q_{i},\llbracket r_{j}\rrbracket,pk)\), and gets \(\llbracket A_{A}^{ij}\rrbracket\). \(P_{B}\) performs the symmetric operation and gets \(\llbracket A_{B}^{ij}\rrbracket\). Both \(\llbracket A_{A}^{ij}\rrbracket\) and \(\llbracket A_{B}^{ij}\rrbracket\) are in ciphertext and are sent to \(P_{C}\). \(P_{C}\) then tests whether \(\llbracket A_{A}^{ij}\rrbracket\) equals \(\llbracket A_{B}^{ij}\rrbracket\) and decrypts the result with the secret key \(sk\).
If \(P_{A}\) and \(P_{B}\) agree on the result, the result is stored. Otherwise, the pair is set aside for the next round, in which \(P_{A}\) and \(P_{B}\) are required to refine their questions and evaluate the encrypted record data from the other party again. After \(t\) rounds, any pairs without agreement are discarded.
**Privacy:** From \(P_{A}\)'s perspective, throughout the protocol, \(P_{A}\) has no access to \(P_{B}\)'s record content \(r_{j}\) in plaintext but evaluates questions over the ciphertext \(\llbracket r_{j}\rrbracket\). The question functions \(Q_{i}\) are evaluated on \(P_{A}\)'s side, so \(P_{B}\) knows nothing about the features tested in \(Q_{i}\) by \(P_{A}\). The same holds for \(P_{B}\). \(P_{C}\), which holds the secret key \(sk\), decrypts only the final results, which contain no record content in plaintext from either \(P_{A}\) or \(P_{B}\). None of the parties is able to mount a ciphertext collision attack because homomorphic encryption is semantically secure (Section 4.3).
In the following subsections, we walk through the protocol details and dissect each essential component. We also formally prove the security of the protocol in Section 4.6.
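To make the message flow concrete, the sketch below simulates one agreement round with plaintext stand-ins for the homomorphic operations. The names are illustrative only: here enc and dec default to identity functions, whereas in the actual protocol they are HE encryption and decryption under \(P_{C}\)'s key pair, and the questions are evaluated over ciphertext only.

```
# One agreement round with plaintext stand-ins for HE.Enc/HE.Eval/HE.Dec.
def blind_round(r_i, r_j, Q_i, Q_j, enc=lambda r: r, dec=lambda c: c):
    ct_i, ct_j = enc(r_i), enc(r_j)   # P_A, P_B encrypt their sampled records
    ans_a = Q_i(ct_j)                 # P_A: HE.Eval(Q_i, [[r_j]], pk)
    ans_b = Q_j(ct_i)                 # P_B: HE.Eval(Q_j, [[r_i]], pk)
    a, b = dec(ans_a), dec(ans_b)     # P_C decrypts the two answers
    return a == b, a                  # agreement flag and the label

# Toy feature questions derived by each oracle from its own record.
Q_i = lambda r: "canon" in r and "24-70" in r
Q_j = lambda r: "canon" in r and ("24-70" in r or "2470" in r)
agree, label = blind_round("canon 24-70mm f/2.8l usm ii",
                           "canon 24-70 f2.8", Q_i, Q_j)
print(agree, label)                   # True True -> stored as a match
```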
### Initialization
Prior to the annotation itself, a few initialization steps must be conducted. The dataset owners (\(P_{A}\) and \(P_{B}\)) first need to agree on the **annotation criteria**, including how similar two records must be to be identified as a match and what kind
Figure 4. The timing diagram for the blind annotation protocol. The protocol, as well as its five main components (initialization, feature questions, record encryption, blind evaluation, and end conditions), is dissected in Section 4.
Figure 3. A typical public-key homomorphic encryption scheme \(\varepsilon\)
of difference is tolerable. For example, if two products are "Canon 24-70mm f/2.8L USM" and "Canon 24-70mm f2.8L USM II" (A1 and B1 in Figure 1), should these two lenses of different generations be considered the same? The data used as examples in this step can be synthetic or derived from the record content, so it does not leak any information about the original records.
A **data preprocessing** step is necessary for some datasets. If the dataset owners believe the format of their data has noticeable distinctions, they can individually apply a series of data cleaning and standardization operations to their data, for example, removing dataset-specific characters, or lemmatization and stemming for natural language content.
\(P_{C}\) is responsible for **sampling records** from \(P_{A}\)'s and \(P_{B}\)'s datasets. After the dataset sizes \(|D_{A}|\) and \(|D_{B}|\) are sent to \(P_{C}\), \(P_{C}\) randomly samples a number of records from \(D_{A}\) and \(D_{B}\) and sends the lists of selected record ids back to the respective data owners. \(P_{C}\) then **generates the key pair** \((pk,sk)\leftarrow\texttt{HE}.\texttt{KeyGen}_{\varepsilon}(\lambda)\) for the selected homomorphic encryption scheme \(\varepsilon\) and parameters \(\lambda\), and distributes the public key \(pk\) to both \(P_{A}\) and \(P_{B}\) for encrypting their data and questions. Note that \(P_{C}\) is the _only_ party that can decrypt any ciphertext back to plaintext, having access to the secret key \(sk\).
### Feature Questions
Feature questions are designed to test whether the essential features of a record are satisfied. This is based on the assumption that if all the core features of two records are the same, then the records are highly likely to be the same entity. Specifically, the features of a record could be the sub-strings that are necessary for representing it. For example, in the lens name "Canon 24-70mm f/2.8L USM II", one feature is the focal length "24-70", which can be used to construct a question that tests whether "24-70" occurs in the compared lens. Thus, using this question to evaluate a "15-85" lens returns false.
Under this assumption, the data owner can design a set of Boolean questions for all the features of a given record. First-order logic chains the set of questions together, so that the evaluation of a record yields a single Boolean value as the final result. For example, if the question set is \(Q=\{Q_{1},Q_{2}\ldots Q_{n}\}\), it could become \(Q=Q_{1}\vee\neg Q_{2}\ldots\wedge Q_{n}\) after conversion with first-order logic. Therefore, \(Q(r)=Q_{1}(r)\vee\neg Q_{2}(r)\ldots\wedge Q_{n}(r)\) returns a Boolean value indicating whether the record \(r\) has all the desired features.
In our protocol, the feature question set \(Q\) plays the role of \(f\) in homomorphic encryption and is used to evaluate the encrypted record \(\llbracket r\rrbracket\) with HE.Eval, that is, \(\texttt{HE}.\texttt{Eval}_{\varepsilon}(Q,\llbracket r\rrbracket,pk)\), or simply \(Q(\llbracket r\rrbracket)\). Constructing a \(Q\) that is homomorphically executable is non-trivial due to the operator limitations and the hardness of efficient encryption circuit construction (see details in Section 4.2.2).
Hence, on a high level, we design an annotation language for ease of use in Section 4.2.1 and pre-define some primitive functions that encapsulate the details of the homomorphic circuits for constructing such questions with the DSL in Section 4.2.2.
#### 4.2.1. Domain-specific Language
The annotation is designed to be written in a domain-specific language (DSL). This language provides general methods for data manipulation and logic computation, which are extensible for defining functions. Here we brief some of the features this language provides: (1) The primitive data types are string, which is wrapped in double quotes, and number. (2) Three logic operators are supported. Or operator: a | b returns true if either a or b is true, otherwise false. And operator: a & b returns true if both a and b are true, otherwise false. Not operator: !a returns true if a is false, and false if a is true. (3) Variables can be defined by $v={exp} where $ is the variable indicator, v is the variable name and {exp} is a valid expression. (4) A preset variable $r acts as the target record for comparison and should be used in the annotation as the argument of the question \(Q\). (5) At least one return statement ret is required. The argument of the return must be Boolean. If multiple return statements are provided, the code terminates at the first one reached. (6) A round bracket pair () prioritizes the execution of the enclosed expression. (7) Comments start with #, e.g., # this is a comment. (8) Whitespace and empty lines are ignored.
With this DSL, it is efficient and sufficient to "ask questions". Returning to the lens example, with the record content from \(P_{A}\) being "Canon 24-70mm f/2.8L USM II", the annotation from the oracle could be:
```
$r = lower($r)
$c1 = is_in("canon", $r)                      # condition 1
$c2 = is_in("24-70", $r) | is_in("2470", $r)
$c3 = is_in("24-105", $r)                     # condition 2
ret $c1 & $c2 & !$c3
```
where the target record $r is first lower-cased; the question then requires the brand to be "canon", and the focal range to be "24-70" (or simply "2470" as the common abbreviation) but not "24-105". Therefore, both "Canon 24-70 f2.8" and "Canon 2470" are considered matches, but "Canon 24-105mm USM" is not. Notice that not all features are tested, only the important ones that are consensual in the annotation criteria; e.g., the motor type "USM" and the version "II" are not used for making the decision.
#### 4.2.2. Homomorphic Functions
Writing homomorphic functions from scratch is not as straightforward as writing normally "obvious" functions because of the cumbersomeness and restrictions that homomorphic encryption schemes bring: (1) The homomorphic encryption scheme only supports a limited collection of operators. More complicated functions need to be built from these basic building blocks. (2) Simply porting the logic of a normal function to a homomorphic encryption function would suffer from privacy and/or efficiency issues. The function needs to be revamped or refactored to work properly. (3) To achieve the desired privacy protection as defined in Section 4.6, the encryption is end-to-end, so no decryption is allowed in the middle of the evaluation, which confines the use of control flow in programming.
To address these challenges, and for the ease of writing effective and efficient functions with proper privacy protection, we pre-define several functions that are essential components for constructing the Boolean questions \(Q\) and can be evaluated homomorphically without exposing the details of the underlying circuits. For encryption schemes that only support logical operators, the arithmetic operators can be enriched by building upon the lower-level gate circuits, for example, constructing an 8-bit adder with AND/OR/NOT/XOR gates [29]. On the other hand, arithmetic schemes such as BGV can also be extended to support logical comparisons [16] (e.g., "<" and "=") while retaining the benefit of the efficient SIMD operations [36] that naturally come with these schemes.
Nevertheless, we still need to tackle one more issue that these pre-defined functions rely on. As (Kumar et al., 2017) pointed out, control flow, including choice-based (e.g. if) and loop-based (e.g. for, while) flows, requires conditional expressions, which are Boolean, for execution. These encrypted Boolean values cannot be used in conditional operations unless they are decrypted. However, decrypting a sensitive encrypted value would expose the execution path, which is not allowed in our privacy-preserving setting. The solution is to introduce a **ternary operation** that converts the logical selection into an arithmetic operation, hence bypassing the decryption of the encrypted Boolean condition. The ternary operator is defined as cond ? a : b, where cond is the encrypted Boolean condition and a and b are encrypted integer values. To return a when cond is true, or b when cond is false, without decrypting cond, we exploit oblivious-style ternary operators (Boges et al., 2015) and implement choose([\(cond\)],[\(a\)],[\(b\)]) with the arithmetic operators as [\(cond\)] * ([\(a\)] - [\(b\)]) + [\(b\)], where [\(cond\)], [\(a\)] and [\(b\)] are all ciphertexts, and the return of choose, which is either [\(a\)] or [\(b\)], is also a ciphertext.
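As a plaintext sanity check of the arithmetic identity behind choose, the sketch below (ours) evaluates it on ordinary integers; in the protocol the same expression would be computed over ciphertexts with homomorphic addition and multiplication.

```python
# Plaintext sanity check (ours) of the oblivious ternary:
# choose(cond, a, b) = cond * (a - b) + b, with cond in {0, 1}.
def choose(cond: int, a: int, b: int) -> int:
    # Over ciphertexts, '*', '-' and '+' would be homomorphic mul/add;
    # cond is never decrypted, so the selected branch is not revealed.
    return cond * (a - b) + b

assert choose(1, 10, 20) == 10   # cond true  -> a
assert choose(0, 10, 20) == 20   # cond false -> b
```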
With the ternary operator, we can define more sophisticated functions. Algorithm 1 lists some commonly-used functions and their implementation details. All these functions are evaluated and returned in ciphertext. lower([\(s\)]) and upper([\(s\)]) convert ASCII characters to all lower-case or upper-case for standardization. These are not applied by default because, in some circumstances, letter case is an important signal for identifying record similarity. Both methods take in the encrypted string s and loop over its characters one by one. For each character, the function determines whether it falls in a certain range of the ASCII table, and uses that as the condition for choose to decide whether to modify the value or not. If the homomorphic scheme supports SIMD operations, calculating cond and choose can be run as a batch in one operation, so the for-loop is saved. We also provide the is_in([\(a\)],[\(b\)]) function, which detects whether [\(a\)] is a sub-string of [\(b\)] by scanning for the existence of [\(a\)] from [\(b\)]'s left to right. Using this \(O(n^{2})\) naive method without an early exit, even when a sub-string match exists, is not ideal but unavoidable, since it prevents the disclosure of the execution path, as stated above.
Note that although we design these methods under the assumption of homomorphic encryption schemes, some secure multi-party computation protocols may also be suitable for adopting them (Kumar et al., 2017). In terms of extensibility, as long as a needed function can be encapsulated in a way that can be computed homomorphically, it can safely be added to the framework.
```
Algorithm 1: pre-defined homomorphic functions (lower, upper, is_in)

Input & return: the inputs [s], [a], [b] are all homomorphically encrypted;
the return value of every function is also a ciphertext. len returns the
length of an encrypted string in plaintext.

Function lower([s] : string):
    for i <- 1 to len([s]) do
        [c]    <- [s][i]
        [cond] <- ([c] >= ['A']) & ([c] <= ['Z'])
        [s][i] <- choose([cond], [c] + 32, [c])   # 'a' - 'A' = 32 in ASCII
    return [s]

Function upper([s] : string):
    # symmetric to lower: test the range ['a'..'z'] and subtract 32

Function is_in([a] : string, [b] : string):
    # slide [a] across [b] from left to right; AND the per-character
    # equality tests in each window, OR the window results; no early exit
```
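The following plaintext sketch (ours) mirrors this oblivious scan: every window is tested and the results are combined with arithmetic AND/OR, with no early exit, just as the homomorphic version would do over ciphertexts.

```python
# Plaintext sketch (ours) of the oblivious substring test. Each character
# comparison and the final OR are expressed with arithmetic only, mirroring
# what would run over ciphertexts (equality tests, add, mul); no early exit.
def is_in(a: str, b: str) -> int:
    found = 0                                   # an encrypted 0 in the protocol
    for start in range(len(b) - len(a) + 1):
        match = 1                               # AND over the window, as a product
        for k in range(len(a)):
            eq = int(a[k] == b[start + k])      # homomorphic equality in practice
            match *= eq
        found = found + match - found * match   # OR(found, match), arithmetically
    return found                                # stays "encrypted"; no branching

assert is_in("24-70", "canon 24-70mm f/2.8l usm ii") == 1
assert is_in("24-105", "canon 24-70mm f/2.8l usm ii") == 0
```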
### End Conditions
The encrypted results from \(P_{A}\) and \(P_{B}\) are collected and evaluated by \(P_{C}\). On \(P_{C}\)'s side, it aligns \(\llbracket A^{ij}_{A}\rrbracket\) and \(\llbracket A^{ij}_{B}\rrbracket\) according to \(i\) and \(j\), and tests whether \(\llbracket A^{ij}_{A}\rrbracket=\llbracket A^{ij}_{B}\rrbracket\) by decrypting the comparison result with the secret key \(sk\). \(P_{C}\) then has a mapping \(F\) with record id pairs \((i,j)\) as keys and, as values, Booleans indicating whether the two parties have made an agreement on the same record pairs. Formally,
\[F=\{(i,j)\mapsto\mathsf{HE.Dec}_{\varepsilon}(\llbracket A^{ij}_{A}\rrbracket=\llbracket A^{ij}_{B}\rrbracket,sk)\\ |\,r_{i}\in X(D_{A}),r_{j}\in X(D_{B})\}.\]
For each record pair \((i,j)\) in \(F\), if the value is true, the agreement has been established between the two data owners, so no additional processing is needed. Otherwise, the record pair needs further investigation, and \(P_{C}\) extracts all \(i\)s and \(j\)s from the disagreed pairs and returns them to the respective data owners. The data owners then conduct another round of annotation only for these records. The annotations in later rounds tend to be less strict than in earlier rounds, which increases the possibility that \(\llbracket A^{ij}_{A}\rrbracket\) and \(\llbracket A^{ij}_{B}\rrbracket\) agree. Simultaneously, \(P_{C}\) maintains another list \(G_{h}\) that stores the annotation result from one of the parties (assuming \(P_{A}\) here):
\[G_{h}=\{(i,j,l)\mid h\in[1,t],l\leftarrow\mathsf{HE.Dec}_{\varepsilon}(\llbracket A^{ij}_{A}\rrbracket,sk)\},\]
where \(l\) is the label, \(h\) is the round number, and \(t\) is a parameter, agreed between the data owners in the initialization step, denoting the maximum number of rounds. When \(t\) rounds have finished, the protocol ends: the pairs whose value is true in \(F\) are added to the final ground truth, and the others are discarded. Therefore, the ground truth set \(G\) is constructed as
\[G=\{(i,j,l)\mid(i,j,l)\in G_{t},F(i,j)=true\}.\]
Obviously, when all the values in \(F\) are true in the \(h\)-th round, there is no need to recurse into the \((h+1)\)-th round. The protocol meets the early termination condition and exits immediately.
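Putting the end conditions together, \(P_{C}\)'s loop can be summarized by the following plaintext sketch (ours): encryption and decryption are elided, and annotate_A/annotate_B are stand-ins for the parties' per-round encrypted answers.

```python
# Plaintext sketch (ours) of P_C's agreement loop; HE steps are elided.
# annotate_A / annotate_B map a record id pair (i, j) and a round h to
# the party's Boolean annotation for that pair.
def run_rounds(pairs, annotate_A, annotate_B, t):
    ground_truth, pending = [], list(pairs)
    for h in range(1, t + 1):
        F = {(i, j): annotate_A(i, j, h) == annotate_B(i, j, h)
             for (i, j) in pending}                     # the agreement map F
        ground_truth += [(i, j, annotate_A(i, j, h))
                         for (i, j), agree in F.items() if agree]
        pending = [p for p, agree in F.items() if not agree]
        if not pending:                                 # early termination
            break
    return ground_truth                                 # disagreed pairs dropped
```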
### Security Analysis
#### 4.6.1. Security Definitions
We model \(\mathcal{A}\) as a probabilistic polynomial-time machine and the parties as interactive Turing machines.
Definition 1 (Adversary).: _An honest-but-curious adversary \(\mathcal{A}\) is capable of corrupting a subset of parties in the system, but not the server and one of the data owners at the same time. A compromised party will divert to \(\mathcal{A}\) all the received messages and act as \(\mathcal{A}\) requests._
**Ideal World.** Our ideal-world functionality \(\mathcal{F}\) interacts with annotation parties as follows:
1. Each data owner sends \(\mathcal{F}\) its plaintext data \(r\in D\) and plaintext feature questions \(Q\). \(\mathcal{F}\) processes them and checks the Boolean result \(w\), i.e., whether \(Q_{i}(r_{j})=Q_{j}(r_{i}),\forall Q\forall r\).
2. If \(w\) is true, then add the records from both sides to the ground truth.
3. If \(w\) is false, repeat Step (1) until an agreement is reached or it exceeds a set trial threshold.
**Real World.** In the real world, \(\mathcal{F}\) is replaced and realized by our protocol described in the previous parts of this section.
Definition 2 (Security).: _A blind annotation protocol is simulation secure if for every adversary \(\mathcal{A}\) in the real world, there exists a security simulator \(\mathcal{S}\) in the ideal world that also corrupts the same set of parties and produces an output identically distributed to \(\mathcal{A}\)'s output in the real world._
#### 4.6.2. Security Simulation
We describe a security simulator \(\mathcal{S}\) that simulates the view of \(\mathcal{A}\) in the real-world execution of our protocol. Our security definition 2 and \(\mathcal{S}\) ensure both confidentiality and correctness. \(\mathcal{S}\) receives \(\mathcal{F}(C,x)\) from \(\mathcal{F}\), where \(C\) denotes the computing circuits. \(\mathcal{S}\) sends \(C\) to the homomorphic encryption simulator \(\mathcal{S}_{HE}\) and obtains fake homomorphic encryption circuits \(HE_{fake}\). \(\mathcal{S}_{HE}\) generates a random string \(\sigma_{fake}\) of the same length as the output \(o\). \(\mathcal{S}\) sends \((HE_{fake},\sigma_{fake})\) to \(\mathcal{A}\). As the HE circuit distribution is independent of the inputs, \(HE_{fake}\) is computationally indistinguishable from the real HE circuits \(HE\) in the real execution. The random output \(\sigma_{fake}\) in the ideal execution is indistinguishable from \(o\) in the real execution. In the ideal world, \(\mathcal{S}\) creates the fake circuits \(HE_{fake}\) and does not use \(x\) for computing; otherwise, \(\mathcal{A}\) could use \(x\) to evaluate the circuit, which would allow \(\mathcal{A}\) to distinguish between the real and ideal executions.
## 5. Experiments
We rigorously prove the privacy-preserving property of the proposed protocol in Section 4.6. In this section, we conduct experiments to evaluate the feasibility of using blind annotation to annotate datasets.
### Simulation, Datasets and Settings
#### 5.1.1. Software Simulation
To empirically study the feasibility of our blind annotation protocol in a practical sense, we implement a web-based, user-friendly simulator 1. The DSL syntax is written in Extended Backus-Naur Form (EBNF) and parsed by the Lark library 2 with a Look-Ahead Left-to-Right (LALR) parser. The syntax definition can easily be extended with new syntax, operators, and functions. The workflow of this annotation simulator mirrors the settings of our protocol: each domain oracle annotates the owned dataset individually, and the simulator merges and calculates the results. If the annotations do not satisfy the exit condition, the unqualified records are returned to the corresponding oracles for the next annotation round.
Footnote 1: Link is removed for anonymity.
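The paper does not spell out the grammar itself; as a rough illustration of how such a Lark/LALR definition might look, consider the fragment below. It is hypothetical: the rule names and coverage are ours, and the real grammar is larger.

```python
from lark import Lark

# Hypothetical fragment (ours) of the annotation DSL grammar in Lark's
# EBNF dialect, parsed with the LALR parser as described in the text.
GRAMMAR = r"""
start: stmt+
?stmt: assign | ret
assign: VAR "=" expr
ret: "ret" expr
?expr: expr "|" term      -> or_
     | expr "&" term      -> and_
     | term
?term: "!" term           -> not_
     | "(" expr ")"
     | call
     | VAR
     | ESCAPED_STRING
call: NAME "(" [expr ("," expr)*] ")"
VAR: "$" NAME
NAME: /[a-z_][a-z0-9_]*/
COMMENT: /#[^\n]*/
%import common.ESCAPED_STRING
%import common.WS
%ignore WS
%ignore COMMENT
"""

parser = Lark(GRAMMAR, parser="lalr")
print(parser.parse('$c1 = is_in("canon", $r)\nret $c1').pretty())
```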
#### 5.1.2. Datasets
We use the real-world entity resolution benchmark (Lark, 2017), which includes 8 datasets from both the e-commerce and bibliographic domains. The task is to find matches between every two paired datasets; the ground truth files (4 in total) of the matches are provided. Since we aim to mimic the process of labeling ground truth, we sample subsets from each dataset: we first randomly sample 50 labeled matches from each provided ground truth, which gives us at most 50 records from each dataset because one record could link to multiple records. In order to calculate precision, recall, and F-measure correctly, the dataset has to be self-contained, that is, all the matches of the sampled records should appear in the sampled ground truth. The
specifications and the basic statistics of the datasets are listed in Table 1. For "Abt-Buy" and "Amazon-Google", which are about products from two online retailers, we select "name" as the only attribute to use, since the other attributes are mostly missing or very noisy. For "DBLP-Scholar" and "DBLP-ACM", though the attributes are also sparse except for the "title", we select all of them. The selected number of entities from each dataset is not strictly 50 because of the many-to-many matches issue mentioned above. We do not perform any specific data cleaning or normalization except that all the Unicode characters are mapped into the ASCII range.
#### 5.1.3. Evaluation Metrics
We use precision, recall and F-measure as the standard evaluation metrics to measure the accuracy of blind annotation against the scores computed from the ground truth labels. Moreover, we analyze other observed phenomena, including the relationship between the annotation rounds and the number of labeled pairs that come to an agreement, and the importance of tokens versus the features being extracted.
### Feasibility Verification
We try to validate two hypotheses in the following experiments to verify the feasibility of using blind annotation to construct the ground truth dataset for PPER: (1) extracting features from only one side of the dataset, without inspecting the other side or comparing candidate pairs side-by-side, is a feasible approach; (2) more annotation rounds are effective for improving the quality of the PPER ground truth labeling.
From Figure 5, we observe that the F-measures for Abt-Buy, DBLP-Scholar, and DBLP-ACM are all above 0.9, but Amazon-Google is not as good as the others. The Amazon-Google dataset used in our experiment can be considered a hard problem, as the record representations are full of abbreviations (e.g., software as sw), missing brands or models for products, and carry a limited amount of information (very short names). If better data pre-processing, especially normalization, were applied beforehand, a visible performance boost should be achieved. With more rounds of records annotated, precision and F-measure increase sharply for Abt-Buy and Amazon-Google, but less noticeably for DBLP-Scholar and DBLP-ACM, since both of them have already achieved high scores at the initial stage. Interestingly, the recall values for DBLP-Scholar and DBLP-ACM drop slightly as more rounds are executed; this is mainly a side-effect of the annotation strategy: when the annotations from the two parties do not come to an agreement, the oracles tend to make the annotations more general in the next round, that is, they relax the strictness of the matching criterion to capture similarities more easily. Normally, a slack criterion diminishes precision but raises recall, as more negatives become positives; however, another variable, the number of agreements between the parties, is also increasing in blind annotation, which increases both true positives and false negatives, and recall somewhat drops as a consequence.
We further explore other effects that occur as the annotation rounds iterate. As shown in Figure 6, the green line, which indicates the number of agreements, approaches the gray line, which is the total number of candidate pairs, as the annotation round increases. On the other hand, the number of records that require annotation, shown as the bars, decreases. Specifically, for Abt-Buy and Amazon-Google, the annotation demands drop to 1/4 and 1/2 of the original amount, respectively; for DBLP-Scholar and DBLP-ACM, in the second and third rounds, this number is even less than 10%, because of the more explicit features these datasets have.
It is worth mentioning that since the experiment is run through simulation, we do not focus on the execution time. The labeling time from the simulation GUI is on average less than 1 min per record. Though homomorphic operations are generally computation-intensive and resource-draining, utilizing them for labeling ground-truth datasets is only a small part of the entire process and is not time-sensitive. Specifically, labeling locally and processing encrypted inputs from both sides are not strictly online tasks where real-time responses from the parties are expected. Therefore, the HE overheads would not challenge our protocol's feasibility.
In conclusion, the first hypothesis holds because all evaluation results lie in the acceptable range. With better cleaned and normalized datasets, the performance in terms of precision, recall and F-measure should be promising. The second hypothesis also holds because precision, recall, and F-measure surge at the beginning as the annotation rounds increase, and tend to be steady after a series of such rounds. The number of disagreements between the parties drops over the rounds before it hits a plateau.
### Feature Importance
We are curious whether the important features are more likely to be extracted by the annotators. We tokenize the records based on white space without any additional processing steps. Meanwhile, we count the number of times the function is_in is used to extract the corresponding tokens. Note that annotators could modify the tokens accordingly, e.g., "photoshop" to "ps", and "international" to "i18n"; we only use the original tokens to construct the x-axis, thus none of these non-original tokens are counted if they do not appear in the original token list.
The distribution of tokens and is_in usage is summarized in Figure 7. The tokens follow a long-tail distribution, whereas the dense invocations of the is_in function concentrate on its "tail". This observation shows that the common tokens carry relatively less
\begin{table}
\begin{tabular}{r l l|c c|c c} \hline Dataset & Domain & Attributes & \#entities (ori) & \#matches (ori) & \#entities & \#matches \\ \hline Abt-Buy & E-commerce & name, description, manufacturer, price & 1081+1092 & 1097 & 50+50 & 50 \\ Amazon-Google & E-commerce & name, description, manufacturer, price & 1363+3226 & 1300 & 49+50 & 50 \\ DBLP-Scholar & Bibliographic & title, authors, venue, year & 2616+64263 & 5347 & 49+50 & 50 \\ DBLP-ACM & Bibliographic & title, authors, venue, year & 2614+2294 & 2224 & 50+50 & 50 \\ \hline \end{tabular}
\end{table}
Table 1. Dataset specifications and basic statistics. The underlined attributes are the attributes selected in the experiments.
information than the rare ones, so the feature extraction depends more on the latter. Some common tokens also attract a fair number of is_in invocations; these are usually the common but distinctive tokens of the records, for instance, the brand name of a product. Some is_in functions are called significantly more often than the corresponding token appears; these derive from modifications of the original token.
## 6. Conclusion and Future Work
In this work, we propose a blind annotation protocol based on homomorphic encryption for labeling the ground truth tailored to privacy-preserving entity resolution. Unlike revamping traditional annotation methods with de-identified records, this protocol explores the possibility of annotating without revealing any record in plaintext to other parties. The domain-specific language lowers the bar for real-world implementation, and the simulated experiment results show the soundness of blind annotation.
The simulator starts the conversation on the feasibility of the blind annotation protocol; developing and benchmarking with a real encryption toolchain is the reasonable next step. Thanks to the extensibility of the DSL, more qualified homomorphic evaluation functions can be invented and integrated. Additionally, supporting multi-attribute datasets is a practical enhancement that would increase information utilization and improve linkage accuracy.
|
2304.05090 | CrowdSim2: an Open Synthetic Benchmark for Object Detectors | Data scarcity has become one of the main obstacles to developing supervised
models based on Artificial Intelligence in Computer Vision. Indeed, Deep
Learning-based models systematically struggle when applied in new scenarios
never seen during training and may not be adequately tested in non-ordinary yet
crucial real-world situations. This paper presents and publicly releases
CrowdSim2, a new synthetic collection of images suitable for people and vehicle
detection gathered from a simulator based on the Unity graphical engine. It
consists of thousands of images gathered from various synthetic scenarios
resembling the real world, where we varied some factors of interest, such as
the weather conditions and the number of objects in the scenes. The labels are
automatically collected and consist of bounding boxes that precisely localize
objects belonging to the two object classes, leaving out humans from the
annotation pipeline. We exploited this new benchmark as a testing ground for
some state-of-the-art detectors, showing that our simulated scenarios can be a
valuable tool for measuring their performances in a controlled environment. | Paweł Foszner, Agnieszka Szczęsna, Luca Ciampi, Nicola Messina, Adam Cygan, Bartosz Bizoń, Michał Cogiel, Dominik Golba, Elżbieta Macioszek, Michał Staniszewski | 2023-04-11T09:35:57Z | http://arxiv.org/abs/2304.05090v1 | # CrowdSim2: an Open Synthetic Benchmark for Object Detectors
###### Abstract
Data scarcity has become one of the main obstacles to developing supervised models based on Artificial Intelligence in Computer Vision. Indeed, Deep Learning-based models systematically struggle when applied in new scenarios never seen during training and may not be adequately tested in non-ordinary yet crucial real-world situations. This paper presents and publicly releases _CrowdSim2_, a new synthetic collection of images suitable for people and vehicle detection gathered from a simulator based on the _Unity_ graphical engine. It consists of thousands of images gathered from various synthetic scenarios resembling the real world, where we varied some factors of interest, such as the weather conditions and the number of objects in the scenes. The labels are automatically collected and consist of bounding boxes that precisely localize objects belonging to the two object classes, leaving out humans from the annotation pipeline. We exploited this new benchmark as a testing ground for some state-of-the-art detectors, showing that our simulated scenarios can be a valuable tool for measuring their performances in a controlled environment.
## 1 Introduction
In recent years, Computer Vision swerved toward Deep Learning (DL)-based models that learn from vast amounts of annotated data during the supervised learning phase. These models achieved astonishing results in several tasks that nowadays are considered basic, such as image classification, causing interest in addressing more complex domains such as object detection [1], image segmentation [2], visual object counting [3][4][5], people tracking [6], or even facial reconstruction [7] and video violence detection [8]. However, these more cumbersome tasks often also require more structured datasets that come with challenges concerning bias, privacy, and cost in terms of human effort for the annotation procedure.
Indeed, more complex tasks correspond to more elaborate labels, and for each data sample, the effort shifts from annotating an image to annotating the objects present in it, even at the pixel level. Furthermore, more challenging tasks often go hand in hand with more complex scenarios that may rarely occur in the real world, yet correctly handling them can be crucial. Finally, privacy concerns surrounding Artificial Intelligence-based models have become increasingly important, further complicating data collection. Consequently, labeled datasets are limited, and data scarcity has become the main stumbling block for the development and the in-the-wild application of Computer Vision algorithms. Deep Learning-based algorithms systematically struggle in new scenarios never seen during the training phase and may not be adequately tested in non-ordinary yet crucial real-world situations.
One appealing solution that has recently arisen relies on collecting _synthetic data_ gathered from _virtual environments_ resembling the real world. Here, by interacting with the graphical engine, it is possible to _automatically_ collect the labels associated with the objects of interest, cutting the human effort out of the annotation procedure, thus reducing the costs. Furthermore, these reality simulators provide frameworks where it is possible to create specific scenarios by controlling and explicitly varying the factors that characterize them. Hence, they represent the perfect environments for automatically acquiring labeled data for the training phase, but they can also be used as controlled testing grounds for evaluating the performance capabilities of the employed models.
In this paper, we consider the object detection task, focusing our attention on _people_ and _vehicle_ detection. We deem that people localization is crucial for security as well as for crowd analysis; on the other hand, vehicle detection constitutes the building block for urban and road planning, traffic light modeling, and traffic management, to name a few. In particular, we introduce and make publicly available _CrowdSim2_, a new vast collection of synthetic images suitable for object detection and counting, collected by exploiting a simulator based on the _Unity_ graphical engine. Specifically, it consists of thousands of small video clips gathered from various synthetic scenarios where we varied some factors of interest, such as the weather conditions and the number of objects in the scenes. The labels are automatically collected and consist of bounding boxes that precisely localize objects belonging to two different classes -- _person_ and _vehicle_. We report in Figure 1 some samples of images together with the bounding boxes localizing the objects of interest in different scenarios we rendered with our simulator. Then, we present a detailed experimental analysis of the performance of several state-of-the-art DL-based object detectors pre-trained over general object detection databases present in the literature by exploiting our _CrowdSim2_ dataset as a testing ground. More in-depth, we extracted, from the collected videos, batches of frames belonging to specific and controlled scenarios, and we
measured the obtained performances by varying the factors that characterized them.
Summarizing, the contributions of this paper are listed below:
* we propose _CrowdSim2_, a new synthetic dataset suitable for _people_ and _vehicle_ detection, collected by exploiting a simulator based on the _Unity_ graphical engine and made freely available in the Zenodo Repository at [9];
* we test some state-of-the-art object detectors over this new benchmark, exploiting it as a testing ground where we varied some factors of interest such as the weather conditions and the object density;
* we show that our simulated scenarios can be a valuable tool for measuring detectors' performances in a controlled environment.
## 2 Related Works
### Synthetic Datasets
Synthetically-generated datasets have recently gained considerable interest due to the need for huge amounts of annotated data. Some notable examples are _GTA5_[10] and _SYNTHIA_[11] for semantic segmentation, _Joint Track Auto (JTA)_[12] for pedestrian pose estimation and tracking, _Virtual Pedestrian Dataset (ViPeD)_[13][14] for pedestrian detection, _Grand Traffic Auto (GTA)_[15] for vehicle segmentation and counting, _CrowdVisorPPE_[16] for Personal Protective Equipment detection and _Virtual World Fallen People (VWFP)_[17] for fallen people detection. These datasets are mainly exploited for training deep learning models, which benefit from the fact that these collections of images are vast, since the labels are automatically collected. On the other hand, using synthetic data as test collections is a relatively unexplored field. Furthermore, the datasets mentioned above are collected from the GTA V (Grand Theft Auto V) video game by Rockstar North. Although it is a very realistic generator of annotated images, some limitations arise when new scenarios or behaviors are needed. By contrast, using a simulator based on an open-source graphical engine allows one to create more customized environments and easily modify some factors of interest -- density of the objects, weather conditions, and object interactions.
### Object Detectors
In the last decade, object detection has become one of the most critical and challenging branches of Computer Vision. It deals with detecting instances of semantic objects of a specific class (such as humans, buildings, or cars) in digital images and videos [18]. This task has attracted increasing attention due to its wide range of applications and recent technological breakthroughs. Currently, most state-of-the-art object detectors employ Deep Learning models as their backbones and detection networks to extract features from images and to perform classification and localization, respectively. Existing object detectors can be divided into two categories: _anchor-based_ detectors and _anchor-less_ detectors. The models in the first category compute bounding box locations and class labels of object instances exploiting Deep Learning-based architectures that rely on anchors, i.e., prior bounding boxes with various scales and aspect ratios. They can be further divided into two groups: i) the two-stage paradigm, where a first module is responsible for generating a sparse set of object proposals and a second module is in charge of refining these predictions and classifying the objects; and ii) the one-stage approach that directly regresses to bounding boxes by sampling over regular and dense locations, skipping the region proposal stage. Some notable examples belonging to the first group are _Faster R-CNN_[19] and _Mask R-CNN_[20]. At the same time, popular networks of the latter set are the _YOLO_ family and the _RetinaNet_[21] algorithm. On the other hand, anchor-free methods rely on predicting key points, such as corner or center points, instead of using anchor boxes with their inherent limitations. Some popular works existing in the literature are _CenterNet_[22] and _YOLOX_[23]. Very recently, another object detector category is emerging, relying on the newly introduced Transformer attention modules for processing image feature maps, removing the need for hand-designed components like a non-maximum suppression procedure or anchor generation. Some examples are the _DEtection TRansformer (DETR)_[24] and one of its evolutions, _Deformable DETR_[25].
In this paper, we consider some networks belonging to the _"You Only Look Once" (YOLO)_ family of detectors, which turned out to be among the most promising detector architectures in terms of efficiency and accuracy. The algorithm was introduced by [26] as a part of a custom framework called _Darknet_[27]. The acronym _YOLO (You Only Look Once)_ derives from its single-shot regression approach. The authors introduced the single-stage paradigm, which made the model very fast and small, even
Figure 1: Some samples of our synthetic dataset we rendered with our simulator, together with the bounding boxes localizing the objects of interest.
possible to implement on edge devices. The next version was _YOLOv2_[28], which introduced some iterative improvements (higher resolution, BatchNorm, and anchor boxes). _YOLOv3_[29] added backbone network layers to the model and some other minor improvements. _YOLOv4_[30] introduced improved feature aggregation and mish activation. _YOLOv5_[31] proposed some improvements in feature detection, split into two stages - shallow feature detection and deep feature detection. The latest ones _YOLOv6_[32] and _YOLOv7_[33] added some new modules like the re-parameterized module and a dynamic label assignment strategy, further increasing the accuracy.
Footnote 1: [https://github.com/yolovolov/](https://github.com/yolovolov/)
## 3 The Crowdsim2 dataset
In this section, we introduce our _CrowdSim2_ dataset, a novel synthetic collection of images for _people_ and _vehicle_ detection 2. First, we describe the Unity-based simulator we exploited for gathering the data, and then we depict the salient characteristics of this new database.
Footnote 2: The dataset is freely available in the Zenodo Repository at [https://doi.org/10.5281/zenodo.7262220](https://doi.org/10.5281/zenodo.7262220)
### The Simulator
In this work, we exploited an extended version of the _CrowdSim_ simulator, introduced in [34], that was designed and developed by using the _Unity_ graphical engine. The main goal of this simulator is to produce _annotated_ data to be used for training and testing Deep Learning-based models suitable for object and action detection. For this purpose, it allows users to generate realistic image sequences depicting scenes of urban life, where objects of interest are localized with precise bounding boxes. More in-depth, the simulator is designed using the _agent-based_ paradigm. In this approach, an agent - in our work either a human or a vehicle - is controlled individually, and decisions are made in the context of the environment in which the agent was placed. For instance, people can perform different types of movement thanks to the skeletal animation [35] and actions depending on the situation in which they find themselves, including running, walking, jumping, waving or shaking hands, etc. The related animations vary depending on the age, height, and posture of the agent. Also, interactions between agents are possible in the so-called _interaction zones_. Within this zone, the simulator continuously checks several conditions, such as the number of agents in the zone or random variables. If the conditions are met, the agents interact (fight, dance, etc.).
The environment in which agents are placed is as important as the movement and behavior of the agents themselves. The considered simulator allows the user to generate a situation in four locations. They are:
* traffic with intersections, pedestrian crossings, sidewalks, etc., in a typical urban environment, captured from three different cameras;
* a green park for pedestrians without traffic, filmed from three cameras;
* the main square of an old town, captured with two cameras;
* a tunnel for cars captured at both the endpoints, perfect for issues related to re-identification.
General rules of road traffic were applied to car movements. The starting positions of the cars are randomized among pre-defined starting points, and then the vehicles move to the point where they need to change direction. In such a place, cars make random decisions regarding further movements. Cars can only move in designated zones (streets and parking bays).
### Simulated data
Using the simulator described in the previous section, we gathered a synthetic dataset suitable for _people_ and _vehicle_ detection. Specifically, for people detection, we used three different scenes, while for car detection, two different scenarios. We recorded thousands of small video clips of 30 seconds at a resolution of \(800\times 600\) pixels and a frame rate of 25 Frames Per Second (FPS), from which we extracted hundreds of thousands of still images. We varied several factors of interest, such as people's clothes, vehicle models, weather conditions (sun, fog, rain, and snow), and the objects' density in the scene. The ground truth is generated following the gold standard of the _MOTDet Challenge_3, consisting of the coordinates of the bounding boxes localizing the objects of interest -- _people_ and _vehicles_ in our case. The summary of the generated data is presented in Table 1. We report in Figure 2 the four different weather conditions we considered as one of the factors we varied during the data recording.
Footnote 3: [https://motchallenge.net/](https://motchallenge.net/)
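As a concrete illustration, ground truth in the MOTChallenge convention can be loaded with a few lines of Python; the sketch below is ours and assumes the standard comma-separated columns (frame, object id, bb_left, bb_top, bb_width, bb_height, confidence, class, visibility).

```python
import csv
from collections import defaultdict

def load_mot_gt(path):
    """Read a MOTChallenge-style gt.txt into {frame: [(x1, y1, x2, y2, cls), ...]}.
    Sketch (ours); assumes the standard 9-column comma-separated layout."""
    boxes = defaultdict(list)
    with open(path, newline="") as f:
        for frame, oid, x, y, w, h, conf, cls, vis in csv.reader(f):
            if float(conf) > 0:                          # conf = 0 marks ignored rows
                x, y, w, h = map(float, (x, y, w, h))
                boxes[int(frame)].append((x, y, x + w, y + h, int(cls)))
    return boxes
```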
## 4 Results and discussion
In this section, we evaluate several deep learning-based object detectors belonging to the _YOLO_ family, described in Section 2, on our _CrowdSim2_ dataset. Following the primary use case for this dataset explained in Section 1, we employed it as a test benchmark to measure the performance of the considered methods in a simulated scenario where some factors of interest are controlled and changed. Specifically, we compared the obtained results considering four different weather conditions - _sun_, _rain_, _fog_, _snow_ - and different densities of objects present in the scene - from 1 object to hundreds of objects.
We considered two different _YOLO_-based models: _YOLOv5_ and _YOLOv7_. Concerning _YOLOv5_, we selected two architectures with different numbers of trainable parameters - a light version we called _YOLO5s_ and a deeper architecture we referred to as _YOLO5m_. Concerning _YOLOv7_, we exploited
\begin{table}
\begin{tabular}{|c|c|c|} \hline & \# video-clips & \# frames \\ \hline Sun & 2,899 & 2,174,250 \\ \hline Rain & 1,633 & 1,224,750 \\ \hline Fog & 1,653 & 1,239,750 \\ \hline Snow & 1,646 & 1,234,500 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of our generated synthetic data. Each row corresponds to different weather conditions we set using our simulator. We report the total number of the collected video clips and the number of frames we extracted from them.
the standard architecture (referred to as _YOLO7_) and a deeper version which we called _YOLO7x_. Our decision to consider models with different architectures was dictated by the fact that we wanted to verify that their behavior on the simulated data reflects the one observable on real-world datasets - shallow models are expected to exhibit moderate performance compared to deeper architectures. We refer the reader to Section 2 and the related papers for further details about the architectures of the employed detectors. All the models were fed with images of \(640\times 640\) pixels and were pre-trained on the _COCO_ dataset [36], a popular collection of images for general object detection.
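As an illustration of this setup, a COCO-pretrained YOLOv5 model can be run on a CrowdSim2 frame via the public torch.hub entry point. This is a sketch of ours, not necessarily the authors' exact pipeline, and the file name is hypothetical.

```python
import torch

# Sketch (ours) of the inference setup: COCO-pretrained YOLOv5s from the
# public ultralytics/yolov5 hub, images letterboxed to 640x640 as in the paper.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.25                        # confidence threshold
model.classes = [0, 2]                   # COCO ids: 0 = person, 2 = car

results = model("frame_000001.jpg", size=640)   # hypothetical frame name
detections = results.xyxy[0]             # tensor: x1, y1, x2, y2, conf, class
print(detections)
```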
We performed two different sets of experiments -- the first related to people detection and the second to vehicle detection. We evaluated and compared the above-described detectors following the gold-standard Average Precision (AP), i.e., the average precision value for recall values from 0 to 1. Specifically, we considered the MS COCO AP@[0.50], i.e., the AP computed at the single IoU threshold of 0.50 [36]. We report the results concerning people detection varying the weather conditions and the people density in Figure 3 and Figure 4, respectively. On the other hand, results regarding vehicle detection varying the same two factors are depicted in Figure 5 and Figure 6, respectively. Concerning people detection, the considered models perform slightly better under the _sun_ weather condition. On the other hand, under the _rain_, _snow_, and _fog_ weather conditions, the detectors obtain lower APs. This is an expected outcome since, also in the real world, the detectors face more challenges when required to work in those specific conditions, as the objects are more difficult to find. This trend is even more pronounced in the car detection experiments, where some detectors particularly struggle in the _rain_ and _fog_ settings. On the other hand, both people detection and car detection exhibit performance degradation as the number of objects in the scene increases. Again, this behavior is expected and reflects the fact that detecting instances is far more challenging in overcrowded scenarios.
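For reference, the AP@[0.50] computation used above can be sketched as follows; this is our simplified implementation, and official evaluators such as COCO's differ in details.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def ap50(dets, gts):
    """AP@0.5 sketch (ours). dets: list of (box, score); gts: list of boxes."""
    dets = sorted(dets, key=lambda d: -d[1])          # descending confidence
    matched = [False] * len(gts)
    tp, fp = np.zeros(len(dets)), np.zeros(len(dets))
    for i, (box, _) in enumerate(dets):
        ious = [iou(box, g) for g in gts]
        j = int(np.argmax(ious)) if ious else -1
        if j >= 0 and ious[j] >= 0.5 and not matched[j]:
            tp[i], matched[j] = 1, True               # first match wins
        else:
            fp[i] = 1
    rec = np.cumsum(tp) / max(len(gts), 1)
    prec = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
    env = np.maximum.accumulate(prec[::-1])[::-1]     # precision envelope
    ap, prev = 0.0, 0.0
    for r, p in zip(rec, env):                        # area under the PR curve
        ap, prev = ap + (r - prev) * p, r
    return ap
```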
Looking at Figure 3, note how in the people detection scenarios the performance difference among the different detectors is negligible, although _YOLO7x_ seems to achieve the best mean AP and _YOLO5s_ exhibits the worst results. Also, considering Figure 3(a), we can observe how _YOLO7_, _YOLO7x_ and _YOLO5m_ maintain a certain robustness even in the most challenging conditions, while _YOLO5s_ - besides starting with a worse detection performance even in the _sun_ setting - has a decreasing trend for the other weather conditions, reaching its worst AP of around 0.19 in the _fog_ setting. Contrarily, the performance of the
Figure 3: Average Precision with IOU = 0.5 calculated for different weather conditions (_sun_, _fog_, _rain_ and _snow_), obtained for the _people_ detection task by exploiting the four considered _YOLO_ methods.
Figure 2: Samples of our synthetic data where we show the four different weather conditions we varied with our simulator.
different models shows steeper differences in the car detection scenarios. In that case, _YOLO5s_ completely struggles in the _fog_, _snow_ and _rain_ scenarios, as shown in Figure 5 and in Figure 5(a). On the other hand, the _YOLO7_ models seem more robust to all weather conditions, except in the _fog_ setting, for which they exhibit moderate performance. This higher sensitivity of the detectors in vehicle detection compared to the people scenario may be due to how the different _YOLO_ versions have been trained, demonstrating greater robustness for people detection - even in very challenging weather scenarios - than for cars. This result contributes to validating our main claim that synthetic scenarios are crucial during the testing phase for finding biases or robustness breaches of largely-used detector models. Finally, by analyzing the results depicted in Figure 3(b) and in Figure 5(b), we can again confirm that the performances of the considered detectors are more similar in the people detection task, while they show significant differences in detecting vehicles, especially in crowded scenarios.
## 5 Conclusion
In this work, we introduced a new synthetic dataset for _people_ and _vehicle_ detection. This collection of images is automatically annotated by interacting with a realistic simulator based on the _Unity_ graphical engine. This allowed us to create a vast number of different simulated scenarios while leaving humans out of the annotation pipeline, in turn reducing costs and tackling the data scarcity problem affecting supervised Deep Learning models. At the same time, we kept control over some factors of interest, such as weather conditions and object densities, and we measured the performances of some state-of-the-art object detectors by varying those factors. Results showed that our simulated scenarios can be a valuable tool for measuring their performances in a controlled environment. The presented idea has an extensive number of possible applications. People and car detection can lead to different usages, such as object counting, traffic analysis, and object tracking. Furthermore, extending the crowd simulation toward action recognition is also desirable. We also plan to enrich our simulator by introducing the possibility of viewing from multiple cameras in urban environments to create a new benchmark for multi-object tracking.
## Acknowledgements
This work was supported by: European Union funds awarded to Blees Sp. z o.o. under grant POIR.01.01.01-00-0952/20-00 "Development of a system for analysing vision data captured by public transport vehicles interior monitoring, aimed at detecting undesirable situations/behaviours and passenger counting (including their classification by age group) and the objects they carry"; EC H2020 project "AI4Media: a Centre of Excellence delivering next generation AI Research and Training at the service of Media, Society and Democracy" under GA 951911; research project (RAU-6, 2020) and projects for young scientists of the Silesian University of Technology (Gliwice, Poland); research project INAROS (INtelligenza ARtificiale per il monitOraggio e Supporto agli anziani), Tuscany POR FSE CUP B53D2100806008. Publication supported under the Excellence Initiative - Research University program implemented at the Silesian University of Technology, year 2022. This research was supported by the European Union from the European Social Fund in the framework of the project "Silesian University of Technology as a Center of Modern Education based on research and innovation" POWR.03.05.00-00-Z098/17. We are thankful to the students participating in the design of the Crowd Simulator: P. Bartosz, S. Wrobel, M. Wola, A. Gluch and M. Matuszczyk.
|
2306.05831 | Interplay between Markovianity and Progressive Quenching | Progressive quenching (PQ) is a process in which we sequentially fix a
system's degrees of freedom, which would otherwise evolve according to their
stochastic dynamics. Previous studies have discovered what we refer to as the
hidden martingale property in PQ. Here, we first attribute this martingale
property to the canonicity of the two-layer ensemble comprising quenched and
thermal ensembles and demonstrate that the Markovian property, coupled with the
detailed balance (DB) of the evolution dynamics, underpins this canonicity. We
then expand the PQ to the Markovian dynamics on the transition network where
the DB is locally upheld. Additionally, we examine the PQ of the systems that
evolve through non-Markovian dynamics between consecutive quenching. When
non-Markovian dynamics ensure a trajectory-wise DB, such as in an equilibrium
spin system with a hidden part, the PQ can occasionally maintain the canonical
structure of the overall statistical ensemble, but not always. Lastly, we
analytically and numerically investigate the PQ of a non-Markovian spin system
with delayed interaction and illustrate how the reduction of spin correlations
due to the delay can be compensated by the PQ. | Charles Moslonka, Ken Sekimoto | 2023-06-09T12:03:42Z | http://arxiv.org/abs/2306.05831v2 | # Interplay between Markovianity and Progressive Quenching
###### Abstract
Progressive quenching (PQ) is a process in which we sequentially fix a system's degrees of freedom, which would otherwise evolve according to their stochastic dynamics. Previous studies have discovered what we refer to as the hidden martingale property in PQ. Here, we first attribute this martingale property to the canonicity of the two-layer ensemble comprising quenched and thermal ensembles and demonstrate that the Markovian property, coupled with the detailed balance (DB) of the evolution dynamics, underpins this canonicity. We then expand the PQ to the Markovian dynamics on the transition network where the DB is locally upheld. Additionally, we examine the PQ of the systems that evolve through non-Markovian dynamics between consecutive quenching. When non-Markovian dynamics ensure a trajectory-wise DB, such as in an equilibrium spin system with a hidden part, the PQ can occasionally maintain the canonical structure of the overall statistical ensemble, but not always. Lastly, we analytically and numerically investigate the PQ of a non-Markovian spin system with delayed interaction and illustrate how the reduction of spin correlations due to the delay can be compensated by the PQ.
Footnote †: Corresponding author: [email protected]
## I Introduction
Markov chains have been studied in depth for more than a century now, and their fields of application are wide and diverse [1; 2; 3; 4]. This is especially the case in stochastic physics [5; 6; 7]. Some studies concern the effect of changing the parameters of the Markov chain, that is, the topology of the transition network and the rates associated with the jumps on it [8; 9; 10; 11]. What we call Progressive Quenching (PQ) belongs to this category of studies: a system which was initially in equilibrium is modified by progressively fixing a part of its degrees of freedom.
In our previous work [12; 13; 14], the results of which are briefly summarized in Section II for the sake of self-containedness, we demonstrated that a martingale process emerged during the Progressive Quenching of a model of Ising spins. We tested this result both numerically and theoretically, using the inverse system size expansion as well as tower-rule-based arguments under certain hypotheses. This property enabled us to make predictions about the individual future trajectories of the process, in addition to inferring the anterior one. We realized that there is a canonical distribution underlying the ensemble of the final quenched configurations, and we explored different approaches to understand its origin via an explicit construction of path weight through the "local invariance," which is equivalent to the martingale property that emerged during the evolution process [14].
So far, we have assumed that before each quenching operation, the unquenched part of the system is in thermal equilibrium under a given number of fixed spins, and we have implicitly assumed continuous-time Markovian dynamics. However, we have not been conscious enough in distinguishing the dynamic aspect from the statistical aspect when defining the quenching process, in particular, we have not fully appreciated the importance of the Markovian property of the evolution dynamics.
In the present paper, we want to understand the conditions under which the canonical characteristics of the whole ensemble (both quenched and unquenched) are conserved along the PQ process, and how important the Markovian assumption on the stochastic evolution is for the martingale and the underlying canonical structure. We will discuss the Markovian and non-Markovian cases separately, as well as distinguish the dynamics with or without detailed balance.
The organization of the present paper is as follows: Section II presents our model setup (II.1) and previous results (II.2), where we explain the protocol of PQ and the property that we call the hidden martingale. In Section III we first introduce the two-story ensemble and argue that the detailed balance and the Markovian property of the dynamics are required for this ensemble to be canonical (III.1). We then extend PQ to Markovian dynamics on an arbitrary transition network (III.2), where the condition of detailed balance is also relaxed (III.3). Section IV deals with the case of non-Markovian systems. We argue that the canonical structure that supported the hidden martingale of PQ is generally broken even if the (trajectory-wise) detailed balance is initially assured (IV.1). Finally, we focus on the spin model with delayed interaction (IV.2) and show how the operation of PQ interferes with the non-Markovian delay through the time interval between consecutive quenches. Section V summarizes the results, and a comparison is made with the linear voter model.
## II Models and previous results
### Model setup
Our previous studies focused on a globally-coupled Ising spin model, consisting of \(N_{0}\) spins interacting pairwise on a complete graph with a coupling constant \(j=\frac{J}{N_{0}}\). The Hamiltonian reads:
\[\mathcal{H}=-j\sum_{i<j}^{N_{0}}s_{i}s_{j}=-\frac{j}{2}M_{tot}^{2}+\frac{j}{2}N_{0}, \tag{1}\]
where \(M_{tot}=\sum_{i=1}^{N_{0}}s_{i}\). In this system, we attribute spin indices \(i\) so that, after the \(T\)-th Progressive Quenching operation \((1\leq T\leq N_{0})\), the first \(T\) spins \(\{s_{1},\ldots,s_{T}\}\) constitute the _quenched_ part and the remaining \(N_{0}-T\) spins, \(\{s_{T+1},\ldots,s_{N_{0}}\}\), the _unquenched_ or _free_ part. We denote by \(M_{T}\) the quenched magnetization at the \(T\)-th stage:
\[M_{T}=\sum_{i=1}^{T}s_{i}.\]
The Hamiltonian for the remaining part then reads :
\[\mathcal{H}_{T}=-j\sum_{T+1\leq i<j\leq N_{0}}s_{i}s_{j}-(jM_{T}+h)\sum_{i=T+1 }^{N_{0}}s_{i}, \tag{2}\]
where \(h\) is the external field. We denote by \(Z_{T,M_{T}}\) the partition function of the partially quenched system characterized by \(\mathcal{H}_{T}\) above. Hereafter we set the inverse temperature \(\beta=(k_{\rm B}T)^{-1}=1\) through an appropriate unit of energy. The mean magnetization of the _free_ part at constrained thermal equilibrium, which we denote by \(m_{T,M_{T}}^{(eq)},\) reads
\[m_{T,M_{T}}^{(eq)}=\frac{1}{N_{0}-T}\,\partial\ln Z_{T,M_{T}}/\partial h, \tag{3}\]
With such a setting, the operation of PQ is to transfer a spin from the _free_ part to the _quenched_ part in such a way that the value of that spin is kept as it is. Therefore, if the system has \(T\) quenched spins, the subsequently quenched spin \(s_{T+1}\) satisfies
\[\mathbb{E}[s_{T+1}|M_{T}]=m_{T,M_{T}}^{(eq)}. \tag{4}\]
When we regard \(T\), the number of fixed spins, as a "time", we have two stochastic processes: the first being \(M_{T}\), the magnetization of the _quenched_ part of the system, and the second being \(m_{T,M_{T}}^{(eq)}\), the equilibrated mean spin of the _free_ part.
### Brief review of the previous results
_Hidden martingale_ : We showed previously [12; 13; 14] that, if we let the free spins reach thermal equilibrium after each quenching step, or equivalently quench the spin at \(\pm 1\) with the probabilities \(P(s_{T+1}=\pm 1)=\frac{1}{2}(1\pm m_{T,M_{T}}^{(eq)})\), then the evolution of \(m_{T,M_{T}}^{(eq)}\) satisfies the martingale law [15]:
\[\mathbb{E}[m_{T,M_{T}}^{(eq)}|M_{s}]=m_{s,M_{s}}^{(eq)}\quad(T\geq s). \tag{5}\]
This property is said "hidden" as it does not involve the main process \(M_{T}\).
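For the complete-graph model, \(m_{T,M_{T}}^{(eq)}\) can be evaluated exactly by summing over the free-part magnetization, so a PQ trajectory takes only a few lines of code. The sketch below is ours (not the authors' code) and implements the quench rule \(P(s_{T+1}=\pm 1)=\frac{1}{2}(1\pm m_{T,M_{T}}^{(eq)})\) at \(\beta=1\).

```python
import numpy as np
from scipy.special import gammaln

def m_eq(T, M_T, N0, J, h=0.0):
    """Exact constrained mean free spin, cf. Eqs. (2)-(4); beta = 1 (sketch)."""
    j, n = J / N0, N0 - T
    mu = np.arange(-n, n + 1, 2)           # possible free-part magnetizations
    logw = (gammaln(n + 1) - gammaln((n + mu) / 2 + 1)
            - gammaln((n - mu) / 2 + 1)
            + 0.5 * j * mu**2 + (j * M_T + h) * mu)   # Boltzmann weight of Eq. (2)
    w = np.exp(logw - logw.max())
    return float((mu / n) @ w) / float(w.sum())

def pq_trajectory(N0=32, J=1.2, rng=None):
    """One PQ trajectory: quench one spin per step with P = (1 +/- m_eq)/2."""
    if rng is None:
        rng = np.random.default_rng()
    M = 0
    for T in range(N0):
        p_up = 0.5 * (1.0 + m_eq(T, M, N0, J))
        M += 1 if rng.random() < p_up else -1
    return M
```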
_Consequences of the hidden martingale_ : This particular property gives a great amount of information about the evolution of \(M_{T}\)[12; 13], first allowing an approximate prediction of the final distribution of \(M_{T=N_{0}}\) given the one at the early stage, even before the distribution bimodality appears. Secondly, the hidden martingale implies an invariance of the path probabilities under a local modification of the trajectory. This property allowed us to derive a thermodynamic decomposition of the probability distribution of \(M_{N_{0}}\), linking it to the canonical ensemble [14]. More concretely, the probability of observing the quenched magnetization \(M_{N_{0}}\) starting from an unconstrained thermal equilibrium, \(P^{\rm PQ}(M_{N_{0}})\) reads:
\[P^{\rm PQ}(M_{N_{0}})\propto\begin{pmatrix}N_{0}\\ \frac{N_{0}+M_{N_{0}}}{2}\end{pmatrix}e^{\frac{j}{2}M_{N_{0}}^{2}} \tag{6}\]
This property can be extended to the system in which a part of it has already been quenched with a certain magnetization.
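For comparison, Eq. (6) can be tabulated directly; the snippet below (ours) uses the exponent \(jM^{2}/2\) consistent with Eq. (1), and its output can be checked against a histogram of pq_trajectory outcomes from the previous sketch.

```python
import numpy as np
from scipy.special import gammaln

def p_pq(N0=32, J=1.2):
    """Normalized weights of Eq. (6) (sketch)."""
    j = J / N0
    M = np.arange(-N0, N0 + 1, 2)
    logw = (gammaln(N0 + 1) - gammaln((N0 + M) / 2 + 1)
            - gammaln((N0 - M) / 2 + 1) + 0.5 * j * M**2)
    w = np.exp(logw - logw.max())
    return M, w / w.sum()

# e.g. compare with: samples = [pq_trajectory() for _ in range(2000)]
```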
_Recycled Quenching (RQ) [14]_ : Moreover, we have studied the effect of an "annealing" by which we relax a randomly chosen spin among the fixed ones in the course of PQ. When we repeat the cycle of random unquenching and random quenching, the distribution of the fixed magnetization \(M_{T}\) of \(T\) quenched spins converges asymptotically to the distribution which we would have by applying PQ up to \(T\) spins starting from unconstrained equilibrium.
## III PQ in Markovian models
In this section, we limit our consideration to the Markovian evolution models and investigate how the canonicity of the statistics plays a role.
### Canonicity upon PQ in two-story ensemble
In the _Note added in proof_ of [14] we announced a forthcoming, fuller description of the origin of the canonical distribution of the final quenched magnetization, \(M_{T=N_{0}}\), in fact under the implicit assumption of Markovian dynamics. The main questions are:
(i) A problem of combinatorics (Sec. III.1.1): How is the quenched ensemble compatible with the canonical statistics upon the consecutive quenching operations, where the unquenched spins are in constrained equilibrium?
(ii) A problem of dynamics (Sec. III.1.2): At the level of discrete spins - and even more microscopic - how is the operation of quenching compatible with the reversible evolution, given the apparent Deborah number exceeding unity? Indeed, physically, the quenching process implies rendering towards zero the transition rate for the flipping of the spin in question.
#### iii.1.1 Combinatorial approach
First, we introduce the notion of _two-story ensemble_, a way of characterizing the statistics of \(N_{0}\) spins which is convenient for the PQ. We separate those \(N_{0}\) spins into two groups, \(\{s_{1},\ldots,s_{T}\}\), and the remainder, \(\{s_{T+1},\ldots,s_{N_{0}}\}\) (keeping the quenched/free spins distinction in mind) and we introduce the sub-totals of spins through \(M_{T}=\sum_{i=1}^{T}s_{i}\) and \(\mu_{T}=\sum_{j=T+1}^{N_{0}}s_{j}\). The joint probability \(P_{QF}(M_{T},\mu_{T})\) satisfies \(P_{QF}(M_{T},\mu_{T})=P_{F|Q}(\mu_{T}|M_{T})P_{Q}(M_{T}),\) where \(P_{Q}(M_{T})\) is the marginal and \(P_{F|Q}(\mu_{T}|M_{T})\) is the conditional probability.1 We interpret this identity as follows: \(P_{Q}(M_{T})\) characterizes the _families_ of spin configurations in the quenched part, \(\{s_{1},\ldots,s_{T}\},\) while \(P_{F|Q}(\mu_{T}|M_{T})\) reflects the sub-ensemble of the spin configuration, \(\{s_{T+1},\ldots,s_{N_{0}}\},\) in each family member. The configurations in the same family are realized ergodically, while configurations belonging to distinct quenched families are mutually non-ergodic in the two-story ensemble. (In our model on the complete lattice, we further replaced \(\{s_{1},\ldots,s_{T}\}\) by \(M_{T}\) as a collective tag of the family.)
Footnote 1: For the simplicity of notations we suppressed the index \(T\) as the number of quenched spins. For example, it is understood that \(P_{QF}(M_{T},\mu_{T})\) is for the \(T\) quenched spins.
The above is for a particular two-story ensemble. The different values of \(T\) define the distinct construction of two-story ensembles. In the context of PQ, however, we introduce a particular form of connection between a two-story ensemble \(\{P_{Q}(M_{T}),P_{F|Q}(\mu_{T}|M_{T})\}\) and its "neighbor" ensemble, \(\{P_{Q}(M_{T+1}),P_{F|Q}(\mu_{T+1}|M_{T+1})\}\). This connection is schematically explained in Fig.1, where \(N_{\pm}\) and \(n_{\pm}\) denote the numbers of \(\pm 1\) spins in the quenched part and free part, respectively. Initially, all the spins are unquenched (\(T=0\) and \(M=0\)) and thought to be in equilibrium without an external field. The probability for \(\mu_{0}\), denoted by \(P_{F}^{(can)}(\mu_{0})\) reads:
\[P_{F}^{(can)}(\mu_{0})=\mathcal{N}_{0,0}\binom{N_{0}}{\frac{N_{0}+\mu_{0}}{2} }e^{\frac{j}{N_{0}}\mu_{0}^{2}}, \tag{7}\]
where the normalization constant \(\mathcal{N}_{0,0}\) is such that \(\sum_{\mu_{0}=-N_{0}}^{N_{0}}P_{F}^{(can)}(\mu_{0})=1,\) the sum running over \(\mu_{0}\) in steps of 2. We further assume that, upon the quenching of the \((T+1)\)-th spin, the \((N_{0}-T)\) free spins have already been re-equilibrated under the given fixed magnetization, \(M_{T}\). In Appendix A we show by induction that the joint probabilities, \(P_{QF}(M_{T},\mu_{T})\) for all \(T\), obey the canonical statistics if the initial weight \(P_{F}(\mu_{0})\) obeys the canonical statistics and that the PQ fixes the value of any one of the free spins under constrained canonical equilibrium. Regarding the statistics of quenched spins, the above result implies that at any stage, for example, the \(T\)-th stage, their magnetization \(M_{T}\) is distributed as if the \(T\) spins were randomly sampled from an equilibrium ensemble of \(N_{0}\) spins.
_Relation to martingale :_ In light of the canonicity underlying the two-story ensemble of quenched and free spins, the mechanism that allowed the "martingality" of \(m_{T,M_{T}}^{(eq)}\equiv E[s_{T+1}|M_{T}]\) is easily understood: While \(E[s_{T+1}|M_{T}]\) originally meant the expectation of the spin \(s_{T+1}\) upon quenching in the presence of the magnetization \(M_{T}\) due to the \(T\) already fixed spins, the underlying canonicity allows us to map it to the equilibrium expectation of \(s_{T+1}\) when the spins \(\{s_{1},\ldots,s_{T}\}\) have the magnetization \(M_{T}\). Together with the homogeneity among the free spins, \(\{s_{T+1},\ldots,s_{N_{0}}\},\) we finally regard \(m_{T,M_{T}}^{(eq)}\) as the canonical expectation \(E^{(can)}[s_{N_{0}}|M_{T}]\). The "\(M_{T}\)-martingality" for the latter follows directly from the tower rule applied to \(m_{T}\equiv E[z|s_{1},\ldots,s_{T}]\), see Appendix B, where \(z\) stands for any random variable belonging to the above canonical ensemble. In this viewpoint, we can better understand the effect of Recycled Quenching (RQ) [14] mentioned at the end of Section II.2. After applying the RQ infinitely many times, the probability \(P_{Q}(M_{T})\) of having the quenched magnetization \(M_{T}\) over \(T\) quenched spins in fact obeys the canonical marginal distribution, \(\sum_{\mu_{T}}P^{(can)}(\mu_{T},M_{T}),\) where \(P^{(can)}(\mu_{T},M_{T})\) is the joint canonical distribution of the spins when \(T\) randomly chosen spins have the magnetization \(M_{T}\).
The picture of the two-story ensemble and the underlying canonical statistics should apply to systems other than the Ising spins on a complete lattice. See, as an example, Appendix C for the \(q=3\) Potts model. Note that the equivalence between the martingale and the local invariance [14] is, however, specific to the Ising spin model.
#### iii.1.2 Dynamical approach - Finite time reversible operation
At the level of discrete spins, Glauber's algorithm [16; 17] is a representative model of continuous-time Markovian evolution. In this model the flipping of the Ising spin \(s_{i}\) in the presence of the interacting energy,
\[E_{i}(t)=j\sum_{k(\neq i)}s_{k}(t), \tag{8}\]
Figure 1: Schematic representation of the process of updating the two-story ensemble. The blocks represent the partition of the spins into quenched part (the left column in blue) and the free part (the right column in red). According to the sign of spins, each column is subdivided: \(N_{+}+N_{-}=T+1\) and \(n_{+}+n_{-}=N_{0}-(T+1)\) while \(N_{+}-N_{-}=M(=M_{T+1})\) and \(n_{+}-n_{-}=\mu(=\mu_{T+1})\).
is characterized by the transition rate of the single-spin flip:
\[P[s_{i}(t+dt)=-s_{i}(t)]=\frac{dt}{2\varepsilon_{i}(t)}(1-s_{i}(t)\tanh(\beta E_{ i}(t))), \tag{9}\]
where the characteristic time \(\varepsilon_{i}(t)\) may depend on the time \(t\). In this context the operation of quenching the spin \(s_{T+1}\) is to render \(\varepsilon_{T+1}(t)\) to \(+\infty\). On the other hand, we know that, if the time constants \(\{\varepsilon_{i}\}\) are static, the above algorithm can establish the canonical distribution as steady state. While the latter does not immediately imply that the quenching, or general time-dependent modulation of \(\varepsilon_{i}\)'s, allows the canonicity to be kept intact against the dynamic perturbation, it is assured by the fact that the Kullback-Leibler divergence,
\[D(P\|P^{can})=\sum_{\{s_{i}\}}P(\{s_{i}\},t)\ln\frac{P(\{s_{i}\},t)}{P^{can}(\{s_{i}\})},\]
is a Lyapunov functional of the Markovian evolution of \(P(\{s\},t)\) whether or not \(\{\varepsilon_{i}\}\) are time-dependent2. Figure 2 demonstrates that the Glauber model keeps the canonicity whatever the choice of the characteristic times \(\{\varepsilon_{i}\},\) either static or dynamic.
Footnote 2: Into the generic inequality, \(D(\mathsf{K}P\|\mathsf{K}Q)\leq D(P\|Q)\) for the probability vectors \(P\) and \(Q\) with a transfer matrix \(\mathsf{K}\), we substitute \(P=P_{t},\)\(Q=P^{can}\) and \(\mathsf{K}=\mathsf{1}+dt\,\mathsf{R}\), where \(\mathsf{R}\) is the rate matrix. Then we have \(D(P_{t+dt}\|P^{can})\leq D(P_{t}\|P^{can})\).
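As an illustration of this invariance (cf. Fig. 2), the following minimal sketch (ours, not the authors' code) samples the magnetization under the rate (9) on the complete lattice, with the coupling constant absorbed into a single assumed parameter `beta_j`; the histograms obtained with static and with randomly redrawn \(\varepsilon_{i}\) agree within sampling noise.

```python
import numpy as np

def glauber_M_hist(N=8, beta_j=0.2, steps=200_000, dt=0.05, eps_mode="static", seed=0):
    # Random-sequential Glauber updates, Eq. (9); beta_j stands for beta*j in Eq. (8).
    # eps_mode: "static" -> eps_i = 1 for all spins;
    #           "random" -> eps_i redrawn uniformly from [0.5, 1.5] at every step.
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=N)
    hist = np.zeros(2 * N + 1)                 # counts for M = -N, ..., N
    for _ in range(steps):
        i = rng.integers(N)                    # spin attempting to flip
        eps_i = 1.0 if eps_mode == "static" else rng.uniform(0.5, 1.5)
        bE_i = beta_j * (s.sum() - s[i])       # beta * E_i(t), Eq. (8)
        if rng.random() < dt / (2 * eps_i) * (1 - s[i] * np.tanh(bE_i)):
            s[i] = -s[i]
        hist[s.sum() + N] += 1
    return hist / hist.sum()

# The steady state is insensitive to the kinetic constants eps_i, static or dynamic:
h_static = glauber_M_hist(eps_mode="static")
h_random = glauber_M_hist(eps_mode="random", seed=1)
```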
Below the Ising-spin scale, the individual spin may be visualized as a state point in a double-well potential. A well-known example is the bit-memory analyzed by Landauer [18]. In the presence of an external field or interactions with other spins, the potential is generally asymmetric and fluctuating. It is, therefore, generally impossible to raise the barrier of a double-well potential strictly reversibly in a finite time. It is, nevertheless, instructive to quantify the irreversibility. The partial entropy production introduced by Shiraishi [19; 20; 21] is well suited for this purpose. If we approximately discretize the coordinate \(x\) of the double-well potential (Figure 3(a)), the partial entropy production associated with the (nearby) transitions \(x^{\prime}\to x\) denoted by \(\dot{S}_{x,x^{\prime}}\) reads
\[\dot{S}_{x,x^{\prime}}=R_{xx^{\prime}}p_{x^{\prime}}\ln\frac{R_{xx^{\prime}}p _{x^{\prime}}}{R_{x^{\prime}x}p_{x}}+R_{x^{\prime}x}p_{x}-R_{xx^{\prime}}p_{x^ {\prime}}, \tag{10}\]
where \(p_{x}\) is the probability, \(R_{x^{\prime}x}\) is the transition rate from \(x\) to \(x^{\prime}\), and we assumed that the time-reversed state of \(x\) is \(x\) itself. When the potential is modified sufficiently slowly relative to the microscopic timescale, the probability flows, \(R_{xx^{\prime}}p_{x^{\prime}}-R_{x^{\prime}x}p_{x}\), with \(x\) and \(x^{\prime}\) within the same valley, remain effectively zero through the detailed balance, with the only exception around the barrier top. In Appendix D we demonstrate that, by focusing on the barrier top, this framework gives the famous Landauer entropic loss of \(\ln 2\) upon the erasure process of a bit memory. By contrast, in the present context, the quenching of a spin is made so that the DB is observed _including_ in the vicinity of the barrier top. Then the local entropy production \(\dot{S}_{x,x^{\prime}}\) in (10) vanishes everywhere. With more than one spin, the above argument should be lifted to a high-dimensional phase space and the associated continuous transition networks. This is the background of the following subsections.
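To make Eq. (10) concrete, here is a small sketch (our illustration; the detailed-balance choice of hopping rates is an assumption) that evaluates the partial entropy production on a discretized double-well potential. When the rates obey detailed balance with respect to the Boltzmann weight, every local term vanishes, as stated above.

```python
import numpy as np

def partial_EP(V, beta=1.0, gamma=1.0):
    # Nearest-neighbor hopping on a discretized potential V[x], with the
    # detailed-balance choice R_{x' <- x} = gamma * exp(-beta*(V[x'] - V[x]) / 2)
    # and p the Boltzmann weight; returns the terms of Eq. (10), one per edge.
    p = np.exp(-beta * V)
    p /= p.sum()
    S = []
    for x in range(len(V) - 1):
        R_fwd = gamma * np.exp(-beta * (V[x + 1] - V[x]) / 2)   # x -> x+1
        R_bwd = gamma * np.exp(-beta * (V[x] - V[x + 1]) / 2)   # x+1 -> x
        a, b = R_fwd * p[x], R_bwd * p[x + 1]
        S.append(a * np.log(a / b) + b - a)
    return np.array(S)

x = np.linspace(-2, 2, 81)
V = (x**2 - 1) ** 2 + 3.0 * np.exp(-((x / 0.2) ** 2))  # double well, raised barrier
print(partial_EP(V).max())                              # ~0: no local dissipation
```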
Figure 2: Invariance of the steady state distribution of the Glauber algorithm with different values of \(\varepsilon,\) either fixed in time, varying, or chosen at random at each time step. \(\mathcal{U}[0,1]\) stands for a uniform random variable over the interval \([0,1]\).
Figure 3: Different viewpoints of Progressive Quenching. (a) A double-well potential as a microscopic model of single-spin quenching. The dashed parts of the curves are those inaccessible by thermal activation within the experimental time. (b) Schematic illustration of a cubic Markov transition network modulated by PQ (in the case of three Ising spins for example). After quenching the first degree of freedom denoted by \(x_{1}\), the cubic network is separated into two square disconnected sub-networks. (c) Schematic illustration of a network transformation. Some edges whose net probability flow has been zero are removed (dashed line).
### PQ viewed in the transition network
Coming back to the discrete description of spins, we aim to extend the PQ to the context of Markovian transition networks (TN). This subsection is a preparation for that purpose, where we translate the PQ of a spin system in the language of TN. The further extension will be discussed in Section III.3.
We consider a system with \(N\) degrees of freedom denoted by \(\{x_{1},x_{2},\ldots,x_{N}\}\). The set of possible values of \(x_{i}\) is denoted by \(A_{i}\). For example, \(A_{i}=\{-1,1\}\) for an Ising spin \(s_{i}\). The state space \(A\) then reads \(A\equiv\bigotimes_{i=1}^{N}A_{i}\). Any state \(\alpha\in A\) can then be described by a set of degrees of freedom, \(\{x_{1},x_{2},\ldots,x_{N}\}\). Conversely, any variable \(x_{i}\) is a function of the state, \(x_{i}(\alpha)\). The transition network in \(A\) is such that (i) if we exclude the simultaneous change of more than one variable, the topology of transition edges is that of a hyper-rectangle, and (ii) if any one variable, e.g. \(x_{i},\) is quenched, the network is divided into two groups, losing ergodicity. Fig 3(b) illustrates (i) and (ii), where an initial TN graph is divided into non-connected subgraphs.
Let \(\mathbf{\mathcal{R}}\) be the rate matrix of the master equation for the network on \(A\):
\[\frac{d\vec{P}}{dt}=\mathbf{\mathcal{R}}\vec{P},\]
and let \(\vec{P}^{\rm st}\) be the steady state distribution; \(\mathbf{\mathcal{R}}\vec{P}^{\rm st}=\vec{0}\). We also introduce the net probability current from \(\alpha\) to \(\alpha^{\prime}\) through
\[J_{\alpha^{\prime}\leftarrow\alpha}\equiv\mathcal{R}_{\alpha^{ \prime}\leftarrow\alpha}P_{\alpha}-\mathcal{R}_{\alpha\leftarrow\alpha^{\prime }}P_{\alpha^{\prime}}.\]
When the detailed balance (DB) is established for the steady state, \(\vec{P}^{\,\rm st}\), we have \(J_{\alpha^{\prime}\leftarrow\alpha}=0\) for every pair of states, \((\alpha,\alpha^{\prime})\).
Having the progressive quenching in mind, we introduce the class-Kronecker delta, \(\delta^{(T)}_{\alpha,\alpha^{\prime}}(=\delta^{(T)}_{\alpha^{\prime},\alpha}),\) through
\[\delta^{(T)}_{\alpha,\alpha^{\prime}}=\left\{\begin{array}{ll}1:&\wedge_{i=1 }^{T}\{x_{i}(\alpha)=x_{i}(\alpha^{\prime})\}\\ 0:&\text{otherwise}\end{array}\right., \tag{11}\]
that is, it picks up those pairs of states that share the same values of the quenched degrees of freedom, \(\{x_{1},\ldots,x_{T}\}\). When the progressive quenching has fixed \(\{x_{1},\ldots,x_{T}\}\) but leaves the other variables free to fluctuate, the modified rate matrix, which we denote by \(\tilde{\mathcal{R}}_{T,\alpha^{\prime}\leftarrow\alpha},\) is given as
\[\tilde{\mathcal{R}}_{T,\alpha^{\prime}\leftarrow\alpha}=\delta^{(T)}_{\alpha,\alpha^{\prime}}\mathcal{R}_{\alpha^{\prime}\leftarrow\alpha} \tag{12}\]
for \(\alpha\neq\alpha^{\prime}\), and \(\tilde{\mathcal{R}}_{T,\alpha\leftarrow\alpha}=-\sum_{\beta(\neq\alpha)} \tilde{\mathcal{R}}_{T,\beta\leftarrow\alpha}\) for the diagonal element to satisfy the normalization conditions, \(\sum_{\alpha^{\prime}}\tilde{\mathcal{R}}_{T,\alpha^{\prime}\leftarrow\alpha}=0\) for \(\forall\alpha\). Eq. (12) simply means the state transition is possible only when \(\delta^{(T)}_{\alpha,\alpha^{\prime}}=1\).
A simple but important observation is that if the steady state of the unquenched system, \(\vec{P}^{\rm st},\) satisfies the detailed balance, then we have a trivial rewriting for every pair \((\alpha,\alpha^{\prime}),\)
\[0 =J_{\alpha^{\prime}\leftarrow\alpha}\] \[=\mathcal{R}_{\alpha^{\prime}\leftarrow\alpha}P^{\rm st}_{\alpha }-\mathcal{R}_{\alpha\leftarrow\alpha^{\prime}}P^{\rm st}_{\alpha^{\prime}}\] \[=\delta^{(T)}_{\alpha,\alpha^{\prime}}\big{(}\mathcal{R}_{ \alpha^{\prime}\leftarrow\alpha}P^{\rm st}_{\alpha}-\mathcal{R}_{\alpha \leftarrow\alpha^{\prime}}P^{\rm st}_{\alpha^{\prime}}\big{)}\] \[=\tilde{\mathcal{R}}_{\alpha^{\prime}\leftarrow\alpha}P^{\rm st}_ {\alpha}-\tilde{\mathcal{R}}_{\alpha\leftarrow\alpha^{\prime}}P^{\rm st}_{ \alpha^{\prime}}. \tag{13}\]
This means that \(\vec{P}^{\,\rm st}\) also satisfies the DB condition for the quenched system. The steady states of \(\tilde{\mathcal{R}}\) are in general not unique because of the _broken ergodicity_ (see Fig 3(b)). Nevertheless the canonical distribution, \(\vec{P}^{\rm st},\) is among the possible steady states.
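The rewriting (13) can be checked mechanically. Below is a minimal sketch (ours) for two Ising spins: Glauber rates satisfying DB with respect to \(P\propto e^{\beta js_{1}s_{2}}\), the class-Kronecker delta of Eq. (11) for quenching \(s_{1}\), and a verification that the canonical \(\vec{P}\) remains stationary for the quenched rate matrix (12).

```python
import numpy as np
from itertools import product

beta_j = 0.7
states = list(product([-1, 1], repeat=2))            # states (s1, s2)
P = np.array([np.exp(beta_j * s1 * s2) for s1, s2 in states])
P /= P.sum()                                         # canonical distribution

def rate(a, b):
    # Single-spin-flip Glauber rate a -> b; zero unless exactly one spin differs.
    diff = [k for k in range(2) if a[k] != b[k]]
    if len(diff) != 1:
        return 0.0
    i = diff[0]
    bE_i = beta_j * a[1 - i]                         # field on spin i from the other
    return 0.5 * (1 - a[i] * np.tanh(bE_i))

R = np.array([[rate(a, b) for a in states] for b in states])   # R[b, a]: a -> b
np.fill_diagonal(R, -R.sum(axis=0))

# Quench spin 1: keep only transitions allowed by the class-Kronecker delta (11).
delta = np.array([[1.0 if a[0] == b[0] else 0.0 for a in states] for b in states])
Rq = delta * R
np.fill_diagonal(Rq, 0)
np.fill_diagonal(Rq, -Rq.sum(axis=0))

print(np.allclose(R @ P, 0), np.allclose(Rq @ P, 0))           # True True, Eq. (13)
```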
### PQ of Markovian transition network without detailed balance
When the state space \(A\) is not a product space corresponding to multiple degrees of freedom of the system, we may still consider the action of quenching as the elimination of a part of bidirectional edges from the transition network (TN). If the detailed balance (DB) condition is not globally satisfied, the removal of bidirectional edges in a TN generally causes the modification of its steady-state distribution. The inset of Fig 4 shows a simple example, where the stationary state has a circulation of probability. Before "quenching", the stationary probability on the three states is \(\{p_{1},p_{2},p_{3}\}=\left\{\frac{1}{3},\frac{1}{3},\frac{1}{3}\right\}.\) When we remove the edges between the states \(1\) and \(2,\) the stationary probability becomes \(\left(r^{2}+rr^{\prime}+r^{\prime 2}\right)^{-1}\left\{r^{\prime 2},r^{2},rr^{\prime}\right\}.\) It is only when \(r=r^{\prime}\) that the detailed balance holds globally, and the stationary distribution remains unchanged by this operation.
When we consider the general TN and ask when the removal of bidirectional edges leaves the steady state probability intact, a rule of thumb is as follows:
When a pair of states, \((\alpha,\alpha^{\prime}),\) realizes the vanishing net probability flow, \(J_{\alpha^{\prime}\leftarrow\alpha}=0,\) we can simultaneously eliminate \(\mathcal{R}_{\alpha^{\prime}\leftarrow\alpha}\) and \(\mathcal{R}_{\alpha\leftarrow\alpha^{\prime}}\) without perturbing the stationary distribution. The demonstration follows the idea of (13) above. We suppose that the initial TN has a steady state \(\vec{P}^{\,\rm(st)}\). We denote by \(\chi_{Q}\) all those pairs of states for which the net probability flow vanishes, i.e.,
\[\chi_{Q}\equiv\left\{(\alpha,\alpha^{\prime})\,|\,\mathcal{R}_{\alpha^{\prime }\leftarrow\alpha}P^{\rm(st)}_{\alpha}-\mathcal{R}_{\alpha\leftarrow\alpha^{ \prime}}P^{\rm(st)}_{\alpha^{\prime}}=0\right\}. \tag{14}\]
Figure 4: An example of a two-layered transition network whose steady state does not satisfy the detailed balance for any pair of states but yet allows a "quench" leaving the stationary probabilities intact. See the main text (III.3) for details. (inset) A simple example of a stationary Markov chain without detailed balance. If \(r\neq r^{\prime}\), there is a non-zero probability flux, and cutting a link will change the stationary distribution.
Here \(\chi_{Q}\)'s suffix \(Q\) stands for "quenchable". We introduce the "optional"-Kronecker delta, \(\delta^{(Q)}_{\alpha,\alpha^{\prime}}\) through
\[\delta^{(Q)}_{\alpha,\alpha^{\prime}}=\delta^{(Q)}_{\alpha^{\prime},\alpha}= \left\{\begin{array}{ll}1\text{ or 0 (optional) }:&(\alpha,\alpha^{\prime})\in\chi_{Q}\\ 1&:&\text{otherwise}\end{array}\right., \tag{15}\]
that is, \(\delta^{(Q)}\) can vanish only for the pairs whose net steady probability flow is zero. We then "quench" the original TN according to the "optional"-Kronecker delta:
\[\tilde{\mathcal{R}}_{\alpha^{\prime}\leftarrow\alpha}\equiv\delta^{(Q)}_{ \alpha,\alpha^{\prime}}\mathcal{R}_{\alpha^{\prime}\leftarrow\alpha}. \tag{16}\]
We can check that the "quenched" TN still has \(\vec{P}^{\text{(st)}}\) as the stationary state. In fact for every \(\alpha^{\prime}\)
\[\sum_{\alpha}(\tilde{\mathcal{R}}_{\alpha^{\prime}\leftarrow \alpha}P^{\text{(st)}}_{\alpha}-\tilde{\mathcal{R}}_{\alpha\leftarrow\alpha^{ \prime}}P^{\text{(st)}}_{\alpha^{\prime}})\] \[= \sum_{\alpha}\delta^{(Q)}_{\alpha,\alpha^{\prime}}(\mathcal{R}_{ \alpha^{\prime}\leftarrow\alpha}P^{\text{(st)}}_{\alpha}-\mathcal{R}_{\alpha \leftarrow\alpha^{\prime}}P^{\text{(st)}}_{\alpha^{\prime}})\] \[= \sum_{\alpha}(\mathcal{R}_{\alpha^{\prime}\leftarrow\alpha}P^{ \text{(st)}}_{\alpha}-\mathcal{R}_{\alpha\leftarrow\alpha^{\prime}}P^{\text{( st)}}_{\alpha^{\prime}})\] \[= 0. \tag{17}\]
Here, to go to the third line, we have used the fact that, whenever the pair \((\alpha,\alpha^{\prime})\) is \(\not\in\chi_{Q}\), we have \(\delta^{(Q)}_{\alpha,\alpha^{\prime}}=1\) by definition, while for the pairs \(\in\chi_{Q}\) the net probability flow vanishes, so that the value of \(\delta^{(Q)}_{\alpha,\alpha^{\prime}}\) is immaterial. The last equality is the stationarity condition for the original TN. The main part of Fig. 4 gives an example in which the TN does not have a global detailed balance but the "quenching" of the TN is possible. The system has two layers, \(\mathcal{R}\) and \(\mathcal{E}\). The former layer has the Glauber dynamics allowing detailed balance among \(\{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}\}\). The latter layer \(\mathcal{E}\) undergoes the stochastic circulation among \(\{\varepsilon_{1},\varepsilon_{2},\varepsilon_{3},\varepsilon_{4}\}\). We assume that the four values \(\varepsilon_{k}\) (\(k=1,\ldots,4\)) are the global time constants of the Glauber dynamics for the first layer \(\mathcal{R}\). Then we can quench the bidirectional edges for any pairs of nodes on this layer.
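A sketch (ours, not the authors' code) of this two-layer construction, with a two-state "spin" playing the role of the \(\mathcal{R}\) layer and the label \(k\) circulating irreversibly: the spin-flip edges carry zero net flux and are quenchable in the sense of Eqs. (14)-(16), while the circulating \(k\) edges are not.

```python
import numpy as np

# Two-layer network in the spirit of Fig. 4: a two-state "spin" with detailed
# balance at every value of a hidden label k, while k circulates irreversibly.
n_k, beta_h = 4, 0.8
pi = np.exp([beta_h, -beta_h])
pi /= pi.sum()                                     # spin equilibrium (field h)
eps = np.array([0.5, 1.0, 1.5, 2.0])               # time constants, one per k

n = 2 * n_k
idx = lambda a, k: 2 * k + a                       # state (spin a, label k)
R = np.zeros((n, n))                               # R[to, from]
for k in range(n_k):
    for a in (0, 1):                               # spin flips: DB at each fixed k
        R[idx(1 - a, k), idx(a, k)] = pi[1 - a] / eps[k]
    for a in (0, 1):                               # k -> k+1 only: a circulation
        R[idx(a, (k + 1) % n_k), idx(a, k)] = 1.0
np.fill_diagonal(R, -R.sum(axis=0))

P = np.array([pi[a] / n_k for k in range(n_k) for a in (0, 1)])
assert np.allclose(R @ P, 0)                       # stationary, yet no global DB

# "Quench" one spin-flip edge pair: its net flux is zero, so it is in chi_Q (14).
Rq = R.copy()
np.fill_diagonal(Rq, 0)
Rq[idx(1, 0), idx(0, 0)] = Rq[idx(0, 0), idx(1, 0)] = 0.0
np.fill_diagonal(Rq, -Rq.sum(axis=0))
print(np.allclose(Rq @ P, 0))                      # True: Eq. (17)
```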
_Remark_ : The modification of the transition rates, \(\mathcal{R}_{\alpha^{\prime}\leftarrow\alpha}\mapsto\tilde{\mathcal{R}}_{ \alpha^{\prime}\leftarrow\alpha}\) and \(\mathcal{R}_{\alpha\leftarrow\alpha^{\prime}}\mapsto\tilde{\mathcal{R}}_{ \alpha\leftarrow\alpha^{\prime}}\) should be realized _pairwise_ simultaneously, either instantaneously or gradually but with keeping the ratio \(\tilde{\mathcal{R}}_{\alpha^{\prime}\leftarrow\alpha}(t)/\tilde{\mathcal{R}}_{ \alpha\leftarrow\alpha^{\prime}}(t)\) constant so as to maintain the flow-free condition, \((\tilde{\mathcal{R}}_{\alpha^{\prime}\leftarrow\alpha}(t)P^{\text{(st)}}_{ \alpha}-\tilde{\mathcal{R}}_{\alpha\leftarrow\alpha^{\prime}}(t)P^{\text{( st)}}_{\alpha^{\prime}})=0\).
## IV PQ in non-Markovian models
The previous results are valid only for Markovian systems. Our study of Progressive Quenching is now extended to non-Markovian systems, whether the detailed balance (DB) is satisfied (Section IV.1) or not (Section IV.2). The examples given are the Ising spin systems studied above but with memory effects.
### System with hidden spins satisfying detailed balance
#### iv.1.1 Model, effective coupling and DB
In this part, we recall two known aspects of non-Markovian processes through case studies under a simple setup. As a model we consider a chain of \(N\) "visible" spins \(\{s_{i}\}\) with ferromagnetic nearest-neighbor coupling \(J\). We suppose also that the neighboring spin pairs, say \(s_{i}\) and \(s_{i+1},\) share a "hidden" spin \(\sigma_{i+\frac{1}{2}}\), through the coupling \(K\). Fig.5 shows the case of a closed chain with three visible spins and three hidden ones. The energy of the entire system reads
\[\mathcal{E}=-\sum_{i=1}^{N}Js_{i}s_{i+1}-K\sum_{i=1}^{N}(s_{i}+s_{i+1})\sigma_ {i+\frac{1}{2}}, \tag{18}\]
where \(s_{N+1}\equiv s_{1}\). After taking the sub-trace over \(\sigma\)'s, the effective energy \(\tilde{E}_{s}\) and the effective partition function \(\tilde{\mathcal{Z}}\) read:
\[\tilde{\mathcal{Z}}=\sum_{\{s_{i}\}}e^{-\beta\tilde{E}_{s}},\quad\tilde{E}_{s }=-\sum_{i=1}^{N}\tilde{J}s_{i}s_{i+1}, \tag{19}\]
where the effective, temperature-dependent, coupling constant \(\tilde{J}\) is
\[\tilde{J}\equiv J+(2\beta)^{-1}\ln\cosh(2\beta K). \tag{20}\]
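For the reader's convenience, the one-line trace behind this result (our rederivation) uses \(s_{i}+s_{i+1}\in\{0,\pm 2\}\), so that \((1+s_{i}s_{i+1})/2\in\{0,1\}\):

\[\sum_{\sigma=\pm 1}e^{\beta K(s_{i}+s_{i+1})\sigma}=2\cosh\big(\beta K(s_{i}+s_{i+1})\big)=2\,(\cosh 2\beta K)^{\frac{1+s_{i}s_{i+1}}{2}}\propto e^{\frac{1}{2}\ln\cosh(2\beta K)\,s_{i}s_{i+1}}.\]

Absorbing the spin-independent prefactor into \(\tilde{\mathcal{Z}}\), each bond thus acquires the extra coupling \((2\beta)^{-1}\ln\cosh(2\beta K)\) of Eq. (20).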
The visible spins, therefore, follow the canonical statistics with the apparent coupling \(\tilde{J}\) as far as single-time statistics are concerned. If the whole system evolves by a Markovian dynamics such as the Glauber model, an observer who only has access to the visible spins \(\{s_{i}\}\) finds a non-Markovian evolution. The non-Markovian nature in a simple case is demonstrated in Appendix F.1. For the visible spins alone, the instantaneous detailed balance (DB) no longer holds. Nevertheless, if the whole system \(\{s,\sigma\}\) obeys a Markovian evolution with DB, the visible spins still satisfy a _trajectory-wise_ detailed balance:
\[\mathbb{P}([\{s_{i}(t)\}_{i=1}^{N}]_{t=0}^{T})=\mathbb{P}([\{s_{i}(t)\}_{i=1}^{N}]_{t=0}^{*T}), \tag{21}\]
where \([\{s_{i}(t)\}_{i=1}^{N}]_{t=0}^{*T}\) denotes the time reversal of the forward trajectory, \([\{s_{i}(t)\}_{i=1}^{N}]_{t=0}^{T}\). The derivation of (21) is given in Appendix F.2.
#### iv.1.2 Effects of the PQ
_Quenching generally breaks canonicity:_ In principle, the quenching of a visible spin _can_ accompany arbitrary actions on the hidden part, even when the system starts from the canonical ensemble. For example, we can imagine the case in which the quenching of the visible spin \(s_{i}\) in Fig.5(a) imposes a specific value for the hidden spin \(\sigma_{i+\frac{1}{2}}\) at the \(i\)-th step of quenching. The general principle mentioned above may have exceptions through deliberately designed actions of the PQ. Two such cases are demonstrated below.
_Case of unbroken canonicity upon PQ:_ We take up again the non-Markovian model shown in Fig. 5(a), with the energy given by Eq.(18), whose steady state satisfies the detailed balance (DB). A Glauber algorithm is used to simulate the dynamics of the whole system, and we progressively quench exclusively the visible spins while the hidden variables remain intact. According to Sec III.2, the PQ of that system, in particular the selective quenching of visible spins, should not modify the distribution as the two-story ensemble. Fig. 5(b) (thick curves) verifies this idea, showing the probability distribution of the (visible) magnetization, \(M\equiv S_{1}+S_{2}+S_{3}\), after all these spins have been quenched. Here the quenching of visible spins is progressively done with a regular (dimensionless) interval, \(\Delta T/\epsilon=0\), \(1\), and \(15\) (solid curves), where \(\Delta T/\epsilon=0\) is equivalent to a snapshot of the equilibrium ensemble before quench. The distributions are independent of the interval \(\Delta T/\epsilon\).
We also examined another _ad hoc_ protocol in which the fixation of a visible spin accompanies the modification of the coupling parameters, \(J\) and \(K\). At every quenching the value of \(J\) is reduced by 50% while that of \(K\) is incremented so that the effective coupling \(\tilde{J}\) of Eq.(20) remains unchanged. When we monitor the magnetization of visible spins, \(M\equiv S_{1}+S_{2}+S_{3}\), its distribution after a sufficient interval \(\Delta T/\epsilon\) recovers the canonical one, by construction (Fig.5(b): dotted curve). Nevertheless, there is a visible transient before the equilibration in \(M\), which we monitor through its variance \(\langle M^{2}\rangle\), see Fig.5(c) (cf. \(\langle M\rangle=0\)). The fluctuations of \(M\) are transiently attenuated as a fast response to the reduction of \(J\), then gradually recover the canonical level (horizontal dotted line) due to the compensatory increment of \(K\).
### System with delayed interactions breaking detailed balance (Choi-Huberman model)
We have seen in the previous subsection (Sec.IV.1) that the conservation of the canonicity upon the PQ requires a Markovian evolution rule, in addition to the detailed balance in the starting steady state. In the last part of this paper, to gain a better understanding of the PQ, we study its effect on a non-Markovian system whose detailed balance is broken from the beginning, i.e., already before quenching the system's degrees of freedom.
#### iv.2.1 Original Choi-Huberman model and its steady state
The starting model is the one introduced by Choi and Huberman in 1985 [22]. In their model the interactions between spins have a delay \(\tau\) with respect to the instantaneous ones, i.e. each spin at \(t\) "sees" the other spins at \((t-\tau)\). The probability of flipping of the spin \(s_{i}\) reads:
\[P[s_{i}(t+dt)=-s_{i}(t)]=\frac{dt}{2\varepsilon}\left[1-s_{i}(t)\tanh(\beta E_ {i}(t-\tau))\right] \tag{22}\]
with \(E_{i}\) being defined by Eq.(8). Except for the limit of the Glauber model [16] with \(\tau=0\), the steady state distribution should break the detailed balance because \(\tau>0\) invalidates the
Figure 5: (a) Model of non-Markovian spin system consisting of the visible (\(S_{i}\)) and hidden (\(\sigma_{i+\frac{1}{2}}\)) spins, see Eq.(18). In any case studied below the initial state is the canonical equilibrium. (b) Probability distribution of the visible spins \(M=S_{1}+S_{2}+S_{3}\) after all the visible spins have been fixed. For the solid curves \(J\) and \(K\) are kept at \((1/3)(k_{\rm B}T)\) with the interval between consecutive quench being \(\Delta T/\epsilon=0\), \(1\), and \(15\), respectively. The dashed curve corresponds to varying values of \(J\) and \(K\) - see the main text for the detailed protocol. (c) Transient processes corresponding to the protocol for the dashed curve in (b). The second moment \(E[M^{2}]\) (cf. \(E[M]=0\)) of the whole visible magnetization, \(M=S_{1}+S_{2}+S_{3},\) is plotted against the scaled time, \(t/\epsilon\) after each quenching. The statistical average is taken over multiple realizations of the process.
time-reversal symmetry. We characterize the irreversibility by the non-dimensionalized parameter, \(a\equiv\tau/\varepsilon.\) Throughout this section (IV.2) the numerical simulation based on (22) is done with the time mesh \(dt/\varepsilon=0.1\) for \(a\geq 1\) and \(dt/\varepsilon=0.05\) for \(a<1.\) In Appendix G we show analytically that the steady state depends on the kinetic parameter, \(\varepsilon\) (via \(a\)), for the system with 2 spins. We recall that the canonical equilibrium, i.e. \(a=0,\) is independent of \(\varepsilon.\) Numerically, we show in Fig.6(a) how the steady state distribution of the system with 8 spins depends on the irreversibility parameter \(a.\) We see that the larger the value of \(a,\) the more paramagnetic (unimodal) the system behaves, as compared with the bimodal distribution of the Markovian limit, \(a=0.\) Intuitively, when the delay \(\tau\) is increased, the cooperative fluctuations among the spins are weakened.
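For reproducibility, the following is a minimal sketch (ours, with the coupling absorbed into an assumed parameter `beta_j` and a per-step sweep over the spins as a simplification) of the delayed dynamics (22):

```python
import numpy as np

def choi_huberman(N=8, beta_j=0.3, a=1.07, eps=1.0, dt=0.1, steps=400_000, seed=0):
    # Delayed Glauber dynamics, Eq. (22): each spin sees the others at t - tau,
    # with tau = a * eps; returns the sampled distribution of M = sum_i s_i.
    rng = np.random.default_rng(seed)
    tau_steps = max(1, round(a * eps / dt))
    s = rng.choice([-1, 1], size=N)
    buf = [s.copy() for _ in range(tau_steps)]     # circular buffer for the delay
    hist = np.zeros(2 * N + 1)
    for t in range(steps):
        s_del = buf[t % tau_steps]                 # configuration at time t - tau
        for i in range(N):
            bE_i = beta_j * (s_del.sum() - s_del[i])   # beta * E_i(t - tau), Eq. (8)
            if rng.random() < dt / (2 * eps) * (1 - s[i] * np.tanh(bE_i)):
                s[i] = -s[i]
        buf[t % tau_steps] = s.copy()
        hist[s.sum() + N] += 1
    return hist / hist.sum()

# A larger delay a flattens the distribution toward unimodality (cf. Fig. 6(a)).
p_small_a = choi_huberman(a=0.1, dt=0.05)
p_large_a = choi_huberman(a=2.0)
```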
#### iv.2.2 Effects of the PQ of the Choi-Huberman model
If we introduce the PQ in the above model of Choi-Huberman, what effect should we expect? First, we studied how the two-story distribution of the total magnetization evolves as a function of the number of quenched spins. The irreversibility parameter \(a\) is kept at \(1.07\) where the intact distribution is unimodal (see Fig.6(a)). We have given a large enough time interval \(\Delta T\) between the consecutive quenching so that \(\Delta T/\varepsilon=15\gg a.\) Leaving the details in Appendix H, we found that the evolution is qualitatively similar to Fig.6(a), where the increment in the number of quenched spins corresponds to the _reduction_ of the irreversibility parameter \(a.\) This result may be qualitatively understandable because the quenching of the spin \(s_{i}\) amounts to the replacement of \(\varepsilon_{i}\) by \(\infty,\) or the reduction of \(a_{i}\) to 0, for that spin.
When the time interval between the consecutive quenching, \(\Delta T,\) is not exceedingly larger than either \(\varepsilon\) or \(\tau,\) the second dimension-free parameter, \(\Delta T/\varepsilon,\) comes into play in addition to \(a.\) Fig.6(b) shows how the final distribution of the total magnetization, \(M,\) depends on the values of \(\Delta T/\varepsilon,\) where the non-dimensionalized delay \(a\) is again fixed at 1.07. Note that with \(\Delta T/\varepsilon=0\) the steady state ensemble of the original Choi-Huberman model is entirely copied by PQ as the quenched ensemble. With increasing the value of \(\Delta T/\varepsilon,\) the free spins have more time to adapt to the quenched part, and the distribution of
Figure 6: Analyses of the distribution of the total magnetization, \(M=\sum_{i=1}^{N}s_{i},\) in the Choi-Huberman (C-H) model and its PQ with 8 spins (\(N=8\)) for (a),(b) and (c) and with 4 spins (\(N=4\)) for (d). (a) Plot of the steady state distribution for different values of non-dimensionalized delay, \(a=\tau/\varepsilon.\) The canonical distribution (\(\tau=0\)) corresponds to the \(a=0\) case. (b) Distribution of the magnetization after the Progressive Quenching has been completed with different values of \(\Delta T/\varepsilon.\) The steady-state distribution as well as the canonical distributions are also plotted for comparison. (c) Plot of the standardized second moment of the magnetization after the PQ, \(E[M^{2}]/E^{(0)}[M^{2}],\) versus the time interval parameter, \(\Delta T/\varepsilon,\) where \(a=1.07\) is kept the same as (b). The levels of the steady state (\(\Delta T/\varepsilon=0\)) and of the canonical case (\(a=0\)) are also shown by dashed horizontal lines. (d) Contour plot of the standardized mean square of the final quenched magnetization, \(E[M^{2}]/E^{(0)}[M^{2}],\) on the plane of \((a,\Delta T/\varepsilon).\) The symbols indicate the points where \(E[M^{2}]\) has been calculated over at least \(4\times 10^{5}\) samples, and the contours are thereby calculated using the ContourPy library.
\(M\) undergoes a change in a manner qualitatively similar to the case of _decreasing_ the value of \(a\).
The above results in Figs.6(a) and (b) motivate the study of the possible combined effect of \(a\) and \(\Delta T/\varepsilon\), or the possible characterization by \((\Delta T/\varepsilon)/a(=\Delta T/\tau)\). Nevertheless, the comparison on the level of the probability distribution of \(M\) is too complicated. We, therefore, characterize each distribution by the second moment \(E[M^{2}]\) standardized by its canonical value (i.e., for \(a=0\) and arbitrary \(\Delta T/\varepsilon\)), which we denote by \(E^{(0)}[M^{2}]\), while knowing that some subtle aspects of the distribution will be lost. For example, the equality, \(E[M^{2}]=E^{(0)}[M^{2}]\), does not mean that the distribution is identical to the canonical one. Fig.6(c) shows this type of "projection" of Fig.6(b), complemented by more data points. Somewhat surprisingly the ratio \(E[M^{2}]/E^{(0)}[M^{2}]\) exceeds unity for \(\Delta T/\varepsilon\gtrsim 8\). The unimodal-bimodal transition of the distribution takes place where \(E[M^{2}]/E^{(0)}[M^{2}]=0.9\) approximately (see below).
Fig. 6(d) summarizes the contours of \(E[M^{2}]/E^{(0)}[M^{2}]\) on the plane of \(a\) and \(\Delta T/\varepsilon\), as the landscape of correlation among quenched spins. Here the total number of spins is \(N=4\) because of the limited computing time to ensure good statistics. In fact, \(E[M^{2}]/E^{(0)}[M^{2}]\) represents rather well the characteristics of the probability distribution of \(M\). Especially the unimodal-bimodal transition of the distribution of \(M\) is found to occur where \(E[M^{2}]/E^{(0)}[M^{2}]\simeq 0.9\) (data not shown). Along the vertical axis with \(a=0\) the model is the reversible canonical one, therefore, \(E[M^{2}]/E^{(0)}[M^{2}]=1\) by definition. However, there is another contour of "canonical" level. The zone above this contour is "super-canonical", realizing \(E[M^{2}]/E^{(0)}[M^{2}]>1\), although the excess part is very small. This reveals some synergistic effect of the three characteristic time constants, \(\varepsilon,\tau\) and \(\Delta T\). The weak undulation of the landscape seen around where the two "canonical" contours meet is due to the smallness of the system, not an artifact of statistical error, because (i) the sample size is large enough (\(4\times 10^{6}\)) and (ii) the amplitude of undulation decreases when the system size is doubled (\(N=8,\) data not shown). Also, the "super-canonical" feature is further enhanced, rather than suppressed, for the larger system size.
In the parameter region below the second, non-vertical "canonical" contour, \(E[M^{2}]/E^{(0)}[M^{2}]=1,\) the landscape of \(E[M^{2}]/E^{(0)}[M^{2}]\) is monotone with respect to both \(a\) and \(\Delta T/\varepsilon\). This suggests that \(\Delta T\) compensates for the delay \(\tau\). Yet, near the origin, the perturbation by \(a\) is dominant over the influence of \(\Delta T/\varepsilon\).
## V Summary and remark
In the previous works [12; 13; 14] we have studied the "hidden" martingale property in the progressive quenching (PQ), that is, the martingale property of the mean of the next quenched spin, associated with the stochastic evolution of the total quenched magnetization. In the present work, we have demonstrated and numerically verified that the canonical property of the two-story ensemble is behind the martingality and that both the detailed balance and the Markovianity of the stochastic evolution are required for such a structure to be maintained. Under these conditions, the canonicity is conserved even without allowing the unquenched spins to reach a quasi-equilibrium before the subsequent fixation of spins, as long as the system starts from a canonical thermal ensemble. (cf. When we go down to a more microscopic scale the detailed balance may become incompatible with the quenching operation.) When the two-story canonical structure is assured, the hidden martingale holds through the tower rule applied to the conditional canonical expectations.
The operation of PQ amounts to stretching to infinity the microscopic response time of an individual spin, which is \(\varepsilon\) for the Glauber model. Based on this observation we applied the notion of PQ to the general Markovian transition network on the one hand, and also to the non-Markovian models on the other hand. In the non-Markovian process, even when the system realizes a trajectory-wise detailed balance, the quenching may involve uncontrollable/unobservable modifications in the underlying degrees of freedom that constitute the memory of the observable parts, and such changes can cause the breaking of canonicity of the observable part.
We also applied the PQ to the system for which the detailed balance is absent even in the unquenched steady ensemble. In the case of PQ on the Choi-Huberman (C-H) model, the operation of PQ can be unambiguously formulated. Monitoring through the variance of the total magnetization we examined the interplay between the intrinsic non-Markovian parameter \(\tau\) of the dimension of time and the time interval between the subsequent quenching, \(\Delta T\). While the canonical correlation that favored the cooperative fluctuations of spins is attenuated by the non-Markovian delay \(\tau\), the operation of quenching reinforces the cooperative fluctuations through \(\Delta T\).
The last remark is on the similarity and difference between the PQ and some form of "linear voter models", see [23] for an introduction. In a typical example, a binary (\([1,0]\)) site (say \(x_{i}\)) and its neighbor (say \(x_{i}+n_{i}\)) are chosen at random at each discrete time step and the state of \(x\) copies the state of \(x+n\). In that model, \(M_{t}:=\sum_{i=1}^{N}x_{i}(t)/N\), where \(N\) is the system size, eventually reaches either 1 or 0, according to the so-called martingale convergence theorem (see, for example, [24] Sec. 11.2, "Martingale Convergence Theorems" Example 11.16), while the mean of \(M_{\infty}\) is \(M_{0}\) by the martingale property of \(M_{t}\). If we compare such a model with our PQ of spins, a difference is that \(M_{t}\) of the voter model eventually goes only to 1 or 0 unlike our PQ, while the similarity is that (i) both models have a martingale observable, and (ii) the individual realization tends to be polarized due to the interaction with the environment that has a long memory.
|
2303.11042 | Hospitalization Length of Stay Prediction using Patient Event Sequences | Predicting patients' hospital length of stay (LOS) is essential for improving
resource allocation and supporting decision-making in healthcare organizations.
This paper proposes a novel approach for predicting LOS by modeling patient
information as sequences of events. Specifically, we present a
transformer-based model, termed Medic-BERT (M-BERT), for LOS prediction using
the unique features describing patients' medical event sequences. We performed
empirical experiments on a cohort of more than 45k emergency care patients from
a large Danish hospital. Experimental results show that M-BERT can achieve high
accuracy on a variety of LOS problems and outperforms traditional
non-sequence-based machine learning approaches. | Emil Riis Hansen, Thomas Dyhre Nielsen, Thomas Mulvad, Mads Nibe Strausholm, Tomer Sagi, Katja Hose | 2023-03-20T11:48:36Z | http://arxiv.org/abs/2303.11042v1 | # Hospitalization Length of Stay Prediction using Patient Event Sequences
###### Abstract
Predicting patients' hospital length of stay (LOS) is essential for improving resource allocation and supporting decision-making in healthcare organizations. This paper proposes a novel approach for predicting LOS by modeling patient information as sequences of events. Specifically, we present a transformer-based model, termed Medic-BERT (M-BERT), for LOS prediction using the unique features describing patients' medical event sequences. We performed empirical experiments on a cohort of more than \(45k\) emergency care patients from a large Danish hospital. Experimental results show that M-BERT can achieve high accuracy on a variety of LOS problems and outperforms traditional non-sequence-based machine learning approaches.
Keywords: length of stay prediction, transformers, sequence models
## 1 Introduction
Increasingly scarce hospital resources challenge (often oversaturated) hospital wards, with a negative impact on the quality of healthcare at the hospitals [1]. Models for predicting the remaining time of patient admissions, i.e., patient length of stay (LOS), could be invaluable for healthcare facilities to plan the availability of beds, staff, and other essential resources. For instance, automatic prediction of discharge time could be used in administrative planning systems for preemptively freeing in-hospital resources to alleviate hospital ward oversaturation [20]. However, LOS prediction is a challenging problem, requiring methods for handling missing data [18] and integration of temporal event dependencies.
Previous work on LOS prediction models patient hospitalizations using tabular data with imputation techniques for replacing missing values [2]. While tabular data is the most common data representation in Machine Learning (ML) models, it has several drawbacks. Among others, it does not provide immediate
support for integrating the temporal dependencies between observations, such as the order of treatments, or the time of conducted procedures. Moreover, standard ML techniques for tabular data, such as artificial neural networks (ANN), gradient boosting (GB), and support vector machines (SVMs), require complete data, hence often relying on imputation techniques when data is incomplete. However, missing data observations in healthcare data are often not missing at random (NMAR), meaning that the mere fact that an observation is missing is in itself important information [11].
To alleviate the problem of temporal dependencies and missing data, attention models, also known as transformers, have recently been investigated for Electronic Health Record (EHR) data formatted as sequences of medical events [10]. However, embedding-based transformer approaches have, to the best of our knowledge, not previously been applied to LOS prediction. This work examines how attention models can be utilized for this task.
Attention models using self-attention alleviate the inefficiency of recurrent networks for long sequences [19]. However, they still capture significant sequential information by learning from the order of tokens in the sequence. In medical data, multiple observations may be given the same timestamp, with no meaning assigned to their individual order within the corresponding event. For example, a blood panel drawn from a patient contains several individual measurements whose internal order is insignificant. Based on layers of transformer encoders, we propose a revised attention model, henceforth called Medic-BERT (M-BERT), based on the Natural Language Processing (NLP) model BERT [6] and its revised semi-supervised training method. We employ the model for LOS prediction based on sequences of patient-specific medical events happening during hospitalization, which exhibit the event concurrences common in patient data. We evaluate our method on a cohort of more than \(45k\) patient admissions from a large Danish hospital with diverse medical events, such as vital measurements, medication administration, laboratory tests, and conducted procedures.
The rest of this paper is structured as follows. Section 2 reviews related work on LOS prediction. Section 3 presents our model for representing hospitalizations as event sequences and our proposed M-BERT model for this unique data. Section 4 describes the large dataset used to evaluate this work and Section 5 the evaluation and its results. We conclude with Section 6. This work is an extended version of Hansen et al. [7] published at AIME 2023.
## 2 Related Work
The LOS prediction problem can be stated in different ways varying in the required resolution of the prediction, applied methods, and structure of patient data. Iwase et al. [9] investigate the use of Machine Learning (ML) methods for LOS prediction of intensive care unit (ICU) patients. Using Random Forest (RF), Gradient Boosting (GB), and Artificial Neural Network (ANN) technologies on a binary stratification of patient admissions into long (more than one week) and short (less than one week) stays, RF showed high predictive performance. In contrast to the tabular structure of their data, we instead attempt to utilize
the temporal dependencies within the data by structuring the patient data as medical event sequences. In Batista et al. [4], patients are stratified into three categories (\(\text{LOS}<3\), \(3\leq\text{LOS}<10\), \(\text{LOS}\geq 10\)) using RF and Support Vector Machines (SVM). The highest performance was achieved for RF models. While the nature of their ML models requires manual feature selection and missing data imputation, our method does not require any feature selection for in-hospital measurements, nor does it rely on data imputation.
The most challenging LOS prediction task is regression-based LOS, where the precise hospitalization LOS as a real value is to be predicted. Barsasella et al. [3] investigated a range of classical AI models, such as Decision Trees (DTs), RFs, Logistic Regression (LR), and SVMs for real-valued LOS prediction. While standard ML models work for tabular structured patient data, they neglect the important temporal dependencies between in-hospital patient events. Although much work has been done on LOS classification and regression using standard ML methods, only a few works attempt the usage of sequence models.
Seminal work in the area of transformer models for patient sequence data includes that of Li et al. [12] and Rasmy et al. [16], where sequences of diagnosis codes for consecutive hospitalizations are used as input to a BERT like transformer model for diagnosis code prediction tasks. While Rasmy et al. [16] investigate prolonged \(\text{LOS}>7\) days event prediction as a model pre-training task, they do not predict future LOS for new patients. Furthermore, while their approaches are only investigated for sequences consisting of diagnosis codes, we evaluate sequences consisting of multiple patient event types.
In Meng et al. [13], a transformer model is used to predict the onset of depression. They integrate various medical event types into patient event sequences, including diagnosis and medication events. While their sequences include code-based medical events, we extend these with medical measurement events, such as laboratory tests and vital measures, while encoding the event value as part of the event token.
The work closest to ours is that of Song et al. [19], where an attention model is used to predict categorical LOS based on patient sequences. Whereas their approach targets dense time-series data, where the same measurements are present at every timestep, instead, we investigate a learned embedding approach where the input sequences consist of codes with different types of medical events.
## 3 Transformer Models for EHR
In this section, we describe how EHR data can be modeled as medical event sequences and our usage of transformer models for learning from these sequences.
### Hospitalizations as Event Sequences
Patient hospitalizations can be naturally modeled as sequences of medical procedures for determining, measuring, or diagnosing the patient's condition. Other medical procedures are therapeutic and intend to treat or cure the patient. To
standardize how medical procedures are described, medical facilities code procedure concepts using accepted medical taxonomies often used in ML applications [8], such as the Anatomical Therapeutic Classification (ATC) [17] for medication administration and the International Classification of Diseases and Related Health Problems (ICD) [5] for condition diagnosis. Hence, a patient hospitalization can be described as the sequence of concept tokens detailing medical procedures pertaining to a patient coded using medical concepts from accepted medical taxonomies. An example patient hospitalization sequence is illustrated in Figure 1.
Furthermore, as a patient's medical history is crucial for correct management and treatment, we prepend the patient's medical history to the hospitalization sequence as a tokenized vector, as illustrated in Figure 1. The vector consists of 38 tokens describing the patient's medical history, including comorbidities from the Charlson Index [21], five years of medication prescription history grouped by the first level of the ATC hierarchy [17], and the mode, time, and initial triage category [23] of hospitalization. The historical information included in the vector is summarized in Table 1.
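As an illustration of the resulting input layout (a sketch of ours; the `Event` container below is hypothetical, and the concrete token vocabulary lives in the released code), the history prefix of Table 1 is simply concatenated with the time-ordered event tokens of Figure 1:

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: float   # hours since admission
    token: str    # e.g. "N300", "J01XE01", or "albumin-high"

def build_sequence(history_tokens, events):
    # Prepend the 38-token medical-history vector (Table 1) to the
    # time-ordered in-hospital event tokens (cf. Figure 1).
    assert len(history_tokens) == 38
    return list(history_tokens) + [e.token for e in sorted(events, key=lambda e: e.time)]
```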
### Hospital Measurement Events
For some medical procedures, such as vital measurements and laboratory tests, a numerical measurement value accompanies the procedure. While other works in transformer models for EHR data disregard the numerical values of measurements [13, 16, 12], we instead integrate this
| **Data** | **#Tokens** |
| --- | --- |
| Comorbidities | 18 |
| Prescription history | 14 |
| Mode, time, & triage | 6 |

Table 1: Patient medical history.
Figure 1: Illustration of a patient event sequence. Starting with the patient’s medical history as summarized in Table 1, the patient is initially diagnosed with the ICD-10 code _Z039_. Subsequent vital measurements and laboratory tests are performed to monitor the patient’s state and determine the patient’s underlying condition. Consequently, the patient is diagnosed with acute cystitis without hematuria (ICD-10 code _N300_), and antibiotic treatment is initiated with nitrofurantoin (ATC code _J01XE01_). After additional procedural and therapeutic medical events, the patient is released from the hospital.
information as part of the patient input sequences because numerical measurement values add important information regarding the state of a patient. For example, the knowledge that a temperature measurement was performed is naturally important information. Still, from the measurement value of \(40.1^{\circ}C\), we learn that the patient has a fever. Using patient-specific threshold values for measurements based on age, gender, and pregnancy status, we map measurement values into tokens representing either normal, abnormal-low, or abnormal-high findings. For example, given that we measure an albumin level of 56 g/L for a 31-year-old male patient, we would create the token _albumin-high_ to reflect that the value of the measurement was above what is expected (36-48 g/L) for a patient with the given demographics.
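A minimal sketch of this mapping (ours; the reference-range table below is a hypothetical stand-in for the patient-specific thresholds used in the paper):

```python
# Hypothetical reference-range table; the paper keys thresholds by age,
# sex, and pregnancy status.
REFERENCE_RANGES = {("albumin", "male", "adult"): (36.0, 48.0)}   # g/L

def measurement_token(test, value, sex, age_group):
    # Map a numerical measurement onto a normal / -low / -high event token.
    low, high = REFERENCE_RANGES[(test, sex, age_group)]
    if value < low:
        return f"{test}-low"
    if value > high:
        return f"{test}-high"
    return f"{test}-normal"

measurement_token("albumin", 56.0, "male", "adult")   # -> "albumin-high"
```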
### Transformer Models for EHR Data
To investigate the challenge of LOS prediction, we examine a modified version, henceforth termed Medic-BERT (M-BERT), of the Bidirectional Encoder Representations from Transformers (BERT) [6] model for EHR data. BERT is an NLP model based on a stack of encoder layers from the transformer architecture introduced by Vaswani et al. [22]. We argue that sequence models, such as BERT, exhibit properties beneficial for solving medical tasks based on EHR data. The transformer encoder naturally handles the complex long-term dependencies that occur between medical concepts through its utilization of multi-head self-attention. BERT can naturally integrate disparate modalities, such as diagnostic and therapeutic events, as each event is encoded as an n-dimensional vector token. Furthermore, BERT naturally operates in domains with irregular intervals between events, as is the case with EHR data. We, therefore, investigate our modified version of BERT for LOS prediction for patient event sequences. The M-BERT architecture is illustrated in Figure 2.
M-BERT learns an embedding for each medical event token while trained toward LOS prediction. The position embedding enables the model to learn from the temporal dependencies within a sequence. We use a static position embedding as described in Vaswani et al. [22], modified, however, for usage on medical event sequences as described in Section 3.4. As patient demographics are a vital part of any medical prediction model, we pay special attention to
Figure 2: Medical event sequence pre-pended with the patient’s medical history.
this information by adding a trainable age and sex embedding at each medical event [12]. Furthermore, as in the original BERT model, we use the special classification (CLS) token as a final aggregate representation of the sequence. Hence, as illustrated in Figure 2, the CLS representation is fed to a linear output layer for LOS classification and regression tasks.
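A sketch (ours; note that the paper uses the static sinusoidal position encoding of [22], whereas a trainable table is used below for brevity) of how the four embeddings are summed per event:

```python
import torch
import torch.nn as nn

class MBertEmbedding(nn.Module):
    # Token + position + age + sex embeddings summed per event (cf. Fig. 2);
    # d = 288 and max_pos = 256 follow the hyperparameters of Sec. 5.1.
    def __init__(self, vocab_size, max_pos=256, n_ages=120, d=288):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d)
        self.pos = nn.Embedding(max_pos, d)   # trainable stand-in for the static one
        self.age = nn.Embedding(n_ages, d)    # trainable demographic embeddings
        self.sex = nn.Embedding(2, d)

    def forward(self, tokens, pos_ids, age, sex):
        # age and sex are per-patient scalars broadcast over the sequence length
        return (self.tok(tokens) + self.pos(pos_ids)
                + self.age(age).unsqueeze(1) + self.sex(sex).unsqueeze(1))
```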
### Position Embedding
Due to the nature of patient care and hospital administration, some measurement events tend to chunk together. Clinicians will often conduct various measurements, such as blood pressure, temperature, and heart rate, over a short time and later persist the information into the patient's EHR. Hence, we are sometimes prevented from knowing the specific times and order of medical events. This effect is most frequent for vital measurements and laboratory tests. Multiple laboratory tests are often conducted on the same patient sample, such as a single blood sample, making it impossible to know the chronological order for such measurement events. An example sequence of patient events chunking together is illustrated in Figure 3. To enable the model to understand that these events have no fixed ordering, we assign the same position embedding to events co-located in time as illustrated in the position embedding of Figure 2. For example, the \(L_{1}\) and \(L_{2}\) events are both mapped to the \(E_{P_{3}}\) position embedding.
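A sketch of this shared-position assignment (our illustration):

```python
def position_ids(event_times):
    # Assign shared position indices to events co-located in time, so that,
    # e.g., lab tests drawn from one blood sample share one position embedding.
    ids, current, prev = [], 0, None
    for t in event_times:              # event_times sorted, one entry per token
        if prev is not None and t != prev:
            current += 1
        ids.append(current)
        prev = t
    return ids

position_ids([0.0, 1.5, 2.0, 2.0, 2.0, 4.0])   # -> [0, 1, 2, 2, 2, 3]
```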
## 4 Data
We compiled a Danish dataset of patients admitted to a large hospital in northern Jutland from the period 2018-2021 to investigate transformer models for LOS prediction. The dataset consists of \(48,177\) emergency care patients with admissions longer than one day. Figure 4 illustrates the distribution of the remaining length of stay for the patient cohort. In a clinical setting of resource allocation for emergency care patients, we are mostly interested in the planning of patient care for a couple of weeks in advance. Hence, we clip long patient stays to 30 days of admission to better fit and optimize for the clinical setting, as we are not interested in precisely predicting the long-tail distribution of the data.
Figure 3: Patient event sequence illustrating events grouping together.
In Denmark, each person can be uniquely identified by an identification number from the central person register (CPR) henceforth referred to as a CPR-number. As medical events are each associated with a unique CPR-number, it is possible to connect the medical data pertaining to a patient from disparate databases. As mentioned in Section 3.1, we divide patient medical data into historic information and admission-specific information. Historic information includes prescriptions, comorbidities, and mode and time of hospitalization, whereas admission information includes laboratory tests, vital measurements, hospital-administered medicine, and procedure codes.
Danish laboratory tests are coded using the Nomenclature for Properties and Units (NPU) terminology [15]. NPU ensures that laboratory tests are standardized and patient examinations can be used and understood by all clinicians. As detailed in Section 3.2, we integrate the semantics of laboratory test results as part of the event token by appending either high, low, or normal with respect to the patient demographics to the laboratory event token, thus encoding the meaning of the result. Vital measurements are extracted from a system called Clinical Suite and consist of the 7 most common vital observations, including temperature, oxygen saturation, BMI, pulse, respiration rate, blood pressure (systolic and diastolic), and oxygen supplement. As with laboratory tests, patient-specific thresholds are used to encode the meaning of the result into event tokens. In-hospital administered medication is coded using the ATC taxonomy [17] and consists of more than 5k chemical substances. Procedure codes specify in-hospital procedures performed
| **Event Type** | **Tokens** | **Occurrences** |
| --- | --- | --- |
| Lab Tests | 748 | 2,774,790 |
| Vital Measures | 22 | 837,931 |
| Medication | 1,441 | 376,591 |
| Procedures | 2,049 | 247,924 |
| History | 81 | 1,880,580 |

Table 2: Concept types with their occurrences in the dataset.
Figure 4: Illustrations of the length-of-stay distribution over patient hospitalizations.
on patients. While diagnosis events are an important medical modality, the dataset does not contain the time of such events. Hence, we omit diagnosis events from patient sequences. The event types with distinct tokens and total occurrence in the data are summarized in Table 2.
## 5 Empirical Evaluation and Results
In this section, we explain the experimental settings and results of the empirical evaluation.
### Experimental Setting
Transformer models are typically trained using unsupervised pre-training to learn general token embeddings, followed by supervised fine-tuning targeting a specific downstream task. As we focus on a single prediction task, we train the model parameters and token embeddings directly on the downstream task of LOS prediction, without pre-training. The experimental code is available online4.
Footnote 4: [https://github.com/dkw-aau/medic_transformer](https://github.com/dkw-aau/medic_transformer)
We use the admission data gathered within the first 24 hours of admission for LOS prediction. Hence, we remove sequence events happening after 24 hours of admission. We evaluate our approach on three LOS experiments of increasing complexity. The first two experiments are a **Binary** classification of LOS \(>\) 2 days, and a three-class **Category** task of LOS \(<\) 2, \(2\leq\) LOS \(\leq\) 7, and LOS \(>\) 7 days, with class balances as illustrated in Figure 4(b). The last experiment, termed **Real**, is a regression task with the objective of predicting the LOS as a real number, with a histogram of admission times as illustrated in Figure 4(a).
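The targets for all three tasks can be derived directly from the clipped remaining LOS; a sketch (ours; the assignment of stays on the exact 2- and 7-day boundaries is our assumption):

```python
import numpy as np

def make_targets(los_days):
    """Build Binary, Category and Real targets from remaining LOS in days,
    clipped at 30 days as described in Section 4."""
    los = np.minimum(np.asarray(los_days, dtype=float), 30.0)
    binary = (los > 2).astype(int)                # Binary: LOS > 2 days
    category = np.digitize(los, bins=[2.0, 7.0])  # 0: LOS < 2, 1: 2..7, 2: > 7
    # note: exactly 7 days falls in the top class here; the paper's
    # boundary convention may differ.
    return binary, category, los                  # Real: regression target

b, c, r = make_targets([1.2, 4.0, 11.5, 40.0])
print(b, c, r)  # [0 1 1 1] [0 1 2 2] [ 1.2  4.  11.5 30. ]
```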
We evaluate our approach against three ML models: RF, ANN, and SVM. We use the implementations from the Sklearn library [14] with default hyperparameters. In preparing samples for these models, we use the latest measured value for each event type (within 24 hours of admission) as input features [9]. Missing values are imputed with the variable mean, and variables are scaled to values between 0 and 1. Lastly, a chi\({}^{2}\) test is used to select the 50 most relevant features.
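This baseline preprocessing maps naturally onto an sklearn pipeline; a sketch (ours) for one of the classifiers, where scaling precedes the chi\({}^{2}\) test since chi\({}^{2}\) requires non-negative features:

```python
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestClassifier

baseline = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),  # mean imputation of missing values
    ("scale", MinMaxScaler()),                   # scale features to [0, 1]
    ("select", SelectKBest(chi2, k=50)),         # keep the 50 most relevant features
    ("clf", RandomForestClassifier()),           # default hyperparameters
])
# baseline.fit(X_train, y_train); baseline.predict(X_test)
```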
Our model is trained on a random 80/10/10 split of all patient samples into training, validation, and testing sets. We use the validation loss for early stopping, halting training if the loss does not decrease within ten epochs. The model architecture has six hidden layers with an intermediate layer size of 288, eight attention heads, and input token embeddings of size 288. We truncate sequences to 256 tokens, as most sequences adhere to this limit. To counter overfitting, we add a dropout layer with a probability of 10% after the output of the final encoder layer, attention dropout at every layer with a probability of 10%, and a weight decay of \(0.003\). All experiments were performed with a learning rate of \(1\mathrm{e}{-5}\).
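For illustration, the reported hyperparameters correspond to a configuration like the following sketch using Hugging Face's `BertConfig`; the paper's M-BERT implementation in the linked repository may differ in detail, and the vocabulary size and special-token count are our estimates from Table 2:

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig(
    vocab_size=4341 + 10,              # ~4,341 distinct event tokens (Table 2) + special tokens (our guess)
    hidden_size=288,                   # token embedding size
    num_hidden_layers=6,               # six hidden layers
    num_attention_heads=8,
    intermediate_size=288,
    max_position_embeddings=256,       # sequences truncated to 256 tokens
    hidden_dropout_prob=0.1,           # approximates the 10% dropout described above
    attention_probs_dropout_prob=0.1,  # 10% attention dropout at every layer
    num_labels=2,                      # Binary task (3 for Category, 1 for Real)
)
model = BertForSequenceClassification(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.003)
```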
### Results
Table 3 presents the Area Under the Receiver Operating Characteristic curve (AUROC) and the harmonic mean of precision and recall (F1) for the **Binary** and **Category** experimental settings, and the Mean Absolute Error (MAE) and Mean Squared Error (MSE) for the LOS regression task. Furthermore, Figure 5(a) presents the AUROC curves for the **Binary** experimental setting. M-BERT outperforms the traditional ML techniques in all experimental settings. The results indicate that transformer models may be able to leverage the temporal dependencies inherent to patient EHR data for increased predictive accuracy. Furthermore, being a transformer-based model, M-BERT overcomes the challenge of missing data and imputation, as patient sequences are required to contain neither the same medical events nor the same sequence length. Figure 5(b) presents the AUROC for binary LOS prediction stratified by age groups with more than 100 patient samples. Interestingly, the model performance is stable across age groups, with an AUROC of 0.78 for the age groups 60-70 and 70-80 years. Furthermore, stratification based on the sex of patients yielded similar results, pointing to the robustness of M-BERT for LOS classification.
Figure 5: AUROC plots for experimental results on the binary prediction task.
\begin{table}
\begin{tabular}{l c c c c c c} \hline & \multicolumn{2}{c}{**Binary**} & \multicolumn{2}{c}{**Category**} & \multicolumn{2}{c}{**Real**} \\ \cline{2-7} & **AUROC** & **F1** & **AUROC** & **F1** & **MAE** & **MSE** \\ \hline RF & 0.72 & 0.70 & 0.66 & 0.45 & 4.18 & 39.08 \\ ANN & 0.67 & 0.68 & 0.63 & 0.43 & 4.09 & 38.10 \\ SVM & 0.70 & 0.70 & 0.65 & 0.38 & 3.56 & 43.36 \\ \hline M-BERT & **0.78** & **0.77** & **0.74** & **0.54** & **3.42** & **37.48** \\ \hline \end{tabular}
\end{table}
Table 3: Experimental results.
## 6 Conclusion
We have presented a novel approach for predicting LOS by modeling patient information as event sequences. Our approach adapts the transformer architecture to sequence prediction over patient data, handling the distinctive features of medical event sequences, namely grouped events and a variety of data types. Our empirical evaluation on a large cohort of emergency care patients from a Danish hospital demonstrates that our model achieves high accuracy on various LOS problems while outperforming traditional non-sequence machine learning approaches. Future work could include pre-training the transformer-based model on a medical task to further improve its performance. Overall, the proposed approach has the potential to improve resource allocation and support decision making in healthcare organizations by providing accurate predictions of LOS. All experimental code is available online (footnote 4).
## Acknowledgments
This work is partially supported by the Poul Due Jensen Foundation and the Region North Denmark Health Innovation Foundation.
|
2301.10525 | Symplectomorphisms and spherical objects in the conifold smoothing | Let $X$ denote the `conifold smoothing', the symplectic Weinstein manifold
which is the complement of a smooth conic in $T^*S^3$, or equivalently the
plumbing of two copies of $T^*S^3$ along a Hopf link. Let $Y$ denote the
`conifold resolution', by which we mean the complement of a smooth divisor in
$\mathcal{O}(-1) \oplus \mathcal{O}(-1) \to \mathbb{P}^1$. We prove that the
compactly supported symplectic mapping class group of $X$ splits off a copy of
an infinite rank free group, in particular is infinitely generated; and we
classify spherical objects in the bounded derived category $D(Y)$ (the
three-dimensional `affine $A_1$-case'). Our results build on work of
Chan-Pomerleano-Ueda and Toda, and both theorems make essential use of working
on the `other side' of the mirror. | Ailsa Keating, Ivan Smith | 2023-01-25T11:10:16Z | http://arxiv.org/abs/2301.10525v2 | # Symplectomorphisms and spherical objects in the conifold smoothing
###### Abstract.
Let \(X\) denote the 'conifold smoothing', the symplectic Weinstein manifold which is the complement of a smooth conic in \(T^{*}S^{3}\), or equivalently the plumbing of two copies of \(T^{*}S^{3}\) along a Hopf link. Let \(Y\) denote the 'conifold resolution', by which we mean the complement of a smooth divisor in \(\mathcal{O}(-1)\oplus\mathcal{O}(-1)\to\mathbb{P}^{1}\). We prove that the compactly supported symplectic mapping class group of \(X\) splits off a copy of an infinite rank free group, in particular is infinitely generated; and we classify spherical objects in the bounded derived category \(D(Y)\) (the three-dimensional 'affine \(A_{1}\)-case'). Our results build on work of Chan-Pomerleano-Ueda and Toda, and both theorems make essential use of working on the 'other side' of the mirror.
###### Contents
* 1 Introduction
* 2 Wrapped Fukaya categories of Liouville manifolds
* 3 The conifold smoothing
* 4 The conifold resolution
* 5 Proof of Theorem 1.1
* 6 Classification of sphericals
* 7 Miscellania
## 1. Introduction
### Results
Consider the open symplectic manifold
\[X=\{u_{1}v_{1}=z-1,\,u_{2}v_{2}=z+1\}\subset\mathbb{C}^{4}\times\mathbb{C}^{*}\]
where \(u_{i},v_{i}\in\mathbb{C}\) and \(z\in\mathbb{C}^{*}\), with the restriction \(\omega\) of the standard exact Kahler form from \(\mathbb{C}^{4}\times\mathbb{C}^{*}\). Projection to \(z\in\mathbb{C}^{*}\) defines a Morse-Bott-Lefschetz fibration on \(X\), from which one sees that \(X\) is also a plumbing of two copies of \(T^{*}S^{3}\) along a Hopf link.
**Theorem 1.1**.: _There is a split injection \(\mathbb{Z}^{*\infty}\to\pi_{0}\operatorname{Symp}_{c}(X,\omega)\), where \(\mathbb{Z}^{*\infty}\) denotes the free group on countably infinitely many generators. In particular, the symplectic mapping class group \(\pi_{0}\operatorname{Symp}_{c}(X,\omega)\) is infinitely generated._
The generators of the image \(\mathbb{Z}^{*\infty}\) in \(\pi_{0}\operatorname{Symp}_{c}(X,\omega)\) will be Dehn twists in certain Lagrangian spheres in \(X\).
**Corollary 1.2**.: _There is a 3-dimensional Stein domain \((X^{\prime},\partial X^{\prime})\), with both \(X^{\prime}\) and \(\partial X^{\prime}\) simply connected, and with \(\pi_{0}\operatorname{Symp}_{c}(X^{\prime})\) infinitely generated._
By contrast, for a simply-connected compact six-manifold \(X^{\prime}\) with simply connected boundary, \(\pi_{0}\operatorname{Diff}_{c}(X^{\prime})\) is always of finite type, cf. [12] and Lemma 7.8.
The Milnor fibres of the \(A_{k}\)-singularities for \(k>1\) contain infinitely many exact Lagrangian spheres up to Lagrangian isotopy, but these are obtained from a single sphere under the action of the symplectic mapping class group. We show, in Corollary 7.2, that the action of \(\pi_{0}\operatorname{Symp}_{c}(X^{\prime})\) on the set of isotopy classes of Lagrangian 3-spheres in \(X^{\prime}\) has infinitely many orbits, answering a folk question sometimes attributed to Fukaya.
The mirror \(Y\) to \(X\) is obtained from the small resolution of the 3-fold ordinary double point \(xy=(1+w)(1+t)\subset\mathbb{C}^{4}\) by removing the pullback of the divisor \(\{wt=0\}\). Our methods also resolve the long-standing open problem of classifying all the spherical objects in \(D(Y)\) (i.e. classifying sphericals in the 'three-dimensional affine \(A_{1}\)-case'). One consequence of this classification is:
**Theorem 1.3**.: _The spherical objects in \(D(Y)\) form a single orbit under a natural action of the pure braid group \(\operatorname{PBr}_{3}\)._
It may be worth highlighting that Theorem 1.1, which is a purely symplectic statement, relies essentially on passing to the other side of mirror symmetry, to exploit a computation of a space of stability conditions accessible only via algebraic geometry; whilst Theorem 1.3, which is a purely algebro-geometric statement, relies essentially on passing back to the symplectic setting, to exploit constraints arising from Nielsen-Thurston theory and symplectic dynamics on surfaces.
**Remark 1.4**.: Propositions 6.4 and 6.16 together imply that if \(\phi\in D(Y)\) is a Torelli autoequivalence (i.e. acting trivially on \(K\)-theory) which is not a power of a spherical twist, and \(S,S^{\prime}\) are any spherical objects in \(D(Y)\), then the total rank of \(\operatorname{Ext}^{*}(\phi^{k}(S),S^{\prime})\) grows exponentially in \(k\) (moreover the exponential growth rate is an algebraic integer, and is at least \(\log(2)/12\) [16], etc.). This follows from the identification of the ranks of these morphism groups with geometric intersection numbers of curves on a surface, and classical results in surface dynamics. This connects to ideas around categorical entropy, cf. [10].
### Context
Understanding the symplectic mapping class group is a classical and long-standing problem in symplectic topology. Theorem 1.1 is the first example of a (finite type) symplectic manifold for which the compactly supported symplectic mapping class group is known to be infinitely generated. It complements a result of [14], see also [15], which constructed compact symplectic manifolds for which the 'symplectic Torelli group' (of mapping classes acting trivially on cohomology) was infinitely generated, and a result of [1] which gave a compact smooth manifold with a family of symplectic forms \(\omega_{t}\) for which the rank of \(\pi_{0}\operatorname{Symp}(X,\omega_{t})\) grew without bound with \(t\). In contrast, we expect the symplectic mapping class group of a \(K3\) surface, or of (the mirror to) a log Calabi-Yau surface, to be finitely generated.
The classification of spherical objects in either the derived category of an algebraic variety or the Fukaya category of a symplectic manifold is also a well-known and active area of research. In particular, for the Milnor fibres of type \(A\) surface singularities a large body of work yielded a complete classification [13, 14, 15]. Much less is known on threefolds. For'small' categories associated to flopping contractions, namely the full subcategory of objects supported on the exceptional locus and with trivial push-forward, a classification was
recently obtained in [14], see also [15]. Theorem 1.3 is the first example which treats the 'big' category of all complexes supported on the exceptional locus, where one encounters an affine hyperplane arrangement for the associated chamber of flops (or stability conditions) rather than a finite hyperplane arrangement.
### Remarks on the proofs
Homological mirror symmetry for the conifold has been established by Chan, Pomerleano and Ueda in [16], both for the wrapped Fukaya category \(\mathcal{W}(X)\) and the compact subcategory generated by the 'compact core' of \(X\). In particular, there is a \(\mathbb{C}\)-linear equivalence \(\mathcal{W}(X)\simeq D(Y)\). Crucially, both categories are also linear over a larger ring \(R=SH^{0}(X)\simeq\Gamma(\mathcal{O}_{Y})\). The group of stability conditions on \(D(Y)\), and the group of autoequivalences of that category, was analysed by Toda [17, 18] (see also [14] for closely related results in a more general setting).
The heart of the proof of Theorem 1.1 is to show that the part of the autoequivalence group of \(\mathcal{W}(X)\) coming from compactly supported symplectomorphisms surjects to the subgroup of \(R\)-linear autoequivalences of \(D(Y)\) which moreover act trivially on \(K\)-theory. This uses in an essential way the detailed knowledge of \(\operatorname{Auteq}D(Y)\) arising from the computation of the space of stability conditions. By contrast, the key ingredient in the proof of Theorem 1.3 is a growth rate estimate for ranks of Floer cohomology under autoequivalences, which is proved using a combination of localisation-type theorems (to reduce Floer theory computations to a two-dimensional surface inside the six-dimensional \(X\)) together with classical results of Nielsen and Thurston on dynamics of pseudo-Anosov maps.
The symplectic topology of two-dimensional log Calabi-Yau surfaces has been fruitfully illuminated through mirror symmetry [10, 11]. This paper treats one simple three-dimensional example, but many of the techniques and results seem likely to generalise.
### Acknowledgements
The authors are grateful to Spencer Dowdall, Paul Hacking, Daniel Pomerleano, Oscar Randal-Williams, Paul Seidel and Michael Wemyss for helpful conversations and correspondence.
This material is based upon work supported by the National Science Foundation under Grant No. 1440140, while the authors were in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Fall semester 2022. A.K. was partially supported by EPSRC Open Fellowship EP/W001780/1. I.S. is grateful to the Clay Foundation for support at MSRI as a Clay Senior Scholar, and to EPSRC for support through Frontier Research grant EP/X030660/1 (in lieu of an ERC Advanced Grant).
## 2. Wrapped Fukaya categories of Liouville manifolds
### Preliminary wrapped Fukaya categories
Let \((W,\theta)\) denote a finite type complete Liouville manifold (i.e. the completion of a Liouville domain \(\mathring{W}\) by cylindrical ends). A Liouville domain \((\mathring{W},\theta)\) canonically determines a finite type complete Liouville manifold \((W,\theta)\). A Lagrangian is cylindrical in \((W,\theta)\), resp. \((\mathring{W},\theta)\), if it is invariant under the Liouville vector field outside a compact set, resp. in a collar neighbourhood of the boundary. Note that any cylindrical Lagrangian for \((W,\theta)\) is exact Lagrangian isotopic, via the inverse Liouville flow, to a Lagrangian which is the completion of a cylindrical Lagrangian in \((\mathring{W},\theta)\).
**Definition 2.1**.: The preliminary wrapped Fukaya category \(\mathcal{W}_{\mathrm{pr}}(W,\theta)\) is defined as the wrapped Fukaya category set-up as in [11, Section 2.4], with the following choices: the category is ungraded, with coefficient ring \(\mathbb{Z}/2\); and Lagrangians are exact and cylindrical
(no further brane data is required as there are no gradings). We will write \(\mathcal{W}_{\mathrm{pr}}(\mathring{W},\theta)\) for \(\mathcal{W}_{\mathrm{pr}}(W,\theta)\) when we wish to emphasise the choice of domain.
'Preliminary' is intended as in [10]. We will later restrict to Lagrangians which admit brane structures including a choice of \(Spin\)-structure and grading, yielding a category \(\mathcal{W}(W,\theta)\) which is defined over \(\mathbb{Z}\) and \(\mathbb{Z}\)-graded.
### Actions of symplectomorphisms
An exact symplectomorphism of \((\mathring{W},\theta)\) is a diffeomorphism \(\phi:\mathring{W}\to\mathring{W}\) such that \(\phi^{*}\theta=\theta+df\), where \(f\) needn't have support in \(\mathrm{Int}(\mathring{W})\) (and is formally defined on a small open neighbourhood of \(\mathring{W}\) in \(W\)). Let \(\mathrm{Symp}(\mathring{W})\) be the group of all symplectomorphisms of \(\mathring{W}\), equipped with the \(C^{\infty}\) topology. Let \(\mathrm{Symp}_{\mathrm{ex}}(\mathring{W},\theta)\) denote the subgroup of exact symplectomorphisms; the connected component of the identity in \(\mathrm{Symp}_{\mathrm{ex}}(\mathring{W},\theta)\) is \(\mathrm{Ham}(\mathring{W},\theta)\) the subgroup of Hamiltonian diffeomorphisms of \((\mathring{W},\theta)\).
**Lemma 2.2**.: _The group \(\pi_{0}\,\mathrm{Symp}_{\mathrm{ex}}(\mathring{W},\theta)\) acts on \(\mathcal{W}_{\mathrm{pr}}(W,\theta)\)._
Proof.: Fix \(\phi\in\mathrm{Symp}_{\mathrm{ex}}(\mathring{W},\theta)\). Cylindrical Lagrangians for \((\mathring{W},\theta)\) get mapped under \(\phi\) to cylindrical Lagrangians for \((\mathring{W},\phi^{*}\theta)\). This induces a map \(\phi:\mathcal{W}_{\mathrm{pr}}(\mathring{W},\theta)\to\mathcal{W}_{\mathrm{pr }}(\mathring{W},\phi^{*}\theta)\) taking a Lagrangian \(L\) to \(\phi(L)\) and pulling back all Floer data.
On the other hand, as \(\phi\) is an exact symplectomorphism of \(\mathring{W}\), \(\phi^{*}\theta=\theta+df\), for some smooth function \(f\). We claim that \((\mathring{W},\theta+tdf)\) is a Liouville domain for all \(t\in[0,1]\). For \(t=1\), this is true tautologically via pullback, with the Liouville flow \(Z_{\theta+df}\) for \((\mathring{W},\theta+df)\) being the pushforward under \(\phi\) of the Liouville flow \(Z_{\theta}\) for \((\mathring{W},\theta)\). Now notice, first, that \((\partial\mathring{W},\ker(\theta+tdf))\) is a contact manifold for all \(t\in[0,1]\): the positivity one needs comes from interpolating between two volume forms of the same sign. And second, notice that the vector field \(\omega\)-dual to \(\theta+tdf\) is a convex combination of the two outward pointing vector fields \(Z_{\theta}\) and \(Z_{\theta+df}\), and so is also outward pointing. Thus \(\{(\mathring{W},\theta+tdf)\}_{t\in[0,1]}\) is a smooth family of Liouville domains.
The cylindrical completions of \(\{(\mathring{W},\theta+tdf)\}_{t\in[0,1]}\) give a smooth family of finite type complete Liouville manifolds, say \(\{(W,\theta_{t})\}_{t\in[0,1]}\). This is automatically a convex symplectic deformation in the sense of [11, Section 2]. By [11, Lemma 5], there is a smooth family of diffeomorphisms \(h_{t}:W\to W\), with \(h_{0}=\mathrm{id}\), such that \(h_{t}^{*}\theta_{t}=\theta_{0}+dg_{t}\), where \((t,x)\mapsto g_{t}(x)\) is a compactly supported function on \([0,1]\times W\). Inspection of the proof shows that the family \(h_{t}\) is canonical up to isotopy through diffeomorphisms with the same properties.
This implies that we have a trivial inclusion of Liouville manifolds \((W,\theta)\hookrightarrow(W,\theta_{1})\) in the sense of [10, Definition 2.4]. By [10, Lemma 3.41] (see also [10, Lemma 3.4]), there is an induced equivalence \(c:\mathcal{W}_{\mathrm{pr}}(W,\theta)\to\mathcal{W}_{\mathrm{pr}}(W,\theta_{1})\). Postcomposing \(\phi:\mathcal{W}_{\mathrm{pr}}(\mathring{W},\theta)\to\mathcal{W}_{\mathrm{pr }}(\mathring{W},\phi^{*}\theta)\) with the inverse to \(c\) gives an autoequivalence of \(\mathcal{W}_{\mathrm{pr}}(\mathring{W},\theta)\), which we will also denote \(\phi\).
To ensure that we obtain a group homomorphism, we require a consistency of continuation maps. This follows from the compatibility of the Liouville inclusions obtained from [11, Section 2] with concatenation of families of finite type complete Liouville manifolds.
Finally, if \(\phi\) and \(\phi^{\prime}\) are in the same connected component of \(\mathrm{Symp}_{\mathrm{ex}}(\mathring{W},\theta)\), they are related by a Hamiltonian isotopy. Furthermore, if \(\phi\) is Hamiltonian, the maps constructed in the two steps of the argument above are evidently quasi-inverse. This completes the proof.
**Lemma 2.3**.: _cf. [1, Lemma 1.1]. The inclusion \(\operatorname{Symp}_{\mathrm{ex}}(\mathring{W},\theta)\hookrightarrow \operatorname{Symp}(\mathring{W})\) is a weak homotopy equivalence._
Proof.: An analogous statement for complete Liouville manifolds is contained in [1, Lemma 1.1]. This shows that given an arbitrary symplectomorphism \(\psi:(W,\theta)\to(W^{\prime},\theta^{\prime})\) between finite type complete Liouville manifolds, there exists a symplectic isotopy \(\sigma_{t}:(W,\theta)\to(W,\theta)\), \(t\in[0,1]\), \(\sigma_{0}=\mathrm{id}\), such that \(\psi\circ\sigma_{1}\) is exact. Inspecting the proof, we see that the isotopy is canonical up to deformation within the class with the same properties, and that the argument can be carried out for families of symplectomorphisms parametrised by arbitrary compact cell complexes.
We now want to upgrade this to a statement for Liouville domains. We start with the argument for a single map. Suppose \(\psi:\mathring{W}\to\mathring{W}\) is an arbitrary symplectomorphism. Tautologically, this induces a symplectomorphism from the conical extension of \((\mathring{W},\theta)\), say \((W,\theta)\), to the conical extension of \((\mathring{W},\psi^{*}\theta)\), say \((W^{\prime},\theta^{\prime})\). Let us also denote this symplectomorphism by \(\psi\). By construction, the cylindrical end \(E=[0,\infty)\times\partial\mathring{W}\) for \((W,\theta)\) is taken to the one for \((W,\psi^{*}\theta)\), with \(\psi\) preserving \(s\)-level contact shells, \(s\in[0,\infty)\), and \(\psi\circ\rho_{s}=\rho^{\prime}_{s}\circ\psi\), where \(\rho_{s}\) is the Liouville flow for \((W,\theta)\) and \(\rho^{\prime}_{s}\) the one for \((W^{\prime},\theta^{\prime})\).
We now apply [1, Lemma 1.1] to get a symplectic isotopy \(\sigma_{t}\), \(t\in[0,1]\), as above. Inspecting their proof, we see that outside a compact set, \(\sigma_{t}\) is given by integrating a vector field of the form \(e^{-s}\tilde{V}\), where \(\tilde{V}\) is independent of \(s\). In particular, for sufficiently large \(s\), the contact level-set \(\{s\}\times\partial\mathring{W}\) is displaced by \(C^{\infty}\)-small amounts under \(\sigma_{t}\), for any \(t\in[0,1]\). (The statement of their lemma only spells this out for \(C^{0}\).) On the other hand, under \(C^{\infty}\) small perturbations, both the contact condition and the condition that a vector field be outward pointing are open. Fix such a sufficiently large \(s\). For any \(t\in[0,1]\), let \(\mathring{W}_{t}:=W\backslash\sigma_{t}^{-1}([s,\infty)\times\partial\mathring{W})\). By construction, \((\mathring{W}_{t},\theta)\) is a Liouville domain with conical completion \(W\), and is canonically Liouville isomorphic (via \(\sigma_{t}\)) to \((\mathring{W}\cup([0,s]\times\partial\mathring{W}),\theta)\).
Altogether, we have deformed \(\psi\) to an exact symplectomorphism of \((\mathring{W}\cup([0,s]\times\partial\mathring{W}),\theta)\). Conjugating by the time \(s\) Liouville flow now gives the desired map for \(\psi\). Finally, for a family of maps \(\{\psi_{b}\}_{b\in B}\) indexed by a compact cell complex \(B\), one can choose the parameter \(s\) uniformly over \(B\). Together with the observation that the argument in [1] works in families, this completes the proof.
**Corollary 2.4**.: _The group \(\pi_{0}\operatorname{Symp}(\mathring{W})\) acts on \(\mathcal{W}_{\mathrm{pr}}(W,\theta)\)._
Proof.: This now follows immediately from combining Lemmas 2.2 and 2.3.
### Gradings and orientations
We now assume that \(2c_{1}(W,d\theta)=0\in H^{2}(W;\mathbb{Z})\), i.e. the bundle \((\Lambda^{\dim W}T^{*}W)^{\otimes 2}\) is trivial (here \(\dim\) denotes the complex dimension). Then \(W\) admits a grading, i.e. a choice of fibrewise universal cover for the Lagrangian Grassmanian bundle \(\operatorname{Lag}(W)\to W\), say \(\widetilde{\operatorname{Lag}}(W)\to W\). The isomorphism classes of such covers are in one-to-one correspondence with trivialisations of \((\Lambda^{\dim W}T^{*}W)^{\otimes 2}\simeq W\times\mathbb{C}\), and form an affine space over \(H^{1}(W;\mathbb{Z})\) (see [15, Lemma 2.2]).
Fix a grading on \(W\). Given a Lagrangian \(L\subset W\), the trivialisation \((\Lambda^{\dim W}T^{*}W)^{\otimes 2}\simeq W\times\mathbb{C}\) determines a (homotopy class of) map \(L\to\mathbb{C}^{*}\), and so a class in \(H^{1}(L;\mathbb{Z})\), called the Maslov class of \(L\). There exists a lift of \(TL\subset\operatorname{Lag}(W)\) to \(\widetilde{\operatorname{Lag}}(W)\) precisely when \(L\) has Maslov class zero; a choice of such a lift is called a grading on \(L\).
We can now upgrade to the 'full' version of the wrapped Fukaya category, which will be used for the rest of this article.
**Definition 2.5**.: The wrapped Fukaya category \(\mathcal{W}(W,\theta)\) is defined using the set-up in [GPS, Section 2.4], with the following choices: coefficient ring \(\mathbb{Z}\); Lagrangians are exact, cylindrical, Maslov zero and spin, and, as objects of \(\mathcal{W}(W,\theta)\), equipped with brane data comprising a \(Spin\)-structure and a grading.
The compact Fukaya category \(\mathcal{F}(W,\theta)\) is the subcategory of compact Lagrangian submanifolds.
We will often suppress \(\theta\) from the notation when it is clear from the context which primitive we are referring to.
**Remark 2.6**.: For homological mirror symmetry statements, we'll base change the ground ring for \(\mathcal{W}(W)\) and \(\mathcal{F}(W)\) from \(\mathbb{Z}\) to \(\mathbb{C}\); as this will be clear from context, we will also denote those categories by \(\mathcal{W}(W)\) and \(\mathcal{F}(W)\).
Suppose that we are given \(\phi\in\operatorname{Symp}(W)\). We say that \(\phi\) is gradeable if under pullback by \(\phi\), our choice of trivialisation of \((\Lambda^{\dim W}T^{*}W)^{\otimes 2}\) is preserved; a grading for \(\phi\) is then a choice of lift of the pullback map to a bundle automorphism of \(\widetilde{\operatorname{Lag}}(W)\). We let \(\operatorname{Symp}^{\operatorname{gr}}W\) denote the group of graded symplectomorphisms of \(W\), and \(\operatorname{Symp}^{\operatorname{gr}}_{\operatorname{ex}}(W,\theta)\) the subgroup of exact graded symplectomorphisms. From the discussion above there's an exact sequence1 (see [Sei00, Lemma 2.4])
Footnote 1: The final map is not a group homomorphism, but its kernel is nonetheless a subgroup of \(\operatorname{Symp}(W)\).
\[1\to\mathbb{Z}\to\operatorname{Symp}^{\operatorname{gr}}(W)\to\operatorname{ Symp}(W)\to H^{1}(W;\mathbb{Z}). \tag{2.1}\]
If \(\phi\in\operatorname{Symp}(W)\) is gradeable and has compact support, it comes with a preferred grading: the one which fixes \(\widetilde{\operatorname{Lag}}(W)\) outside of the support of \(\phi\). This implies that there is a splitting of the forgetful map from \(\operatorname{Symp}^{\operatorname{gr}}_{c}(W)\) to the group of gradeable compactly supported symplectomorphisms.
The results of the previous subsection readily generalise as follows.
**Lemma 2.7**.: _The group \(\pi_{0}\operatorname{Symp}^{\operatorname{gr}}_{\operatorname{ex}}(\mathring{ W},\theta)\) acts on \(\mathcal{W}(W,\theta)\)._
Proof.: Given \(\phi\in\operatorname{Symp}^{\operatorname{gr}}_{\operatorname{ex}}(\mathring {W},\theta)\), we get a map \(\mathcal{W}(\mathring{W},\theta)\to\mathcal{W}(\mathring{W},\phi^{*}\theta)\) by pulling back all Floer data, taking a Lagrangian \(L\) to \(\phi(L)\), and using the choice of grading for \(\phi\) to determine the grading for \(\phi(L)\) (the spin structure is just pulled back).
We can now follow the proof of Lemma 2.2, noting that in any given connected component of \(\operatorname{Symp}_{\operatorname{ex}}(\mathring{W},\theta)\), either all maps are gradeable or none of them are.
**Lemma 2.8**.: _The inclusion \(\operatorname{Symp}^{\operatorname{gr}}_{\operatorname{ex}}(\mathring{W}, \theta)\hookrightarrow\operatorname{Symp}^{\operatorname{gr}}(\mathring{W})\) is a weak homotopy equivalence._
Proof.: This readily follows from Lemma 2.3, by noticing, first, that for any given connected component of \(\operatorname{Symp}(\mathring{W})\), resp. \(\operatorname{Symp}_{\operatorname{ex}}(\mathring{W},\theta)\), either all maps in it are gradeable or none of them are; and that for each such gradeable component there are \(\mathbb{Z}\) components in \(\operatorname{Symp}^{\operatorname{gr}}(\mathring{W})\), resp. \(\operatorname{Symp}^{\operatorname{gr}}_{\operatorname{ex}}(\mathring{W},\theta)\).
We then immediately get the following:
**Corollary 2.9**.: _The group \(\pi_{0}\operatorname{Symp}^{\operatorname{gr}}(\mathring{W})\) acts on \(\mathcal{W}(W,\theta)\)._
The natural module action of symplectic cohomology on wrapped Floer cohomology shows that \(\mathcal{W}(W)\) admits the structure of a category linear over \(SH^{0}(W)\). From the discussion above, we immediately get that graded symplectomorphisms act on \(\mathcal{W}(W)\) by \(SH^{0}(W)\)-linear equivalences.
## 3. The conifold smoothing
### Descriptions of the conifold smoothing \(X\)
Consider \(\mathbb{C}^{2}\times\mathbb{C}^{2}\times\mathbb{C}^{*}\) with co-ordinates \((u_{1},v_{1},u_{2},v_{2},z)\), equipped with its standard symplectic form \(\omega=d\theta\). The _conifold smoothing_\(X\) is the symplectic submanifold defined by the equations
\[X=\{u_{1}v_{1}=z-1,\ u_{2}v_{2}=z+1\}. \tag{3.1}\]
It is the fibre \(X=X_{1,-1}\) over \((+1,-1)\in\ (\mathbb{C}^{*}\times\mathbb{C}^{*})\backslash\Delta\) of a family of symplectic submanifolds
\[X_{a,b}=\{u_{1}v_{1}=z-a,\ u_{2}v_{2}=z-b\}. \tag{3.2}\]
Let \(\pi:X_{a,b}\to\mathbb{C}^{*}\) denote the projection to the \(z\) co-ordinate. This is a Morse-Bott-Lefschetz fibration, meaning that \(d\pi\) has transversely non-degenerate critical submanifolds. The smooth fibres are isomorphic to \(\mathbb{C}^{*}\times\mathbb{C}^{*}=T^{*}T^{2}\), and there are singular fibres \((\mathbb{C}\vee\mathbb{C})\times\mathbb{C}^{*}\) over \(a\) and \(b\) respectively.
**Lemma 3.1**.: _For fixed \(a,b\), symplectic parallel transport for the fibration \(\pi:X_{a,b}\to\mathbb{C}^{*}\) is globally well-defined._
Proof.: There is a Hamiltonian \(T^{2}\)-action on \(X_{a,b}\), rotating the \((u_{i},v_{i})\)-planes, and \(\pi\) is a \(T^{2}\)-equivariant map. Given any symplectic fibration with a fibrewise Hamiltonian action of a Lie group \(G\), each component of the moment map for the \(G\)-action is preserved by symplectic parallel transport (cf. [1, Section 5] or [15, Lemma 4.1]). In the case at hand, the moment map is fibrewise proper; the result follows.
**Lemma 3.2**.: _Any matching path \(\gamma\) between the two critical values in the base \(\mathbb{C}^{*}\) of the Morse-Bott-Lefschetz fibration defines a Lagrangian sphere \(S_{\gamma}\) in \(X\)._
Proof.: Paths emanating from \(+1\in\mathbb{C}^{*}\), resp. \(-1\in\mathbb{C}^{*}\) have associated Morse-Bott thimbles \(S^{1}\times D^{2}\), respectively \(D^{2}\times S^{1}\). These restrict to Lagrangian \(T^{2}\)s in nearby fibres. The sphere \(S_{\gamma}\) is then presented via the standard genus one Heegaard splitting of \(S^{3}\).
**Lemma 3.3**.: \(X_{a,b}\) _is naturally a Weinstein manifold, with compact core \(S^{3}\cup_{\rm Hopf}S^{3}\)._
Proof.: It suffices to consider \(X=X_{1,-1}\). Reverse Liouville flow on each \(T^{*}T^{2}\) fibre retracts it to the zero-section \(\{|u_{i}|=|v_{i}|\}\), and retracts the \(\mathbb{C}^{*}\) base to the unit circle in the \(z\)-plane. This shows that \(X\) retracts to the union \(S_{0}\cup S_{1}\), where \(S_{i}=S_{\gamma_{i}}\), and the paths \(\gamma_{0}\) and \(\gamma_{1}\) form the arcs of the unit circle in the lower and upper half-planes of \(\mathbb{C}^{*}_{z}\) respectively.
It follows that \(X\) is the Weinstein completion of an open neighbourhood of this compact core, i.e. of a Weinstein domain obtained by plumbing two copies of the disc cotangent bundle \(D^{*}S^{3}\) along a Hopf link.
An affine variety determines a (complete) Weinstein symplectic manifold canonically up to exact symplectomorphism [11, Section 4b]. We will give two further descriptions of \(X\) by giving two further descriptions of the affine variety (3.1).
Recall that if \(Z\subset\mathbb{C}^{2}\) is a Zariski open subset and \(f:Z\to\mathbb{C}\) is a regular function, the _spinning_ of \(f\) is the \(3\)-fold
\[\{(x,y,u,v)\in Z\times\mathbb{C}^{2}\,|\,f(x,y)=uv\}. \tag{3.3}\]
This is a fibration over \(Z\) by affine conics \(\mathbb{C}^{*}\), with singular fibres \(\mathbb{C}\vee\mathbb{C}\) along \(f^{-1}(0)\subset Z\).
**Lemma 3.4**.: \(X\) _can be obtained by spinning \(Z=\mathbb{C}^{2}\backslash\{xy=1\}\) along the map \(f(x,y)=xy-2\)._
Proof.: Let \(g:\mathbb{C}^{2}\to\mathbb{C}\) be given by \(g(x,y)=xy-1=z\), so \(Z=\mathbb{C}^{2}\backslash g^{-1}(0)\) and \(g:Z\to\mathbb{C}^{*}\) has a Lefschetz singular fibre over \(z=-1\). The spinning is then a \(\mathbb{C}^{*}\)-fibration over \(\mathbb{C}^{2}\backslash\{xy=1\}\) with singular fibres over \(\{xy-1=1\}\). It is globally cut out of \(\mathbb{C}^{2}\times\mathbb{C}^{2}\times\mathbb{C}^{*}\), with co-ordinates \((x,y,u,v,z)\), by the equations \(xy=z+1\) and \(uv=z-1\).
\(Z\) contains an immersed Lagrangian \(2\)-sphere, obtained from parallel transport around the unit circle in the \(z\)-plane, cf. [Sei, Section 11]. Viewing this as a union of two Lefschetz thimbles fibred over the upper and lower half-circles with common boundary in the fibre \(z=1\), the immersed \(2\)-sphere naturally 'spins' to the compact core \(S_{0}\cup_{\operatorname{Hopf}}S_{1}\) of \(X\) described above, i.e. the matching \(3\)-spheres are obtained as \(S^{1}\)-fibrations over the thimbles with fibres degenerating over the boundary of the \(2\)-disc.
Our final description of \(X\) is as a log Calabi-Yau \(3\)-fold. We won't use this directly, but it may be of independent interest. Consider \(\mathbb{P}^{1}\times\mathbb{P}^{1}\times\mathbb{P}^{1}\), which we view as the trivial \(\mathbb{P}^{1}\times\mathbb{P}^{1}\)-bundle over the base \(\mathbb{P}^{1}\). We blow up this variety along the (disjoint) curves \(\mathbb{P}^{1}\times\{pt\}\times\{1\}\) and \(\{pt\}\times\mathbb{P}^{1}\times\{-1\}\) to obtain a variety \(\overline{X}\) with an induced map \(\overline{X}\to\mathbb{P}^{1}\). Let \(D\) denote the divisor given by the union of two smooth fibres of \(\overline{X}\to\mathbb{P}^{1}\) together with the subset of the toric boundary comprising its four components isomorphic to the first Hirzebruch surface \(\mathbb{F}_{1}\), see Figure 1.
**Lemma 3.5**.: \(X\) _is symplectically equivalent to the symplectic completion of \(\overline{X}\backslash D\)._
Proof.: In an affine chart \(\mathbb{C}_{p}\times\mathbb{C}_{q}\times\mathbb{C}_{z}^{*}\) we blow up \(\{p=1=z\}\) and \(\{q=1=-z\}\). Taking \([\lambda,\mu]\in\mathbb{P}^{1}\) and \([\tilde{\lambda},\tilde{\mu}]\in\mathbb{P}^{1}\) to be homogeneous co-ordinates on two different copies of \(\mathbb{P}^{1}\), the blow-up is given by
\[(p-1)/(z-1)=\lambda/\mu;\ (q-1)/(z+1)=\tilde{\lambda}/\tilde{\mu}.\]
Letting \(u_{1}=q-1\), \(u_{2}=p-1\), \(v_{1}=\tilde{\mu}/\tilde{\lambda}\) and \(v_{2}=\mu/\lambda\) gives
\[u_{1}v_{1}=z+1,\ u_{2}v_{2}=z-1;\]
Figure 1. The toric compactification of \(X\); blow up the thickened black edges on the cube for \(\mathbb{P}^{1}\times\mathbb{P}^{1}\times\mathbb{P}^{1}\). This slices off wedges of the corresponding moment polytope. Two of the four \(\mathbb{F}_{1}\)-boundary components of the result have been shaded.
since \(z\in\mathbb{C}^{*}\) none of \(u_{i},v_{j}\) can be infinite. Now remove the irreducible divisor from the projective blow-up where any of the \(u_{i},v_{j}\) co-ordinates do become infinite. This meets the general fibre in its toric boundary; each special fibre is again toric, isomorphic to \((\mathbb{P}^{1}\times\mathbb{P}^{1})\cup_{\mathbb{P}^{1}\times\{pt\}}(\mathbb{P }^{1}\times\mathbb{P}^{1})\), and the divisor meets this in a hexagon of lines (one does not remove the entire line of intersection of the two components, but only its intersection with the rest of the boundary).
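The coordinate manipulations in the proof above are elementary but easy to get wrong; here is a short symbolic sanity check (ours, using sympy, not part of the original text) of the two displayed identities:

```python
import sympy as sp

z, lam, mu, lamt, mut = sp.symbols('z lambda mu tilde_lambda tilde_mu', nonzero=True)

# Blow-up relations from the proof:
#   (p - 1)/(z - 1) = lambda/mu,   (q - 1)/(z + 1) = tilde_lambda/tilde_mu
p_minus_1 = (z - 1) * lam / mu    # p - 1
q_minus_1 = (z + 1) * lamt / mut  # q - 1

# Substitutions u1 = q - 1, u2 = p - 1, v1 = tilde_mu/tilde_lambda, v2 = mu/lambda
u1, v1 = q_minus_1, mut / lamt
u2, v2 = p_minus_1, mu / lam

assert sp.simplify(u1 * v1 - (z + 1)) == 0  # u1 v1 = z + 1
assert sp.simplify(u2 * v2 - (z - 1)) == 0  # u2 v2 = z - 1
```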
### Exact Lagrangian submanifolds
As remarked previously in Lemma 3.2, any matching path \(\gamma\) between the two critical values in the base of the Morse-Bott-Lefschetz fibration \(X\to\mathbb{C}^{*}\) defines a Lagrangian sphere \(S_{\gamma}\) in the total space \(X\). We will later be particularly interested in the collection of matching spheres \(\{S_{i}\}_{i\in\mathbb{Z}}\) corresponding to the matching paths given in Figure 2.
Suppose we are given an embedded \(S^{1}\subset\mathbb{C}^{*}\backslash\{-1,1\}\) in the base of the Morse-Bott-Lefschetz fibration, avoiding the critical values. By similar considerations to those in Lemmas 3.1 and 3.2, parallel transporting the \(T^{2}\) vanishing cycle about the \(S^{1}\) gives a Lagrangian \(T^{3}\). This \(T^{3}\) will be exact precisely when the \(S^{1}\) in the base is Hamiltonian isotopic to the unit circle in \(\mathbb{C}^{*}\). (Recall here that we are using the standard Kahler form for \(\mathbb{C}^{*}\) as an affine manifold.) For any choice of such circle enclosing all of \(\{-1,0,1\}\), call the associated exact torus \(T\).
### Gradings
As \(X\) is an affine complete intersection in Euclidean space, we have that \(2c_{1}(X)=0\), and the isomorphism classes of gradings on \(X\) form an affine space over \(H^{1}(X;\mathbb{Z})\cong\mathbb{Z}\). Our convention is that throughout this paper we work with a fixed, distinguished grading for \(X\): the unique one for which the exact Lagrangian torus \(T\) defined above has Maslov class zero (this will also be true of any other tori fibred over \(S^{1}\)s containing \(0\) in the base of the Morse-Bott-Lefschetz fibration).
We now want to fix gradings for the \(S_{i}\). Fix an arbitrary grading for \(S_{0}\). For \(i\neq 0\), we want to grade \(S_{i}\) as follows: first, note that we can make fibrewise Morse perturbations such that \(S_{0}\) and \(S_{i}\) intersect transversally in \(4|i|\) points. One can check that there is a choice of grading for \(S_{i}\) such that: for \(i>0\), as generators for the Floer complex \(CF^{*}(S_{0},S_{-i})\), there are \(i+1\) intersection points with grading zero; \(2i\) points with grading one; and \(i-1\) points with grading two; and for \(i<0\), the same is true for \(CF^{*}(S_{i},S_{0})\).
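As a quick consistency check (ours, not part of the original text): for \(i>0\) these generator counts sum to the geometric intersection number, while the Euler characteristic of the complex vanishes, consistent with the vanishing of the algebraic intersection numbers of the matching spheres used in the \(K\)-theoretic discussion below:

\[(i+1)+2i+(i-1)=4|i|,\qquad\chi\left(CF^{*}(S_{0},S_{-i})\right)=(i+1)-2i+(i-1)=0.\]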
**Remark 3.6**.: These gradings are chosen with mirror symmetry in mind: \(S_{i}\) will be mirror to \(\mathcal{O}_{C}(-i)\) for a fixed \((-1,-1)\) curve \(C\) in a threefold \(Y\), see Section 5.
We fix an arbitrary grading on the Lagrangian torus \(T\).
Figure 2. Lagrangian matching spheres \(S_{i}\) (with matching paths \(\gamma_{i}\)) and Lagrangian discs \(L_{i}\) (see Theorem 5.1).
### Symplectomorphisms of \(X\): first considerations
Let \((\mathring{X},\theta)\) be a Liouville domain such that \((X,\theta)\) is its associated completion. Following the notation in Section 2.2, let \(\operatorname{Symp}(\mathring{X})\) denote the group of all symplectomorphisms of \(\mathring{X}\) equipped with the \(C^{\infty}\) topology, and let \(\operatorname{Symp}_{\operatorname{ex}}(\mathring{X},\theta)\) be the subgroup of exact symplectomorphisms. We define graded versions \(\operatorname{Symp}^{\operatorname{gr}}(\mathring{X})\) and \(\operatorname{Symp}_{\operatorname{ex}}^{\operatorname{gr}}(\mathring{X})\) similarly.
By Lemma 2.8, the inclusion \(\operatorname{Symp}_{\operatorname{ex}}^{\operatorname{gr}}(\mathring{X}, \theta)\hookrightarrow\operatorname{Symp}^{\operatorname{gr}}(\mathring{X})\) is a weak homotopy equivalence. We define \(\operatorname{Symp}^{\operatorname{gr}}(X)\) to mean \(\operatorname{Symp}^{\operatorname{gr}}(\mathring{X})\), and \(\operatorname{Symp}_{\operatorname{ex}}^{\operatorname{gr}}(X)\) to mean \(\operatorname{Symp}_{\operatorname{ex}}^{\operatorname{gr}}(\mathring{X},\theta)\). (Similarly for ungraded versions.) For any other Liouville domain \((\mathring{X}^{\prime},\theta)\) with completion \((X,\theta)\), there is a natural weak homotopy equivalence \(\operatorname{Symp}^{\operatorname{gr}}(\mathring{X})\to\operatorname{Symp}^ {\operatorname{gr}}(\mathring{X}^{\prime})\), and similarly for exact maps. In particular, the group \(\pi_{0}\operatorname{Symp}^{\operatorname{gr}}(X)\) is independent of choices. By Corollary 2.9, \(\pi_{0}\operatorname{Symp}^{\operatorname{gr}}(X)\) acts on the Fukaya category \(\mathcal{W}(X)\); this readily restricts to an action on \(\mathcal{F}(X)\).
Let \(\operatorname{Symp}_{c}(X)\) denote the group of compactly supported symplectomorphisms of \(X\).
**Lemma 3.7**.: _Any element \(\phi\in\operatorname{Symp}_{c}(X)\) is strongly exact: there exists a compactly supported function \(f\) such that \(\phi^{*}\theta=\theta+df\), where \(\theta\) is the Liouville form on \(X\)._
Proof.: As \(\phi\) is a symplectomorphism, \(\phi^{*}\theta-\theta\) is a closed one-form; moreover, away from a compact set, \(\phi\) is the identity, and so \(\phi^{*}\theta-\theta\) vanishes there. This means it defines a class in \(H^{1}_{c}(X;\mathbb{R})\). By Poincare duality, this is isomorphic to \(H_{5}(X;\mathbb{R})\), which vanishes as \(X\) has the homotopy type of a 3-cell complex. Thus \(\phi^{*}\theta-\theta=df\) for some function \(f\), locally constant outside the support of \(\phi\). Finally, as the boundary at infinity of \(X\) is connected, we see that we can assume that \(f\) vanishes outside a compact set.
**Lemma 3.8**.: _Any element \(\phi\in\operatorname{Symp}_{c}(X)\) is gradeable. Moreover, it admits a preferred grading; this determines a splitting of the forgetful map \(\operatorname{Symp}_{c}^{\operatorname{gr}}(X)\to\operatorname{Symp}_{c}(X)\)._
Proof.: See [10, Remark 2.5]. As \(\phi\) has compact support, the obstruction to grading it is a map \(h\in[X,\mathbb{C}^{*}]\) such that \(h\equiv 1\) outside a compact set, i.e. a class in \(H^{1}_{c}(X;\mathbb{Z})\). As this vanishes, any such \(\phi\) is gradeable. Finally, as noted in Section 2.3, any gradeable compactly supported symplectomorphism has a distinguished grading: the one which is the identity on \(\widetilde{\operatorname{Lag}}(X)\) outside a compact set.
It is then immediate that \(\pi_{0}\operatorname{Symp}_{c}(X)\) acts on \(\mathcal{W}(X)\) and \(\mathcal{F}(X)\).
There is a well-known source of compactly supported symplectomorphisms: for any matching path \(\gamma\) in the base of the Morse-Bott-Lefschetz fibration, the Dehn twist \(\tau_{S_{\gamma}}\in\pi_{0}\operatorname{Symp}_{c}X\). By [10, Lemma 4.13], \(\tau_{S_{\gamma}}\) is the monodromy associated to a full right-handed twist in the path \(\gamma\subset\mathbb{C}^{*}\). More precisely, suppose we parametrise the full right-handed twist as a one-parameter family of compactly supported diffeomorphisms of \(\mathbb{C}^{*}\), giving a map \(\rho:[0,1]\times\mathbb{C}^{*}\to\mathbb{C}^{*}\); then \(\tau_{S_{\gamma}}\) can be interpreted as the monodromy of the family \(X_{\rho(t,-1),\rho(t,1)}\), \(t\in[0,1]\), with the notation from Equation 3.2. We will revisit this in Lemma 3.10.
### The \(\operatorname{PBr}_{3}\) action
We start with some generalities on the pure braid group \(\operatorname{PBr}_{3}\). We will usually think of it as a subgroup of the fundamental group of the space of configurations of two points in \(\mathbb{C}^{*}\), with basepoint \(\{-1,1\}\subset\mathbb{C}^{*}\); equivalently, it's naturally isomorphic to \(\pi_{1}((\mathbb{C}^{*})^{2}\backslash\Delta)\).
By [11, Theorem 2.3], we have a presentation for \(\operatorname{PBr}_{3}\) with three generators \(R_{1},R_{2}\) and \(R_{3}\), given by the full twists described in Figure 3, and the sole relation
\[R_{1}R_{2}R_{3}=R_{2}R_{3}R_{1}=R_{3}R_{1}R_{2}.\]
The element \(R_{1}R_{2}R_{3}\) generates the centre \(Z(\mathrm{PBr}_{3})\cong\mathbb{Z}\). Geometrically, it corresponds to an inverse Dehn twist in a boundary \(S^{1}\).
Quotienting by the centre, we get isomorphisms
\[\mathrm{PBr}_{3}\simeq\mathrm{PBr}_{3}\,/Z(\mathrm{PBr}_{3})\times Z(\mathrm{PBr }_{3})\simeq(\mathbb{Z}*\mathbb{Z})\times\mathbb{Z}.\]
This decomposition as a direct product of groups is of course not unique: for instance we could have \((\mathbb{Z}\langle R_{i}\rangle*\mathbb{Z}\langle R_{j}\rangle)\times\mathbb{Z} \langle R_{1}R_{2}R_{3}\rangle\) for any \(i\neq j\).
Given any matching path \(\gamma\) between \(-1\) and \(+1\) in \(\mathbb{C}^{*}\), consider the full twist \(t_{\gamma}\in\mathrm{PBr}_{3}\). We want to characterise the subgroup generated by these twists.
**Proposition 3.9**.: _Let \(\mathrm{PBr}_{3}^{c}\triangleleft\mathrm{PBr}_{3}\) be the subgroup of \(\mathrm{PBr}_{3}\) generated by the twists \(t_{\gamma}\). Then: (i) \(\psi\in\mathrm{PBr}_{3}\) lies in \(\mathrm{PBr}_{3}^{c}\) if and only if the winding number of 1 about 0 and the winding number of \(-1\) about 0 are both zero. In other words,_
\[\mathrm{PBr}_{3}^{c}=\ker\{\mathrm{PBr}_{3}\xrightarrow{(\text{forget }-1,\ \text{forget }1)}\mathrm{PBr}_{2}\times\mathrm{PBr}_{2}\}\]
_where \(\mathrm{PBr}_{2}\) is thought of as the fundamental group of the configuration space of one point in \(\mathbb{C}^{*}\), based at \(\{1\}\) for the first factor and at \(\{-1\}\) for the second. (ii) We have an isomorphism \(\mathrm{PBr}_{3}^{c}\cong\mathbb{Z}^{*\infty}\), with generators the full twists \(t_{i}\) in the matching paths \(\gamma_{i}\) for \(S_{i}\) from Figure 2._
Proof.: Given any \(\gamma\), it is immediate that \(t_{\gamma}\in\ker\{\mathrm{PBr}_{3}\xrightarrow{(\text{forget }-1,\ \text{forget }1)}\mathrm{PBr}_{2}\times\mathrm{PBr}_{2}\}\), and so the inclusion one way is clear.
Now suppose that we're given a pure braid
\[\psi\in K=\ker\{\mathrm{PBr}_{3}\xrightarrow{(\text{forget }-1,\ \text{forget }1)}\mathrm{PBr}_{2}\times\mathrm{PBr}_{2}\}.\]
To analyse this, first consider simply forgetting the strand at \(1\). This gives a short exact sequence of groups:
\[0\to\mathbb{Z}*\mathbb{Z}\to\mathrm{PBr}_{3}\xrightarrow{\text{\tiny forget 1}}\mathbb{Z}\to 0\]
where to see that the kernel is a \(\mathbb{Z}*\mathbb{Z}\), think about pulling the first and second strands (corresponding to \(-1\) and \(0\) with our current choices) taut. The kernel now presents as the subgroup of \(\pi_{1}\operatorname{Conf}(\mathbb{C}^{*},2)\) generated by pushing the third point around the second point; and pushing the third point around the first point, i.e. \(R_{2}\) and \(R_{3}\) in the notation of Figure 3. This is naturally the fundamental group of the wedge of two \(S^{1}\)s.
Putting things together, we have
\[K=\ker\{\mathbb{Z}\langle R_{2}\rangle*\mathbb{Z}\langle R_{3}\rangle \xrightarrow{\text{\tiny forget }-1}\mathrm{PBr}_{2}\}\]
Figure 3. The generators for \(\mathrm{PBr}_{3}\)

A pure braid \(\psi\in\mathbb{Z}\langle R_{2}\rangle*\mathbb{Z}\langle R_{3}\rangle\) is in \(K\) if and only if the exponent sum of \(R_{2}\) in the abelianisation of \(\mathbb{Z}\langle R_{2}\rangle*\mathbb{Z}\langle R_{3}\rangle\) vanishes. To conclude, consider the infinite cyclic cover of the wedge of two circles with group of deck transformations \(\mathbb{Z}\langle R_{2}\rangle\). The image of \(\pi_{1}\) of this cover in \(\mathbb{Z}\langle R_{2}\rangle*\mathbb{Z}\langle R_{3}\rangle\) is \(K\); we now see that it is a \(\mathbb{Z}^{*\infty}\) with generators \(R_{2}^{-i}R_{3}R_{2}^{i}=t_{i}\), \(i\in\mathbb{Z}\).
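The kernel membership criterion is simple enough to check mechanically; the following toy verification (ours, not part of the original text) confirms that each \(t_{i}=R_{2}^{-i}R_{3}R_{2}^{i}\) has vanishing \(R_{2}\)-exponent sum, representing free-group words as lists of (generator, exponent) pairs:

```python
def exponent_sum(word, gen):
    """Total exponent of `gen` in a free-group word written as a list of
    (generator, exponent) pairs, e.g. [("R2", -3), ("R3", 1), ("R2", 3)]."""
    return sum(e for g, e in word if g == gen)

# t_i = R2^{-i} R3 R2^{i}: the R2-exponent sum vanishes for every i,
# so each t_i lies in the kernel K described in the proof.
for i in range(-5, 6):
    t_i = [("R2", -i), ("R3", 1), ("R2", i)]
    assert exponent_sum(t_i, "R2") == 0
```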
Mapping \(R_{2}^{-i}R_{3}R_{2}^{i}\) to the Dehn twist \(\tau_{S_{i}}\) gives a map \(\operatorname{PBr}_{3}^{c}\to\pi_{0}\operatorname{Symp}_{c}X\); this is compatible with the description of \(\tau_{S_{i}}\) as the monodromy of the full twist in \(\gamma_{i}\) from [13, Lemma 4.13]. We will now upgrade this to a representation of the whole of \(\operatorname{PBr}_{3}\) as symplectomorphisms; in order to do this, we have to enlarge the group we are working in to include non-compactly supported exact symplectomorphisms.
**Lemma 3.10**.: _There is a map_
\[\operatorname{PBr}_{3}\to\pi_{0}\operatorname{Symp}_{\operatorname{ex}}X \tag{3.4}\]
_extending the composition \(\operatorname{PBr}_{3}^{c}\to\pi_{0}\operatorname{Symp}_{c}X\to\pi_{0} \operatorname{Symp}_{\operatorname{ex}}X\). This is compatible with the action of \(\operatorname{PBr}_{3}\) on the collection of matching paths: if \(\gamma\) is a matching path, \(R\in\operatorname{PBr}_{3}\), and \(\gamma^{\prime}=R\cdot\gamma\), then the image of \(R\) in \(\pi_{0}\operatorname{Symp}_{\operatorname{ex}}X\) takes \(S_{\gamma}\) to \(S_{\gamma^{\prime}}\) (up to Hamiltonian isotopy)._
Proof.: Consider \(\mathbb{C}^{4}\times\mathbb{C}^{*}\times(\mathbb{C}^{*})^{2}\) with coordinates \((u_{1},v_{1},u_{2},v_{2},z,a,b)\) and its standard Kahler form. Consider the smooth affine subvariety
\[\mathcal{X}=\{u_{1}v_{1}=z-a,u_{2}v_{2}=z-b\}. \tag{3.5}\]
Projection to the final two coordinates \((a,b)\) defines a symplectic fibre bundle \(\mathcal{X}\to(\mathbb{C}^{*})^{2}\). The fibre over \((a,b)\in(\mathbb{C}^{*})^{2}\backslash\Delta\) is the space \(X_{a,b}\) from Equation 3.2, and the family has three-fold ordinary double points along \(\Delta=\{a=b\}\).
Now on the one hand, \(\pi_{1}((\mathbb{C}^{*})^{2}\backslash\Delta)\) is naturally the pure braid group \(\operatorname{PBr}_{3}\). On the other hand, given any path \(\gamma:[0,1]\to(\mathbb{C}^{*})^{2}\backslash\Delta\) in the smooth locus, we can construct parallel transport maps over \(\gamma\) which are globally defined and strongly exact, by the argument of [16, Lemma 2.2]. Symplectic monodromy then defines the representation (3.4). The fact that the fibres over points of \(\Delta\) have a single ordinary double point implies that the monodromy around a path which is a meridian to \(\Delta\) (oriented anticlockwise as the boundary of a disc positively transverse to \(\Delta\) at one point) is a Dehn twist (see also [13, Lemma 4.13]). This implies that (3.4) extends the previously defined \(\operatorname{PBr}_{3}^{c}\to\pi_{0}\operatorname{Symp}_{c}X\). It is then standard, cf. [10], that the action is compatible with the natural action on matching paths.
**Remark 3.11**.: The proof readily gives a stronger result: we have a natural map \(\operatorname{PBr}_{3}\to\pi_{0}\operatorname{Symp}_{\partial}X\), where \(\operatorname{Symp}_{\partial}(X)\) denotes the group of strongly exact symplectomorphisms, i.e. diffeomorphisms \(\phi\) of \(X\) such that \(\phi^{*}\theta=\theta+df\) for a compactly supported function \(f\), where \(\theta\) is the Liouville form on \(X\). We will not need this for our purposes.
**Remark 3.12**.: Let us justify the notation \(\operatorname{PBr}_{3}^{c}\). Using the Morse-Bott-Lefschetz perspective, given any element of \(\operatorname{PBr}_{3}\), we could try to build a symplectomorphism of \(X\) by first getting fibrewise symplectomorphisms via parallel transport, and then upgrading to a symplectomorphism of the total space. (For technical reasons we use a different set-up in the present work.) For what elements \(\psi\in\operatorname{PBr}_{3}\) does parallel transport induce the identity for fibres near infinity? Consider some arc \(\tilde{\gamma}\) from \(+\infty\) to \(0\), avoiding the marked points. We want to know when the monodromy induced by the concatenated path \(\tilde{\gamma}\#(-\psi\tilde{\gamma})\) is the identity. Now notice that this is equivalent to asking that \(\tilde{\gamma}\#(-\psi\tilde{\gamma})\) have winding number \(0\) about both \(-1\) and \(+1\), which is in turn equivalent to \(\psi\) being in the kernel of the two forgetful maps to \(\operatorname{PBr}_{2}\times\operatorname{PBr}_{2}\).
Let \(W^{2n}\) be a symplectic manifold with \(c_{1}(W)=0\), and fix a compatible almost complex structure \(J\). A choice of holomorphic volume form \(\Theta_{J}\in\Gamma(\Lambda^{n}_{\mathbb{C}}T^{*}W)\) defines a distinguished fibrewise universal cover \(\widetilde{\operatorname{Lag}}(W)\to\operatorname{Lag}(W)\) of the fibre bundle \(\operatorname{Lag}(W)\to W\) whose fibre is the Lagrangian Grassmannian \(Gr_{Lag}(T_{w}W)\cong U(n)/O(n)\). Explicitly, we have a map
\[\Theta_{J}:\operatorname{Lag}(W)\to S^{1},\qquad(v_{1},\dots,v_{n})\mapsto \Theta_{J}(v_{1}\wedge\dots\wedge v_{n})/|\Theta_{J}(v_{1}\wedge\dots\wedge v_{ n})|\]
and we consider the bundle \(\widetilde{\operatorname{Lag}}(W)\to\operatorname{Lag}(W)\) whose total space is the set of pairs \((\Lambda,t)\in\operatorname{Lag}(W)\times\mathbb{R}\) with \(\Theta_{J}(\Lambda)=e^{i\pi t}\).
**Lemma 3.13**.: _The map \(\operatorname{PBr}_{3}\to\pi_{0}\operatorname{Symp}_{\operatorname{ex}}(X)\) can be lifted to graded symplectomorphisms._
Proof.: The form \(d\log(z)\wedge d\log(u_{1})\wedge d\log(u_{2})\) defines a holomorphic volume form on (3.2), and so the family \(\mathcal{X}\to\operatorname{Conf}_{2}(\mathbb{C}^{*})\) over configuration space from (3.5) admits a relative holomorphic volume form. Via the preceding construction, this defines a fibration \(\mathcal{X}^{\infty}\to\mathcal{X}\) with fibre the universal cover \(\widetilde{U(n)/O(n)}\) of \(U(n)/O(n)\). A choice of Ehresmann connection for this fibration enables one to lift (not necessarily uniquely) the parallel transport maps for \(\mathcal{X}\to\operatorname{Conf}_{2}(\mathbb{C}^{*})\) to parallel transport maps for \(\mathcal{X}^{\infty}\to\operatorname{Conf}_{2}(\mathbb{C}^{*})\), which in particular shows that these symplectomorphisms admit coherent gradings.
#### Choice of gradings for the action of \(PBr_{3}\)
We now fix a preferred lift \(\operatorname{PBr}_{3}\to\pi_{0}\operatorname{Symp}_{\operatorname{ex}}^{\operatorname{gr}}(X)\), by specifying the lifts of each of the three generators \(R_{1},R_{2}\) and \(R_{3}\). First, the image of \(R_{3}\) is the Dehn twist \(\tau_{0}=\tau_{S_{0}}\); we give it the preferred grading for a compactly supported symplectomorphism. Second, we grade the image of \(R_{2}\), say \(\lambda\), by asking for \(\lambda^{i}S_{0}=S_{-i}\) as graded objects. Finally, to choose the grading for the image of \(R_{1}\), say \(\rho\), recall that \(R_{1}R_{2}R_{3}\) generates the centre of \(\operatorname{PBr}_{3}\), and corresponds to the inverse Dehn twist in a boundary parallel circle; in particular, the product \(\rho\circ\lambda\circ\tau_{S_{0}}\) takes \(S_{0}\) to itself as an ungraded object. Fixing a grading on \(\rho\) is the same as fixing the shift of \(S_{0}\) under \(\rho\circ\lambda\circ\tau_{S_{0}}\); we choose this to be \([1]\).
**Remark 3.14**.: The central map \(\rho\circ\lambda\circ\tau_{S_{0}}\) is isotopic to the identity in \(\operatorname{Symp}(X)\) (this follows from the proof of Lemma 3.10), though through maps acting non-trivially on the boundary. From a purely symplectic perspective, the grading of the central element \(\rho\circ\lambda\circ\tau_{S_{0}}\) could have been chosen to be any power of the shift. (In particular, there isn't a 'preferred' shift as we are not in \(\pi_{0}\operatorname{Symp}_{c}(X)\).) We will later see that from an HMS perspective it is natural to choose the shift \([1]\).
### \(K\)-theoretic actions of symplectomorphism groups
We have that \(K(\mathcal{W}(X))=\mathbb{Z}^{2}\) generated by two cotangent fibres \(F_{0}\) and \(F_{1}\), by for instance [GPS, Theorem 1.13] or [CRGG]. This is naturally identified with \(H_{3}(X,\partial_{\infty}X;\mathbb{Z})\), where \(\partial_{\infty}X\) denotes the conical end of \(X\). Moreover, there is a pairing
\[K(\mathcal{F}(X))\times K(\mathcal{W}(X))\to\mathbb{Z},\qquad(K,L)\mapsto \chi(HF^{*}(L,K)). \tag{3.6}\]
Evaluating this on fibres and zero-sections shows that \(\mathbb{Z}^{2}=\langle S_{0},S_{1}\rangle\to K(\mathcal{F}(X))\) is injective. (Note we don't have an explicit description for \(K(\mathcal{F}(X))\).)
**Definition 3.15**.: Let the numerical Grothendieck group of \(\mathcal{F}(X)\), \(K_{num}(\mathcal{F}(X))\), be the quotient group \(K(\mathcal{F}(X))/\ker\), where \(\ker\) is the subgroup generated by compact Lagrangians \(L\) such that \(\chi(HF^{*}(L,Z))=0\) for every object \(Z\) of \(\mathcal{W}(X)\).
The pairing (3.6) descends to a non-degenerate pairing \(K_{num}(\mathcal{F}(X))\times K(\mathcal{W}(X))\to\mathbb{Z}\). This implies that \(K_{num}(\mathcal{F}(X))=\mathbb{Z}^{2}=\langle S_{0},S_{1}\rangle.\) This is naturally identified with \(H_{3}(X;\mathbb{Z})\), and
the non-degenerate pairing
\[K_{num}(\mathcal{F}(X))\times K(\mathcal{W}(X))\to\mathbb{Z}\]
is identified with the intersection pairing
\[H_{3}(X;\mathbb{Z})\times H_{3}(X,\partial_{\infty}X;\mathbb{Z})\to\mathbb{Z}.\]
Both \(\operatorname{Symp}^{\operatorname{gr}}(X)\) and \(\operatorname{Symp}_{c}(X)\) (with preferred gradings) act on \(K(\mathcal{F}(X))\) and \(K(\mathcal{W}(X))\) compatibly with this pairing. This implies that there is an induced action on the numerical Grothendieck group. On the other hand, as the intersection pairing of \(S_{0}\) with both \(S_{0}\) and \(S_{1}\) vanishes, and similarly for \(S_{1}\), we see that the standard map \(H_{3}(X;\mathbb{Z})\to H_{3}(X,\partial_{\infty}X;\mathbb{Z})\) has zero image. The following is now immediate:
**Corollary 3.16**.: _Suppose that \(f\in\pi_{0}\operatorname{Symp}^{\operatorname{gr}}(X)\). Then \(f_{*}\) acts on \(K_{\text{num}}(\mathcal{F})\). Moreover, \(f\) induces the identity on \(K(\mathcal{W})\) if and only if it induces the identity on \(K_{\text{num}}(\mathcal{F})\)._
For arbitrary compactly supported maps, we have the following.
**Lemma 3.17**.: _Compactly supported homeomorphisms act trivially on \(H_{3}(X;\mathbb{Z})\)._
Proof.: Let \(\phi\in\operatorname{Homeo}_{c}(X)\) be a compactly supported homeomorphism. Considering the induced action on the long exact sequence for the pair \((X,\partial_{\infty}X)\), we get a commutative diagram:
(3.7)
\[
\begin{CD}
H_{3}(\partial_{\infty}X) @>>> H_{3}(X) @>>> H_{3}(X,\partial_{\infty}X)\\
@V{\phi_{*}}VV @V{\phi_{*}}VV @V{\phi_{*}}VV\\
H_{3}(\partial_{\infty}X) @>>> H_{3}(X) @>>> H_{3}(X,\partial_{\infty}X)
\end{CD}
\]
where all coefficient rings are \(\mathbb{Z}\). By assumption, \(\phi_{*}:H_{3}(\partial_{\infty}X)\to H_{3}(\partial_{\infty}X)\) is the identity. On the other hand, recall that in this case the map \(H_{3}(X)\to H_{3}(X,\partial_{\infty}X)\) has zero image. It's then immediate that \(\phi_{*}:H_{3}(X)\to H_{3}(X)\) is the identity.
Combined with Corollary 3.16, this implies that any \(f\in\pi_{0}\operatorname{Symp}_{c}X\) acts trivially on \(K_{num}(\mathcal{F})\) and \(K(\mathcal{W})\).
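To see the mechanism concretely, consider a Dehn twist: by the Picard–Lefschetz formula (the sign is immaterial here, and the computation is recorded only as an illustration),
\[
(\tau_{S_{i}})_{*}[x]=[x]\pm\big([x]\cdot[S_{i}]\big)[S_{i}]=[x]\qquad\text{for all }[x]\in H_{3}(X;\mathbb{Z}),
\]
since the intersection form vanishes identically on \(H_{3}(X;\mathbb{Z})=\langle S_{0},S_{1}\rangle\).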
Lemma 3.17 will also imply that there are infinitely many orbits of Lagrangian spheres in \(X\); see Corollary 7.2 for a more general statement. On the other hand, by the proof of Proposition 3.9, we see that all of the \(S_{\gamma}\) are contained in the orbits of the \(S_{i}\), \(i\in\mathbb{Z}\), under the action of the group generated by Dehn twists in the \(S_{i}\).
**Remark 3.18**.: We'll see in Section 5 that the subgroup \(\mathbb{Z}^{*\infty}\) which maps to \(\pi_{0}\operatorname{Symp}_{c}X\) actually split injects into it. This will use mirror symmetry. If one only wants to show that the action of \(\mathbb{Z}^{*\infty}\) is faithful, one could do this instead by using a generalisation of Khovanov-Seidel's work on faithful braid group actions, see Proposition 6.4.
## 4. The conifold resolution
Consider the threefold ordinary double point \(\{xy=(1+w)(1+t)\}\subset\mathbb{C}^{4}\). This has a small resolution, which is the total space of \(\mathcal{O}(-1)\oplus\mathcal{O}(-1)\to\mathbb{P}^{1}\). Let \(D\) be the pullback under the small resolution of the divisor \(\{wt=0\}\subset\mathbb{C}^{4}\). Let \(Y\) be the open subset of \(\mathcal{O}(-1)\oplus\mathcal{O}(-1)\) given by the complement of \(D\). Let \(C\subset\mathcal{O}(-1)\oplus\mathcal{O}(-1)\) denote the zero-section, which lies in the complement of the divisor \(D\) and hence in \(Y\).
A brief calculation shows that \(\operatorname{Pic}(Y)\cong\mathbb{Z}\). This is generated by a line bundle \(\mathcal{L}\) such that \(\mathcal{O}_{C}(i)\otimes\mathcal{L}\cong\mathcal{O}_{C}(i+1)\) (by the push-pull formula).
Let \(R=\Gamma(\mathcal{O}_{Y})\). We have that \(\operatorname{Spec}R\cong\{xy=(1+w)(1+t)\}\backslash\{wt=0\}\subset\mathbb{C} ^{4}\) is the blow-down of \(Y\).
**Remark 4.1**.: Start with \(\mathbb{P}^{1}\times\mathbb{P}^{1}\times\mathbb{P}^{1}\); blow up \(Z=\mathbb{P}^{1}\times\{-1\}\times\{0\}\cup\{-1\}\times\mathbb{P}^{1}\times\{\infty\}\), and let \(F\) be the proper transform of the toric divisor for \((\mathbb{P}^{1})^{3}\). Then one can check that the variety \(Y\) is isomorphic to \(\operatorname{Bl}_{Z}(\mathbb{P}^{1})^{3}\backslash Z\). In particular, \(Y\) is log Calabi-Yau.
We write \(D(Y)\) for the bounded derived category of \(Y\).
**Definition 4.2**.: The category \(\mathcal{D}\subset D(Y)\) is the full subcategory of \(D(Y)\) of complexes whose cohomology sheaves are (set-theoretically) supported on \(C\subset Y\).
**Remark 4.3**.: Definition 4.2 is taken from [11]. The paper [10] defines \(\mathcal{D}\) as the derived category of the abelian subcategory of \(\operatorname{Coh}(Y)\) of sheaves (set-theoretically) supported on \(C\). This yields the same category as Definition 4.2 by [14, Lemma 0AEF]. The alternative definition makes it manifest that \(\mathcal{D}\) is determined by a formal neighbourhood of \(C\) in \(Y\). On the other hand, the formal neighbourhood of a flopping curve is determined by its length [14], where a flopping curve has normal bundle \(\mathcal{O}(-1)\oplus\mathcal{O}(-1)\) if and only if it has length one. So the category \(\mathcal{D}\) depends only on \(C\) being a \((-1,-1)\)-curve.
From Remark 4.3, or directly, one sees that \(\mathcal{D}\) is generated by \(\mathcal{O}_{C}\) and \(\mathcal{O}_{C}(-1)\), so the Grothendieck group \(K(\mathcal{D})\) of \(\mathcal{D}\) is \(\mathbb{Z}^{2}\), generated by \([\mathcal{O}_{C}],[\mathcal{O}_{C}(-1)]\).
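It can be helpful to record the classes of all the \(\mathcal{O}_{C}(i)\) in this basis; the following computation uses only the point sequence \(0\to\mathcal{O}_{C}(i-1)\to\mathcal{O}_{C}(i)\to\mathcal{O}_{p}\to 0\) on \(C\cong\mathbb{P}^{1}\), and is included for the reader's convenience:
\[
[\mathcal{O}_{C}(i)]=[\mathcal{O}_{C}]+i\big([\mathcal{O}_{C}]-[\mathcal{O}_{C}(-1)]\big)\in K(\mathcal{D}).
\]
In particular \(\otimes\mathcal{L}\) acts unipotently on \(K(\mathcal{D})\), consistent with the matrix appearing in the proof of Theorem 4.9 below.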
**Proposition 4.4**.: _Suppose \(\phi\) is an autoequivalence of \(D(Y)\). Then \(\phi\) is Fourier-Mukai, i.e. induced by an object \(P_{\phi}\in D(Y\times Y)\). Moreover, \(P_{\phi}\) is unique up to isomorphism._
Proof.: Existence of \(P_{\phi}\) is in [13, Corollary 2.16]. (See also [1, Theorem 7.13] and [16, Theorem 1.1].) Uniqueness follows from Toën [17, Corollary 8.12]; see [13, Remark 9.11].
The contraction map \(Y\to\operatorname{Spec}(R)\) gives \(D(Y)\) the structure of an \(R\)-linear category, via pullback. In particular, we can consider those autoequivalences of \(D(Y)\) which are \(R\)-linear.
**Lemma 4.5**.: _Let \(\phi\) be an \(R\)-linear autoequivalence of \(D(Y)\) and \(x\in Y\backslash C\). Then \(\phi\) preserves the skyscraper sheaf \(\mathcal{O}_{x}\) up to a shift._
Proof.: For any sheaf \(\mathcal{E}\in D(Y)\), one can consider the ideal
\[\operatorname{Ann}(\mathcal{E}):=\{r\in R\,|\,r\cdot\mathcal{E}=0\}.\]
By \(R\)-linearity, \(\operatorname{Ann}\mathcal{O}_{x}=\operatorname{Ann}(\phi\mathcal{O}_{x})\). On the other hand, note that \(\operatorname{Ann}\mathcal{O}_{x}\) is simply the maximal ideal of functions vanishing at \(x\). Thus \(\phi\mathcal{O}_{x}\) is also supported at \(x\), and hence by [11, Lemma 4.5] agrees with \(\mathcal{O}_{x}\) up to a shift.
To simplify notation, we will denote \(Y\times_{\operatorname{Spec}R}Y\) by \(Y\times_{R}Y\).
**Lemma 4.6**.: \(R\)_-linear autoequivalences of \(D(Y)\) preserve \(\mathcal{D}\) and are induced by Fourier-Mukai kernels with support on \(Y\times_{R}Y\)._
Proof.: Let \(\phi\) be an \(R\)-linear autoequivalence of \(D(Y)\) with Fourier-Mukai kernel \(P_{\phi}\in D(Y\times Y)\). By Lemma 4.5, for each \(x\in Y\backslash C\), the fibre of \(\operatorname{Supp}P_{\phi}\to Y\) above \(x\) is zero dimensional. On the other hand, the fibre of \(\operatorname{Supp}P_{\phi}\to Y\) above any point is connected [11, Lemma
6.11]. Thus for any \(x\in Y\backslash C\), the fibre of \(\operatorname{Supp}P_{\phi}\to Y\) above \(x\) is simply \(\{x\}\). By [10, Lemma 3.29], this means that
\[(\operatorname{Supp}P_{\phi})|_{Y\backslash C\times Y\backslash C}=\Delta_{Y \backslash C}.\]
Using [10, Lemma 6.11] again, it follows that
\[\operatorname{Supp}P_{\phi}\subset\Delta_{Y}\cup C\times C.\]
Both parts of the lemma follow.
Let \(\operatorname{Stab}\mathcal{D}\) denote the space of Bridgeland stability conditions on \(\mathcal{D}\) [1]. We use standard notation: we will write \((Z,\{\mathcal{P}[\theta]\}_{\theta\in\mathbb{R}})\) for the stability condition with central charge \(Z:K(\mathcal{D})\to\mathbb{C}\) and full additive subcategory of phase \(\theta\) given by \(\mathcal{P}[\theta]\subset\mathcal{D}\). Following [11], let \(\operatorname{Stab}_{n}\mathcal{D}\) denote the subspace of normalised Bridgeland stability conditions, i.e. stability conditions whose central charge \(Z\) satisfies \(Z([\mathcal{O}_{x}])=-1\). \(\operatorname{Stab}_{n}\mathcal{D}\) has an open subset consisting of stability conditions with the standard heart \(\mathcal{D}\cap\operatorname{Coh}Y\) and central charges of the form
\[Z_{\beta+i\zeta}(E):=-\int e^{-(\beta+i\zeta)}\operatorname{ch}E\quad\text{ for}\quad\beta+i\zeta\in A(\mathcal{D})_{\mathbb{C}}\]
where \(A(\mathcal{D})_{\mathbb{C}}\) denotes the complexified ample cone. (See [11, Lemma 4.1]; these correspond to the neighbourhood of the large volume limit.) Following [11, Definition 4.2], let \(\operatorname{Stab}_{n}^{\circ}\mathcal{D}\) be the connected component of \(\operatorname{Stab}_{n}\mathcal{D}\) containing this open subset.
By [11, Theorem 7.1], \(\operatorname{Stab}\mathcal{D}\) is connected. Moreover, we have a rescaling action of \(t\in\mathbb{C}\) on \(\operatorname{Stab}\mathcal{D}\):
\[(Z,\{\mathcal{P}[\phi]\}_{\phi\in\mathbb{R}})\mapsto(e^{-i\pi t}Z,\{\mathcal{ P}[\phi+\operatorname{Re}t]\}_{\phi\in\mathbb{R}})\]
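Specialising to \(t=1\) (a direct check from the formula above, recorded here because it is the observation used in Corollary 4.7 below):
\[
(Z,\{\mathcal{P}[\phi]\}_{\phi\in\mathbb{R}})\longmapsto(-Z,\{\mathcal{P}[\phi+1]\}_{\phi\in\mathbb{R}}),
\]
which, with the conventions above, is the action of the shift functor \([1]\).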
Assembling all this together, we get a commutative diagram whose top horizontal isomorphism is given by the rescaling action.
**Corollary 4.7**.: _Let \(\phi\) be an \(R\)-linear autoequivalence of \(D(Y)\) which preserves \(\mathcal{O}_{x}\in D(Y)\) for some \(x\in Y\backslash C\) (with no shift). Then the action of \(\phi\) on \(\operatorname{Stab}\mathcal{D}\) preserves \(\operatorname{Stab}_{n}^{\circ}\mathcal{D}\)._
Proof.: This follows from the preceding discussion, on noting that the rescaling action of \(\mathbb{Z}\subset\mathbb{C}\) agrees with the action of shifts.
**Definition 4.8**.: Let \(\operatorname{Auteq}\mathcal{D}\) be the group of \(R\)-linear autoequivalences of \(D(Y)\), and let \(\operatorname{Auteq}^{\circ}\mathcal{D}\) be the subgroup of \(R\)-linear autoequivalences preserving \(\mathcal{O}_{x}\in D(Y)\) for some point \(x\not\in C\).
To justify the notation, we'll see shortly that we could have equally well defined these groups in terms of \(R\)-linear autoequivalences of \(\mathcal{D}\).
**Theorem 4.9**.: _[_11_, 11_]_
_(i) We have an isomorphism_
\[\operatorname{Auteq}\mathcal{D}\cong(\mathbb{Z}*\mathbb{Z})\times\mathbb{Z},\]
_where the first factor has generators the spherical twist in \(\mathcal{O}_{C}(-1)\) and \(\otimes\mathcal{L}\), where \(\mathcal{L}\) generates \(\operatorname{Pic}(Y)\), and the final \(\mathbb{Z}\) factor is the grading shift._
_(ii) We have_ \(\operatorname{Auteq}^{\circ}\mathcal{D}\cong\mathbb{Z}*\mathbb{Z}\)_, again with generators the spherical twist in_ \(\mathcal{O}_{C}(-1)\) _and_ \(\otimes\mathcal{L}\)_. The action of_ \(\operatorname{Auteq}^{\circ}\mathcal{D}\) _on_ \(K(\mathcal{D})\) _induces a homomorphism to the stabiliser of_ \([\mathcal{O}_{x}]\) _in_ \(GL_{2}(\mathbb{Z})=\operatorname{Aut}K(\mathcal{D})\)_, i.e. the infinite dihedral group_ \(\mathbb{Z}\rtimes\mathbb{Z}/2\)_. Let_ \(\operatorname{Auteq}^{\circ}_{\operatorname{Tor}}\mathcal{D}\) _be its kernel. Then the image of this homomorphism is_ \(\mathbb{Z}\)_, and we then obtain a diagram with exact rows_
(4.1)
\[
\begin{CD}
1 @>>> \mathbb{Z}^{*\infty} @>>> \mathbb{Z}*\mathbb{Z} @>>> \mathbb{Z} @>>> 1\\
@. @VV{\simeq}V @VV{\simeq}V @VV{=}V @.\\
1 @>>> \operatorname{Auteq}^{\circ}_{\operatorname{Tor}}\mathcal{D} @>>> \operatorname{Auteq}^{\circ}\mathcal{D} @>>> \mathbb{Z} @>>> 1
\end{CD}
\]
_The vertical isomorphism maps the standard generators of \(\mathbb{Z}^{*\infty}\) to \(T_{\mathcal{O}_{C}(i)}\), \(i\in\mathbb{Z}\); and the standard generators of \(\mathbb{Z}*\mathbb{Z}\) to \(T_{\mathcal{O}_{C}(-1)}\) and \(\otimes\mathcal{L}\)._
Proof.: For (i), this is [13, Theorem 7.7]. For (ii), the description of \(\operatorname{Auteq}^{\circ}\mathcal{D}\) is clear. To conclude, consider \(\mathcal{L}\) a line bundle generating \(\operatorname{Pic}(Y)\cong\mathbb{Z}\). Up to possibly replacing \(\mathcal{L}\) with its dual line bundle, its action by tensor on \(K(\mathcal{D})\), with respect to the basis \(\{[\mathcal{O}_{C}]-[\mathcal{O}_{C}(-1)],[\mathcal{O}_{C}]\}\), is given by the matrix \(\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\). As \(\otimes\mathcal{L}^{i}\circ T_{\mathcal{O}_{C}}\circ\otimes\mathcal{L}^{-i}= T_{\mathcal{O}_{C}(i)}\), the rest of the claim is then immediate.
The proof shows that the \(\mathbb{Z}\) quotient on the top line of (4.1) is naturally identified with \(\operatorname{Pic}(Y)\).
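The conjugation identity invoked in the proof is a standard general fact about spherical twists, which we record for convenience:
\[
\Phi\circ T_{E}\circ\Phi^{-1}\simeq T_{\Phi(E)}\qquad\text{for any autoequivalence }\Phi\text{ and spherical object }E,
\]
applied here with \(\Phi=\otimes\mathcal{L}^{i}\) and \(E=\mathcal{O}_{C}\), using \(\mathcal{O}_{C}\otimes\mathcal{L}^{i}\cong\mathcal{O}_{C}(i)\).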
**Remark 4.10**.: For instance comparing with [12, Theorem 1.5], we see that \(\operatorname{Auteq}^{\circ}_{\operatorname{Tor}}\mathcal{D}\) could equivalently have been defined as \(R\)-linear autoequivalences _of the subcategory \(\mathcal{D}\)_ preserving \(\operatorname{Stab}^{\circ}_{n}\mathcal{D}\).
We'll use the following immediate corollary.
**Corollary 4.11**.: _Suppose that \(\phi\in\operatorname{Auteq}\mathcal{D}\) induces the identity on \(K(\mathcal{D})\). Then \(\phi\) must lie in the subgroup \(\mathbb{Z}^{*\infty}\times 2\mathbb{Z}\), where \(\mathbb{Z}^{*\infty}\subset\mathbb{Z}*\mathbb{Z}\) is the subgroup generated by spherical twists in the \(\mathcal{O}_{C}(i)\); and \(2\mathbb{Z}\) denotes even shifts._
## 5. Proof of Theorem 1.1
For an \(A_{\infty}\)-category \(\mathcal{C}\) we write \(D(\mathcal{C}):=H^{0}(\operatorname{Tw}(\mathcal{C}))\) for the cohomological category of the category of twisted complexes; this is a triangulated category in the classical sense. We remark that the group \(\operatorname{Auteq}\mathcal{C}\) of \(A_{\infty}\)-autoequivalences of \(\mathcal{C}\) admits a natural homomorphism to the group \(\operatorname{Auteq}(D(\mathcal{C}))\) of triangulated equivalences of \(D(\mathcal{C})\).
The primary mirror symmetric input to the proof of Theorem 1.1 is the following:
**Theorem 5.1**.: _[_1_, Theorem 4.2]_ _There is a quasi-equivalence of \(\mathbb{C}\)-linear triangulated categories_
\[\Upsilon:D\mathcal{W}(X)\stackrel{{\sim}}{{\longrightarrow}}D(Y) \tag{5.1}\]
_with \(\Upsilon(S_{i})=\mathcal{O}_{C}(-i)\)._
It follows that \(D(\mathcal{W})\) is in fact split-closed. By construction, the mirror equivalence identifies the subcategory \(\langle S_{0},S_{1}\rangle\) of \(D\mathcal{W}(X)\) generated by \(S_{0}\) and \(S_{1}\) with \(\mathcal{D}\); passing to \(K\)-theory,
the numerical Grothendieck group \(K_{\operatorname{num}}\mathcal{F}(X)\) is identified with \(K(\mathcal{D})\). Moreover, [11] explicitly identify the objects mirror to \(\mathcal{O}_{Y}\) and \(\mathcal{O}_{Y}(1)\): these are Lagrangian \(\mathbb{R}^{3}\)s, say \(L_{0}\) and \(L_{1}\), fibred over the base of the Morse-Bott-Lefschetz fibration, see Figure 2, cf. [11, Figure 1.2 and Theorem 1.2].
**Corollary 5.2**.: _The equivalence \(\Upsilon\) of (5.1) entwines the \(SH^{0}(X)\)-linear structure of \(D\mathcal{W}(X)\) and the \(R\)-linear structure of \(D(Y)\)._
Proof.: The existence of an isomorphism of \(\mathbb{C}\)-algebras \(SH^{0}(X)\cong\Gamma(\mathcal{O}_{Y})=R\) is proved in [11, Appendix]. Both linear structures can be viewed as the general fact that a \(k\)-linear \(dg\)-category or \(A_{\infty}\)-category \(\mathcal{C}\) is also linear over the Hochschild cohomology \(HH^{0}(\mathcal{C})\). At the cohomological level (which is all that we require) this is classical, and a chain-level version is established in [12, Section 5.1]. By [10], the wrapped category \(\mathcal{W}(X)\) is generated by \(L_{0}\) and \(L_{1}\) and is homologically smooth and non-degenerate. This means in particular that the open-closed map \(HH_{*}(\mathcal{W}(X))\to SH^{*}(X)\) is an isomorphism. Since \(\mathcal{W}(X)\) is also a Calabi-Yau category [10], the closed-open map \(SH^{*}(X)\to HH^{*}(\mathcal{W}(X))\) is also an isomorphism. The action of \(\Upsilon\) on \(HH^{*}\) therefore induces a distinguished isomorphism \(SH^{0}(X)\cong R\). The result follows.
Recall the exact, Maslov zero torus \(T\) introduced in Section 3.2, which was graded in Section 3.3.
**Lemma 5.3**.: _Let \((T,\zeta)\in\operatorname{Ob}\mathcal{W}(X)\) be any Lagrangian brane associated to \(T\). Then under the mirror symmetry isomorphisms above, \((T,\zeta)\) is mirror to \(\mathcal{O}_{x}[k]\) for some \(x\in Y\backslash C\) and shift \(k\in\mathbb{Z}\)._
Proof.: Let \(\mathcal{E}\) be the mirror to \((T,\zeta)\). By inspection, we can find Hamiltonian isotopies so that the image of \(T\) is disjoint from the sphere \(S_{0}\), respectively \(S_{1}\); and the Floer theory of \(T\) with each of \(L_{0}\) and \(L_{1}\) has a single generator, see Figure 2. By considering Floer cohomology with \(S_{0}\) and \(S_{1}\), \(\mathcal{E}\) is orthogonal to \(\mathcal{O}_{C}\) and \(\mathcal{O}_{C}(-1)\), and hence the whole of \(\mathcal{D}\). Since \(\mathcal{E}\) is orthogonal to \(\mathcal{O}_{p}\) for any \(p\in C\), \(\mathcal{E}\) has support on \(Y\backslash C\).
[11, Theorem 5.3] shows that \(\mathcal{O}_{Y}\oplus\mathcal{O}_{Y}(1)\) is a tilting object in \(D(Y)\). It is shown in the proof of [1, Theorem 7.2.1] that whenever a smooth variety \(Z\to A\) is projective over \(A\), where \(A\) is affine, and \(\mathcal{E}_{Z}\) is a tilting object for \(Z\), and \(F\in D(Z)\) has non-proper support, then \(\operatorname{Ext}^{\ell}(\mathcal{E}_{Z},F)\) has infinite rank for some \(\ell\). This implies that in our case \(\mathcal{E}\) has proper support.
Finally, the only proper subvarieties of \(Y\backslash C\) are finite unions of points. Since \((T,\zeta)\) has the self-Floer cohomology of a three-torus, the mirror \(\mathcal{E}\) is simple and has no negative self-Exts. By [15, Lemma 4.5], \(\mathcal{E}=\mathcal{O}_{x}[k]\), some \(k\in\mathbb{Z}\).
**Remark 5.4**.: Varying the local system \(\zeta\) on \(T\), one expects the family of branes \((T,\zeta)\) in \(X\) to correspond to a cluster chart \((\mathbb{C}^{*})^{3}\) on the mirror \(Y\) (suitably interpreted, to toric SYZ fibres). There are other exact Lagrangian tori in \(X\) with vanishing Maslov class, for instance fibred over an \(S^{1}\) in the base \(\mathbb{C}^{*}\) of the Morse-Bott-Lefschetz fibration which encloses one critical fibre and the puncture. Appropriate such tori do have non-trivial Floer theory with \(S_{0}\) or \(S_{1}\), for certain choices of local systems. In that sense Lemma 5.3 amounts to saying that our choice of \(T\) is picking the "correct" cluster chart. (Compare to the closely related two-dimensional situation studied in [1].)
Combining Lemma 4.5 with Lemma 5.3, we see that for any \(\phi\in\pi_{0}\operatorname{Symp}^{\operatorname{gr}}X\), \(\phi(T,\zeta)\) is equal to \((T,\zeta)\) up to a shift in \(D\mathcal{W}(X)\).
**Lemma 5.5**.: _For any \(\phi\in\pi_{0}\operatorname{Symp}_{c}X\), equipped with the preferred grading, \(\phi(T,\zeta)\) is equal to \((T,\zeta)\) in \(D\mathcal{W}(X)\)._
Proof.: Recall from Lemma 3.8 that any compactly supported symplectomorphism of \(X\) is gradeable and canonically graded. Such a map therefore acts on the set of graded Lagrangian submanifolds in \(X\). The action of a symplectomorphism on the set of objects \(\operatorname{Ob}\mathcal{W}(X)\) of the wrapped category is compatible with the action on graded Lagrangians.
Let \(\alpha\) be the angle coordinate for \(z\in\mathbb{C}^{*}\). For all \(t\in\mathbb{R}_{\geq 0}\), let \(\sigma_{t}\) be the time \(t\) flow of the symplectic vector field \(Z_{\alpha}\) which is \(\omega\)-dual to the closed one-form \(d\alpha\). (Our convention is that this is inward-pointing with respect to the base \(\mathbb{C}^{*}\).) We consider the path \(\{\sigma_{t}\circ\phi\circ\sigma_{t}^{-1}\}_{t\in\mathbb{R}}\) of graded symplectomorphisms. For sufficiently large \(t\), the map \(\phi^{\prime}:=\sigma_{t}\circ\phi\circ\sigma_{t}^{-1}\) has support in some compact set \(V\) disjoint from \(T\); moreover, we can assume that \(X\backslash V\) is path connected (and so contains both \(T\) and a conical end of \(X\)). As we are using the canonical grading for the map \(\phi^{\prime}\), we see that it must therefore fix the grading on \((T,\zeta)\). On the other hand, recall that we already know that \(\phi\) fixes \((T,\zeta)\) up to shift. The action of compactly supported symplectomorphisms on the set of graded Lagrangians only depends on the path-component in the group \(\operatorname{Symp}_{c}(X)\) with its \(C^{\infty}\)-topology. Since \(\phi\) lies in the same component of the space of graded compactly supported symplectomorphisms as \(\phi^{\prime}\), it must also fix the grading of \((T,\zeta)\).
**Definition 5.6**.: Define \(\operatorname{Symp}^{\circ}(X)\subset\operatorname{Symp}^{\operatorname{gr}}(X)\) to be the subgroup of graded symplectomorphisms which take \((T,\zeta)\) to itself in \(D\mathcal{W}(X)\).
**Lemma 5.7**.: _Consider the map \(\operatorname{PBr}_{3}\to\pi_{0}\operatorname{Symp}^{\operatorname{gr}}X\) defined earlier. Then the intersection of the image with \(\pi_{0}\operatorname{Symp}^{\circ}X\) is precisely the subgroup generated by \(\tau_{S_{0}}\) and \(\lambda\), i.e. the image of the subgroup \(\mathbb{Z}*\mathbb{Z}\) generated by \(R_{2}\) and \(R_{3}\)._
Proof.: By construction, the image of \(R_{3}\) is \(\tau_{S_{0}}\), with its preferred grading as a compactly supported map; and the image of \(R_{2}\) is \(\lambda\), a non-compactly supported map which maps \(S_{i}\) to \(S_{i+1}\) for all \(i\in\mathbb{Z}\). Now \(\lambda\) induces an \(R\)-linear autoequivalence of \(\mathcal{W}(X)\), and the mirror autoequivalence in \(\operatorname{Auteq}\mathcal{D}\) must take \(\mathcal{O}_{C}(-i)\) to \(\mathcal{O}_{C}(-i-1)\). By Theorem 4.9, the only possibility is for this map to be \(\otimes\mathcal{L}\), which fixes \(\mathcal{O}_{x}\). By Lemma 5.3, we then have \(\lambda\in\pi_{0}\operatorname{Symp}^{\circ}X\).
Finally, recall that we choose gradings on the image of \(\operatorname{PBr}_{3}\) so that the generator of the center, i.e. \(R_{1}R_{2}R_{3}\), maps to a shift by one. The claim then follows.
Any element of \(\pi_{0}\operatorname{Symp}^{\operatorname{gr}}(X)\) induces an \(SH^{0}(X)\)-linear autoequivalence of \(\mathcal{W}(X)\). Lemma 4.6, together with the HMS equivalence \(\operatorname{Auteq}_{SH^{0}}(\mathcal{W}(X))\simeq\operatorname{Auteq}_{R} (D(Y))=\operatorname{Auteq}\mathcal{D}\), then implies that the image of \(\pi_{0}\operatorname{Symp}^{\operatorname{gr}}(X)\) in \(\operatorname{Auteq}_{SH^{0}}(\mathcal{W}(X))\) preserves the subcategory \(\langle S_{0},S_{1}\rangle\) (equivalent under mirror symmetry to \(\mathcal{D}\)). Moreover, from our definitions, any element of \(\pi_{0}\operatorname{Symp}^{\circ}(X)\) lands not only in \(\operatorname{Auteq}(\mathcal{D})\) but in fact in the subgroup \(\operatorname{Auteq}^{\circ}(\mathcal{D})\). Altogether, this gives a diagram:
\[\begin{CD}\pi_{0}\operatorname{Symp}_{c}(X)@>{}>{}>\pi_{0}\operatorname{Symp}^{ \circ}(X)\\ @V{}V{}V@V{}V{}V\\ \operatorname{Auteq}_{\operatorname{Tor}}(\langle S_{0},S_{1}\rangle)@>{}>{}> \operatorname{Auteq}(\langle S_{0},S_{1}\rangle)\\ @V{\simeq}V{}V@V{\simeq}V{}V\\ \operatorname{Auteq}_{\operatorname{Tor}}^{\circ}(\mathcal{D})@>{}>{}> \operatorname{Auteq}^{\circ}(\mathcal{D})\end{CD} \tag{5.2}\]
with the bottom vertical arrows the HMS equivalences.
Recall from Corollary 3.16 that \(\pi_{0}\operatorname{Symp}^{\operatorname{gr}}(X)\), and in particular \(\pi_{0}\operatorname{Symp}^{\circ}(X)\), acts on the numerical \(K\)-theory \(K_{\operatorname{num}}(\mathcal{F}(X))\).
Passing to the mirror side, consider the image of \(\pi_{0}\operatorname{Symp}^{\operatorname{gr}}(X)\) in \(\operatorname{Auteq}^{\circ}\mathcal{D}\). As the image of \(\operatorname{Auteq}^{\circ}\mathcal{D}\) in \(GL(K(\mathcal{D}))\) lies in a \(\mathbb{Z}\) subgroup, it follows that the image of \(\pi_{0}\operatorname{Symp}^{\operatorname{gr}}(X)\) in \(GL(K_{\operatorname{num}}\mathcal{F}(X))\) lies in the mirror \(\mathbb{Z}\) subgroup. We are now ready to state a categorical version of our main theorem.
**Theorem 5.8**.: _(i) For graded symplectomorphisms, we have_
\[\operatorname{PBr}_{3}\hookrightarrow\pi_{0}\operatorname{Symp}^{\operatorname {gr}}(X)\twoheadrightarrow\operatorname{Auteq}_{SH^{0}}D\mathcal{W}(X) \cong\operatorname{Auteq}\mathcal{D}\cong\operatorname{PBr}_{3}, \tag{5.3}\]
_where under the decomposition \(\operatorname{PBr}_{3}\cong(\mathbb{Z}*\mathbb{Z})\times\mathbb{Z}\) with generators \(R_{2}\), \(R_{3}\) and \(R_{1}R_{2}R_{3}\), \(R_{2}\) maps to \(\lambda\), \(R_{3}\) maps to \(\tau_{S_{0}}\), and \(R_{1}R_{2}R_{3}\) maps to the shift. Moreover, these group homomorphisms compose to give the identity._
_(ii) Exact symplectomorphisms which preserve_ \((T,\zeta)\in\operatorname{Ob}D\mathcal{W}(X)\) _fit into the following commutative diagram:_
(5.4)
\[
\begin{CD}
\mathbb{Z}^{*\infty} @>>> \mathbb{Z}*\mathbb{Z} @>>> \mathbb{Z}\\
@VVV @VVV @VV{=}V\\
\pi_{0}\operatorname{Symp}_{c}(X) @>>> \pi_{0}\operatorname{Symp}^{\circ}(X) @>>> \mathbb{Z}\\
@VVV @VVV @VV{=}V\\
\operatorname{Auteq}^{\circ}_{\operatorname{Tor}}\mathcal{D} @>>> \operatorname{Auteq}^{\circ}\mathcal{D} @>>> \mathbb{Z}\\
@VV{\simeq}V @VV{\simeq}V @VV{=}V\\
\mathbb{Z}^{*\infty} @>>> \mathbb{Z}*\mathbb{Z} @>>> \mathbb{Z}
\end{CD}
\]
_where the vertical compositions from the top to bottom lines are the identity. The map \(\pi_{0}\operatorname{Symp}^{\circ}(X)\to\mathbb{Z}\) is given by taking the action on \(K_{\operatorname{num}}(\mathcal{F}(X))\), itself naturally identified with \(K(\mathcal{D})\), and noting that the image lies in the \(\mathbb{Z}\) subgroup._
Proof.: Part (i) is immediate from Theorem 4.9 and the preceding construction and discussion of the pure braid group action \(\operatorname{PBr}_{3}\to\pi_{0}\operatorname{Symp}^{\operatorname{gr}}X\).
For part (ii), the map from the first to the second line is defined by combining the Section 3 constructions with Lemma 5.7. Lemma 3.17 implies that the image of \(\pi_{0}\operatorname{Symp}_{c}(X)\) lies in the subgroup \(\operatorname{Auteq}_{\operatorname{Tor}}^{\circ}\mathcal{D}\) of \(\operatorname{Auteq}^{\circ}\mathcal{D}\), which allows us to go from the second line to the third. The isomorphism of the final two rows is given by Theorem 4.9.
To check that composition from the first to the final row is an isomorphism, we proceed as follows. For the first column, this follows from having identified the twists \(\tau_{S_{i}}\) and \(T_{\mathcal{O}_{C}(-i)}\), \(i\in\mathbb{Z}\) in the homological mirror symmetry isomorphism of Theorem 5.1.
For the second column, recall we have a preferred non-compactly supported symplectomorphism \(\lambda\), together with a grading choice such that it lives in \(\pi_{0}\operatorname{Symp}^{\circ}(X)\). With that choice of grading, \(\lambda\) takes \(S_{i}\) to \(S_{i+1}\), for all \(i\in\mathbb{Z}\) (with no shifts). At the level of \(K\)-theory, this is the same action as the tensor with our fixed generator \(\mathcal{L}\) for \(\operatorname{Pic}(Y)\). This implies that \(\pi_{0}\operatorname{Symp}^{\circ}(X)\) surjects onto \(\mathbb{Z}\). Surjectivity on the second column then follows from the exactness of the sequence
\[\operatorname{Auteq}^{\circ}_{\operatorname{Tor}}\mathcal{D}\to\operatorname{ Auteq}^{\circ}\mathcal{D}\to\mathbb{Z}.\]
Given that the composition from the first to the final row is an isomorphism, injectivity from the first to the second row, and surjectivity from the second to the third row, are then automatic.
Theorem 1.1 immediately follows.
## 6. Classification of sphericals
Let \(\mathcal{S}\) be the collection of Lagrangian spheres which we know of in \(X\): the images of the \(S_{i}\), \(i\in\mathbb{Z}\), under the action of the group \(\mathbb{Z}^{*\infty}\) generated by the Dehn twists in the \(S_{i}\). In other words, these are all the spheres associated to matching paths between the two critical values in the base of our Morse-Bott-Lefschetz fibration, with all possible choices of gradings.
**Definition 6.1**.: [12, Section (3a)] We say an object \(S\) of \(D\mathcal{W}(X)\) is spherical if \(\operatorname{HF}^{*}(S,S)\cong H^{*}(S^{3};\mathbb{Z})\), and for any object \(Z\) of \(D\mathcal{W}(X)\), \(\operatorname{HF}^{*}(S,Z)\) and \(\operatorname{HF}^{*}(Z,S)\) are of finite total rank and the composition
\[\operatorname{HF}^{3-*}(Z,S)\otimes\operatorname{HF}^{*}(S,Z)\to\operatorname {HF}^{3}(S,S)\cong\mathbb{Z}\]
is a non-degenerate pairing.
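For example, every element of \(\mathcal{S}\) is spherical in this sense: writing \(S_{\gamma}\) for such a sphere, one has (using Poincaré duality for Floer cohomology with one compact exact factor; this consistency check is ours)
\[
\operatorname{HF}^{*}(S_{\gamma},S_{\gamma})\cong H^{*}(S^{3};\mathbb{Z}),\qquad\operatorname{HF}^{3-*}(Z,S_{\gamma})\cong\operatorname{HF}^{*}(S_{\gamma},Z)^{\vee}.
\]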
**Theorem 6.2**.: _Let \(S\) be a spherical object in \(D\mathcal{W}(X)\). Then \(S\) is quasi-isomorphic to an element of \(\mathcal{S}\) (as a Lagrangian brane, i.e. together with a choice of grading)._
We will break the proof into a number of stages.
### Twists act trivially on numerical \(K\)-theory
Fix a spherical object \(S\in D\mathcal{W}(X)\) and consider the associated spherical twist \(T_{S}\). By [13] we can assume that the category \(D\mathcal{W}(X)\) of perfect modules over \(\mathcal{W}(X)\) has a strictly \(SH^{0}(X)\)-linear chain model. The total evaluation morphism \(\operatorname{Hom}(S,\bullet)\otimes S\to\bullet\) is then \(SH^{0}\)-linear, and the cone inherits an \(SH^{0}(X)\)-linear structure. This means that \(T_{S}\) is linear as an autoequivalence. By Theorem 5.8, we know that \(T_{S}\) is an element of \(\operatorname{Auteq}_{SH^{0}}D\mathcal{W}(X)\cong\operatorname{Auteq} \mathcal{D}\cong\operatorname{PBr}_{3}\).
**Lemma 6.3**.: \(T_{S}\) _acts trivially on \(K(\mathcal{D})\cong K_{num}(\mathcal{F}(X))\)._
Proof.: Under the equivalence \(\Upsilon:D\mathcal{W}(X)\to D(Y)\) of Theorem 5.1, \(S\) corresponds to some complex \(\mathcal{E}_{S}\) of sheaves on \(Y\). As in the proof of Lemma 5.3, the argument in the proof of [1, Theorem 7.2.1] shows that \(\mathcal{E}_{S}\) must have proper support in \(Y\): otherwise it would have infinite rank morphisms in some degree with the tilting object in \(D(Y)\), which violates the definition of being spherical. As \(\mathcal{E}_{S}\) is also indecomposable, it must have connected support, which by the classification of proper subvarieties of \(Y\) implies that the support is either \(C\) or
a single point. By [10, Lemma 4.5], the support must be \(C\), and so in fact \(\mathcal{E}_{S}\) belongs to the category \(\mathcal{D}\subset D(Y)\). It follows that \(S\) belongs to the subcategory of \(\mathcal{W}(X)\) generated by \(S_{0}\) and \(S_{1}\) and in particular belongs to the compact Fukaya category \(\mathcal{F}(X)\subset\mathcal{W}(X)\).
The spherical twist \(T_{S}\) acting on \(\mathcal{F}(X)\) fits into an exact triangle
(6.1)
\[
\operatorname{Hom}_{\mathcal{F}}(S,\bullet)\otimes S\xrightarrow{\ \operatorname{ev}\ }\bullet\longrightarrow T_{S}(\bullet)\xrightarrow{\ [1]\ }
\]
Passing to \(K\)-theory, (6.1) gives \((T_{S})_{*}[Z]=[Z]-\chi\big(\operatorname{HF}^{*}(S,Z)\big)[S]\). The Euler pairing of compact Lagrangians in \(X\) agrees up to sign with the intersection pairing on \(H_{3}(X;\mathbb{Z})\), which vanishes identically, so \(T_{S}\) acts trivially on \(K(\mathcal{D})\cong K_{num}(\mathcal{F}(X))\), as claimed.

### Floer cohomology via the invariant locus

Recall (cf. the proof of Lemma 6.15 below) that \(X=\{u_{1}v_{1}=z-1,\ u_{2}v_{2}=z+1\}\subset\mathbb{C}^{4}\times\mathbb{C}^{*}\). It carries commuting symplectic involutions \(\iota_{1}\) and \(\iota_{2}\), with \(\iota_{j}\) exchanging \(u_{j}\) and \(v_{j}\); we write \(G=\mathbb{Z}/2\times\mathbb{Z}/2\) for the group they generate, and \(X^{\text{inv}}\) for the common fixed locus \(\{u_{1}^{2}=z-1,\ u_{2}^{2}=z+1\}\), a four-fold cover of the base \(\mathbb{C}^{*}\) branched over the critical values \(\pm 1\). Each matching sphere \(S_{\gamma}\) is \(G\)-invariant, and restricts on \(X^{\text{inv}}\) to a matching curve \(s_{\gamma}\).

**Proposition 6.4**.: _Let \(\gamma_{a}\) and \(\gamma_{b}\) be matching paths which intersect minimally rel endpoints in \(I(\gamma_{a},\gamma_{b})\) interior points. Then \(\operatorname{HF}(S_{a},S_{b};\mathbb{Z}/2)\) has rank \(4+4I(\gamma_{a},\gamma_{b})\)._

The proof of Proposition 6.4 occupies the remainder of this subsection and the following ones.
**Lemma 6.5**.: _After a Hamiltonian isotopy of \(X\), the matching spheres \(S_{a}\) and \(S_{b}\) meet exactly in the finite set of transverse intersection points of the curves \(s_{a}\) and \(s_{b}\). In particular, all intersection points lie in the common fixed locus of the involutions \(\iota_{1}\) and \(\iota_{2}\)._
Proof.: Assume that two matching paths \(\gamma_{a}\) and \(\gamma_{b}\) intersect transversally in \(k\) points apart from the end points. Then by construction, \(S_{a}\) and \(S_{b}\) intersect cleanly in two copies of \(S^{1}\) (one above each end point) and \(k\) copies of \(T^{2}\) (one above each interior intersection point); and their restrictions to \(X^{\text{inv}}\), i.e. the matching curves \(s_{a}\) and \(s_{b}\) in \(X^{\text{inv}}\), intersect transversally in \(4+4k\) points (with the end points contributing two each, and each interior intersection point contributing four).
There is a symplectic neighbourhood theorem for cleanly intersecting Lagrangian submanifolds, saying that near a component \(B\) of the clean intersection locus they are modelled by the conormal bundle of \(B\) inside the cotangent bundle of either one. Starting from this, and following the proof of [13, Proposition 5.15] (which treats the case of a single involution), there is an \((\iota_{1},\iota_{2})\)-equivariant small Hamiltonian perturbation of \(S_{a}\), fixing the invariant locus, so that \(S_{a}\) and \(S_{b}\) intersect transversally in exactly the intersection points of \(s_{a}\pitchfork s_{b}\).
Any isotopy of \(\gamma\) in the base (rel endpoints) is covered by a Hamiltonian isotopy of \(S_{\gamma}\) in \(X\) and a Hamiltonian isotopy of \(s_{\gamma}\) in \(X^{\text{inv}}\). Assume that after isotopy rel end points, the matching paths \(\gamma_{a}\) and \(\gamma_{b}\) intersect minimally rel endpoints, in \(I(\gamma_{a},\gamma_{b})\) points. Then after Hamiltonian isotopy in \(X\), we may assume \(S_{a}\) and \(S_{b}\) intersect transversally in \(4+4I(\gamma_{a},\gamma_{b})\) points, all fixed by both \(\iota_{j}\).
**Lemma 6.6**.: _Suppose that there is a compatible \(J=\{J_{t}\}_{t\in[0,1]}\) on \(X\) with the properties that_
1. \(J\) _is_ \(G\)_-equivariant;_
2. _the moduli spaces of_ \(J\)_-holomorphic curves_ \(u:\mathbb{R}\times[0,1]\to X\) _with boundary conditions on_ \(S_{a}\) _and_ \(S_{b}\) _are regular._
_Then_
\[\operatorname{HF}(S_{a},S_{b};\mathbb{Z}/2)\cong\operatorname{HF}(s_{a},s_{b };\mathbb{Z}/2)\]
Proof.: We have arranged that all intersection points lie in the fixed locus, so the underlying Floer cochain groups agree.
Now suppose that \(u:D\to X\) is a Floer holomorphic disc contributing to the differential and that \(\operatorname{Im}(u)\nsubseteq X^{\text{inv}}\). Then notice that each of \(u,\iota_{1}\circ u,\iota_{2}\circ u\) and \(\iota_{1}\circ\iota_{2}\circ u\) give holomorphic discs; moreover, these are either a pair of distinct holomorphic discs (e.g. if \(\operatorname{Im}(u)\subset\operatorname{Fix}(\iota_{1})\) but \(\operatorname{Im}(u)\nsubseteq\operatorname{Fix}(\iota_{2})\)) or a quadruple (if \(\operatorname{Im}(u)\nsubseteq\operatorname{Fix}(\iota_{1})\) and \(\operatorname{Im}(u)\nsubseteq\operatorname{Fix}(\iota_{2})\)). Equivariant transversality means that all these discs are regular, so as we are working with \(\mathbb{Z}/2\) coefficients, the contribution to the differential is zero.
It follows that the differential on \(\operatorname{CF}(S_{a},S_{b};\mathbb{Z}/2)\) coincides with that on \(\operatorname{CF}(s_{a},s_{b};\mathbb{Z}/2)\).
For computing Floer cohomology of curves on a surface, one can achieve regularity with a time-independent almost complex structure. The computation of \(\operatorname{HF}(s_{a},s_{b};\mathbb{Z}/2)\) is then completed by:
**Lemma 6.7**.: _For any compatible almost complex structure \(J\) which makes projection to the base \(\mathbb{C}^{*}\) holomorphic, there are no non-constant holomorphic discs with boundary on \(s_{a}\) and \(s_{b}\) contained wholly in the fixed locus \(X^{\text{inv}}\)._
Proof.: Suppose we had such a disc \(u:D\to X^{\mathrm{inv}}\); it projects to a disc \(\pi\circ u\) in the base (this will in general be immersed). By minimality of intersection, the image of \(\pi\circ u\) must contain at least one of the critical values. Consider the auxiliary Riemann surface given by pulling back the cover \(X^{\mathrm{inv}}\to\mathbb{C}^{*}\) under \(\pi\circ u\), say \(\Sigma_{u}\to D\). This is a four-fold cover with two possible types of critical values / branch points, modelled on the branching patterns for \(\pm 1\subset\mathbb{C}^{*}\). Now notice that for any positive number of such branch points, \(\Sigma_{u}\to D\) cannot have a section for topological reasons. Thus there are no such holomorphic discs \(u\to X^{\mathrm{inv}}\).
Lemma 6.7 immediately implies that for our choices,
\[\mathrm{HF}(s_{a},s_{b};\mathbb{Z}/2)\cong\mathrm{CF}(s_{a},s_{b};\mathbb{Z}/2)\]
which has rank \(4+4I(\gamma_{a},\gamma_{b})\). Subject to achieving the hypotheses of Lemma 6.6, this completes the proof of Proposition 6.4.
### Equivariant transversality
Let \((M,\omega)\) be an exact symplectic manifold with a symplectic involution \(\iota:M\to M\) which setwise preserves a pair of exact Lagrangian submanifolds \(L_{0},L_{1}\subset M\). We write \(M^{\iota}\) for the fixed locus, and \(L_{j}^{\iota}\) for the fixed set of \(\iota|_{L_{j}}\), which is Lagrangian in \(M^{\iota}\). We suppose that the \(L_{i}\) intersect transversally, which implies that the \(L_{i}^{\iota}\) intersect transversally in the fixed point locus. Let \(n=\dim_{\mathbb{C}}(M)\), \(n_{\iota}=\dim_{\mathbb{C}}(M^{\iota})\) and let \(n_{\mathrm{anti}}=n-n_{\iota}\) be the complex codimension of the fixed locus. The normal bundle \(\nu_{M^{\iota}}\to M^{\iota}\) is a complex bundle of rank \(n_{\mathrm{anti}}\); this is the \((-1)\)-eigenspace of the action of \(\iota\) on \(T_{p}M\) for \(p\in M^{\iota}\).
We consider \(\iota\)-equivariant paths of compatible almost complex structures \(J=\{J_{t}\}_{t\in[0,1]}\) on \(M\). Khovanov and Seidel [14, Proposition 5.13] explain how to choose \(J_{t}\) so as to achieve regularity for those curves contributing to \(\mathrm{CF}(L_{0},L_{1})\) which are not contained wholly in the fixed locus \(M^{\iota}\) of the involution.
Unfortunately, their work does not immediately apply to our setting: whilst Lemma 6.7 implies that, for suitable data, we have no Floer solutions lying in \(X^{\mathrm{inv}}\subset X\) for topological reasons, there may be curves which lie in the fixed loci \(X^{\iota_{j}}\) of either one of the two involutions. In general the existence of such curves can obstruct the existence of any \(\iota_{j}\)-equivariant regular \(J\), and hence _a fortiori_ of a \(G\)-equivariant regular \(J\). More precisely, if \(x_{+},x_{-}\) are \(\iota_{j}\)-invariant intersections of \(S_{a}\) and \(S_{b}\) for which the difference in their Maslov indices, computed in the fixed locus \(X^{\iota_{j}}\), is greater than the difference of the indices computed in the total space, it is impossible to achieve equivariant transversality (because the dimension of the space of solutions in the fixed set is greater than the dimension in the total space).
A sufficient criterion for vanishing of this obstruction was given in [14, Section 3.5].
**Definition 6.8**.: A _stable normal trivialization_ consists of two pieces of data:
1. a stable trivialization \(\phi:\nu_{M^{\iota}}\times\mathbb{C}^{r}\to M^{\iota}\times\mathbb{C}^{n_{ \mathrm{anti}}+r}\) of the normal bundle to the fixed point set, which is unitary (respects both the complex and symplectic structures);
2. homotopies \(h_{k}:[0,1]\times L_{k}^{\mathrm{inv}}\to Gr_{\mathrm{Lag}}(n_{\mathrm{anti}}+r)\), for \(k=0,1\), such that (i) \(h_{0}(0,\cdot)=\phi_{*}(\nu_{L_{0}^{\mathrm{inv}}}\times\mathbb{R}^{r})\) and \(h_{0}(1,\cdot)=\mathbb{R}^{n_{\mathrm{anti}}+r}\), (ii) \(h_{1}(0,\cdot)=\phi_{*}(\nu_{L_{1}^{\mathrm{inv}}}\times i\mathbb{R}^{r})\) and \(h_{1}(1,\cdot)=i\mathbb{R}^{n_{\mathrm{anti}}+r}\).
Let \(\hat{M}=M\times\mathbb{C}^{r}\) with \(\hat{\iota}=\iota\times(-1)\); then \(\hat{M}^{\iota}=M^{\iota}\), and if we set \(\hat{L}_{0}=L_{0}\times\mathbb{R}^{r}\) and \(\hat{L}_{1}=L_{1}\times i\mathbb{R}^{r}\), then \(\hat{L}_{i}^{\iota}=L_{i}^{\iota}\). We fix the convention that the linear Lagrangian subspaces \(\mathbb{R}\) and \(i\mathbb{R}\) in \(\mathbb{C}\) are graded so the rank one Floer cohomology \(HF(\mathbb{R},i\mathbb{R})\cong\mathbb{Z}\) is concentrated in degree zero. There is a canonical isomorphism \(\operatorname{CF}(L_{0},L_{1})\cong\operatorname{CF}(L_{0}\times\mathbb{R}^{r},L_{1} \times i\mathbb{R}^{r})\) of Floer
complexes, where the first complex lives inside \(M\) and the second inside \(\hat{M}\) equipped with a product almost complex structure.
**Lemma 6.9**.: _Suppose \((M,L_{0},L_{1})\) admits a stable normal trivialization \(\Upsilon=(\phi,h_{0},h_{1})\). There is a Hamiltonian symplectomorphism \(\Phi=\Phi_{\Upsilon}:\hat{M}\to\hat{M}\) satisfying (i) \(\Phi|_{M^{\iota}}=\text{id}\); (ii) \(\nu_{\Phi(\hat{L}_{0})}=\mathbb{R}^{n_{\text{anti}}+r}\); (iii) \(\nu_{\Phi(\hat{L}_{1})}=i\mathbb{R}^{n_{\text{anti}}+r}\)._
Proof.: This is proved in the discussion after [10, Definition 19].
We have the Floer equation
\[\begin{split}&\partial_{s}u+J_{t}(u)\big{(}\partial_{t}u-X_{t}(u )\big{)}=0,\\ & u(s,0)\in L_{0},\ \ u(s,1)\in L_{1}.\end{split} \tag{6.2}\]
where \(X_{t}\) denotes the time-dependent Hamiltonian flow of \(H_{t}\). Definition 6.10 and Lemma 6.11 below are taken from [10, Section 14c].
**Definition 6.10**.: Suppose \((M,L_{0},L_{1})\) have stably trivial normal structure, and fix a stable normal trivialisation \(\Upsilon\) and Hamiltonian symplectomorphism \(\Phi=\Phi_{\Upsilon}\) as above. We define \(\Upsilon\)_-constrained Floer perturbation data_ for \((\Phi(\hat{L}_{0}),\Phi(\hat{L}_{1}))\) to comprise \((\{\hat{H}_{t}\},\{\hat{J}_{t}\})_{0\leq t\leq 1}\), where
1. \(\hat{H}_{t}\), \(\hat{J}_{t}\) are \(\hat{\iota}\)-invariant;
2. for some extension of the unitary trivialisation \(\phi\) to a map \(\bar{\phi}:\mathbb{C}^{r}\times U(M^{\iota})\to\mathbb{C}^{n_{\text{anti}}+r}\), the derivatives \(\overline{\partial}_{\hat{J}_{t}}\bar{\phi}\) vanish to first order along \(M^{\iota}\);
3. for some \(h\in(0,\pi/2)\), the flow \(X_{t}\) of \(\hat{H}_{t}\) acts by rotation by \(e^{iht}\) on \(\nu_{L_{0}^{\iota}}\).
**Lemma 6.11**.: \(\Upsilon\)_-constrained Floer perturbation data are generically regular for all solutions \(u:\mathbb{R}\times[0,1]\to M^{\iota}\) to the equations (6.2) with image inside \(M^{\iota}\)._
Proof.: We briefly review the discussion from [10, Section 14c]. Suppose \(u\) satisfies (6.2) for \(\Upsilon\)-constrained perturbation data. Using the first order information on \(\hat{J}_{t}\) and \(\hat{H}_{t}\) along \(M^{\iota}\), [10, Lemma 14.6] shows that, globally over \(\mathbb{R}\times[0,1]\), the anti-invariant part \(D^{\text{anti}}_{u,\hat{J}_{t}}\) of the linearisation can be written explicitly as an operator \(\partial_{s}+i\partial_{t}+h\). By a direct consideration of Fourier coefficients, [10, Lemma 11.5] shows that this is injective. It follows that invariant curves \(u\) are regular in the total space if and only if they are regular in the fixed point set. To prove Lemma 6.11 it then suffices to show that the class of \(\Upsilon\)-constrained data is sufficiently large. The \(\hat{H}_{t}\) may be prescribed arbitrarily both on \(M^{\iota}\) and outside a small open neighbourhood of \(M^{\iota}\), provided that each is \(\hat{\iota}\)-invariant. In a unitary splitting of \(T\hat{M}|_{M^{\iota}}\), the off-diagonal parts of the almost complex structures \(\hat{J}_{t}\) vanish and the assumptions constrain the component tangential to \(M^{\iota}\), but the normal part may also be prescribed arbitrarily. This gives sufficient flexibility to appeal to standard transversality arguments to see that regular data are generic in this class.
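To orient the reader, here is a minimal model for the injectivity statement just quoted; the boundary conditions are those arranged in Lemma 6.9, and the mode-by-mode normalisation is illustrative rather than taken from [10]. Writing a putative kernel element as a sum of modes \(u(s,t)=e^{\lambda s}e^{i\mu t}v\) with \(v\in\mathbb{R}^{n_{\text{anti}}+r}\), the boundary conditions \(u(s,0)\in\mathbb{R}^{n_{\text{anti}}+r}\) and \(u(s,1)\in i\mathbb{R}^{n_{\text{anti}}+r}\) force \(\mu\in\tfrac{\pi}{2}+\pi\mathbb{Z}\), while
\[
(\partial_{s}+i\partial_{t}+h)u=(\lambda-\mu+h)u,
\]
so a kernel mode needs \(\lambda=\mu-h\). For \(h\in(0,\pi/2)\) this is never zero, so every mode grows exponentially at one end of \(\mathbb{R}\times[0,1]\), and the only finite-energy solution is \(u\equiv 0\).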
**Corollary 6.12**.: _Suppose that \((M,L_{0},L_{1})\) have stably trivial normal structure. A choice of stable normal trivialisation \(\Upsilon\) defines a (non-empty, open) set of \(\hat{\iota}\)-invariant time-dependent almost complex structures on \(\hat{M}\) which are simultaneously regular for all moduli spaces of solutions to (6.2) in both \(\hat{M}\) and \(\hat{M}^{\iota}=M^{\iota}\)._
Proof.: For generic invariant time-dependent \(J\), regularity holds at all curves \(u:\mathbb{R}\times[0,1]\to\hat{M}\) which are not entirely contained in \(M^{\iota}\) (this is a direct adaptation of the usual regularity theory, and follows from [10, Proposition 5.13]). For curves \(u\) inside the fixed locus,
after stabilising and applying a Hamiltonian symplectomorphism, Lemma 6.9 reduces us to a product situation, where we may invoke Lemma 6.11 to find an open set of \(\Upsilon\)-constrained perturbation data which are regular.
The upshot of the discussion at this point is that, if a stable normal trivialisation exists, we can replace \((M,L_{0},L_{1},\iota)\) by \((\hat{M},\hat{L}_{0},\hat{L}_{1},\hat{\iota})\) in such a way that the Floer complexes \(\operatorname{CF}(\hat{L}_{0},\hat{L}_{1})=\operatorname{CF}(L_{0},L_{1})\) are naturally identified (and similarly in the fixed point set \(M^{\iota}\), which never changes); and the complex \(\operatorname{CF}(\hat{L}_{0},\hat{L}_{1})\) admits a regular equivariant \(J\).
### Two involutions
Now suppose that \(M\) carries an action of \(G=\mathbb{Z}/2\times\mathbb{Z}/2\), generated by involutions \(\iota_{1}\) and \(\iota_{2}\), which again preserve the Lagrangians \(L_{j}\) setwise. We assume that the fixed submanifolds \(M^{\iota_{1}}\) and \(M^{\iota_{2}}\) meet cleanly in the global fixed locus \(M^{G}\).
**Lemma 6.13**.: _Suppose that_
1. \(M\) _admits an action of_ \(\mathbb{Z}/2\times\mathbb{Z}/2\) _generated by_ \(\iota_{1}\) _and_ \(\iota_{2}\)_;_
2. _the triple_ \((M,L_{0},L_{1})\) _admits a stable normal trivialisation for_ \(\iota_{1}\)_;_
3. _the triple_ \((M^{\iota_{1}},L_{0}^{\iota_{1}},L_{1}^{\iota_{1}})\) _admits a stable normal trivialisation for_ \(\iota_{2}|_{M^{\iota_{1}}}\)_;_
4. _the intersection points of_ \(L_{0}\) _and_ \(L_{1}\) _all lie in the common fixed locus_ \(M^{G}\)_, so_ \(L_{0}\cap L_{1}=L_{0}^{G}\cap L_{1}^{G}\)_._
_Then there is a cochain homotopy equivalence \(\operatorname{CF}(L_{0},L_{1};\mathbb{Z}/2)\simeq\operatorname{CF}(L_{0}^{G},L_{1}^{G};\mathbb{Z}/2)\)._
Proof.: We apply the involutions iteratively. Existence of a stable normal trivialisation for \(\iota_{1}\) gives an equivariant \(J\) which identifies the Floer complexes \(\operatorname{CF}(L_{0},L_{1};\mathbb{Z}/2)=\operatorname{CF}(L_{0}^{\iota_ {1}},L_{1}^{\iota_{1}};\mathbb{Z}/2)\), since all intersection points are fixed and all non-fixed discs come in pairs (cf. the proof of Lemma 6.6). Note that, even though this identification of complexes arises from stabilisation (and goes via the intermediate complex \(\operatorname{CF}(\hat{L}_{0},\hat{L}_{1})\)), the fixed locus \(M^{\iota_{1}}\) for the first involution is never changed. Since the involution \(\iota_{2}|_{M^{\iota_{1}}}\) admits a stable normal trivialisation, we repeat the argument to identify \(\operatorname{CF}(L_{0}^{\iota_{1}},L_{1}^{\iota_{1}};\mathbb{Z}/2)\) with \(\operatorname{CF}(L_{0}^{G},L_{1}^{G};\mathbb{Z}/2)\).
**Remark 6.14**.: More generally, there is a natural notion of compatibility of stable normal trivialisations for a pair of involutions. The normal bundle satisfies \(\nu_{M^{G}}\cong\nu_{M^{\iota_{1}}}\oplus\nu_{M^{\iota_{2}}}\), and \(\iota_{2}\) acts on the normal space \(\nu_{M^{\iota_{1}}}\). This gives an obvious notion of a stable normal trivialisation for \(\iota_{1}\) being \(\iota_{2}\)-equivariant. Since the ordering of the generators \(\iota_{j}\) for \(G\) was arbitrary, we will say that \((M,L_{0},L_{1})\) admits _compatible_ stable normal trivialisations. By an equivariant version of Moser's theorem, the Hamiltonian symplectomorphism constructed in Lemma 6.9 can be constructed \(\iota_{2}\)-equivariantly (extending \(\iota_{2}\) from \(M\) to \(\hat{M}\) by the trivial involution on the second factor).
### Constructing stable normal trivialisations
We return to \(X\), its action of \(G=\mathbb{Z}/2\times\mathbb{Z}/2\), and the \(G\)-invariant Lagrangian matching spheres \(S_{a}\) and \(S_{b}\). The following is an analogue of [10, Lemma 31].
**Lemma 6.15**.: _The triple \((X,S_{a},S_{b})\) admits compatible stable normal trivialisations._
Proof.: Consider the map \(\mathbb{C}^{4}\times\mathbb{C}^{*}\to\mathbb{C}^{2}\) taking \((u_{1},v_{1},u_{2},v_{2},z)\mapsto(u_{1}v_{1}-z,u_{2}v_{2}-z)\). Then \(X\) is the regular fibre over \((-1,1)\), so we have a short exact sequence of complex vector bundles
\[0\to TX\to T(\mathbb{C}^{4}\times\mathbb{C}^{*})\to T\mathbb{C}^{2}\to 0. \tag{6.3}\]
There is a contractible space of splittings of such a sequence (from a Hermitian metric) which gives a canonical homotopy class of stable trivialisation of \(TX\); thinking about the family
of regular fibres \(X_{c,d}\) over \(\operatorname{Conf}_{2}(\mathbb{C}^{*})\subset\mathbb{C}^{2}\) shows the canonical homotopy class of stable trivialisation is preserved by parallel transport and the \(\operatorname{PBr}_{3}\)-action.
The Lagrangian \(S_{a}\) arises as a vanishing cycle, which means we can find a disc \(D\subset\mathbb{C}^{2}\) normal to the diagonal and a one-parameter family \(\mathcal{X}_{D}\to D\), with fibre \(X\) over \(1\in D\), such that \(\mathcal{X}_{D}\) contains a Lagrangian thimble \(\Delta_{a}\) fibred over an arc \([0,1]\subset D\), with boundary \(\Delta_{a}\cap X=S_{a}\).
We have a commutative diagram of vector bundles over \(S_{a}\),
(6.4)
Since \(\Delta_{a}\) is contractible, \(T\Delta_{a}\) is trivial, and this induces a stable trivialization of \(TS_{a}\); the nullhomotopy of the stabilised Lagrangian Gauss map required for Definition 6.8 thus comes from an extension of that map to \(\Delta_{a}\). The commutativity of (6.4) shows that the complexification of this trivialization is (canonically) homotopic to the trivialization of \(TX|_{S_{a}}\) arising from the previous stable trivialisation of \(TX\) from (6.3).
Because of our previous remarks about parallel transport, the same is true for any matching sphere \(S_{b}\), and furthermore the argument goes through equivariantly. More explicitly, \(TX\) has a \(G\)-equivariant stable trivialization, compatible with a \(G\)-equivariant stable trivialisation of \(TS_{a}\) in the sense that under the natural isomorphism \(TS_{a}\otimes\mathbb{C}\cong TX|_{S_{a}}\), there is an equivariant homotopy between these two trivializations. Restriction to fixed point sets of either generator of \(G\) and to the corresponding anti-invariant directions along those fixed-point loci now yields the desired compatible stable normal trivializations.
At this point we have satisfied all of the hypotheses of Lemma 6.13, bearing in mind that in Lemma 6.5 we constructed Hamiltonian isotopies of the matching spheres after which \(S_{a}\cap S_{b}=s_{a}\cap s_{b}\), i.e. all intersection points lie in the common fixed point locus. This completes the construction of the \(J\) required for Lemma 6.6, and hence our discussion of the proof of Proposition 6.4.
### Dynamics on surfaces
Having a formula for ranks of Floer cohomologies of matching spheres in terms of intersection numbers on the surface \(X^{\mathrm{inv}}\) leads to growth rate bounds, using classical arguments from dynamics on surfaces.
**Proposition 6.16**.: _Suppose that an autoequivalence \(\phi\in\mathbb{Z}^{*\infty}\) is such that for all \(S_{i},S_{j}\in\mathcal{S}\), the rank of the Floer cohomology group \(\operatorname{HF}(S_{i},\phi^{n}S_{j};\mathbb{Z}/2)\) grows at most linearly with \(n\). Then, up to shifts, \(\phi\) is a power of a spherical twist in an element of \(\mathcal{S}\)._
Proof.: The Nielsen-Thurston theorem (see [10]) says that elements of mapping class groups of punctured surfaces are either periodic, reducible, or pseudo-Anosov. Concretely for \(\operatorname{PBr}_{3}\) these cases simplify as follows. The only periodic elements are powers of the full twist (i.e. elements in the centre of \(\operatorname{PBr}_{3}\)); the only reducible elements are powers of a full twist in a matching path between marked points; any other element is pseudo-Anosov, which by definition means that it is pseudo-Anosov as a mapping of the four-punctured sphere given by collapsing and removing the boundary of the disc. If \(\phi\) is pseudo-Anosov, its lift \(\hat{\phi}\) to the four-fold cover \(X^{\mathrm{inv}}\) introduced in Proposition 6.4 is still pseudo-Anosov (as it preserves two transverse singular foliations, namely the preimages under the branched cover of the foliations
preserved by \(\phi\) downstairs). Then for any two simple closed curves \(\ell,\ell^{\prime}\) on \(X^{\text{inv}}\) the geometric intersection number \(\iota(\ell,\hat{\phi}^{k}\ell^{\prime})\) grows exponentially in \(k\) [13, Exposé 12, Théorème II]. Taking \(\ell,\ell^{\prime}\) to be lifts of arcs \(\gamma_{i},\gamma_{j}\) downstairs, this geometric intersection number gives a lower bound for the rank of \(\operatorname{HF}(S_{i},\phi^{k}S_{j};\mathbb{Z}/2)\) by Proposition 6.4.
Suppose, then, that the growth of Floer cohomology is at most linear. Since \(\phi\in\mathbb{Z}^{*\infty}\), if it is the image of a power of a full twist in a matching path, that matching path must be associated to some element of \(\mathcal{S}\). If the growth rate is linear for some \(S_{i},S_{j}\), then \(\phi\) is a shift of a non-trivial power of a Dehn twist in a matching sphere; if the growth rate is trivial for all \(i,j\) then \(\phi\) is the image of a central element, and acts by a shift.
**Lemma 6.17**.: _For a spherical twist \(\phi=T_{S}\), the rank of \(\operatorname{HF}(S_{i},\phi^{n}S_{j};\mathbb{Z}/2)\) grows at most linearly with \(n\)._
Proof.: This follows immediately from the exact triangle for a spherical twist (6.1).
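Unwinding this for the reader (the estimate below is ours, and uses only (6.1) together with \(T_{S}(S)\simeq S[2]\), cf. the proof of Lemma 6.19 below): applying the equivalence \(T_{S}^{n-1}\) to the defining triangle evaluated at \(S_{j}\) gives a triangle
\[
\operatorname{HF}(S,S_{j})\otimes S[2(n-1)]\longrightarrow T_{S}^{n-1}S_{j}\longrightarrow T_{S}^{n}S_{j}\xrightarrow{\ [1]\ },
\]
whose long exact sequence in \(\operatorname{HF}(S_{i},-)\) yields
\[
\operatorname{rk}\operatorname{HF}(S_{i},T_{S}^{n}S_{j})\leq\operatorname{rk}\operatorname{HF}(S_{i},T_{S}^{n-1}S_{j})+\operatorname{rk}\operatorname{HF}(S,S_{j})\cdot\operatorname{rk}\operatorname{HF}(S_{i},S),
\]
a bound with \(n\)-independent slope, whence linear growth.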
### A spherical twist is a power of a Dehn twist
The conclusion of the previous section is that, as an autoequivalence of \(D\mathcal{W}(X)\), \(T_{S}=\tau_{\gamma}^{l}[k]\), where \(\tau_{\gamma}\) denotes the Dehn twist in the matching sphere \(S_{\gamma}\) associated to a matching path \(\gamma\), and \(k,l\in\mathbb{Z}\) are to be determined.
**Lemma 6.18**.: _We have \(k=0\), so the spherical twist \(T_{S}=\tau_{\gamma}^{l}\) is the image of some power of a Dehn twist in a matching path._
Proof.: By conjugating with a suitable element of \(\mathbb{Z}^{*\infty}\), we may assume without loss of generality that \(S_{\gamma}=S_{1}\), mirror to \(\mathcal{O}_{C}(-1)\). In particular, since \(S_{1}\) is disjoint from the Lagrangian thimble-like object \(L_{0}\) (see Figure 2), we have that
\[T_{S}(L_{0})=\tau_{\gamma}^{l}(L_{0})[k]=L_{0}[k].\]
Suppose for contradiction that \(k\neq 0\). Then in the exact triangle
\[
\operatorname{HF}^{*}(S,L_{0})\otimes S\xrightarrow{\ \operatorname{ev}\ }L_{0}\longrightarrow T_{S}(L_{0})\simeq L_{0}[k]\xrightarrow{\ [1]\ }
\]
the second arrow is given by an element of \(HW^{0}(L_{0},L_{0}[k])=HW^{k}(L_{0},L_{0})\). But [1, Remark 6.1] computes that the endomorphisms of \(L_{0}\) are concentrated in degree zero, so this vanishes. This implies
\[HF^{*}(S,L_{0})\otimes S\simeq L_{0}\oplus L_{0}[k]\]
Taking self-endomorphisms of this object, since \(HW^{*}(L_{0},L_{0})=SH^{0}(X)\) is infinite-dimensional, whilst \(HF^{*}(S,S)\) has finite rank, we see that \(HF^{*}(S,L_{0})\) must be an infinite-dimensional vector space, which contradicts that \(S\) is spherical. We conclude that the shift satisfies \(k=0\).
### Reduction to the 'small' category \(\mathcal{C}\)
Continuing with the same notation, in the course of the previous proof we showed that \(T_{S}(L_{0})=L_{0}\). Strengthening this:
**Lemma 6.19**.: \(HF^{*}(S,L_{0})=0=HF^{*}(L_{0},S)\)_._
Proof.: For any given \(n\in\mathbb{Z}\),
\[HF^{*}(S,L_{0})=HF^{*}(S,T_{S}^{n}(L_{0}))=HF^{*}(T_{S}^{-n}S,L_{0})=HF^{*}(S[- 2n],L_{0})=HF^{*+2n}(S,L_{0})\]
where we have used that for any 3-dimensional spherical object \(\hat{S}\), the twist functor acts on the object itself by a non-trivial shift, \(T_{\hat{S}}(\hat{S})=\hat{S}[2]\).
Taking \(n\neq 0\) shows that the vector space \(HF^{*}(S,L_{0})\) is periodic with some non-trivial grading shift (so infinite rank) or vanishes; again by the definition of \(S\) being spherical, it must be the latter. Thus, \(HF^{*}(S,L_{0})=0\). By non-degeneracy of the pairing
\[HF^{*}(S,L_{0})\otimes HF^{*}(L_{0},S)\to\mathbb{C}\]
(or just by repeating the argument with the order of the branes reversed) we conclude that \(HF^{*}(L_{0},S)=0\) also.
In the course of the proof of Lemma 6.3, we showed that the mirror object \(\mathcal{E}_{S}\in D(Y)\) to \(S\) belonged to the full subcategory \(\mathcal{D}\) of complexes with cohomology supported on the curve \(C\). There is a 'small' category, cf. [10] for instance,
\[\mathcal{C}=\{F\in D(Y)\,|\,\operatorname{supp}(F)\subset C,\,R\pi_{*}(F)=0\} \subset\mathcal{D}\subset D(Y)\]
of objects of \(D(Y)\) which are both cohomologically supported on \(C\) and have trivial push-forward under the small resolution.
**Lemma 6.20**.: _The mirror complex \(\mathcal{E}_{S}\in\mathcal{D}\) belongs to the subcategory \(\mathcal{C}\)._
Proof.: Recall that \(L_{0}\) is mirror to \(\mathcal{O}_{Y}\), so
\[HF^{*}(L_{0},S)=0\Rightarrow\operatorname{Ext}^{*}(\mathcal{O}_{Y},\mathcal{E }_{S})=0.\]
Pushing down under the contraction \(\pi:Y\to\operatorname{Spec}(R)=A\), we obtain
\[\operatorname{Ext}^{*}(\mathcal{O}_{A},R\pi_{*}\mathcal{E}_{S})=0.\]
However, \(A\) is affine and \(\mathcal{O}_{A}\) is a spanning object for the derived category \(D(A)\), so this forces \(R\pi_{*}\mathcal{E}_{S}=0\). We deduce that \(\mathcal{E}_{S}\in\mathcal{C}\).
### Conclusion
To recap, if \(S\in D\mathcal{W}(X)\) is spherical, it has a mirror \(\mathcal{E}_{S}\) which is a spherical object in \(\mathcal{D}\subset D(Y)\). We showed that the twist \(T_{\mathcal{E}_{S}}\) is mirror to \(\tau_{S_{\gamma}}\) for some \(S_{\gamma}\in\mathcal{S}\). Conjugating with a \(\operatorname{PBr}_{3}\) element taking \(\gamma\) to \(\gamma_{1}\), we may assume without loss of generality that \(T_{\mathcal{E}_{S}}\) is mirror to \(\tau_{S_{1}}\); moreover, we saw above that \(\mathcal{E}_{S}\) is a spherical object in the small category \(\mathcal{C}\subset D(Y)\). The classification of spherical objects in \(\mathcal{C}\) is standard (cf. [11, 12]; in the terminology of [11], in our case \(\mathcal{C}\) corresponds to the one-vertex no-loop quiver): there is a unique spherical object up to shift, given by \(\mathcal{O}_{C}(-1)\). Translating back under the mirror equivalence \(D\mathcal{W}(X)\simeq D(Y)\), it follows that \(S\) is quasi-isomorphic to some shift of \(S_{\gamma}\).
This concludes the proof of Theorem 6.2. Passing back through the mirror, this readily implies Theorem 1.3.
## 7. Miscellanea
### Addition of subcritical Weinstein handles
We will need the following generalisation of Lemma 3.17.
**Lemma 7.1**.: _Suppose \(X^{\prime}\) is a Weinstein 6-manifold given by iteratively adding any sequence of Weinstein one- and two-handles to \(X\). Let \(f\in\operatorname{Homeo}_{c}(X^{\prime})\) be a compactly supported homeomorphism. Then \(f_{*}\) acts trivially on \(H_{3}(X^{\prime};\mathbb{Z})\)._
Proof.: Take all homology groups to have coefficient ring \(\mathbb{Z}\). Comparing with the proof of Lemma 3.17, it's enough to show that the map \(H_{3}(X^{\prime})\to H_{3}(X^{\prime},\partial X^{\prime})\) has vanishing image. Consider the non-degenerate intersection pairing
\[H_{3}(X^{\prime})\times H_{3}(X^{\prime},\partial X^{\prime})\to\mathbb{Z}.\]
As \(X\subset X^{\prime}\) is given by adding Weinstein one- and two-handles, the inclusion induces an isomorphism \(H_{3}(X)\cong H_{3}(X^{\prime})\). In particular, \(H_{3}(X^{\prime})\cong\mathbb{Z}^{2}\), generated by \(S_{0}\) and \(S_{1}\). As the intersection pairing of \(S_{i}\) and \(S_{j}\) vanishes for any \(i,j\) (the spheres are pairwise disjoint, and the pairing is skew-symmetric in this degree, so self-intersections vanish), we see that for any \(i\) the image of \(S_{i}\) in \(H_{3}(X^{\prime},\partial X^{\prime})\) vanishes. This completes the proof.
**Corollary 7.2**.: _Suppose \(X^{\prime}\) is any Weinstein 6-manifold given by iteratively adding Weinstein one- and two-handles to \(X\). Then under the action of \(\pi_{0}\operatorname{Symp}_{c}(X^{\prime})\), there are infinitely many orbits of Lagrangian spheres in \(X^{\prime}\)._
This answers a folklore question often attributed to Fukaya.
Proof.: The spheres \(S_{i}\) belong to pairwise distinct classes in \(H_{3}(X^{\prime};\mathbb{Z})\), and any compactly supported symplectomorphism acts trivially on \(H_{3}(X^{\prime};\mathbb{Z})\) by Lemma 7.1.
**Corollary 7.3**.: _Suppose \(X^{\prime}\) is a Weinstein 6-manifold given by iteratively adding any sequence of Weinstein one- and two-handles to \(X\). Then we have a pair of group homomorphisms_
\[\mathbb{Z}^{*\infty}\hookrightarrow\pi_{0}\operatorname{Symp}_{c}(X^{\prime}) \twoheadrightarrow\mathbb{Z}^{*\infty}\]
_which compose to the identity._
In particular, by adding a two-handle, we can find such \(X^{\prime}\)s which are simply connected.
Proof.: We have a natural isomorphism of wrapped Fukaya categories \(D\mathcal{W}(X)\simeq D\mathcal{W}(X^{\prime})\), compatible with an isomorphism between \(SH^{0}(X)\) and \(SH^{0}(X^{\prime})\). (By [10, 11], the categories \(\mathcal{W}(X)\) and \(\mathcal{W}(X^{\prime})\) are non-degenerate and equivalent. The equivalence induces an isomorphism of Hochschild (co)homologies. On the other hand, by [10], the non-degeneracy implies that \(HH_{*}(\mathcal{W}(X))\to SH^{*}(X)\) is an isomorphism. It's then immediate that we get a compatible isomorphism between \(SH^{0}(X)\) and \(SH^{0}(X^{\prime})\).)
This implies that a compactly supported symplectomorphism \(\phi\in\pi_{0}\operatorname{Symp}_{c}(X^{\prime})\) induces an \(SH^{0}(X)\)-linear autoequivalence of \(D\mathcal{W}(X)\). As before, combining mirror symmetry for \(X\) with Lemmas 4.5, 4.6 and 5.3, we get that \(\phi\) preserves the subcategory \(\langle S_{0},S_{1}\rangle\) (mirror to \(\mathcal{D}\)), and fixes the torus \((T,\zeta)\), as an object of \(D\mathcal{W}(X)\), up to a shift.
By Lemma 7.1, \(\phi\) acts as the identity on \(H_{3}(X^{\prime};\mathbb{Z})\cong H_{3}(X;\mathbb{Z})\), i.e. on the numerical \(K\)-theory \(K_{\operatorname{num}}(\mathcal{F}(X^{\prime}))\). As before, Corollary 4.11 implies that we get a map \(\pi_{0}\operatorname{Symp}_{c}(X^{\prime})\to\mathbb{Z}^{*\infty}\times 2\mathbb{Z}\). Projecting gives a map \(\pi_{0}\operatorname{Symp}_{c}(X^{\prime})\to\mathbb{Z}^{*\infty}\), which is surjective by considering the image of the Dehn twists. The claim then follows.
### Non-exact deformations
By openness of the symplectic condition, for a small neighbourhood \(U\) of \((0,0)\in H^{2}(X;\mathbb{R})\cong\mathbb{R}^{2}\), there is a non-exact symplectic deformation of \((X,d\theta)\) to \((X,\omega_{a,b})\) with \([\omega_{a,b}]=(a,b)\in U\).
We can in fact exhibit non-exact deformations \((X,\omega_{a,b})\) for arbitrary \((a,b)\in\mathbb{R}^{2}\), via the Morse-Bott-Lefschetz fibration. Let \(t_{c}:T^{*}S^{1}\to T^{*}S^{1}\) be a symplectomorphism of flux \(c\) (translating the cylinder). Start with the previously considered Morse-Bott-Lefschetz fibration \(\pi:X\to\mathbb{C}^{*}\), with Morse-Bott singular fibres at \(\pm 1\). This is a locally Hamiltonian fibration, with monodromies \(\operatorname{id}\times\tau_{S^{1}}\) about \(-1\) and \(\tau_{S^{1}}\times\operatorname{id}\) about \(1\). We define \((X,\omega_{a,b})\) to be the total space of the corresponding locally Hamiltonian fibration with local monodromies \(t_{a}\times\tau_{S^{1}}\) about \(-1\) and \(\tau_{S^{1}}\times t_{b}\) about \(1\).
For any \((a,b)\), we can still construct Lagrangian matching spheres \(S_{i}\) for the same matching paths as in Figure 2. By inspection, whenever \(a\) and \(b\) are both non-zero, \(S_{i}\) and \(S_{j}\) are disjoint if \(i\neq j\).
**Corollary 7.4**.: _For \(a\neq 0\) and \(b\neq 0\), there is an injection \(\mathbb{Z}^{\oplus\infty}\hookrightarrow\pi_{0}\operatorname{Symp}_{c}(X,\omega _{a,b})\)._
Proof.: The Dehn twists in the \(S_{i}\) define a homomorphism \(\mathbb{Z}^{\oplus\infty}\to\pi_{0}\operatorname{Symp}_{c}^{\operatorname{ gr}}(X,\omega_{a,b})\). We claim this is injective by considering the action on graded Lagrangian spheres: each \(S_{i}\) admits a \(\mathbb{Z}\)-torsor of gradings, and the twist \(\tau_{S_{i}}\) shifts the grading on \(S_{i}\) by \(2\); as they are disjoint, it does not change the grading on any \(S_{j}\) for \(i\neq j\). Finally, as the forgetful map from \(\operatorname{Symp}_{c}^{\operatorname{gr}}(X,\omega_{a,b})\) to gradeable compactly supported symplectomorphisms of \((X,\omega_{a,b})\) splits, we get an injection \(\mathbb{Z}^{\oplus\infty}\hookrightarrow\pi_{0}\operatorname{Symp}_{c}(X, \omega_{a,b})\) as required.
**Remark 7.5**.: By inspection, for our description of the \(S_{i}\) as matching spheres, there is no compact subset of \(X\) containing all of the matching spheres \(S_{i}\): as \(|i|\) increases, the \(S_{i}\) have arbitrarily large maximal radial coordinate in the conical end for (the deformation of) \(X\).
### Homeomorphism & diffeomorphism groups
**Lemma 7.6**.: _The composite \(\mathbb{Z}^{\ast\infty}\to\pi_{0}\operatorname{Symp}_{c}(X,d\theta)\to\pi_{0} \operatorname{Diff}_{c}(X)\) factors through \(\mathbb{Z}^{\oplus\infty}\)._
Proof.: This immediately follows by noting that this is the same map as the composite \(\mathbb{Z}^{\ast\infty}\to\pi_{0}\operatorname{Symp}_{c}(X,\omega_{a,b})\to\pi_ {0}\operatorname{Diff}_{c}(X)\) for any deformation \((X,\omega_{a,b})\) of \((X,d\theta)\), and recalling that the Dehn twists in the \(S_{i}\) generate a \(\mathbb{Z}^{\oplus\infty}\) in \(\pi_{0}\operatorname{Symp}_{c}(X,\omega_{a,b})\).
Recall that \(\pi_{1}(X)=\mathbb{Z}\). We view \(X\) as the completion of a Weinstein domain \((X,\partial X)\) and let \((X^{\prime},\partial X^{\prime})\) denote the Weinstein domain obtained by adding one subcritical two-handle to \(X\) so as to kill its fundamental group.
**Lemma 7.7**.: _The boundary \(\partial X^{\prime}\) is diffeomorphic to \((S^{2}\times S^{3})\#(S^{2}\times S^{3})\)._
Proof.: The boundary \(\partial X\) has \(\pi_{1}(\partial X)=\mathbb{Z}\). On the boundary, the subcritical handle addition acts by a surgery, removing \(S^{1}\times D^{4}\) and regluing \(D^{2}\times S^{3}\), and so \(\pi_{1}(\partial X^{\prime})=\{1\}\). The long exact sequence for \((X,\partial X)\) shows that \(H_{2}(\partial X)\cong H_{2}(X)\cong\mathbb{Z}^{2}\), and hence \(H_{2}(\partial X^{\prime};\mathbb{Z})=\mathbb{Z}^{2}\) also. In particular, the homology of \(X^{\prime}\) is torsion-free. Since \(H^{2}(X^{\prime};\mathbb{Z})\to H^{2}(X;\mathbb{Z})\) is an isomorphism, \(X^{\prime}\) still has vanishing first Chern class, which shows that \(\partial X^{\prime}\) is spin. Simply-connected spin \(5\)-manifolds are classified [1]; when the homology is torsion-free they are given by \((S^{2}\times S^{3})^{\#r}\) where \(r\) is the second Betti number.
Recall that we previously constructed maps \(\mathbb{Z}^{*\infty}\to\pi_{0}\operatorname{Symp}_{c}(X^{\prime})\to\pi_{0}\operatorname{Diff}_{c}(X^{\prime})\), with the composite \(\mathbb{Z}^{*\infty}\to\pi_{0}\operatorname{Diff}_{c}(X^{\prime})\) factoring through \(\mathbb{Z}^{\oplus\infty}\) (cf. Corollary 7.3 and Lemma 7.6).
**Lemma 7.8**.: _The map \(\mathbb{Z}^{\oplus\infty}\to\pi_{0}\operatorname{Diff}_{c}(X^{\prime})\) factors through a finite rank abelian group._
Proof.: If \((M,\partial M)\) is a simply-connected six-manifold with simply-connected boundary, then Kupers shows [12, Theorem A, Remark (iv)] that the mapping class group \(\pi_{0}\operatorname{Diff}_{c}(M)\) has finite type in the sense that its classifying space admits a CW structure with only finitely many cells in each dimension. The proof actually yields a stronger result, see [12, Theorem 4.2], which is that there is an exact sequence
\[1\to\Gamma\to\pi_{0}\operatorname{Diff}_{c}(M)\to G\to 1\]
where \(\Gamma\) is a finitely generated abelian group, and \(G\) is an arithmetic group. For any group fitting into such an exact sequence, any abelian subgroup is finitely generated. |
2302.07509 | Automated Movement Detection with Dirichlet Process Mixture Models and
Electromyography | Numerous sleep disorders are characterised by movement during sleep, these
include rapid-eye movement sleep behaviour disorder (RBD) and periodic limb
movement disorder. The process of diagnosing movement related sleep disorders
requires laborious and time-consuming visual analysis of sleep recordings. This
process involves sleep clinicians visually inspecting electromyogram (EMG)
signals to identify abnormal movements. The distribution of characteristics
that represent movement can be diverse and varied, ranging from brief moments
of tensing to violent outbursts. This study proposes a framework for automated
limb-movement detection by fusing data from two EMG sensors (from the left and
right limb) through a Dirichlet process mixture model. Several features are
extracted from 10 second mini-epochs, where each mini-epoch has been classified
as 'leg-movement' or 'no leg-movement' based on annotations of movement from
sleep clinicians. The distributions of the features from each category can be
estimated accurately using Gaussian mixture models with the Dirichlet process
as a prior. The available dataset includes 36 participants that have all been
diagnosed with RBD. The performance of this framework was evaluated by a
10-fold cross validation scheme (participant independent). The study was
compared to a random forest model and outperformed it with a mean accuracy,
sensitivity, and specificity of 94\%, 48\%, and 95\%, respectively. These
results demonstrate the ability of this framework to automate the detection of
limb movement for the potential application of assisting clinical diagnosis and
decision-making. | Navin Cooray, Zhenglin Li, Jinzhuo Wang, Christine Lo, Mahnaz Arvaneh, Mkael Symmonds, Michele Hu, Maarten De Vos, Lyudmila S Mihaylova | 2023-02-15T08:00:28Z | http://arxiv.org/abs/2302.07509v1 | # Automated Movement Detection with Dirichlet Process Mixture Models and Electromyography
###### Abstract
Numerous sleep disorders are characterised by movement during sleep, these include rapid-eye movement sleep behaviour disorder (RBD) and periodic limb movement disorder. The process of diagnosing movement related sleep disorders requires laborious and time-consuming visual analysis of sleep recordings. This process involves sleep clinicians visually inspecting electromyogram (EMG) signals to identify abnormal movements. The distribution of characteristics that represent movement can be diverse and varied, ranging from brief moments of tensing to violent outbursts. This study proposes a framework for automated limb-movement detection by fusing data from two EMG sensors (from the left and right limb) through a Dirichlet process mixture model. Several features are extracted from 10 second mini-epochs, where each mini-epoch has been classified as 'leg-movement' or 'no leg-movement' based on annotations of movement from sleep clinicians. The distributions of the features from each category can be estimated accurately using Gaussian mixture models with the Dirichlet process as a prior. The available dataset includes 36 participants that have all been diagnosed with RBD. The performance of this framework was evaluated by a 10-fold cross validation scheme (participant independent). The study was compared to a random forest model and outperformed it with a mean accuracy, sensitivity, and specificity of 94%, 48%, and 95%, respectively. These results demonstrate the ability of this framework to automate the detection of limb movement for the potential application of assisting clinical diagnosis and decision-making.
Dirichlet Process, REM sleep behaviour disorder, RBD, movement detection, Gaussian mixture model
## I Introduction
Ongoing research into sleep continues to highlight its significance to mental and physical well-being [1]. Numerous sleep disorders appear to precede the onset of neurological disorders. This includes rapid-eye movement (REM) sleep behaviour disorder (RBD), where mounting evidence suggests that this parasomnia predicts Parkinson's disease (PD) by years, potentially decades [2, 3]. This predictive ability provides an opportunity to explore preventative medicine and better understand how neurodegenerative disorders develop over time. PD is the second most prevalent neurodegenerative disease worldwide, affecting more than four million people [4]. Beyond the major impact on quality of life and increased mortality, the chronic nature and growing disability of PD incur major healthcare expenses that will only continue to escalate in countries with an ageing population [5, 6]. More work is required to understand the development of this disorder so that preventative measures can be devised. RBD represents one potentially promising early predictor for a large proportion of PD sufferers, possibly providing a clear avenue to target remedies before the onset of PD.
Characteristic muscle activity associated with RBD includes complex and simple limb movements. For sleep studies, limb movement activity is captured using EMG sensors, which are within the electrostatic categorisation of sensing technology [7]. Clinicians are taught to visually identify EMG activity without a clear and precise definition. Visually identifying muscle activity to describe limb movement is also critical in diagnosing restless leg syndrome (RLS) and periodic limb movement disorder (PLMD). RLS has been found to be one of the most common sleep disorders in the United States of America [8, 9]. One study suggests RLS and PLMD are associated with cardiovascular disease and hypertension [10], while another has found a link between secondary RLS (occurs secondary to other medical conditions) and cardiovascular disease [11].
The AASM has defined RLS as an urge to move the legs, which must begin or worsen at rest, be partially or totally relieved by movement, and occur predominantly at night [12]. These movements must not be accounted for by other conditions such as leg cramps, arthritis, or positional discomfort. PLMD is far less common; it is characterised by periodic episodes of repetitive limb movement during sleep and is distinct from RBD or RLS.
With the prevalence of sleep disorders continually increasing and the growing demand to better understand sleep and its implications for physiology (for example in RBD and RLS), the burden placed on sleep clinics is great, and their efforts are often hampered by laborious manual diagnostic procedures. As a result, researchers are keen to explore the viability of automated diagnostic support-tools to increase efficiency, accuracy, and productivity. Furthermore, automated sleep analysis provides the opportunity to better understand sleep and its association with neurodegenerative and cardiovascular diseases.
The rest of this paper is organised as follows. Section II presents an overview of related work, and Section III details the problem formulation for automated movement detection with Dirichlet process models and describes how the fusion of EMG data from the left and right legs is performed.
## II Related Work
Numerous studies aim to provide automated techniques to identify various sleep disorders, sleep stages, or even specific sleep characteristics. A select few algorithms look at automating the detection of abnormal movement during sleep using EMG signals from the chin. The AASM stipulates that at least a single EMG sensor be placed on the chin in order to clinically analyse sleep and specifically identify abnormal muscle movement [12]. Diagnosing bruxism requires evidence of teeth grinding during sleep; as such, a few studies exist detailing a portable device to detect bruxism episodes [13, 14]. These two studies focused on using a simple EMG amplitude threshold in combination with heart rate elevation (measured from an ECG sensor) to identify bruxism episodes. These studies demonstrated the predictive ability of an algorithm to aid in identifying bruxism; however, the degree of variation and complexity of sleep disorder movements means that a simple threshold would not suffice for applications in PLMD and RBD. As a result, the concept of an automated movement detection algorithm lends itself towards a non-parametric model that can incorporate numerous sensors and compensate for movement which can vary greatly in magnitude and severity. A handful of other studies demonstrate this through limb movement detection in participants with RBD and PLMD [15, 16, 17].
In one study, Cesari _et al._ (2018) demonstrated the utility of a non-parametric probabilistic model to distinguish leg-movement from resting EMG mini-epochs [15]. From a dataset containing \(27\) healthy controls and \(36\) participants diagnosed with PLMD, this study was able to utilise this semi-supervised approach to detect PLMD participants with \(82\)% accuracy. As an extension of this study, Cesari _et al._ (2019) applied this technique to a mixed cohort of \(27\) healthy controls, \(36\) individuals diagnosed with PLMD, and \(29\) participants diagnosed with RBD [17]. While these studies did not explore the performance of limb-movement detection (as manual annotations of limb movement are rare), they did validate its utility in distinguishing RBD and PLMD participants from healthy individuals. In a follow-up study, this technique was expanded to a German sleep study that was able to assess the performance of limb-movement detection through the PLMS-index [16] using three EMG sensors (from the chin, left tibia, and right tibia). This German dataset contained \(240\) participants that were healthy controls or diagnosed with combinations of PD, PLMD, and RBD [16]. Each participant was given a PLMS-index score, which details the average number of limb movements per hour of sleep. Using the aforementioned techniques [15, 17], this study demonstrated an automated classification of participants with PLMD and RBD with an accuracy of \(88.75\)% and \(84.17\)%, respectively [16]. Moreover, this study assessed the performance of limb movement detection by achieving an automated PLMS-index score that correlated with the manual score at \(84.99\)%, with only a slight bias towards over-predicting the PLMS-index [16]. However, these studies are limited in that they provide only a proxy for individual event detection of limb movement (the PLMS-index), without exploring the limb movements as seen and annotated by sleep clinicians. Nonetheless, these studies have demonstrated the utility and potential of limb movement detection in the automated identification of specific sleep disorders.
## III Problem Formulation
In a previous study, Li _et al._ [18] demonstrated the utility of a Dirichlet Process (DP) mixture model to automate the detection of sleep apnea segments, which motivated the movement detection in this study. The advantage of this framework is its data-driven approach to learning the number of clusters within the mixture models. The DP is defined as a distribution over distributions [19]. Namely, each observation \(x_{i}\) is generated from a distribution with parameter(s) \(\mathbf{\theta_{i}}\), which is itself generated from a prior distribution \(G\):
\[\mathbf{\theta_{i}}\mid G\sim G\qquad\text{for each }i, \tag{1}\] \[x_{i}\mid\mathbf{\theta_{i}}\sim F(\mathbf{\theta_{i}})\qquad\text{for each }i, \tag{2}\]
where \(F(\mathbf{\theta_{i}})\) is the distribution of \(x_{i}\) given parameter(s) \(\mathbf{\theta_{i}}\) (note that differing \(\mathbf{\theta_{i}}\)s are not necessarily distinct values).
Consider a measurable space and any finite partition \(\{T_{1},\ldots,T_{K}\}\) of it. If \(G\sim DP(\alpha,G_{0})\), then:
\[(G(T_{1}),\ldots,G(T_{K}))\sim\text{Dir}(\alpha G_{0}(T_{1}),\ldots,\alpha G_{0}(T_{K})), \tag{3}\]
where \(G_{0}\) is defined as the base distribution with a concentration parameter \(\alpha\).
The DP can be constructed by considering a unit length stick that is divided into an infinite number of segments represented by \(\pi_{k}\), in the following manner:
\[\beta_{k} \sim\text{Beta}(1,\alpha) \tag{4}\] \[\pi_{k} =\beta_{k}\prod_{j=1}^{k-1}(1-\beta_{j})=\beta_{k}\left(1-\sum_{l =1}^{k-1}\pi_{l}\right). \tag{5}\]
where \(\mathbf{\pi}=\{\pi_{k}\}_{k=1}^{\infty}\) is a sequence of mixture weights and \(k\) denotes the index of the component. Finally a DP is constructed in the following way:
\[\mathbf{\theta_{k}^{*}}\sim G_{0} \tag{6}\] \[G=\sum_{k=1}^{\infty}\pi_{k}\delta_{\mathbf{\theta_{k}^{*}}} \tag{7}\] \[G\sim\text{DP}(\alpha,G_{0}), \tag{8}\]
where \(\{\theta_{k}^{*}\}_{k=1}^{\infty}\) are independent and identically distributed (i.i.d.) random variables drawn from the base distribution \(G_{0}\) along with draws for weights (\(\pi_{k}\)) as expressed in (5).
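For concreteness, a minimal numerical sketch of the truncated stick-breaking construction of (4)-(8) is given below; the truncation level, base distribution, and parameter values are illustrative choices, not taken from this study:

```python
import numpy as np

def sample_dp_gaussian(alpha=1.0, n_samples=500, truncation=100, seed=0):
    """Draw samples from a truncated DP mixture, following Eqs. (4)-(8).

    Illustrative choices: G_0 places component means at N(0, 5^2), and
    every component is a unit-variance Gaussian.
    """
    rng = np.random.default_rng(seed)
    # Stick-breaking: beta_k ~ Beta(1, alpha), Eq. (4).
    betas = rng.beta(1.0, alpha, size=truncation)
    # pi_k = beta_k * prod_{j<k} (1 - beta_j), Eq. (5).
    sticks = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    pi = betas * sticks
    pi /= pi.sum()  # fold the truncated tail mass back in
    # Atoms theta*_k ~ G_0, Eq. (6).
    means = rng.normal(0.0, 5.0, size=truncation)
    # Cluster assignments z_i ~ pi and observations x_i ~ N(theta*_{z_i}), Eqs. (10)-(11).
    z = rng.choice(truncation, size=n_samples, p=pi)
    x = rng.normal(means[z], 1.0)
    return x, z

x, z = sample_dp_gaussian(alpha=2.0)
print(f"{len(np.unique(z))} clusters realised across {len(x)} samples")
```

Larger values of \(\alpha\) spread mass over more sticks, so more distinct clusters are typically realised.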
Consider the features extracted from the \(i\)-th segment, denoted \(x_{i}\); their distribution can be expressed as follows:
\[p(x_{i})=\sum_{k=1}^{K}\pi_{k}\mathcal{N}(x_{i};\mathbf{\theta_{k}^{*}}), \tag{9}\]
where \(\mathcal{N}(.)\) denotes the Gaussian distribution and the parameters of the \(k\)-th component are denoted by \(\mathbf{\theta_{k}^{*}}\stackrel{{\Delta}}{{=}}\{\mathbf{\mu_{k}^{*}},\mathbf{ \Sigma_{k}^{*}}\}\). The mean vector and variance matrix of the \(k\)-th Gaussian component are represented by \(\mathbf{\mu_{k}^{*}}\) and \(\mathbf{\Sigma_{k}^{*}}\), respectively.
Mixture model theory assumes that each \(x_{i}\) is generated by first choosing a cluster, indexed by an assignment variable \(z_{i}\) according to a categorical distribution of \(\mathbf{\pi}=[\pi_{1},...,\pi_{K}]\)[19, 20]. The \(x_{i}\) observations are then generated from the chosen component with the parameter \(\mathbf{\theta_{i}}=\mathbf{\theta_{z_{i}}^{*}}\). The number of components, \(K\), and the mixture weights, \(\mathbf{\pi}\), are unknown and must be inferred from the available observations. The DP framework allows this problem to be solved and, when combined with the stick-breaking process detailed above, the generative model can be described as follows:
\[z_{i} \sim\mathbf{\pi} \tag{10}\] \[x_{i} \sim\mathcal{N}(\mathbf{\theta_{z_{i}}^{*}}), \tag{11}\]
where \(\{\mathbf{\theta_{k}^{*}}\}_{k=1}^{\infty}\) are the distinct values of the parameters \(\mathbf{\theta_{i}}\), sampled independently from the base distribution \(G_{0}(\mathbf{\theta^{*}}\mid\lambda)\) (detailed in (6), where \(\lambda\) is the hyperparameter of \(G_{0}\)), and the distribution of \(\mathbf{\pi}\) is given in (5).
Suppose the parameters \(\mathbf{\theta_{k}^{*}}\)s and \(\beta_{k}\)s are denoted as \(\mathbf{\Theta}=\{\mathbf{\theta_{k}^{*}}\}_{k=1}^{\infty}\) and \(\mathbf{\beta}=\{\beta_{k}\}_{k=1}^{\infty}\), respectively. The random variables \(\mathbf{\beta}\) are drawn independently from a Beta distribution as defined in (4). Let \(\mathbf{z}=\{z_{i}\}_{i=1}^{N}\) be the cluster assignments of \(N\) training features \(\mathbf{X}=\{x_{i}\}_{i=1}^{N}\) and \(\mathbf{W}=\{\mathbf{\beta},\mathbf{\Theta},\mathbf{z}\}\) be the collection of all latent parameters. In clustering problems the predictive density is often calculated: given the features \(\mathbf{X}\) for training and a new sample \(x^{\prime}\) for testing, the probability of \(x^{\prime}\) being generated from the trained model can be expressed using the product rule:
\[p(x^{\prime}\mid\mathbf{X})\] \[=\int p(x^{\prime}\mid z^{\prime},\mathbf{W},\mathbf{X})p(z^{\prime}\mid \mathbf{W},\mathbf{X})p(\mathbf{W}\mid\mathbf{X})dz^{\prime}d\mathbf{W} \tag{12}\] \[=\int p(x^{\prime}\mid z^{\prime},\mathbf{\beta},\mathbf{\Theta},\mathbf{z}, \mathbf{X})p(z^{\prime}\mid\mathbf{\beta},\mathbf{\Theta},\mathbf{z},\mathbf{X})p(\mathbf{W}\mid\mathbf{X })dz^{\prime}d\mathbf{W}\] (13) \[=\int p(x^{\prime}\mid z^{\prime},\mathbf{\Theta})p(z^{\prime}\mid \mathbf{\beta})p(\mathbf{W}\mid\mathbf{X})dz^{\prime}d\mathbf{W}\] (14) \[=\int p(x^{\prime}\mid\mathbf{\theta_{z^{\prime}}^{*}})p(z^{\prime} \mid\mathbf{\beta})p(\mathbf{W}\mid\mathbf{X})dz^{\prime}d\mathbf{W} \tag{15}\]
where \(z^{\prime}\) is the cluster assignment of the testing data \(x^{\prime}\). From (15) we can observe that the first term, \(p(x^{\prime}\mid\mathbf{\theta_{z^{\prime}}^{*}})\), can be calculated from (9) and (11), while the second term, \(p(z^{\prime}\mid\mathbf{\beta})\), can be solved by (5) and (10). However, the last term, \(p(\mathbf{W}\mid\mathbf{X})\), is intractable but can be approximated using a variational distribution. A variational distribution is designed as a family of factorised distributions, as described by mean-field variational inference [21]:
\[q(\mathbf{W};\mathbf{\phi})=\prod_{k=1}^{K}\big{[}q(\beta_{k};\phi_{k}^{\beta})q(\mathbf{ \theta_{k}^{*}};\phi_{k}^{\theta^{*}})\big{]}\prod_{i=1}^{N}q(z_{i}) \tag{16}\]
where \(q(z_{i})\)s are categorical distributions, and \(\phi_{k}^{\beta}\) and \(\phi_{k}^{\theta^{*}}\) are parameters of the distributions \(q(\beta_{k})\) and \(q(\mathbf{\theta_{k}^{*}})\), with \(\phi_{k}=\{\phi_{k}^{\beta},\phi_{k}^{\theta^{*}}\}\). Through variational inference these parameters are updated iteratively to find a minimum; the derivation is detailed in [22]. As a result, (15) can be rewritten as:
\[p(x^{\prime}\mid\mathbf{X})= \int p(x^{\prime}\mid\mathbf{\theta_{z^{\prime}}^{*}})p(z^{\prime} \mid\mathbf{\beta})q(\mathbf{W};\mathbf{\phi})dz^{\prime}d\mathbf{W} \tag{17}\] \[= \int p(x^{\prime}\mid\mathbf{\theta_{z^{\prime}}^{*}})p(z^{\prime} \mid\mathbf{\beta})\prod_{k=1}^{K}\big{[}q(\beta_{k};\phi_{k}^{\beta})q(\mathbf{\theta_{ k}^{*}};\phi_{k}^{\theta^{*}})\big{]}\] \[\prod_{i=1}^{N}q(z_{i})\;dz^{\prime}\;d\beta\;d\mathbf{\theta^{*}}\;d \mathbf{z} \tag{18}\]
which can be calculated analytically. In this study, the DP Gaussian mixture model (DPGMM) was applied in the context of leg-movement detection in order to aid clinicians in identifying abnormal segments of sleep.
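As an illustration of how such a model can be fitted in practice, the sketch below uses scikit-learn's variational DP-GMM (`BayesianGaussianMixture` with a Dirichlet-process prior), which implements a truncated mean-field approximation in the spirit of (16)-(18). This is one standard realisation rather than necessarily the implementation used in this study, and the toy arrays stand in for the real EMG features:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def fit_dpgmm(X, max_components=20, seed=0):
    """Variational DP-GMM with a truncated stick-breaking prior, cf. Eqs. (16)-(18)."""
    model = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=500,
        random_state=seed,
    )
    return model.fit(X)

# Train one model per class: 'leg-movement' (X1) and 'no leg-movement' (X0).
rng = np.random.default_rng(0)
X1 = rng.normal(3.0, 1.0, size=(200, 2))   # toy stand-ins for real EMG features
X0 = rng.normal(0.0, 1.0, size=(800, 2))
m1, m0 = fit_dpgmm(X1), fit_dpgmm(X0)

# Classify a test mini-epoch by the log-likelihood ratio of Eq. (19), threshold c = 0.
x_test = np.array([[2.5, 2.8]])
llr = m1.score_samples(x_test) - m0.score_samples(x_test)
print("leg-movement" if llr[0] >= 0.0 else "no leg-movement")
```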
Sleep medicine in its current form demands that clinicians laboriously analyse polysomnography (PSG) recordings in order to make diagnostic decisions. These logistical bottlenecks often hinder epidemiological studies seeking to better understand the link between sleep disorders and physiology, of which RBD is just a single example. This study aims to utilise sleep recordings from RBD participants that contain annotated notes of limb-movement to assess a supervised probabilistic model of limb movement detection.
## IV Polysomnography Data
The John Radcliffe (JR) hospital retains PSG recordings as part of National Health Service (NHS) routine care for individuals suspected of having RBD. This study applied through the Clinical Trials and Research Governance (CTRG) to access anonymised case records for patients who were suspected of having RBD, which was later confirmed through these recordings. In addition to complete PSG data, these records included: age, sex, diagnosis (recorded by clinical staff) and treatment received at the time of recording. PSG recordings were anonymised by those who had authority to access the data. In total, \(36\) participants were included in the PSG recordings and are summarised in Table I. This dataset provided two nights of full PSG recordings for each participant. Please note the male bias in the dataset, which is representative of the male predominance of RBD [23]. This study complied with the requirements of the Department of Health Research Governance Framework for Health and Social Care 2005 and was approved by the Oxford University Hospitals National Health Service (NHS) Trust (HH/RA/PID 11957).
All PSG recordings include an EMG of the submentalis muscle (chin) and are annotated by sleep experts that detail
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Cohort** & **\#** & **Female** & **Male** & **Age (years)** \\ \hline RBD Participants & 36 & 2 & 34 & \(64.3\pm 7.96\) \\ \hline \end{tabular}
\end{table} TABLE I: Dataset used for this study provided from the John Radcliffe hospital.
the sleep stage for every 30 second epoch. Datasets that were annotated using the Rechtschaffen and Kales rules [24] were converted to AASM sleep stages (S3 and S4 were combined and interpreted as N3), which include wake, REM, N1, N2, and N3 [12].
Included with these recordings are annotations that provide movement descriptions along with a timestamp. The descriptors provided are inconsistent and entirely dependent on each sleep technician; they even include spelling errors. All recordings are provided with EMG electrodes placed on the left and right tibias (TIBL and TIBR, respectively). Consequently, this study focused on descriptors that detail leg movements; examples of such text are shown in bold in Table II.
## V Data Processing and Model Training
### _Signal Preprocessing_
All EMG signals from participants were re-sampled at \(256\) Hz and filtered between \(10\) and \(100\) Hz (as this is the expected EMG frequency spectrum [25]), using an \(8^{\text{th}}\)-order bandpass filter. Finally, a \(10^{\text{th}}\)-order \(50\) Hz notch filter was also used to suppress noise from the mains supply.
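A minimal preprocessing sketch with SciPy is shown below; the text does not state the filter family, so Butterworth designs are assumed here, and the narrow band-stop realisation of the \(50\) Hz notch is likewise an assumption:

```python
import numpy as np
from scipy import signal

def preprocess_emg(x, fs_in, fs_out=256):
    """Resample to 256 Hz, band-pass 10-100 Hz, and suppress 50 Hz mains noise."""
    # Resample to the common rate used in this study.
    x = signal.resample(x, int(round(len(x) * fs_out / fs_in)))
    # 8th-order band-pass, 10-100 Hz (Butterworth assumed; zero-phase filtering).
    sos_bp = signal.butter(8, [10, 100], btype="bandpass", fs=fs_out, output="sos")
    x = signal.sosfiltfilt(sos_bp, x)
    # 10th-order notch around 50 Hz, realised here as a narrow band-stop.
    sos_notch = signal.butter(10, [49, 51], btype="bandstop", fs=fs_out, output="sos")
    return signal.sosfiltfilt(sos_notch, x)

fs_raw = 512
raw = np.random.default_rng(0).normal(size=fs_raw * 30)  # 30 s of toy "EMG"
clean = preprocess_emg(raw, fs_raw)
```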
### _Movement Window Size_
While this dataset provided manual annotations of limb movements with a given time-stamp, there is no detail on the duration of the movement. The AASM states that limb movement durations vary between \(0.50\) and \(10\) seconds [12]. Motivated by a data-driven approach, this study sought to identify all unique annotations during REM sleep and to manually verify annotations that clearly describe leg movements. A distribution of absolute amplitude values \(10\) seconds before and after the annotation indicated that the majority of activity occurred, on average, from two seconds before to eight seconds after the annotated time-stamp. As a result, the features extracted in this study to detect leg-movement were computed over \(10\) second windows.
### _Feature Extraction_
From each \(10\) second window numerous features were calculated in order to train models to distinguish leg-movement from the absence of leg-movement. These include commonly used features that describe visual characteristics, such as the maximum amplitude (Amax), mean amplitude (Amean), standard deviation (Astd), variance, and the \(75^{\text{th}}\) percentile. Another popular feature used was the average power between \(10\) and \(50\) Hz, which was calculated by integrating (rectangular method) the power spectral density function. EMG energy, as described by Liang _et al_. 2012, was also extracted; it measures the mean absolute amplitude over each mini-epoch in order to quantify body movement [26]. The entropy of each mini-epoch was also calculated, which measures the variability of the distribution of the amplitude values [27]. The EMG relative spectral power (RSP) was also calculated for frequencies between \(10\)-\(12\) Hz (RSP alpha), \(12\)-\(30\) Hz (RSP beta), and \(30\)-\(40\) Hz (RSP gamma). Additional features included commonly used metrics for evaluating EMG signals to detect the absence of atonia.
These features included the spectral edge frequency, defined as the frequency below which \(95\%\) of the signal power is contained [28]. The atonia index, which has been associated with RBD identification since 2008 and was further improved in 2010 [29, 30], was also calculated for each mini-epoch. The quantified motor activity (QMA) technique was also used to extract the QMA amplitude, QMA baseline, QMA duration, and the QMA percentage from each mini-epoch. The fractal exponent was also extracted, which measures signal complexity by fitting a linear line to a double logarithmic graph of spectral power density versus frequency [31]. Our previous work has demonstrated the utility of the fractal exponent in RBD detection [32]. Finally, the manually annotated sleep stage was also added as a feature to focus the models on identifying movement during REM sleep.
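The sketch below computes a representative subset of these mini-epoch features. Where the text leaves definitions implicit, assumptions are made: the RSP bands are normalised by the total \(10\)-\(100\) Hz band power, and the atonia index, QMA metrics, and fractal exponent are omitted for brevity:

```python
import numpy as np
from scipy import signal, stats

def band_power(freqs, psd, lo, hi):
    """Integrate a PSD over [lo, hi) Hz (rectangular rule, as in the text)."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

def extract_features(epoch, fs=256):
    """A subset of the mini-epoch features described above (toy sketch)."""
    freqs, psd = signal.welch(epoch, fs=fs)
    total = band_power(freqs, psd, 10, 100)
    hist, _ = np.histogram(epoch, bins=50, density=True)
    return {
        "Amax": np.max(np.abs(epoch)),
        "Amean": np.mean(np.abs(epoch)),
        "Astd": np.std(epoch),
        "variance": np.var(epoch),
        "p75": np.percentile(np.abs(epoch), 75),
        "power_10_50": band_power(freqs, psd, 10, 50) / (50 - 10),  # average power
        "rsp_alpha": band_power(freqs, psd, 10, 12) / total,
        "rsp_beta": band_power(freqs, psd, 12, 30) / total,
        "rsp_gamma": band_power(freqs, psd, 30, 40) / total,
        "entropy": stats.entropy(hist + 1e-12),
        # Spectral edge frequency: 95% of the power lies below this frequency.
        "sef95": freqs[np.searchsorted(np.cumsum(psd), 0.95 * np.sum(psd))],
    }

epoch = np.random.default_rng(1).normal(size=256 * 10)  # one 10 s mini-epoch
print(extract_features(epoch)["sef95"])
```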
### _Feature Selection_
It was prudent to utilise feature selection algorithms to identify the most parsimonious set of features with which to train an effective leg-movement classification model. This study employed the minimum redundancy - maximum relevance (mRMR) feature selection algorithm, through the calculation of mutual information [33].
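A sketch of one common greedy mRMR variant (relevance minus mean redundancy, both estimated with scikit-learn's mutual-information estimators) is shown below; this illustrates the idea rather than reproducing the exact estimator of [33]:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr(X, y, n_select=10):
    """Greedy mRMR: maximise relevance MI(f; y) minus mean redundancy MI(f; g)."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]  # start with the most relevant feature
    while len(selected) < n_select:
        best_score, best_j = -np.inf, None
        for j in range(n_features):
            if j in selected:
                continue
            # Mean mutual information between candidate j and already-selected features.
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, k], random_state=0)[0]
                for k in selected
            ])
            score = relevance[j] - redundancy
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
    return selected

X = np.random.default_rng(0).normal(size=(300, 12))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)
print(mrmr(X, y, n_select=3))
```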
### _Classification_
This study chose a Dirichlet Process (DP) mixture model to classify leg-movements based on EMG features. This section details the DP framework and how extracted features are used
\begin{table}
\begin{tabular}{l l} \hline \hline & Descriptors \\ \hline
1. Arousal & 26. mouthing and arm movements \\
2. EVENT 6 & 27. move arms \\
3. EVENT5 & 28. move both arms \\
4. Event 11 & 29. **move foot** \\
5. Event 15 & 30. move hands \\
6. Event 16 & 31. move head \\
7. Event 17 & 32. **move head and legs** \\
8. Event 19 & 33. move head and right arm \\
9. Event 20 & 34. move head from side to side \\
10. Event 21 & 35. move left \\
11. Event 22 & 36. move left arm \\
12. Event 4 & 37. **move legs** \\
13. Event 7 & 38. move limb \\
14. Event 9 & 39. move right arm \\
15. Limb Movement & 40. moving arm \\
16. arm & 41. moving hands \\
17. arm movements & 42. moving head \\
18. event 23 & 43. shift position \\
19. fine movements of head & 44. **shift legs** \\
20. good range of jerks & 45. shift position \\
21. good range of jerks & 46. shifting limbs \\
22. hand fiddling & 47. shifting position \\
23. head moves from side to side & 48. **small twitches leading to leg jerk** \\
24. head twitch & 49. **straight legs** \\
25. **lwg switch** & 50. twitchy hands \\ \hline \hline \end{tabular}
\end{table} TABLE II: A list of descriptors detailing movement in the polysomnography recordings. Descriptors in bold were identified as leg movements based on their text.
to form two distributions, describing leg-movement and no leg-movement. These distributions are modelled by two Gaussian mixture models (GMM), with a DP as a prior. This work was inspired by the success of this classification approach in sleep apnea detection using oxygen saturation data, as detailed by Li _et al._ [18].
#### V-E1 Movement Detection from a Dirichlet Process Mixture Model
A selected number of features, as described in Section V-D, are extracted from segments that have leg-movement and no leg-movement. The classification of these segments can be analysed by comparing the probability of each segment being generated from models of 'leg-movement' and 'no leg-movement'.
The distributions of features from 'leg-movement' and 'no leg-movement' segments can be modelled by two Gaussian mixture models (GMMs), as a GMM can approximate any distribution accurately given an appropriate number of components and suitable parameters. For this study the two GMMs share the same form but are trained using different segments: those with 'leg-movement' and those with 'no leg-movement'.
Training data, \(\mathbf{X}\), consisted of features from 'leg-movements', \(\mathbf{X^{1}}=\{x_{i}^{1}\}_{i=1}^{N_{1}}\), and 'no leg-movements', \(\mathbf{X^{0}}=\{x_{i}^{0}\}_{i=1}^{N_{0}}\). The probability of testing data, \(x^{\prime}\), being generated from either model can be calculated using (18). Finally a mini-epoch can be classified as 'leg-movement' by:
\[\log\frac{p(\mathbf{x^{\prime}}\mid\mathbf{X^{1}})}{p(\mathbf{x^{\prime}}\mid\mathbf{X^{0}})} \geq c. \tag{19}\]
where \(c\) is the threshold for classification, influencing the balance of sensitivity and specificity. This was shown to be effective in a study on apnea detection [18]. While treating each limb independently may seem trivial, the literature on independent limb movement is not definitive. Studies in human locomotion have demonstrated various degrees of dependence and relative independence [34]. This is further compounded by the question of independent limb movement during sleep, but for the purposes of this study we have assumed that the limbs move independently. Therefore, features derived from the left and right limb electromyogram sensors can be considered independent sources, and the log-likelihood can be expressed as follows:
\[p(\mathbf{x^{\prime}}\mid\mathbf{X})=p(\mathbf{l^{\prime}},\mathbf{r^{\prime}}\mid\mathbf{L},\mathbf{R}) \tag{20}\] \[p(\mathbf{x^{\prime}}\mid\mathbf{X})=p(\mathbf{l^{\prime}}\mid\mathbf{L})\cdot p(\mathbf{r^{\prime}}\mid\mathbf{R}) \tag{21}\] \[\log\frac{p(\mathbf{l^{\prime}}\mid\mathbf{L^{1}})}{p(\mathbf{l^{\prime}}\mid\mathbf{L^{0}})}+\log\frac{p(\mathbf{r^{\prime}}\mid\mathbf{R^{1}})}{p(\mathbf{r^{\prime}}\mid\mathbf{R^{0}})}\geq c. \tag{22}\]
where the training data \(\mathbf{L}\) and \(\mathbf{R}\) consisted of features from the left and right limb sensors, respectively, while the testing data \(\mathbf{l^{\prime}}\) and \(\mathbf{r^{\prime}}\) are from the left and right sensors, respectively. Using cross-validation, the threshold \(c\) was optimised based on the F1-score.
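Putting (19)-(22) together, a sketch of the fused decision rule and the F1-based threshold search might look as follows; the model names, and the reuse of the `fit_dpgmm` helper sketched earlier, are illustrative:

```python
import numpy as np
from sklearn.metrics import f1_score

def fused_llr(l_feats, r_feats, models):
    """Left/right log-likelihood ratio sum of Eq. (22).

    `models` holds the four trained DP-GMMs in the order (left movement,
    left rest, right movement, right rest), e.g. fitted with `fit_dpgmm`.
    """
    m_l1, m_l0, m_r1, m_r0 = models
    return (m_l1.score_samples(l_feats) - m_l0.score_samples(l_feats)
            + m_r1.score_samples(r_feats) - m_r0.score_samples(r_feats))

def tune_threshold(llr_val, y_val, grid=None):
    """Pick the threshold c of Eq. (22) maximising F1 on validation data."""
    if grid is None:
        grid = np.linspace(llr_val.min(), llr_val.max(), 200)
    scores = [f1_score(y_val, (llr_val >= c).astype(int)) for c in grid]
    return grid[int(np.argmax(scores))]
```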
## VI Results & Discussion
Using the LEMG and REMG signals available in the PSG recordings described in Section IV, an overlay of all limb-movement annotations is detailed in Figure 1 (ten seconds before and after an annotation). From this figure we can observe that most amplitude activity occurs two seconds before and eight seconds after an annotation of leg-movement. This attribute informed the decision to extract features from 10 second mini-epochs. These features were used to train and test the DPGMM to detect mini-epochs with leg-movements through a 10-fold cross-validation scheme.
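For the participant-independent evaluation, no participant may appear in both the training and testing folds; a sketch using scikit-learn's `GroupKFold` is shown below, with toy arrays standing in for the real features and labels:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# X: feature matrix, y: 'leg-movement' labels, participant: subject ID per mini-epoch.
rng = np.random.default_rng(0)
X = rng.normal(size=(3600, 8))
y = rng.integers(0, 2, size=3600)
participant = np.repeat(np.arange(36), 100)   # 36 RBD participants

for train_idx, test_idx in GroupKFold(n_splits=10).split(X, y, groups=participant):
    # No participant appears in both the training and testing folds.
    assert not set(participant[train_idx]) & set(participant[test_idx])
    # ... fit the four DP-GMMs on train_idx and evaluate on test_idx ...
```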
The results of 'leg-movement' detection using the DPGMM are detailed in Table III, along with the classification results from a random forest model. The DPGMM provides superior precision and F1-score, but achieves a smaller sensitivity compared to the random forest model. The relatively low sensitivity might be due to the wide distribution of features for mini-epochs with and without leg-movement. As a result, the trained model becomes sensitive to mini-epochs with strong activity indicative of leg-movement but is unsuccessful at classifying mini-epochs with small segments of movement activity. Nonetheless, the DPGMM was able to achieve a mean precision of \(0.25\) and a mean specificity of \(0.95\). While this performance might not be able to identify all leg-movements, its precision and specificity might mean this technique is effective at detecting movement for the purposes of RBD identification and diagnosis. As instances of leg-movement cover a wide spectrum of EMG amplitude activity (for each episode and for every participant), this becomes the underlying cause of misclassification. The DPGMM outperforms the random forest model because it can take into account the features describing different levels of activity when estimating distributions and their Gaussian components based on training data. Furthermore, the features that optimised the DPGMM can be analysed to identify those most important for leg-movement detection. While movement during REM sleep constitutes a major criterion for diagnosing RBD, leg movement, specifically, might not be the most frequent [35, 36]. However, this application of targeting leg movement for RBD participants provides a proof-of-concept that could be applied to other limbs and sleep disorders.
During the feature selection process (part of cross-validation), the number of instances in which each feature was included ('votes') in the trained model is detailed in Figure 2 as a proxy for feature importance. There are clear similarities between the importance of left and right limb features, where the annotated sleep stage was the most important for both the left- and right-limb models. This is not surprising, as the leg-movement annotations were only identified for REM sleep, resulting in a model focused on the annotated sleep stage feature. In this study manually annotated sleep staging was
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline & **Accuracy** & **Sensitivity** & **Specificity** & **Precision** & **F1** \\ \hline
**RF** & \(0.50\pm 0.028\) & \(0.79\pm 0.12\) & \(0.50\pm 0.03\) & \(0.17\pm 0.058\) & \(0.27\pm 0.082\) \\ \hline
**DPGMM** & \(0.94\pm 0.033\) & \(0.48\pm 0.19\) & \(0.95\pm 0.037\) & \(0.25\pm 0.14\) & \(0.30\pm 0.15\) \\ \hline \end{tabular}
\end{table} TABLE III: Results of leg-movement detection using a random forest (RF) model compared to the Dirichlet process Gaussian mixture (DPGMM) model.
already provided, but sleep staging remains an arduous and time-consuming process, which would hamper any automated process to detect leg-movements and, in turn, individuals with RBD or PLMD. Additionally, important features also included the atonia index, motor activity (duration), and the fractal exponent. These features are prominent because they are able to quantify EMG activity effectively and are more robust to noise.
A visual representation of the DPGMM leg-movement detection algorithm is depicted in Figure 3. It is clear from this example that the left and right leg EMG signals provide information for detecting leg-movement. However, we can also observe how slight perturbations in the EMG signal can cause false positives, reducing the precision of the algorithm.
This study could be further validated by incorporating additional data from healthy control participants and those with other sleep disorders. Furthermore, these leg-movement detection results could provide metrics to identify individuals with specific sleep disorders such as RBD and PLMD. While annotated data for sleep movement is limited and difficult to source, the potential to explore unsupervised methods and the application of transfer learning may prove fruitful. Furthermore, the utilisation of GMMs provides the ability to perform uncertainty assessments, which would be an interesting future extension of this work. Future work might also look towards including video data [37] or utilising non-contact ultrasound Doppler sensors [38] for the purposes of leg-movement detection or more general movement detection. A further extension of this work could incorporate automatic sleep staging to avoid time-consuming and laborious manual sleep staging, providing a much more viable automated diagnostic tool.
## VII Conclusion
The proposed framework described in this study was able to effectively identify leg-movement activity in a dataset of participants diagnosed with RBD by fusing EMG sensors from the left and right limbs. To classify leg-movement mini-epochs, four GMMs are trained using features from the left and right sensors and from mini-epochs containing 'leg-movement' and 'no leg-movement'. All parameters are derived from the training data by setting the priors of the GMMs as DPs. The most important features, as determined by the mRMR feature selection algorithm, were the annotated sleep stage, atonia index, motor activity (duration), and the fractal exponent. Future work will look to utilise these models to identify participants with specific sleep disorders, while incorporating additional datasets and including other features from video data.
## Acknowledgment
The work reported here was sponsored by Research England's Connecting Capability Fund award CCF18-7157 - promoting the Internet of Things via Collaboration between HEIs and Industry (Pitch-In). This research was also supported by the Research Council UK (RCUK) Digital Economy Programme (Oxford Centre for Doctoral Training in Healthcare Innovation - grant EP/G036861/1), the Sleep and Circadian Neuroscience Institute (SCNi), the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre (BRC) and the Engineering and Physical Sciences Research Council (EPSRC - grant
Fig. 1: This figure illustrates the signal amplitude from the left and right limb electromyogram in the period ten seconds before and after a leg-movement annotation (provided by sleep clinicians).
EP/N024966/1). We are grateful to EPSRC for funding this work through EP/T013265/1 project NSF-EPSRC: "ShiRAS. Towards Safe and Reliable Autonomy in Sensor Driven" and for the support of ShiRAS by the National Science Foundation under Grant NSF ECCS 1903466. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the RCUK, SCNi, NIHR, EPSRC, or BRC.
|
2307.04806 | The Dragon-II simulations -- II. Formation mechanisms, mass, and spin of
intermediate-mass black holes in star clusters with up to 1 million stars | The processes that govern the formation of intermediate-mass black holes
(IMBHs) in dense stellar clusters are still unclear. Here, we discuss the role
of stellar mergers, star-BH interactions and accretion, as well as BH binary
(BBH) mergers in seeding and growing IMBHs in the \textsc{Dragon-II} simulation
database, a suite of 19 direct $N$-body models representing dense clusters with
up to $10^6$ stars. \textsc{Dragon-II} IMBHs have typical masses of $m_{\rm
IMBH} = (100-380)$ M$_\odot$ and relatively large spins $\chi_{\rm IMBH} >
0.6$. We find a link between the IMBH formation mechanism and the cluster
structure. In clusters denser than $3\times 10^5$ M$_\odot$ pc$^{-3}$, the
collapse of massive star collision products represents the dominant IMBH
formation process, leading to the formation of heavy IMBHs ($m_{\rm IMBH} >
200$ M$_\odot$), possibly slowly rotating, that form over times $<5$ Myr and
grow further via stellar accretion and mergers in just $<30$ Myr. BBH mergers
are the dominant IMBH formation channel in less dense clusters, for which we
find that the looser the cluster, the longer the formation time ($10-300$ Myr)
and the larger the IMBH mass, although remaining within $200$ M$_\odot$. Strong
dynamical scatterings and relativistic recoil efficiently eject all IMBHs in
\textsc{Dragon-II} clusters, suggesting that IMBHs in this type of cluster are
unlikely to grow beyond a few $10^2$ M$_\odot$. | Manuel Arca Sedda, Albrecht W. H. Kamlah, Rainer Spurzem, Francesco Paolo Rizzuto, Mirek Giersz, Thorsten Naab, Peter Berczik | 2023-07-10T18:00:56Z | http://arxiv.org/abs/2307.04806v1 | The Dragon-II simulations - II. Formation mechanisms, mass, and spin of intermediate-mass black holes in star clusters with up to 1 million stars
###### Abstract
The processes that govern the formation of intermediate-mass black holes (IMBHs) in dense stellar clusters are still unclear. Here, we discuss the role of stellar mergers, star-BH interactions and accretion, as well as BH binary (BBH) mergers in seeding and growing IMBHs in the Dragon-II simulation database, a suite of 19 direct \(N\)-body models representing dense clusters with up to \(10^{6}\) stars. Dragon-II IMBHs have typical masses of \(m_{\rm{IMBH}}=(100-380)\)\(\rm{M}_{\odot}\) and relatively large spins \(\chi_{\rm{IMBH}}>0.6\). We find a link between the IMBH formation mechanism and the cluster structure. In clusters denser than \(3\times 10^{5}\)\(\rm{M}_{\odot}\) pc\({}^{-3}\), the collapse of massive star collision products represents the dominant IMBH formation process, leading to the formation of heavy IMBHs (\(m_{\rm{IMBH}}>200\)\(\rm{M}_{\odot}\)), possibly slowly rotating, that form over times \(<5\) Myr and grow further via stellar accretion and mergers in just \(<30\) Myr. BBH mergers are the dominant IMBH formation channel in less dense clusters, for which we find that the looser the cluster, the longer the formation time (\(10-300\) Myr) and the larger the IMBH mass, although remaining within \(200\)\(\rm{M}_{\odot}\). Strong dynamical scatterings and relativistic recoil efficiently eject all IMBHs in Dragon-II clusters, suggesting that IMBHs in this type of cluster are unlikely to grow beyond a few \(10^{2}\)\(\rm{M}_{\odot}\).
keywords: methods: numerical - galaxies: star clusters: general - stars: general, black holes
## 1 Introduction
Despite great progress in observations, marked by the detection of intermediate-mass black hole (IMBH) candidates with masses as low as \(50,000\) M\({}_{\odot}\)(Chilingarian et al., 2018), and the first detection of an IMBH with mass \(\sim 150\) M\({}_{\odot}\) formed from the merger of two massive stellar BHs (named GW190521; Abbott et al., 2020), IMBHs remain elusive objects whose existence in the \(M_{\rm IMBH}=10^{2}-10^{5}\) M\({}_{\odot}\) mass range is largely debated (see recent reviews by Mezcua, 2017; Greene et al., 2020).
Several IMBH candidates have been proposed in galactic and extragalactic clusters (Lanzoni et al., 2013; Lutzgendorf et al., 2013; Maccarone and Servillat, 2008; Zocchi et al., 2019, 2016; Wrobel et al., 2018; van der Marel and Anderson, 2010; Strader et al., 2012; Lin et al., 2018; Kiziltan et al., 2017; Bash et al., 2008; Askar et al., 2017; Arca-Sedda, 2016; Abbate et al., 2018; Tiengo et al., 2022; Arca Sedda et al., 2019), but none of the explorations conducted so far has led to conclusive results, making IMBH formation one of the most intriguing puzzles of modern astronomy.
Numerical and theoretical works on IMBH formation in dense star clusters suggest that the IMBH seeding can occur via three, rather uncertain, pathways (Portegies Zwart
& McMillan, 2002; Portegies Zwart et al., 2004; Giersz et al., 2015; Metzger & Stone, 2016; Antonini & Rasio, 2016; Fragione et al., 2018; Antonini et al., 2019; Arca Sedda et al., 2020; Di Carlo et al., 2021; Gonzalez et al., 2021; Rizzuto et al., 2021, 2022b; Arca-Sedda et al., 2021c; Arca Sedda et al., 2021b): multiple stellar mergers, accretion of stellar matter onto a stellar BH, or repeated BH mergers. These mechanisms are not mutually exclusive: multiple stellar mergers can form a very massive star (VMS) that eventually collides with a stellar BH, and the collision product further grows by merging with other BHs in the cluster. These processes could explain the formation of supermassive BHs (SMBHs) in galactic nuclei (Rees, 1984). A further formation channel could be via the formation and collapse of a supermassive star, the so-called direct collapse scenario for SMBH seeding in galactic nuclei (Silk & Rees, 1998; Madau & Rees, 2001; Bromm & Loeb, 2003; Regan & Haehnelt, 2009). A similar process, aided by stellar collisions and gaseous accretion, could operate also in the most massive globular clusters, provided that they accrete a significant amount of the gas in which they are embedded at formation (Gieles et al., 2018).
The impact of multiple stellar mergers on the IMBH buildup depends in part on the possible onset of pair-instability (PISN) and pulsational pair-instability supernova (PPISN) mechanisms. Stars that develop an He core with mass in the range \(m_{\rm He}=(64-135)\) M\({}_{\odot}\) undergo PISN and explode leaving no remnant, whilst stars with \(m_{\rm He}=(32-64)\) M\({}_{\odot}\) suffer a strong mass loss owing to PPISN and leave remnants with a mass generally lighter than \(40-50\) M\({}_{\odot}\). These explosive mechanisms result in the so-called upper mass-gap, a region of the mass spectrum \(m_{\rm BH}=40-150\) M\({}_{\odot}\) where no BHs are expected. The boundaries of the upper mass-gap are rather uncertain, and depend on many details, among which are the stellar evolution model, stellar rotation, and the rate of thermonuclear reactions (Woosley & Heger, 2021; Vink et al., 2021; Stevenson et al., 2019; Farmer et al., 2019; Costa et al., 2021; Farrell et al., 2021; Tanikawa et al., 2021; Mapelli et al., 2020). Stellar mergers can actually overcome PISN and PPISN by mixing stars in different evolutionary stages, a mechanism that permits the stellar mass to increase while keeping the He core below the threshold for these explosive mechanisms to develop (see e.g. Spera et al., 2019). Stellar mergers of this type have proven to be a viable way to generate upper-mass gap BHs in star clusters and, in some cases, IMBHs (Di Carlo et al., 2021, 2020; Kremer et al., 2020; Rizzuto et al., 2021; Arca-Sedda et al., 2021c; Rizzuto et al., 2022b; Gonzalez et al., 2021; Banerjee, 2022; Costa et al., 2022; Rodriguez et al., 2022; Ballone et al., 2022; Wang et al., 2022).
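As a crude illustration of these thresholds, the helper below maps a He-core mass to its nominal fate using the ranges quoted above; the boundaries are uncertain, and the direct-collapse branch above \(135\) M\({}_{\odot}\) is standard lore rather than a statement from the text:

```python
def pair_instability_fate(m_he):
    """Crude remnant fate from the He-core mass (in Msun).

    The 32/64/135 Msun boundaries are the nominal values quoted in the text;
    they depend on rotation, reaction rates, and the stellar evolution model.
    """
    if 64.0 <= m_he <= 135.0:
        return "PISN: star disrupted, no remnant"
    if 32.0 <= m_he < 64.0:
        return "PPISN: strong mass loss, remnant below ~40-50 Msun"
    if m_he > 135.0:
        # Assumed here: photodisintegration regime, direct collapse (possible IMBH).
        return "direct collapse: possible IMBH"
    return "ordinary core collapse"

for m in (20, 40, 80, 150):
    print(m, "->", pair_instability_fate(m))
```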
Whilst there is some general consensus about the outcome of stellar mergers, also thanks to the development of detailed hydrodynamical simulations coupled with stellar evolution models (Ballone et al., 2022; Costa et al., 2022), it is still rather unclear how much of a massive star's mass can be accreted onto a stellar BH. Several works have shown that in the case of a "normal" star merging with a stellar BH, there is little accretion as most of the energy is radiated away via jets, although the mechanism is highly uncertain and likely depends on the star's structure and evolutionary stage (Guillochon & Ramirez-Ruiz, 2013; MacLeod & Ramirez-Ruiz, 2015; Cruz-Osorio & Rezzolla, 2020; Kremer et al., 2022). Hydrodynamical simulations of star-BH close interactions have shown that up to 70% of the star mass remains bound to the BH, but energy arguments suggest that even a tiny amount of accreted matter, \(O(10^{-3}-10^{-2}\) M\({}_{\odot})\), generates enough energy to evaporate the accretion disk and halt the BH growth (Kremer et al., 2022). Nonetheless, recent simulations modelling the common envelope phase of a tight star-BH binary have shown that the BH accretes the stellar core and expels the envelope, a process - possibly accompanied by a SN-like transient - that can spin-up the BH to nearly extremal values regardless of the initial spin (Schroder et al., 2020). In multiple main sequence (MS) star collisions, the merger product is expected to be characterised by a compact core and a tenuous envelope with densities as low as \(10^{-10}\) g cm\({}^{-3}\)(Glebbeek et al., 2009). Therefore, it seems reasonable to assume that a BH would eat up a significant fraction of the mass from a massive companion that underwent multiple stellar mergers. Given this, recent works parametrised the amount of accreted matter through an accretion parameter \(f_{c}=0-1\)(Banerjee, 2017; Rizzuto et al., 2021; Arca-Sedda et al., 2021c; Rizzuto et al., 2022b, a).
Repeated BH mergers can potentially build up upper mass-gap BHs and IMBHs, but their efficiency is inevitably hampered by the post-merger recoil arising from anisotropic GW emission (e.g. Campanelli et al., 2007; Lousto & Zlochower, 2008; Lousto et al., 2012), which can easily eject the post-merger product from the parent environment, especially in star clusters with velocity dispersion \(\sigma<100\) km s\({}^{-1}\)(Holley-Bockelmann et al., 2008; Fragione et al., 2022, 2018; Arca Sedda et al., 2021b; Arca-Sedda et al., 2021c; Arca Sedda et al., 2020, 2021a; Mahapatra et al., 2021). Typically, the amplitude of the kick imparted to the remnant promptly after a merger depends on the binary mass ratio and on the amplitude and direction of the component spins, and can attain values that span more than two orders of magnitude. Despite its crucial impact on post-merger dynamics, little is known about the natal spin of stellar BHs, let alone IMBHs. Observations of several high-mass X-ray binaries show that BHs in these systems are nearly maximally spinning (see e.g. Qin et al., 2019; Reynolds, 2021), while observations of GW sources suggest that merging BHs are mostly slowly rotating (\(\chi_{\rm BH}<0.5\)) (The LIGO Scientific Collaboration et al., 2021b). From the theoretical point of view, it has been suggested that the evolution of the BH stellar progenitors could significantly impact the natal spin distribution. In single stars and binaries with negligible mass transfer, efficient angular momentum transport driven by magnetic fields could trigger the formation of BHs with natal spins as small as \(\chi_{\rm BH}\lesssim 0.01\) via the Tayler-Spruit dynamo (Fuller & Ma, 2019). Significant mass transfer can, instead, substantially spin up a BH even if it is spinless at birth, possibly explaining the observed spins of BHs in Galactic low-mass X-ray binaries (\(\chi_{\rm BH}\sim 0.1-0.99\)) (Fragos & McClintock, 2015). Similarly, accretion from a BH progenitor onto a close companion in a binary and subsequent accretion from the companion onto the BH can spin up the BH in high-mass X-ray binaries, provided that the angular momentum transfer when the companion leaves the MS phase is inefficient (Qin et al., 2019; Gallegos-Garcia et al., 2022). High-mass X-ray binaries with highly spinning BHs are not expected to produce merging BHs, a feature that partly explains the dearth of highly spinning BHs in observed BH mergers (Gallegos-Garcia et al., 2022). In massive binaries undergoing both Roche lobe overflow and common envelope evolution and eventually forming a BH binary (BBH), the first-born BH
can have nearly zero spin or a spin covering a wide range, depending on the stellar prescription adopted, whilst the second BH could have nearly extremal spin (Qin et al., 2018; Bavera et al., 2020; Belczynski et al., 2020). This is likely driven by tidal synchronisation between the rotation of the BH progenitors and their mutual orbit (Kushnir et al., 2016; Hotokezaka and Piran, 2017). Nonetheless, massive binaries could also form BHs with negligible spins, provided that their progenitors lose their hydrogen envelope before undergoing a SN (Bavera et al., 2020; Belczynski et al., 2020). In the case of BHs formed from star-BH mergers, instead, it has been shown that the accretion of the stellar core onto the BH can spin up the BH to extremal values (Schroder et al., 2020). The aforementioned scenarios for BH natal spins can have a significant impact on the properties of IMBHs, depending on their formation mechanism. An IMBH formed via a star-BH merger, for example, could be characterised by a large spin, while one formed via the collapse of a very massive star (VMS) could have negligible spin.
Stellar mergers, star-BH interactions, and BBH mergers can also have an impact on the formation of BHs in the upper mass-gap. In the first three observation runs, the LIGO-Virgo-KAGRA collaboration (LVC) revolutionised our knowledge of BHs, proving the existence of BHs in and beyond the upper mass-gap. The most updated GW transient catalog (GWTC-3) contains 85 sources associated with the merger of two BHs with masses above \(m_{\rm BH}=3\) M\({}_{\odot}\) (The LIGO Scientific Collaboration et al., 2021, 2021). Around one-third of them (27) have one component above \(m_{\rm BH}>40.5\) M\({}_{\odot}\), and 8 of them have one component heavier than \(m_{\rm BH}>65\) M\({}_{\odot}\), i.e. two proposed lower limits for the PISN mass-gap (Belczynski et al., 2016; Spera and Mapelli, 2017). Moreover, 8 sources have a remnant mass \(m_{\rm BH,rem}>100\) M\({}_{\odot}\), 3 of which exceed the IMBH threshold at the 95% confidence level. With the forthcoming fourth observation run (O4), the LVC collaboration will possibly detect a further 30-150 merging events, thus future detections will provide further insights into the development of BH mergers involving upper mass-gap BHs.
In this work, we discuss the formation of IMBHs and upper mass-gap BHs in the Dragon-II star cluster database, a suite of 19 direct \(N\)-body simulations of star clusters comprised of up to 1 million stars and up to 33% of stars initially in binaries (details about these models are discussed in our companion paper, Arca Sedda et al in prep), performed with the Nbody6++GPU code1(Wang et al., 2015; Kamlah et al., 2022).
Footnote 1: [https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing](https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing)
The paper is organised as follows: in Section 2 we briefly summarise the main features of our models; Section 3 describes how IMBHs form in Dragon-II simulations and the impact of the different formation channels; Section 4 discusses the impact of Newtonian and relativistic dynamics on the mass and spin of IMBHs in dense star clusters; Section 5 summarises the main results of the work.
## 2 Numerical Methods
### Modelling Dragon-II clusters with the Nbody6++GPU code
All Dragon-II clusters are represented by King (1966) models with a dimensionless potential well \(W_{0}=6\), a number of stars \(N=(120-300-600)\times 10^{3}\), and an initial half-mass radius of either \(R_{\rm HM}=0.47,~{}0.80,~{}1.75\) pc. As described in the first paper of the series (Arca Sedda et al., subm., hereafter paper AS-I), this choice is compatible with observations of several Galactic young massive clusters and produces cluster models that broadly match the observed masses and half-mass radii of dense clusters in the Magellanic Clouds (see Figure 2 in paper AS-I). For all models we adopt a binary fraction \(f_{b}=0.2\)2, defined as the number of binaries normalised to the sum of the number of single stars and binary pairs. For models with \(R_{\rm HM}=1.75\) pc, we run an additional series of models where we adopt \(f_{b}=0.05\) and \(N=(120-300-1,000)\times 10^{3}\). All clusters have the same metallicity, \(Z=0.0005\), a value consistent with the metallicity of several globular clusters in the Milky Way that may host a substantial population of BHs (Arca Sedda et al., 2018; Askar et al., 2018; Weatherford et al., 2020). The reduced computational cost of modelling a smaller number of binaries permitted us to increase the total number of stars to one million, which is the largest number of stars and binaries ever simulated for realistic star cluster models with a direct \(N\)-body code (Wang et al., 2016).
Footnote 2: Note that the binary fraction is defined as \(f_{b}=n_{b}/(n_{s}+n_{b})\), where \(n_{b}\) is the number of binaries. This implies that the fraction of stars initially in binary systems is \(f_{2b}=2f_{b}/(1+f_{b})=0.10-0.33\), with \(f_{b}=0.05,0.2\).
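The conversion in the footnote between \(f_{b}\) and the fraction of stars in binaries is elementary arithmetic; a quick check in Python:

```python
for f_b in (0.05, 0.2):
    f_2b = 2 * f_b / (1 + f_b)
    print(f"f_b = {f_b:.2f}  ->  fraction of stars in binaries f_2b = {f_2b:.2f}")
# f_b = 0.05  ->  f_2b = 0.10
# f_b = 0.20  ->  f_2b = 0.33
```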
All clusters have been initialised with the McLuster code (Kupper et al., 2011; Kamlah et al., 2022; Leveque et al., 2022; Leveque et al., in prep), adopting a Kroupa (2001) initial mass function limited between 0.08 M\({}_{\odot}\) and 150 M\({}_{\odot}\). Binary eccentricities are drawn from a thermal distribution, whilst semimajor axes follow a distribution flat in logarithmic values, limited between the sum of the stellar radii and 50 AU (Wang et al., 2015; Kamlah et al., 2022). Binary components are paired according to a uniform mass ratio distribution if their mass exceeds \(m_{*}>5\) M\({}_{\odot}\), whilst lighter stars are paired randomly (Kiminki and Kobulnicky, 2012; Sana et al., 2012; Kobulnicky et al., 2014). All clusters are assumed to move on a circular orbit 13.3 kpc away from the centre of a galaxy with total mass \(1.78\times 10^{11}\) M\({}_{\odot}\), assuming for the galaxy a Keplerian gravitational potential. Note that this choice of parameters is such that the circular velocity at the adopted distance is similar to the one observed in the Milky Way. This implies that all Dragon-II clusters are initially well contained inside their Roche lobe, thus the galactic field has little effect on the cluster structural evolution. In all cases but one, we ran two different realisations of each cluster to reduce the impact of statistical fluctuations. Table 1 summarises the main properties of Dragon-II clusters, namely the initial parameters of the clusters, the simulated time \(T_{\rm sim}\), the number of merging compact objects inside the cluster or after their ejection, the absolute maximum mass attained by BHs and the maximum BH mass at the end of the simulation, and the number of BHs with a mass above 30 M\({}_{\odot}\) or 40 M\({}_{\odot}\).
For each set of initial conditions, we provide numbers for each independent realisation.
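For illustration, the sketch below samples binary orbital elements following the recipes just described (thermal eccentricities, i.e. \(f(e)=2e\), and log-flat semimajor axes); the lower limit `a_min`, standing in for the sum of the stellar radii, is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_binaries(n, a_min=0.05, a_max=50.0):
    """Sketch of the orbital-element sampling described in the text.
    a_min (AU) stands in for the sum of the stellar radii (assumed value)."""
    e = np.sqrt(rng.uniform(0.0, 1.0, n))                      # thermal distribution, f(e) = 2e
    a = np.exp(rng.uniform(np.log(a_min), np.log(a_max), n))   # flat in log(a) up to 50 AU
    return e, a

ecc, sma = sample_binaries(5)
print(np.round(ecc, 2), np.round(sma, 2))
```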
The simulations have been performed with the Nbody6++GPU code, a state-of-the-art direct \(N\)-body integrator that exploits GPU-accelerated high-performance supercomputing (Spurzem, 1999; Nitadori and Aarseth, 2012; Wang et al., 2015; Kamlah et al., 2022). The current version of the code follows in the footsteps of a 50-year-old tradition initiated by Sverre Aarseth (Aarseth et al., 1974; Spurzem, 1999; Aarseth, 1999, 2003; Aarseth et al., 2008; Nitadori and Aarseth, 2012; Wang et al., 2015; Kamlah et al., 2022).
The code exploits a 4th-order Hermite integrator with individual block time steps (McMillan, 1986; Hut et al., 1995) and implements a dedicated treatment of close encounters and few-body dynamics based on the Kustaanheimo-Stiefel (KS) regularisation (Stiefel and Kustaanheimo, 1965), the Ahmad-Cohen (AC) neighbour scheme (Ahmad and Cohen, 1973), and algorithmic chain regularisation (Mikkola and Tanikawa, 1999; Mikkola and Merritt, 2008), which permits resolving the evolution of binaries with periods \(10^{-10}\) times smaller than the typical dynamical timescales of star clusters.
Recently, a series of improvements have been introduced in the code to treat the formation and merger of relativistic binaries (Rizzuto et al., 2021) and to quantify the fraction of stellar matter that can be fed to a stellar BH in binary systems or star-BH collisions (Rizzuto et al., 2022). Stars in Dragon-II clusters are evolved self-consistently from the zero-age main sequence with the BSE code (Hurley et al., 2002), conveniently updated to feature state-of-the-art recipes for the evolution of massive stars, the mass spectrum and natal kicks of BHs and NSs, and the physics of (P)PISN (for a detailed description of stellar evolution in Nbody6++GPU, see Banerjee et al., 2020; Banerjee, 2021; Kamlah et al., 2022). In this work, we use the so-called level-B of stellar evolution (Kamlah et al., 2022, Arca Sedda et al., in prep).
A series of major upgrades of the code is described in recent papers (Rizzuto et al., 2021; Kamlah et al., 2022; Rizzuto et al., 2022).

## 3 The formation of IMBHs in Dragon-II simulations
### 3.1 Formation channels and formation times
Despite the relatively small database, our Dragon-II models support the formation of IMBHs via all three main channels, complementing previous works (Portegies Zwart & McMillan, 2002; Portegies Zwart et al., 2004; Giersz et al., 2015; Rizzuto et al., 2021; Gonzalez et al., 2021; Rizzuto et al., 2022; Maliszewski et al., 2022).
To give the reader a clearer idea of how IMBHs form in Dragon-II clusters, we describe below two examples extracted from our simulations.
In the first example, an IMBH with final mass \(m_{\rm IMBH}=350\) M\({}_{\odot}\) forms in a cluster with \(N=120\)k stars, half-mass radius \(R_{\rm HM}=0.47\)pc, and binary fraction \(f_{b}=0.2\). The IMBH formation sequence is sketched in Figure 1.
Initially, a primordial binary with component masses \(m_{p1,p2}=(132+99)\) M\({}_{\odot}\) undergoes a series of strong interactions with a single MS star of mass \(m_{s}=133\) M\({}_{\odot}\) within the first Myr of cluster evolution. The triple formed this way undergoes both a phase of resonant interactions, with an exchange between the binary secondary and the third star, and a phase of hierarchical evolution, until the third body and the companion merge, leaving behind a new binary with component masses \(m_{p1,ps}=(132+231)\) M\({}_{\odot}\), eccentricity \(e\sim 0.001\), and semimajor axis \(a\simeq 225\) R\({}_{\odot}\). After 1.8 Myr, the binary captures a massive companion with mass \(m_{3}=115\) M\({}_{\odot}\) that induces the collision of the two massive stars, eventually leaving behind a VMS with mass \(m_{\rm VMS}=360\) M\({}_{\odot}\), which forms a binary with \(m_{3}\). The two binary components merge during the Hertzsprung-gap (HG) phase of the primary, leading to the formation of a VMS with total mass \(m_{\rm VMS}=365\) M\({}_{\odot}\). After capturing via a hyperbolic collision a small MS star (\(\sim 0.7\) M\({}_{\odot}\)) during the core He burning phase, the VMS collapses to a BH with final mass \(m_{\rm IMBH,1}=288\) M\({}_{\odot}\) over a total time of \(T_{\rm sim}=2.5\) Myr. Within the subsequent 4 Myr, the newborn IMBH collides with another massive MS star with mass \(m_{\rm MS}=122\) M\({}_{\odot}\), accreting a fraction \(f_{c}=0.5\) of its total mass and reaching a final IMBH mass of \(m_{\rm IMBH}\simeq 350\) M\({}_{\odot}\). This case represents a clear example of how different formation channels, in this case stellar and star-BH mergers, contribute to the IMBH seeding and growth.
In the second example, instead, an IMBH with mass \(m_{\rm IMBH}=191\) M\({}_{\odot}\) forms from the coalescence of two nearly equal mass BHs. As sketched in Figure 2, the two BHs, with masses \(\sim 95\) M\({}_{\odot}\), form from the evolution of two initially independent primordial binaries. After formation, the two BHs are part of different binaries and undergo many binary-single and binary-binary interactions before finding each other and merging after a time of \(\sim 10^{2}\) Myr.
#### 3.1.1 Stellar mergers
In Dragon-II models we find in total 104 stellar mergers with a merger remnant heavier than \(m_{\rm VMS}>90\) M\({}_{\odot}\), 75% of them involving primordial binaries. The typical merger product is a star with mass in the range \(m_{\rm VMS}=100-350\) M\({}_{\odot}\). In some cases, the same star undergoes 3-4 merging events with stars in different evolutionary phases. Figure 3 shows the post-merger mass as a function of the time at which the merger occurs for all simulations. The plot shows exclusively star-star coalescences, thus it excludes both star-BH and BH-BH merging events. Around 48% of stellar mergers produce a massive MS star, 32% produce a star in the HG, and a core He burning star results in the remaining 22% of cases. The formation of a VMS (\(m_{\rm VMS}>150\) M\({}_{\odot}\)) eventually leads to either no remnant owing to PISN (\(\sim 23\) cases), a remnant with mass \(m_{\rm BH}=40.5\) M\({}_{\odot}\) owing to PPISN (\(\sim 64\) cases), or an IMBH (\(\sim 2\) cases).
Comparing models with the same \(R_{\rm HM}\) and different binary fractions, we find that models with \(f_{b}=0.2\) host a number of mergers 2-5 times larger than those with \(f_{b}=0.05\), a reflection of the fact that most of the mergers involve primordial binaries.
Notably, the two IMBHs form in the densest simulated clusters, i.e. those with \(R_{\rm HM}=0.47\) pc and \(N=(1.2-3)\times 10^{5}\), which are also those with the shortest mass-segregation time (\(T_{\rm seg}\sim 0.3-0.4\) Myr), much shorter than the typical BH formation time (\(>2\) Myr).
#### 3.1.2 Star-black hole collisions
| \(N_{*}\) [\(10^{3}\)] | \(M_{\rm c}\) [\(10^{5}\) M\({}_{\odot}\)] | \(R_{h}\) [pc] | \(f_{b}\) | \(N_{\rm sim}\) | \(T_{\rm rlx}\) [Myr] | \(T_{\rm seg}\) [Myr] | \(T_{\rm sim}\) [Myr] | \(N_{\rm GW,in}\) | \(N_{\rm GW,out}\) | \(M_{\rm max}\) [M\({}_{\odot}\)] | \(M_{\rm max,fin}\) [M\({}_{\odot}\)] | \(N_{>30}\) | \(N_{>40}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 120 | 0.7 | 1.75 | 0.05 | 2 | 99 | 2.1 | 2379 / 2326 | 0 / 2 | 2 / 0 | 64 / 76 | 25 / 34 | 0 / 2 | 0 / 0 |
| 300 | 1.8 | 1.75 | 0.05 | 2 | 142 | 2.7 | 1196 / 1422 | 0 / 2 | 2 / 2 | 69 / 77 | 40 / 40 | 13 / 13 | 5 / 1 |
| 1000 | 5.9 | 1.75 | 0.05 | 2 | 233 | 4.5 | 207 / 194 | 1 / 1 | 4 / 4 | 81 / 146 | 52 / 70 | 149 / 169 | 72 / 85 |
| 120 | 0.7 | 1.75 | 0.2 | 2 | 99 | 2.1 | 1710 / 1540 | 2 / 2 | 0 / 2 | 232 / 81 | 38 / 28 | 2 / 0 | 0 / 0 |
| 300 | 1.7 | 1.75 | 0.2 | 2 | 142 | 2.7 | 519 / 793 | 1 / 0 | 7 / 5 | 92 / 77 | 65 / 47 | 26 / 26 | 8 / 14 |
| 600 | 3.5 | 1.75 | 0.2 | 2 | 189 | 3.4 | 205 / 126 | 0 / 0 | 2 / 5 | 87 / 144 | 59 / 84 | 95 / 103 | 45 / 65 |
| 120 | 0.7 | 0.80 | 0.2 | 2 | 30 | 0.7 | 1154 / 1201 | 4 / 3 | 4 / 2 | 120 / 132 | 21 / 27 | 0 / 0 | 0 / 0 |
| 300 | 1.7 | 0.80 | 0.2 | 2 | 44 | 0.8 | 307 / 309 | 1 / 0 | 1 / 0 | 93 / 107 | 40 / 43 | 15 / 11 | 2 / 2 |
| 120 | 0.7 | 0.47 | 0.2 | 2 | 14 | 0.3 | 1149 / 530 | 2 / 2 | 3 / 1 | 350 / 92 | 50 / 30 | 1 / 0 | 1 / 0 |
| 300 | 1.7 | 0.47 | 0.2 | 1 | 20 | 0.4 | 148 | 4 | 3 | 245 | 48 | 22 | 9 |

Table 1: Col. 1-4: initial number of stars, cluster mass, half-mass radius, and primordial binary fraction. Col. 5: number of independent realisations. Col. 6-7: initial relaxation and segregation time. Col. 8: simulated time. Col. 9-10: number of compact-object mergers inside and outside the cluster. Col. 11: maximum BH mass during the simulation. Col. 12: maximum BH mass at the end of the simulation. Col. 13-14: number of BHs with a mass \(m_{\rm BH}>30\) M\({}_{\odot}\) or \(>40\) M\({}_{\odot}\) at the last simulation snapshot. Where two values are given (x / y), they refer to the two independent realisations.

Among all simulations, we find 454 star-BH merger events, the vast majority of which (72%) lead to the formation of BHs with a final mass \(m_{\rm BH}<40.5\) M\({}_{\odot}\); these remain mixed with the population of "ordinary" BHs that never experienced stellar accretion episodes. The remaining mergers leave behind, instead, BHs with masses falling in the upper mass-gap. In more detail, around 18% of these events trigger the formation of a final BH with a mass in the
range \(40.5<m_{\rm BH}/{\rm M}_{\odot}<60\), 6% form BHs with masses in the range \(60<m_{\rm BH}/{\rm M}_{\odot}<70\), and the remaining \(\sim 4\%\) produce BHs heavier than \(m_{\rm BH}>70\) M\({}_{\odot}\). Stars involved in a star-BH merger are in different evolutionary stages: HG (40.1%), core He burning (45.2%), MS (5.5%), early/late asymptotic giant branch (AGB, 9%), giant branch (GB, 1.1%), and HG naked He star (0.2%).
Note that there are two different types of star-BH accretion events: one purely dynamical and one induced by stellar evolution. In the purely dynamical case, there are two possibilities: either the BH captures a MS star in an orbit such that the star fills its Roche lobe, or the orbit is sufficiently tight and eccentric that the BH crashes onto the star. In either case, the BH accretes a fraction \(f_{c}\) of the star mass. In the stellar evolution-driven case, instead, the star fills its Roche lobe, mainly when inflating during the HG or core He burning phase. Even in such a case, though, Nbody6++GPU assumes that the BH eats up a fraction \(f_{c}\) of the star mass. Therefore, the stellar type is likely the parameter that best distinguishes the two types of star-BH accretion/merger events.
Figure 4 shows the mass distributions of the merging stars and of the BHs before/after the merger, as well as the stellar types of the stars involved in the process.
Figure 1: Formation of an IMBH in the simulation with \(N=120\)k, \(R_{\rm HM}=0.47\) pc, and \(f_{b}=0.2\), realisation ID 0. A sequence of massive star mergers triggers the formation of a very massive star (VMS) with mass \(m_{\rm VMS}=365\) M\({}_{\odot}\) that directly collapses to an IMBH. The IMBH later accretes matter from another massive main sequence star, reaching a final mass of \(m_{\rm IMBH}=350\) M\({}_{\odot}\). Different colors correspond to different evolutionary stages: main sequence (MS), common envelope (CE), naked main sequence He star (MSHe), Hertzsprung gap (HG), core He burning (cHeb), and black hole (BH).

Figure 2: Formation of an IMBH in the simulation with \(N=120\)k, \(R_{\rm HM}=1.75\) pc, and \(f_{b}=0.2\), realisation ID 0. Two massive primordial binaries undergo common envelope evolution, eventually leading to the formation of two nearly equal mass BHs (\(m_{\rm BH}\sim 95\) M\({}_{\odot}\)) that find each other via a complex series of binary-binary interactions. The binary eventually merges and builds up an IMBH with mass \(m_{\rm IMBH}\simeq 191\) M\({}_{\odot}\). The color-coded legend is the same as in Figure 1.

Two events contribute to IMBH seeding or growth. One of them involves a \(m_{\rm BH}=40.5\) M\({}_{\odot}\) BH that accretes a core He burning star with mass \(m_{\rm VMS}=133\) M\({}_{\odot}\), previously formed via a complex sequence of stellar mergers triggered by binary-binary and binary-single interactions. In this case, the IMBH mass is \(m_{\rm IMBH}=107\) M\({}_{\odot}\). The second event, which we do not show in the histogram to preserve its readability, involves an IMBH with mass \(m_{\rm IMBH}=288\) M\({}_{\odot}\) and a MS star with mass \(m_{\rm t}\simeq 122\) M\({}_{\odot}\). None of the other interactions leads to the formation of an IMBH, partly owing to our choice of setting the accretion factor to \(f_{c}=0.5\). Adopting \(f_{c}=1\) would have led to an additional population of \(\sim 20\) IMBHs with masses at formation in the range \(m_{\rm IMBH}=100-160\) M\({}_{\odot}\).
#### 3.1.3 Black hole mergers
The remaining 5 IMBHs in Dragon-II clusters form via BH-BH mergers, all involving upper mass-gap BHs. This highlights the fundamental impact of star-BH accretion events, because they are the main channel through which mass-gap BHs form. Interestingly, all the BH mergers involved in the IMBH buildup have progenitor stars originally in a primordial binary, thus highlighting the crucial role of binary dynamics in the IMBH formation process. At formation, these 5 IMBHs have masses in the range \(m_{\rm IMBH}\simeq(140-323)\) M\({}_{\odot}\) and, in the case of negligible GW recoil, further increase up to \(m_{\rm IMBH}\simeq(160-260)\) M\({}_{\odot}\) via one or two repeated (hierarchical) merger events after being dynamically ejected from the cluster. In the case of zero GW recoil, among all IMBHs in Dragon-II models, only one is ejected from the cluster as a single object. All the others are ejected with a companion and undergo a merger within a Hubble time. In two cases, the IMBH undergoes two/three mergers inside the cluster and forms a binary with another BH that is eventually ejected from the cluster, merging in the field within a Hubble time.
#### 3.1.4 The link between formation channels, formation times, and the intermediate-mass black hole mass
Although our sample is rather small, the fact that Dragon-II IMBHs form via all the proposed formation channels can help provide a possible answer to the intriguing question: "Is there a link between the IMBH seeding process and the environment in which it happens?"
Figure 5 shows the IMBH mass as a function of time for the different formation channels, from the first time the IMBH mass exceeds \(10^{2}\) M\({}_{\odot}\) until the first BH merger event develops. In other words, we exclude from the plot IMBHs beyond the second generation (2g), because GW recoil drastically reduces the probability of multiple-generation mergers, as discussed in Section 4.1.
The plot suggests a striking relation between the structure of the host cluster and the IMBH formation process. The densest clusters (\(\rho_{\rm cl}>3\times 10^{5}\) M\({}_{\odot}\) pc\({}^{-3}\)) favour the formation of IMBHs via stellar collisions on short timescales (\(<10\) Myr) and nurture the most massive IMBHs in our sample. IMBHs in these clusters further grow via accretion of stellar material and coalescence with stellar BHs on timescales \(<100\) Myr (see also Maliszewski et al., 2022). In lower density clusters, instead, IMBHs form on longer timescales (\(10-300\) Myr) via star-BH accretion and BBH mergers. In this case, Figure 5 clearly shows a trend: the looser the cluster, the longer the formation time and the heavier the IMBH seed mass.
This difference may be related to the core-collapse process, a mechanism driven by mass segregation and relaxation through which the cluster core contracts and its density increases up to a maximum. The time at which core-collapse occurs is generally a fraction of the relaxation time, \(t_{\rm cc}=0.2T_{\rm rlx}\)(Portegies Zwart & McMillan, 2002; Fujii & Portegies Zwart, 2014). We find that in clusters with an initial density \(>3\times 10^{5}\) M\({}_{\odot}\) pc\({}^{-3}\) core-collapse occurs before stellar BHs form or massive stars undergo PISN and PPISN, i.e. before \(t_{\rm BH}\sim 4\) Myr. This supports the idea that core-collapse facilitates the collision of massive stars before they collapse to BHs or undergo PISN.
In the case of clusters less dense than \(3\times 10^{5}\) M\({}_{\odot}\) pc\({}^{-3}\), we also note that the smaller the density, the larger the IMBH mass. This may be due to the fact that in low-density clusters, where interactions are less energetic and less frequent, the ejection of the most massive BHs via the so-called BH-burning process (e.g. Breen & Heggie, 2013; Arca Sedda et al., 2018; Kremer et al., 2020) is less effective. As a consequence, the heaviest BHs in the loosest clusters in our sample have more time to hang around in the cluster and pair up, as in the case of model IBH_Rh1.75f20N120k.
## 4 Discussion
### 4.1 Newtonian versus relativistic dynamics: intermediate-mass black hole retention and hierarchical merger frequency
In this work, we want to assess the competing roles of Newtonian and relativistic dynamics in determining BH retention and IMBH seeding and growth. We thus adopt the following multi-step procedure: a) run all cluster simulations assuming zero GW recoil, to verify the possible development of multiple mergers and quantify the impact of Newtonian dynamics on the retention of BH merger remnants; b) quantify the retention probability of remnant BHs; c) re-run the models in which BHs undergo repeated mergers with GW recoil enabled.
Figure 3: Formation time and final mass of VMSs formed in our models. We distinguish between VMSs turning into an IMBH via direct collapse (black points), forming a "normal" BH (blue points), exploding via PPISN (red points), or via PISN, the latter leaving no remnant (white points).
#### 4.1.1 Newtonian dynamics
Regardless of the formation scenario, an IMBH seed that is retained in its parent cluster upon formation will likely undergo mass segregation and quickly settle in the cluster centre, possibly capturing a companion (e.g. MacLeod et al., 2016; Arca Sedda et al., 2019). The newly formed binary will undergo frequent interactions with surrounding cluster members of mass \(m_{p}\), at a rate
\[\dot{n}_{2-1}\sim n\sigma\pi a^{2}(1-e)^{2}\left(1+\frac{2G(m_{1}+m_{2}+m_{p}) }{a(1-e)\sigma^{2}}\right), \tag{3}\]
where \(n\) is the cluster number density, \(\sigma\) the velocity dispersion, \(m_{1,2}\) the masses of the binary components, and \(a\) the binary semimajor axis. If the binary is hard, i.e. \(a\ll 2G(m_{1}+m_{2})/\sigma^{2}\), or highly eccentric, the timescale for these interactions is roughly given by
\[t_{2-1}\sim 6\,{\rm Myr}\left(\frac{n}{10^{5}\,{\rm pc}^{-3}}\right)^{-1}\left(\frac{\sigma}{20\,{\rm km~s}^{-1}}\right)\left(\frac{m_{1}+m_{2}+m_{p}}{240~{\rm M}_{\odot}}\right)^{-1}\left(\frac{a}{1\,{\rm AU}}\right)^{-1}(1-e), \tag{4}\]
which is much shorter than the typical cluster lifetime. Repeated binary-single interactions can have an important effect on the binary evolution: on the one hand, they can extract orbital energy and harden the binary (_Heggie's law_, Heggie, 1975); on the other hand, they can become violent enough to eject the binary from the cluster, halting the IMBH growth (Maliszewski et al., 2022).
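Equation (4) is straightforward to evaluate numerically; the minimal sketch below reproduces the quoted \(\sim 6\) Myr normalisation for the fiducial values (we keep the \((1-e)\) scaling exactly as printed).

```python
def t_21_myr(n=1e5, sigma=20.0, m_tot=240.0, a_au=1.0, e=0.0):
    """Binary-single interaction timescale of Eq. (4), in Myr.
    n in pc^-3, sigma in km/s, m_tot = m1 + m2 + mp in Msun, a in AU."""
    return 6.0 * (1e5 / n) * (sigma / 20.0) * (240.0 / m_tot) * (1.0 / a_au) * (1.0 - e)

print(t_21_myr())  # ~6 Myr for the fiducial values quoted in the text
```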
The typical escape velocity of clusters described by a King (1966) model can be conveniently expressed as (see e.g. Fragione et al., 2022)
\[v_{\mathrm{esc}}=2\sqrt{\log(1/c)/\pi}\,(1-c)^{-1/2}\left(\frac{GM}{R_{\mathrm{ HM}}}\right)^{1/2}, \tag{5}\]
where \(c=R_{c}/R_{\mathrm{HM}}\) is the ratio between the core and half-mass radius of the cluster. In Dragon-II models, we find that such parameter attains values \(c=0.2\pm 0.1\) within the whole simulation time and regardless of the initial conditions.
Figure 4: Left panel: mass distribution of BHs (grey filled steps) and stars (red filled steps) involved in a star-BH merger compared to the final BH mass distribution (black open steps). Right panel: stellar type of stars involved in a star-BH merger.
Figure 5: Top panel: Time and mass of IMBHs at formation, i.e. when the IMBH mass exceeds \(10^{2}~{}\mathrm{M}_{\odot}\) for the first time. We distinguish between IMBHs forming via stellar collisions (VMS, squares), binary BH mergers (BH-BH, circles), and star-BH accretion (BH-MS, diamonds). In two cases, the IMBH accretes stellar material from a massive star after seeding. The consequent mass increase is highlighted with dashed lines. The color coding identifies the initial density of the host cluster. Only 1g-IMBHs are considered here.
Therefore, the escape velocity can be rewritten as
\[v_{\rm esc}=(34\pm 3){\rm km/s}\left(\frac{M}{10^{5}~{}{\rm M}_{\odot}}\right)^{1/ 2}\left(\frac{R_{\rm HM}}{1{\rm pc}}\right)^{-1/2}. \tag{6}\]
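As a quick numerical check, evaluating Eq. (5) with \(c=0.2\pm 0.1\) reproduces the normalisation and uncertainty of Eq. (6):

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def v_esc(M, R_hm, c=0.2):
    """King-model escape velocity of Eq. (5); M in Msun, R_hm in pc, output in km/s."""
    return 2.0 * np.sqrt(np.log(1.0 / c) / np.pi) * (1.0 - c) ** -0.5 * np.sqrt(G * M / R_hm)

print(v_esc(1e5, 1.0))                              # ~33 km/s, the normalisation of Eq. (6)
print(v_esc(1e5, 1.0, 0.1), v_esc(1e5, 1.0, 0.3))   # ~37 and ~31 km/s: the +/-3 km/s spread for c = 0.2 +/- 0.1
```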
In all Dragon-II clusters the escape velocity remains below \(v_{\rm esc}<50\) km/s, with the loosest and smallest clusters attaining values in the \(8-20\) km/s range. This relatively small escape velocity has huge implications for the IMBH evolution. In fact, even when GW recoil is not taken into account, all Dragon-II IMBHs are ejected from the parent cluster after a violent interaction with a perturber.
A clear example is a simulation with \(N=300\)k, \(R_{\rm HM}=0.47\) pc, and \(f_{b}=0.2\), in which a binary with mass \(m_{1}+m_{2}=(240+38)\) M\({}_{\odot}\) undergoes a strong scattering with a BH of mass \(m_{p}=44\) M\({}_{\odot}\), which reduces the binary semimajor axis from \(a=0.35\) AU to \(a_{\rm fin}=0.24\) AU and imparts to the binary a recoil with amplitude \(v_{\rm rec}=85\) km s\({}^{-1}\). From a theoretical standpoint, a binary undergoing a close interaction with a perturber of mass \(m_{p}\) and consequently shrinking from \(a\) to \(a_{\rm fin}\) receives a kick (Heggie, 1975; Heggie and Hut, 1993; Sigurdsson and Phinney, 1993; Goodman and Hut, 1993; Quinlan, 1996; Antonini and Rasio, 2016; Maliszewski et al., 2022)
\[v_{\rm rec} =\left[\frac{Gm_{1}m_{2}}{a_{\rm fin}(m_{1}+m_{2})}\frac{m_{p}}{m_{1}+m_{2}+m_{p}}\left(1-\frac{a_{\rm fin}}{a}\right)\right]^{1/2}=37.1~{\rm km/s}\left(\frac{\mu}{26~{\rm M}_{\odot}}\right)^{1/2}\left(\frac{q_{p}}{0.12}\right)^{1/2}\left(\frac{a_{\rm fin}}{1\,{\rm AU}}\right)^{-1/2}\left(\frac{1-x_{\rm fin}}{0.5}\right)^{1/2}, \tag{7}\]
where \(\mu=m_{1}m_{2}/(m_{1}+m_{2})\), \(q_{p}=m_{p}/(m_{1}+m_{2}+m_{p})\), and \(x_{\rm fin}=a_{\rm fin}/a\). This equation returns a value \(v_{\rm rec}\simeq 72\) km s\({}^{-1}\) for the aforementioned example. This implies that as long as at least one heavy (\(m_{p}>10\) M\({}_{\odot}\)) perturber remains in the cluster, Newtonian dynamics, and in particular close binary-single scatterings, can represent a serious threat to IMBH retention.
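Plugging the numbers of the example above into Eq. (7) indeed returns \(\simeq 72\) km s\(^{-1}\); a minimal sketch:

```python
import numpy as np

G_AU = 887.0  # G in AU (km/s)^2 / Msun

def v_recoil(m1, m2, mp, a, a_fin):
    """Binary recoil after a hardening encounter, Eq. (7).
    Masses in Msun, semimajor axes in AU; returns km/s."""
    mu = m1 * m2 / (m1 + m2)       # reduced mass
    qp = mp / (m1 + m2 + mp)       # perturber mass fraction
    return np.sqrt(G_AU * mu / a_fin * qp * (1.0 - a_fin / a))

print(v_recoil(240.0, 38.0, 44.0, a=0.35, a_fin=0.24))  # ~72 km/s, as quoted in the text
```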
Our analysis highlights the extreme importance of Newtonian dynamics in determining the ejection of BHs from the parent cluster.
#### 4.1.2 The impact of black hole natal spins and relativistic recoil on the properties of intermediate-mass black holes
In order to determine the possible properties of IMBHs and their retention probability in Dragon-II models, we implement the following simple model to take into account the impact of spins:
* If a stellar BH involved in the IMBH build-up formed from a single star or from a "non-interacting" binary, we assign a spin of \(\chi_{\rm BH}=0.01\)(Spruit, 2002; Fuller and Ma, 2019).
* In the two cases in which an IMBH forms from the collapse of a VMS assembled via stellar mergers, we assign an initial spin of 0.5. This choice is motivated by the fact that the particularly complex formation processes leading to the IMBH make its natal spin practically unpredictable. We note that this choice has no effect on our results, though, because both IMBHs accrete material from a stellar companion, and we assume that this spins up the IMBH as detailed in the following point.
* If the IMBH feeds on a stellar companion, or if its progenitors are upper-mass gap BHs, i.e. they underwent mass accretion at some point, we assign a spin drawn from a flat distribution in the range \(\chi_{\rm BH}=0.8-1\)(see e.g. Qin et al., 2018; Bavera et al., 2020; Belczynski et al., 2020; Schroder et al., 2020).
* If the IMBH progenitor is a BH formed in a primordial binary, we assign a small spin (\(\chi_{\rm BH}=0.01\)) if it is the first-born or a spin in the range \(\chi_{\rm BH}=0.1-1\)(Qin et al., 2018; Bavera et al., 2020) otherwise.
* If the IMBH formed from a BBH merger, the IMBH spin and mass are calculated according to the Jimenez-Forteza et al. (2017) fitting formulae (see also Arca Sedda et al., 2020).
Note that this model is applied to the simulation data in post-processing.
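The spin model amounts to a small set of rules applied in post-processing. A schematic transcription follows; the `history` flags are our own encoding of the cases listed above, not identifiers used in the actual pipeline.

```python
import random

def natal_spin(history, first_born=True):
    """Schematic transcription of the post-processed spin assignment above.
    The `history` flags are our own encoding of the listed cases."""
    if history == "single":              # single star or non-interacting binary
        return 0.01
    if history == "vms_collapse":        # VMS collapse: effectively unconstrained
        return 0.5
    if history == "accretion":           # fed on a companion / mass-gap progenitor
        return random.uniform(0.8, 1.0)
    if history == "primordial_binary":   # first-born BH slow, second-born spun up
        return 0.01 if first_born else random.uniform(0.1, 1.0)
    raise ValueError(f"unknown history: {history}")

print(natal_spin("single"), natal_spin("primordial_binary", first_born=False))
```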
To keep track of the IMBH-BH merging history, we label an IMBH as first generation (1g) if it did not undergo any merger with another compact object. IMBHs formed out of VMS collapse or star-BH accretion are considered 1g. Second generation (2g) and higher generation IMBHs are those that underwent multiple mergers with other compact objects. In Dragon-II models, all merging companions are stellar BHs.
Figure 6 shows the masses and spins of Dragon-II IMBHs assuming zero GW recoil. It appears evident that, under our assumptions, IMBHs in Dragon-II clusters generally form with a high spin (\(\chi_{\rm IMBH}>0.6\)), unless they form from the collapse of a VMS. Even in such a case, the accretion of matter, which likely spins up the IMBH, occurs on a sufficiently short timescale (\(t\lesssim 8\) Myr) to make their observation as low-spin objects rather unlikely. In the case of IMBHs forming via multiple BH mergers, note that the IMBH spin decreases with increasing merger generation (see also Arca Sedda et al., 2021).
Table 2 summarises the main properties of Dragon-II IMBHs in terms of generation, mass, spin, and recoil velocity at the 95% confidence level. These quantities are calculated by drawing, for each merging event, 10,000 realisations of the spin amplitudes of the merging components, assuming an isotropic distribution of spin directions. Looking at the Table, we see that GW recoil has no effect on the IMBH formation probability, because all IMBHs in Dragon-II clusters form either via stellar collapse or from 1g BH progenitors. Nonetheless, GW recoil crucially affects second and higher generation IMBHs, which typically receive a kick, \(v_{\rm GW}=(200-800)\) km/s, much larger than the escape velocity from the parent cluster, typically \(v_{\rm esc}<50\) km/s. Therefore, the inclusion of GW recoil affects 7 out of 8 IMBHs in our simulations, preventing both: a) the formation of IMBH-BH binaries that merge after dynamical ejection, a process involving 5 IMBHs in our sample, and b) the development of multiple BH mergers inside the cluster (2 IMBHs). The remaining IMBH is ejected from the cluster as a single object after a strong resonant interaction with two other, fairly massive (\(>30\) M\({}_{\odot}\)), BHs. As a consequence, we find that the number of merging events involving an IMBH decreases from 9 in the no-recoil case to just 2, although this represents the lowest possible value. The possible detection of GWs emitted by IMBH-BH binaries with future detectors, especially those operating in the deci-Hz frequency band, could help shed light on the IMBH formation efficiency and retention probability (see e.g. Gair et al., 2011; Arca Sedda et al., 2020, 2021; Abbott et al., 2022).
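The confidence intervals quoted in Table 2 follow from a simple Monte Carlo over spin amplitudes and isotropic spin directions. The sketch below shows the structure of such a calculation; `gw_kick` is a placeholder for a numerical-relativity fitting formula (e.g. of the kind in Lousto & Zlochower 2008), which we do not reproduce here, and the dummy lambda in the demo is not a physical kick model.

```python
import numpy as np

rng = np.random.default_rng(0)

def isotropic_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def kick_interval(m1, m2, chi1_range, chi2_range, gw_kick, n=10_000):
    """Monte Carlo 95% interval of the GW recoil: spin amplitudes drawn
    from the assigned ranges, directions isotropic. `gw_kick` stands in
    for a numerical-relativity fitting formula."""
    chi1 = rng.uniform(*chi1_range, n)[:, None] * isotropic_unit_vectors(n)
    chi2 = rng.uniform(*chi2_range, n)[:, None] * isotropic_unit_vectors(n)
    kicks = np.array([gw_kick(m1, m2, s1, s2) for s1, s2 in zip(chi1, chi2)])
    return np.percentile(kicks, [2.5, 97.5])

# structural demo only: the lambda below is a dummy, NOT a physical kick model
print(kick_interval(95.5, 95.8, (0.8, 1.0), (0.0, 0.1),
                    gw_kick=lambda m1, m2, s1, s2: 1e3 * np.linalg.norm(s1 - s2)))
```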
#### 4.1.3 Simulations implementing a self-consistent treatment of gravitational recoil
The post-process treatment applied to the simulation data provides an effective way to place constraints on the IMBH retention probability without the need to explore the wide associated parameter space. Nonetheless, a fully self-consistent simulation implementing GW recoil would provide useful insights into, e.g., the impact of the IMBH displacement on the development of new merging events.
To probe the impact of GW recoil in a self-consistent way, we focus on the two models in which the IMBH undergoes repeated mergers, namely model IBH_Rh1.75f20N120k, which ultimately forms a 4g-IMBH, and model IBH_Rh0.47f20N300k, which instead leads to a 3g-IMBH.
Practically speaking, we restart the simulation from the snapshot immediately before the merging event and apply a kick to the merger remnant. For simplicity, rather than extracting the kick from a distribution, we assign the remnant a given kick, as described below. Generally, we adopt a GW kick sufficiently small to ensure IMBH retention after the merger. This choice permits us to investigate whether the IMBH is retained in the cluster, whether it grows further, or whether it is ejected anyway owing to Newtonian or relativistic effects.
**Model ID: IBH_Rh1.75f20N120k.** The IMBH in this model forms from the merger of two upper mass-gap BHs with masses \(m_{\rm BH1}+m_{\rm BH2}=(95.5+95.8)\) M\({}_{\odot}\). Therefore, the IMBH is already 2g at formation, and receives a kick \(v_{\rm rec}>171\) km/s at the 95% confidence level (see Table 2). For comparison, the cluster escape velocity at the time of the merger is around \(v_{\rm esc}=12\) km/s.
Adopting the spin model described in Section 4.1.2, based on stellar evolution models, we find that the IMBH has a tiny probability (\(P_{20}<0.2\%\)) of receiving a kick \(v_{\rm GW}<20\) km/s. However, if the IMBH progenitors have negligible spins for some reason, for example if the IMBH progenitor is slowly rotating and the angular momentum transport is essentially driven by meridional currents (Zahn, 1992; Belczynski et al., 2020), the probability for \(v_{\rm GW}<20\) km/s (5 km/s) rises up to 84% (21%), significantly increasing the IMBH retention probability.
Therefore, we re-run the simulation and assign to the IMBH promptly after formation a GW kick of either \(v_{\rm GW}=5\) km/s (small kick) or 20 km/s (large kick). As expected, in the large kick model, the kick of \(v_{\rm GW}=20\) km/s exceeds the cluster escape velocity and the IMBH promptly leaves the cluster.
Figure 6: Evolutionary tracks of IMBHs in Dragon-II models in terms of IMBH mass (x-axis), spin (top panel), and recoil kick (bottom panel). IMBHs that form via stellar mergers or BH-star interactions are assumed to have zero kick at formation. The error bars enclose the 95th percentile. Dashed lines identify the evolutionary path of each IMBH, whilst the horizontal dashed line in the bottom panel marks a velocity threshold of 100 km/s, a typical value for galactic nuclei similar to the Milky Way's.
In the small kick model, where \(v_{\rm GW}=5\) km/s, the 2g-IMBH is retained in the cluster and sinks back to the cluster centre where, after a long series of interactions with other stellar BHs, it captures a BH of mass \(m_{\rm BH}=28\) M\({}_{\odot}\) and is ejected from the cluster with a velocity of just 15.3 km/s. The ejected IMBH-BH binary has an eccentricity \(e=0.57\), a period of \(P=190\) days, and a corresponding merger time of \(t_{\rm GW}\sim 10^{3}\) Hubble times. For the sake of comparison, in the zero GW recoil model the IMBH pairs with a BH of mass \(m_{\rm BH}=40.5\) M\({}_{\odot}\) and is ejected from the cluster, merging within a Hubble time (see Appendix A).
**Model ID: IBH_Rh0.47f20N300k.** Let us now consider the other model. Since the IMBH in this model forms via stellar collisions, its mass at birth is fairly large, \(m_{\rm IMBH}=217\) M\({}_{\odot}\). After only 17 Myr, when the cluster escape velocity is around \(v_{\rm esc}=46.5\) km/s, this 1g-IMBH merges with an upper mass-gap BH of mass \(m_{\rm BH}=51.7\) M\({}_{\odot}\). The resulting 2g-IMBH receives a GW kick with amplitude \(v_{\rm kick}>99\) km/s at the 95% confidence level. The probability of obtaining a kick of \(\simeq 50\) km/s is of the order of \(\sim 0.1\)%, regardless of the spin distribution choice. Therefore, we re-run the simulation shortly before the merger event and assign the merger remnant either a small (\(v_{\rm GW}=20\) km/s) or a large (\(v_{\rm GW}=100\) km/s) recoil kick. In the case of \(v_{\rm GW}=100\) km/s the merger remnant promptly leaves the cluster, as expected.
In the case of \(v_{\rm GW}=20\) km/s, instead, the 2g-IMBH remains in the cluster core and undergoes a series of resonant interactions with two BHs, which drive the IMBH to merge after just 25.5 Myr with an upper mass-gap BH (\(m_{\rm BH,2}=63\) M\({}_{\odot}\)). The 3g-IMBH, with a mass \(m_{\rm 3g}\simeq 300\) M\({}_{\odot}\), receives a kick \(v_{\rm GW}>90\) km/s regardless of the amplitude and direction of the progenitors' spins, hence it leaves the cluster promptly after the merging event.
The impact of relativistic effects on the chaotic nature of \(N\)-body dynamics is apparent in this case: the displacement caused by the GW recoil favours the onset of the three-body interactions that led to the merger. For comparison, in the zero-kick model the two BHs never find each other.
## 5 Conclusion
In this work we have analysed the properties of IMBHs formed in the Dragon-II cluster models, a suite of 19 direct \(N\)-body simulations representing star clusters initially made up of \(\leq 10^{6}\) stars, up to 33% of which initially paired in a binary. Our main results can be summarised as follows:
* Out of 19 models, 8 IMBHs form in Dragon-II clusters, following three main formation channels: a) collapse of a VMS formed via repeated stellar mergers (2 IMBHs), b) accretion of stellar material onto stellar BHs (1), c) BH-BH mergers (5). The IMBHs have typical masses in the range \(m_{\rm IMBH}=(100-370)\) M\({}_{\odot}\). Aside from IMBH seeding, the aforementioned formation channels significantly contribute to the population of BHs with masses in the upper mass-gap, for which we derive a formation efficiency of \(\eta_{\rm gap}=3.44\times 10^{-5}\) M\({}_{\odot}^{-1}\) [Table 1 and Figures 1-4].

Table 2 (columns: \(R_{\rm HM}\) [pc], \(f_{b}\) [\(10^{-2}\)], \(N\) [\(10^{5}\)], \(M_{\rm IMBH}\) [M\({}_{\odot}\)], \(\chi_{\rm IMBH}\), \(v_{\rm GW}\) [km/s], number of generations): Properties of the IMBHs in Dragon-II models. Col. 1-3: half-mass radius, binary fraction, and number of stars. Col. 4-6: median values of the mass, spin, and GW recoil of the IMBHs; errors represent the 95th percentile. Col. 7: BH generation. The asterisk marks a 1st generation IMBH - formed via stellar mergers - that grew through a merger with a massive star.
* Despite the small sample, we find a striking relation between the IMBH formation channel and the host cluster properties. Stellar mergers dominate IMBH formation in the densest clusters, operating on short timescales (\(<10\) Myr) and producing the most massive IMBHs (\(>200\) M\({}_{\odot}\)). Star-BH interactions and BBH mergers, instead, dominate IMBH formation in less dense clusters, where the lower the cluster density, the longer the IMBH formation time (\(10-300\) Myr) and the larger the IMBH seed mass [Figure 5].
* When relativistic recoil is neglected, Newtonian dynamics represents a serious threat to IMBH retention and growth. In fact, all IMBHs are ejected from Dragon-II clusters through strong dynamical interactions. Nonetheless, in the Newtonian scenario some IMBHs undergo multiple IMBH-BH mergers, reaching up to the fourth generation. The inclusion of GW recoil severely impacts the IMBH growth process, limiting the IMBH merger history to two generations. We implement a simple model for BH natal spins, based on stellar evolution models, to infer the IMBH masses and spins. In our fiducial model, IMBHs are characterised by masses up to 376 M\({}_{\odot}\) and relatively large spins, i.e. \(\chi_{\rm IMBH}>0.6\). The inclusion of relativistic kicks in the simulations enables a fully self-consistent description of the IMBH merging process and reveals how hard it is for IMBHs to be retained in their parent clusters. Nonetheless, even in the unlikely case in which the IMBH receives small GW kicks and avoids ejection, our simulations confirm how chaotic and unpredictable the evolution of the post-merger IMBH can be. For example, in one simulation the inclusion of the kick favours the merger of the IMBH with a BH more massive than in the zero GW kick case [Table 2 and Figure 6].
The Dragon-II simulations represent one of the few sets of numerical models (see also e.g. Mapelli et al., 2022; Maliszewski et al., 2022) in which all three main channels proposed for the formation of IMBHs are confirmed. Our analysis of the Dragon-II database suggests that IMBHs: i) form preferentially via the collapse of stellar merger products (BBH mergers) in clusters more (less) dense than \(3\times 10^{5}\) M\({}_{\odot}\) pc\({}^{-3}\); ii) have large spins at formation, \(\chi_{\rm BH}>0.6\); iii) live most of their life with a BH companion; iv) are unlikely to grow beyond a few hundred M\({}_{\odot}\) because of the efficiency of dynamical scatterings and the impact of relativistic recoil.
## Appendix A The evolution and growth of IMBHs in Dragon-II clusters
In this section, we discuss in detail the evolutionary history of the 8 IMBHs in Dragon-II clusters, their main properties, and their retention probability. In the following we indicate with BH1, 2 and with the letters \(a\), \(b\) the IMBH progenitors, and with \(p1,~{}p2\) the progenitors of the IMBH progenitors, in such a way that \(p1a,~{}p2a\) indicate the two progenitors of the primary BH that eventually led to the IMBH. All the main properties of the IMBHs are summarised in Table 2.
**IMBH No. 1: IBH_Rh1.75f5N1000k.** In one cluster model with \(R_{\rm HM}=1.75\) pc, \(f_{b}=0.05\), \(N=10^{6}\), the IMBH forms via the merger of two BHs with masses \(m_{\rm BH,1}=86.3\) M\({}_{\odot}\) and \(m_{\rm BH,2}=58.9\) M\({}_{\odot}\). The primary BH is the byproduct of a merger between a PPISN BH and a massive star in the HG phase, \(m_{p1a}+m_{p2a}=(40.5+91.7)\) M\({}_{\odot}\), in a primordial binary, and we assume that it spins up during its growth, assigning it a spin \(\chi_{\rm BH,1}>0.8\). The secondary BH, instead, forms from the merger of two stars in a primordial binary, with masses \(m_{p1b}+m_{p2b}=(37+82)\) M\({}_{\odot}\), the lighter component being a naked He MS star and the heavier a star in the HG phase. We assign the companion BH a spin \(\chi_{BH,2}=0.01\). The resulting IMBH (2g) has a mass \(m_{\rm 2g}=138.4^{+1.8}_{-3.0}\) M\({}_{\odot}\) and spin \(\chi_{\rm 2g}=0.76^{+0.11}_{-0.27}\), with the spin increasing as the mass decreases. In the simulation with GW recoil disabled, the IMBH forms a binary with a BH of mass \(m_{\rm BH}=40.5\) M\({}_{\odot}\) -- formed from a single star -- and ultimately merges after being ejected from the cluster, leading to a final IMBH (3g) with a mass \(m_{\rm 3g}=174.0^{+2.6}_{-4.6}\) M\({}_{\odot}\) and \(\chi_{\rm 3g}=0.68^{+0.20}_{-0.40}\). However, the GW recoil associated with the formation of the 2g-IMBH is sufficiently large (\(v_{\rm GW}=150-2200\) km/s) to make the retention of the IMBH and its further growth impossible.
**IMBH No. 2: IBH_Rh1.75f20N120k.** The second IMBH in the sample (simulation with \(R_{\rm HM}=1.75\) pc, \(f_{b}=0.2\), \(N=120\)k) forms through a BH-BH merger with component masses \(m_{\rm BH,1}+m_{\rm BH,2}=(95.5+95.8)\) M\({}_{\odot}\). The previous evolution of these massive BHs is rather complex. The primary forms from the accretion of a MS star with mass \(m_{\rm p2a}=110\) M\({}_{\odot}\) onto a BH (\(m_{\rm p1a}=40.5\) M\({}_{\odot}\)) previously formed from the merger of two MS stars in a primordial binary. We thus assign the primary BH a spin \(\chi_{\rm BH,1}=0.8-1\). The secondary, instead, forms from the merger of two stars in a primordial binary during the HG phase of the heavier component. We assign the secondary BH a small spin, \(\chi_{\rm BH,2}=0.01\). The resulting IMBH (2g) has a mass \(m_{\rm 2g}=181.8^{+1.8}_{-2.7}\) M\({}_{\odot}\) and spin \(\chi_{\rm 2g}=0.72^{+0.10}_{-0.15}\). When GW recoil is disabled, the IMBH undergoes a second merger with a BH of mass \(m_{\rm BH,2}=40.5\) M\({}_{\odot}\) that did not experience significant mass transfer, and is thus likely characterised by a low spin. After the merger, the IMBH (3g) has a mass \(m_{\rm 3g}=217.8^{+2.5}_{-4.5}\) M\({}_{\odot}\) and spin \(\chi_{\rm 3g}=0.65^{+0.20}_{-0.45}\). It forms a binary that is ejected and merges outside the cluster, leaving a 4g-IMBH with final mass \(m_{\rm 4g}=253.9^{+2.9}_{-5.9}\) M\({}_{\odot}\) and spin \(\chi_{\rm 4g}=0.56^{+0.28}_{-0.34}\). There is a probability of \(\sim 0.2\%\) for the GW recoil imparted on the 2g-IMBH to remain below \(v_{\rm GW}<20\) km/s, i.e. sufficiently small for the IMBH to be retained in the cluster. However, when the 3g-IMBH forms, the post-merger kick is in the range \(v_{\rm GW}=35-2000\) km/s, definitely larger than the cluster escape velocity. We discuss the results from a self-consistent simulation of the evolution of the 2g-IMBH in Section 4.1.3.
**IMBH No. 3: IBH_Rh1.75f20N600k.** The third IMBH forms in the model with \(R_{\rm HM}=1.75\) pc, \(f_{b}=0.2\), and \(N=600,000\), through the merger of two BHs with masses \(m_{\rm BH,1}=74.7\) M\({}_{\odot}\) and \(m_{\rm BH,2}=68.8\) M\({}_{\odot}\), both byproducts of a stellar merger event in two primordial binaries. We assume that both BHs have negligible spins, which leads to an IMBH (2g) with a mass \(m_{\rm 2g}=136.6^{+1.2}_{-1.9}\) M\({}_{\odot}\) and spin \(\chi_{\rm 2g}=0.72^{+0.08}_{-0.15}\). The post-merger recoil is sufficiently small (\(v_{\rm GW}=20-45\) km/s) to retain the IMBH. The IMBH eventually merges with a BH of mass \(m_{\rm BH,2}=18\) M\({}_{\odot}\) (for which \(\chi_{\rm BH,2}=0.01\)) after
being ejected from the cluster. The final IMBH (3g) has a mass \(m_{\rm 3g}=152.7^{+1.5}_{-2.4}\) M\({}_{\odot}\) and spin \(\chi_{\rm 3g}=0.61^{+0.22}_{-0.36}\).
**IMBH No. 4: IBH_Rh0.8f20N120k.** The fourth IMBH forms in the model with \(R_{\rm HM}=0.8\) pc, \(f_{b}=0.2\), \(N=120,000\), from two BHs with masses \(m_{\rm BH,1}=79.8\) M\({}_{\odot}\) and \(m_{\rm BH,2}=40.5\) M\({}_{\odot}\). The primary formed from a star-BH merger in a primordial binary involving a BH with \(m_{\rm p1a}=40.5\) M\({}_{\odot}\) and a star in the HG phase with mass \(m_{\rm p2a}=78.5\) M\({}_{\odot}\). We assign a spin \(\chi_{\rm BH,1}>0.8\) to the primary and a small spin to the secondary, which did not undergo any significant matter accretion phase. The IMBH (2g) formed this way has a mass \(m_{\rm 2g}=115.6^{+1.3}_{-3.0}\) M\({}_{\odot}\) and spin \(\chi_{\rm 2g}=0.74^{+0.15}_{-0.36}\). In the absence of GW recoil, the IMBH captures a BH of mass \(m_{\rm BH,2}=39\) M\({}_{\odot}\), which experienced mass transfer in a primordial binary, and finally merges outside the cluster. In this case, we assign the stellar BH a spin in the \(0-1\) range, which leads to an IMBH (3g) with final mass \(m_{\rm 3g}=149.8^{+2.0}_{-0.3}\) M\({}_{\odot}\) and \(\chi_{\rm 3g}=0.67^{+0.22}_{-0.35}\). The kick received by the 2g-IMBH, however, is large enough (\(v_{\rm GW}>100\) km/s) to eject the IMBH before the binary can form.
**IMBH No. 5: IBH_Rh0.8f20N120k.** The fifth IMBH, which also forms in a model with \(R_{\rm HM}=0.8\) pc, \(f_{b}=0.2\), and \(N=120,000\), is the byproduct of a BBH merger. The primary, with a mass \(m_{\rm BH,1}=80.7\) M\({}_{\odot}\), forms from the merger of two MS stars, and we assume a negligible spin. The companion, with a mass \(m_{\rm BH,2}=51.5\) M\({}_{\odot}\), forms after mass transfer in a primordial binary, thus we assume that its spin is distributed in the \(\chi_{\rm BH,2}=0.8-1\) range. The resulting IMBH has a mass \(m_{\rm 2g}=126.4^{+0.7}_{-1.0}\) M\({}_{\odot}\) and spin \(\chi_{\rm 2g}=0.67^{+0.06}_{-0.08}\). In the case of no GW recoil, the IMBH captures a BH of mass \(m_{\rm BH}=30\) M\({}_{\odot}\) formed from a single star (thus \(\chi_{\rm BH}=0.01\)), and the resulting binary is eventually ejected from the cluster, ultimately merging outside it and leaving behind an IMBH with mass \(m_{\rm 3g}=153.0^{+1.4}_{-2.1}\) M\({}_{\odot}\) and spin \(\chi_{\rm 3g}=0.62^{+0.19}_{-0.42}\). Even in this case, though, the GW kick imparted onto the 2g-IMBH (\(v_{\rm GW}>60\) km/s) is larger than the cluster escape velocity.
**IMBH No. 6: IBH_Rh0.8f20N300k.** The sixth IMBH forms in a cluster with \(R_{\rm HM}=0.8\) pc, \(f_{b}=0.2\), and \(N=300,000\), from the coalescence of a PPISN BH (\(m_{\rm BH}=40.5\) M\({}_{\odot}\), negligible spin) and a massive star in the HG phase (\(m_{\rm HG}=133\) M\({}_{\odot}\)). The IMBH, with mass \(m_{\rm 1g}=107\) M\({}_{\odot}\), likely spins up during the interaction with its stellar companion. The IMBH is eventually ejected as a single object as a consequence of a strong resonant scattering involving two BHs with masses \(m_{\rm BH,1}=35.2\) M\({}_{\odot}\) and \(m_{\rm BH,2}=67.7\) M\({}_{\odot}\).
**IMBH No. 7: IBH_Rh0.47f20N120k.** The seventh, and most massive, IMBH forms in one of the most compact Dragon-II clusters (\(R_{\rm HM}=0.47\) pc, \(f_{b}=0.2\), and \(N=120,000\)). A complex series of stellar mergers triggers the IMBH seeding, leading to an IMBH with mass \(m_{\rm 1g}=288\) M\({}_{\odot}\) that eventually collides with a massive MS star of mass \(m_{\rm MS}=122\) M\({}_{\odot}\). The resulting IMBH, which can be considered half-way between the first and second generation, has a mass \(m_{\rm 1g*}=350\) M\({}_{\odot}\) and likely a large spin, \(\chi_{\rm 1g*}\sim 0.8-1\), owing to the mass accretion process. The IMBH captures a stellar BH of mass \(m_{\rm BH,2}=29\) M\({}_{\odot}\) formed from a single star, for which we assume negligible spin. The IMBH-BH binary is eventually ejected in a strong binary-single interaction and merges outside the cluster, leading to a 2g-IMBH with mass \(m_{\rm 2g}=376.5^{+0.8}_{-3.7}\) M\({}_{\odot}\) and spin \(\chi_{\rm 2g}=0.79^{+0.17}_{-0.27}\).
**IMBH No. 8: IBH_Rh0.47f20N300k.** The last IMBH forms in the densest Dragon-II cluster (\(R_{\rm HM}=0.47\) pc, \(f_{b}=0.2\), and \(N=300,000\)). Initially, an IMBH seed with mass \(m_{\rm 1g}=189\) M\({}_{\odot}\) forms via subsequent mergers of massive stars. It later collides with a MS star of mass \(m_{\rm MS}=51.7\) M\({}_{\odot}\) and shortly after with two low-mass stars, leaving behind an IMBH (1g*) with mass \(m_{\rm 1g*}=217\) M\({}_{\odot}\) and a high spin triggered by mass accretion. The IMBH undergoes a merger with a low-spin BH of mass \(m_{\rm BH}=27\) M\({}_{\odot}\), forming a 2g-IMBH with a mass \(m_{\rm 2g}=241.4^{+0.8}_{-3.3}\) M\({}_{\odot}\) and spin \(\chi_{\rm 2g}=0.77^{+0.18}_{-0.37}\). In the absence of GW recoil, the 2g-IMBH further merges with a low-spin BH (mass \(m_{\rm BH}=38\) M\({}_{\odot}\)) after being ejected from the cluster, leading to a 3g-IMBH characterised by \(m_{\rm 3g}=275.3^{+1.8}_{-5.4}\) M\({}_{\odot}\) and spin \(\chi_{\rm 3g}=0.63^{+0.28}_{-0.39}\). When GW recoil is taken into account, the 2g-IMBH receives a kick \(v_{\rm GW}>40\) km/s, thus larger than the cluster escape velocity. We explore the retention of this IMBH in more detail in Section 4.1.3.
## Acknowledgements
The authors thank the referee for their constructive report and feedback. The authors warmly thank Agostino Leveque for their help and assistance in using their implementation of the McLuster code, and Giuliano Iorio, Sara Rastello, and Michela Mapelli for useful comments and discussion. This work benefited from the support of the Volkswagen Foundation Trilateral Partnership through project No. 97778 "Dynamical Mechanisms of Accretion in Galactic Nuclei", of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 138715338 - SFB 881 ("The Milky Way System"), and of the COST Action CA16104 "GWverse". The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Julich Supercomputing Centre (JSC). MAS acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 101025436 (project GRACE-BH, PI: Manuel Arca Sedda). AWHK is a fellow of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD). The work of PB was supported by the Volkswagen Foundation under the special stipend No. 9BS70. PB acknowledges the support within the grant No. AP14869395 of the Science Committee of the Ministry of Science and Higher Education of Kazakhstan ("Trime model of Galactic center dynamical evolution on cosmological time scale"). The work of PB was also supported under the special program of the NRF of Ukraine "Leading and Young Scientists Research Support" - "Astrophysical Relativistic Galactic Objects (ARGO): life cycle of active nucleus", No. 2020.02/0346. RS acknowledges support from the Yunnan Academician Workstation of Wang Jingxiu (No. 202005AF150025) and thanks the
Max Planck Institute for Astrophysics (Thorsten Naab) for hospitality during many visits.
MG was partially supported by the Polish National Science Center (NCN) through the grant No. 2021/41/B/ST9/01191.
FPR acknowledge the support by the European Research Council via ERC Consolidator Grant KETJU (no. 818930).
## Data availability
The data from the runs of these simulations and their initial models will be made available upon reasonable request by the corresponding author. The Nbody6++GPU code is publicly available3. The McLuster version used in this work will soon be available. A similar version is described in Leveque et al. (2022).
Footnote 3: [https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing](https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing)
|
2303.07620 | Prismatic $F$-crystals and Lubin-Tate $(\varphi_q,Γ)$-modules | Let $L/\mathbb{Q}_p$ be a finite extension. We introduce $L$-typical prisms,
a mild generalization of prisms. Following ideas of Bhatt, Scholze, and Wu, we
show that certain vector bundles, called Laurent $F$-crystals, on the
$L$-typical prismatic site of a formal scheme $X$ over
$\mathrm{Spf}\mathcal{O}_L$ are equivalent to $\mathcal{O}_L$-linear local
systems on the generic fiber $X_\eta$. We also give comparison theorems for
computing the étale cohomology of a local system in terms of the cohomology
of its corresponding Laurent $F$-crystal. In the case $X =
\mathrm{Spf}\mathcal{O}_K$ for $K/L$ a $p$-adic field, we show that this
recovers the Kisin-Ren equivalence between Lubin-Tate
$(\varphi_q,\Gamma)$-modules and $\mathcal{O}_L$-linear representations of
$G_K$ and the results of Kupferer and Venjakob for computing Galois cohomology
in terms of Herr complexes of $(\varphi_q,\Gamma)$-modules. We can thus regard
Laurent $F$-crystals on the $L$-typical prismatic site as providing a suitable
notion of relative $(\varphi_q,\Gamma)$-modules. | Samuel Marks | 2023-03-14T04:16:01Z | http://arxiv.org/abs/2303.07620v2 | # Prismatic \(F\)-crystals and Lubin-Tate \((\varphi_{q},\Gamma)\)-modules
###### Abstract
Let \(L/\mathbb{Q}_{p}\) be a finite extension. We introduce \(L\)_-typical prisms_, a mild generalization of prisms. Following ideas of Bhatt, Scholze, and Wu, we show that certain vector bundles, called Laurent \(F\)-crystals, on the \(L\)-typical prismatic site of a formal scheme \(X\) over \(\operatorname{Spf}\mathcal{O}_{L}\) are equivalent to \(\mathcal{O}_{L}\)-linear local systems on the generic fiber \(X_{\eta}\). We also give comparison theorems for computing the etale cohomology of a local system in terms of the cohomology of its corresponding Laurent \(F\)-crystal. In the case \(X=\operatorname{Spf}\mathcal{O}_{K}\) for \(K/L\) a \(p\)-adic field, we show that this recovers the Kisin-Ren equivalence between Lubin-Tate \((\varphi_{q},\Gamma)\)-modules and \(\mathcal{O}_{L}\)-linear representations of \(G_{K}\) and the results of Kupferer and Venjakob for computing Galois cohomology in terms of Herr complexes of \((\varphi_{q},\Gamma)\)-modules. We can thus regard Laurent \(F\)-crystals on the \(L\)-typical prismatic site as providing a suitable notion of relative \((\varphi_{q},\Gamma)\)-modules.
## 1 Introduction
Let \(K/\mathbb{Q}_{p}\) be a \(p\)-adic field, let \(K_{\infty}\) be the \(p\)-adic completion of the infinite cyclotomic extension \(K(\zeta_{p^{\infty}})\), and let \(\Gamma_{K}=\operatorname{Gal}(K_{\infty}/K)\). In this setting, Fontaine's theory of \((\varphi,\Gamma)\)-modules [15] gives an equivalence of categories
\[\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi,\Gamma_{K},et}\simeq \operatorname{Mod}_{W(K_{\infty}^{\flat})}^{\varphi,\Gamma_{K},et}\simeq \operatorname{Rep}_{\mathbb{Z}_{p}}(G_{K})\]
between - on the representation theoretic side - the category of finite free \(\mathbb{Z}_{p}\)-linear representations of the absolute Galois group \(G_{K}=\operatorname{Gal}(\overline{K}/K)\) and - on the semi-linear algebraic side - categories of \((\varphi,\Gamma)\)-modules over the _perfect_ period ring \(W(K_{\infty}^{\flat})\) or a certain _deperfected_ period ring \(\mathbf{A}_{K}\subseteq W(K_{\infty}^{\flat})\). Here, the word "deperfected" refers to the fact that the imperfect sub-\(\mathbb{F}_{p}\)-algebra \(\mathbf{E}_{K}=\mathbf{A}_{K}/p\subseteq K_{\infty}^{\flat}=W(K_{\infty}^{ \flat})/p\) becomes \(K_{\infty}^{\flat}\) under completed perfection.
Following the discussion in [26, SS0.2], we distinguish between two ways one might hope to relativize the theory of \((\varphi,\Gamma)\)-modules. First, one might hope for a _geometric_ relativization. On the representation theoretic side, this means replacing \(\operatorname{Rep}_{\mathbb{Z}_{p}}(G_{K})\) with etale local systems \(\operatorname{Loc}_{\mathbb{Z}_{p}}(X_{\eta})\) on the generic fiber of a formal scheme \(X/\mathbb{Z}_{p}\). One then hopes to get a corresponding semi-linear algebraic category of objects which can be thought of as \((\varphi,\Gamma)\)-modules varying over the base \(X\). The most satisfactory candidate here is the
_Laurent F-crystals_ of [8]. Recall that these are vector bundles \(\mathcal{M}\in\operatorname{Vect}(X_{\underline{\Delta}},\mathcal{O}_{\underline{ \Delta}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(p)})^{\phi=1}\) over a certain structure sheaf on the prismatic site of \(X\) equipped with an isomorphism \(\phi^{*}\mathcal{M}\stackrel{{\sim}}{{\to}}\mathcal{M}\). Bhatt-Scholze's key theorem is as follows.
**Theorem 1.1**.: [8, corollary 3.8] _Let \(X\) be a bounded formal scheme adic over \(\operatorname{Spf}\mathbb{Z}_{p}\) with adic generic fiber \(X_{\eta}\). Then there is an equivalence \(\operatorname{Vect}(X_{\underline{\Delta}},\mathcal{O}_{\underline{\Delta}}[ \frac{1}{\mathcal{I}}]^{\wedge}_{(p)})^{\phi=1}\simeq\operatorname{Loc}_{ \mathbb{Z}_{p}}(X_{\eta})\)._
In the case \(X=\operatorname{Spf}\mathcal{O}_{K}\) for \(K/\mathbb{Q}_{p}\) a \(p\)-adic field, work of Wu [43] shows that
\[\operatorname{Vect}((\mathcal{O}_{K})_{\underline{\Delta}},\mathcal{O}_{ \underline{\Delta}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(p)})^{\phi=1}\simeq \operatorname{Mod}^{\varphi,\Gamma_{K},et}_{\mathbf{A}_{K}}\simeq \operatorname{Mod}^{\varphi,\Gamma_{K},et}_{W(K^{\flat}_{\infty})},\]
recovering Fontaine's original theory.
**Remark 1.2**.: Due to obstructions related to the fact that Cohen rings can be formed functorially for perfect fields (via the Witt vector construction) but not for arbitrary characteristic \(p\) fields, it is significantly easier to give a relative construction of \((\varphi,\Gamma)\)-modules over the perfect period ring \(W(K^{\flat}_{\infty})\); for example, relative \((\varphi,\Gamma)\)-modules over a perfect period sheaf \(W(\mathcal{O}^{\flat}_{X})\) are defined in work of Kedlaya and Liu [26]. In follow-up work, Kedlaya and Liu [27] attempt to define satisfactory imperfect period sheaves via an axiomatic approach, but these axioms fail to hold in the important Lubin-Tate case discussed below [37]. On the other hand, the Bhatt-Scholze approach to relative \((\varphi,\Gamma)\)-modules circumvents this difficulty using the theory of prisms [7], which can be viewed as deperfections of perfectoid rings.
Alternatively, one might also want _arithmetic_ relativizations of the theory of \((\varphi,\Gamma)\)-modules. On the representation theory side, this means replacing the \(\mathbb{Z}_{p}\) in \(\operatorname{Rep}_{\mathbb{Z}_{p}}(G_{K})\) with affinoid algebras over \(\mathbb{Z}_{p}\), as in [1, 28, 3]. The simplest such case is to study \(\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K})\) for \(K/L/\mathbb{Q}_{p}\) a finite subextension. A key goal of this paper is to extend Bhatt-Scholze's prismatic approach to relative \((\varphi,\Gamma)\)-modules to this case. We do this by introducing a mild generalization of prisms, which we call \(L\)-typical prisms, and the \(L\)-typical prismatic site \(X_{\underline{\Delta}_{L}}\) of a formal scheme \(X/\mathcal{O}_{L}\). This done, we show the following.
**Theorem 1.3**.: _Let \(L/\mathbb{Q}_{p}\) be a finite extension with uniformizer \(\pi\), and let \(X\) be a bounded formal scheme adic over \(\operatorname{Spf}\mathcal{O}_{L}\) with adic generic fiber \(X_{\eta}\)._
1. _There is an equivalence of categories_ \[\operatorname{Vect}(X_{\underline{\Delta}_{L}},\mathcal{O}_{\underline{\Delta }}[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\simeq\operatorname{Loc}_{ \mathcal{O}_{L}}(X_{\eta})\] _between Laurent_ \(F\)_-crystals on_ \(X_{\underline{\Delta}_{L}}\) _and_ \(\mathcal{O}_{L}\)_-local systems on_ \(X_{\eta}\)_._
2. _If_ \(\mathcal{M}\in\operatorname{Vect}(X_{\underline{\Delta}_{L}},\mathcal{O}_{ \underline{\Delta}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\) _and_ \(T\in\operatorname{Loc}_{\mathcal{O}_{L}}(X_{\eta})\) _correspond under the equivalence above, then there is an isomorphism_ \[R\Gamma(X_{\underline{\Delta}_{L}},\mathcal{M})^{\phi=1}\cong R\Gamma(X_{\eta, et},T).\]
Note that this theorem comes with an etale comparison generalizing [18, theorem 1.9(i)], itself a generalization of the Bhatt-Scholze etale comparison [7, theorem 1.8(4)]. Here and throughout the paper, if \(E\) is a complex in a derived category with an endomorphism \(\phi\), then \(E^{\phi=1}:=\mathrm{Cone}(\phi-\mathrm{id})[-1]\) is the mapping cocone of \(\phi-\mathrm{id}\).
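Concretely, when \(E\) is a module \(M\) placed in degree \(0\) with endomorphism \(\phi\), the complex \(M^{\phi=1}\) is the two-term complex \(M\stackrel{{\phi-1}}{{\longrightarrow}}M\) in degrees \(0\) and \(1\), so that \(H^{0}(M^{\phi=1})=\ker(\phi-1)\) is the module of \(\phi\)-fixed points and \(H^{1}(M^{\phi=1})=\operatorname{coker}(\phi-1)\); this is the shape in which the notation appears in theorem 1.6(2) below.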
Before going on, we say a few words about \(L\)-typical prisms, which were independently defined by Ito and called "\(\mathcal{O}_{L}\)-prisms" in his concurrent work [20]. The category of \(L\)-typical prisms is a mild generalization of the category of prisms, arising by replacing \(\delta\)-rings with what we call \(\delta_{L}\)-algebras. In the same way that \(p\)-complete \(\delta\)-rings relate to \(\mathbb{Z}_{p}\)-algebras with a lift of Frobenius, \(\delta_{L}\)-algebras relate to \(\mathcal{O}_{L}\)-algebras with a lift of \(q\)-Frobenius. And just as the category of prisms has a subcategory of perfect prisms, which is equivalent to the category of (integral) perfectoid rings (this is what we mean when we say that prisms can be viewed as "deperfections of perfectoid rings"), we will show the following.
**Theorem 1.4**.: _Let \(L/\mathbb{Q}_{p}\) be a finite extension. The categories of \(L\)-typical prisms and perfectoid \(\mathcal{O}_{L}\)-algebras (i.e. integral perfectoid rings which are also \(\mathcal{O}_{L}\)-algebras) are equivalent._
**Remark 1.5**.: The notion of \(\delta_{L}\)-algebras defined here coincides with Borger's notion of a \(\pi\)-typical \(\Lambda_{\mathcal{O}_{L}}\)-ring [10]. More generally, following a suggestion of Kisin, the author suspected that Borger's \(\Lambda\)-rings were the right formalism for arithmetically relativizing \((\varphi,\Gamma)\)-modules in general. We hope that this work - which carries out this relativization in the simplest case beyond \(\mathbb{Z}_{p}\)-coefficients - provides evidence that the same techniques will be useful more generally.
Fix now a Lubin-Tate formal \(\mathcal{O}_{L}\)-module \(\mathcal{G}\) corresponding to the uniformizer \(\pi\) of \(\mathcal{O}_{L}\). If \(K/L\) is a \(p\)-adic field, then we let \(K_{\infty}\) be the \(p\)-adic completion of the infinite extension \(K(\mathcal{G}[\pi^{\infty}])\) formed by adjoining the \(\pi\)-power torsion points of \(\mathcal{G}\). In this case, one can use the periods of \(\mathcal{G}\) to construct an element \(\omega\in W(K_{\infty}^{\flat})\otimes_{W(\mathbb{F}_{q})}\mathcal{O}_{L}\) and a period ring \(\mathbf{A}_{K}\subseteq W(K_{\infty}^{\flat})\otimes_{W(\mathbb{F}_{q})} \mathcal{O}_{L}\) (different in general from the ring \(\mathbf{A}_{K}\) discussed above, but coinciding in the cyclotomic case \(\mathcal{G}=\mu_{p^{\infty}}\)). One also gets a category \(\mathrm{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},\Gamma_{K}}\) of _Lubin-Tate \((\varphi_{q},\Gamma)\)-modules_, first studied by Kisin and Ren [29] following ideas of Fontaine, and recently a subject of significant interest in the context of explicit reciprocity laws, \(p\)-adic local Langlands, and Iwasawa theory [2, 3, 35, 36, 16].
In SS3.3 we give general constructions for producing interesting subprisms of a perfect \(L\)-typical prism. When applied with inputs derived from periods of \(\mathcal{G}\) and the perfect \(L\)-typical prism \((A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)\) corresponding via theorem 1.4 to the perfectoid \(\mathcal{O}_{L}\)-algebra \(\mathcal{O}_{K_{\infty}}\), we show that this construction produces a prism \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\) with \(\mathbf{A}_{K}=\mathbf{A}_{K}^{+}[\frac{1}{q_{n}(\omega)}]_{(\pi)}^{\wedge}\). This period ring interestingly depends on the Lubin-Tate formal group \(\mathcal{G}\); for example, we construct a prismatic logarithm map \(T\mathcal{G}\to\mathbf{A}_{L}^{+}\{1\}\) to the Breuil-Kisin twist, as in [5]. Using the prism \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\), we show that theorem 1.3 recovers both the Kisin-Ren equivalence \(\mathrm{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},\Gamma_{K},et}\simeq\mathrm{Rep}_{\mathcal{O}_{L}}(G_{K})\) as well as the computation of Galois cohomology in terms of \(\varphi\)-Herr complexes from [30].
**Theorem 1.6**.: _Let \(L/\mathbb{Q}_{p}\) be a finite extension with uniformizer \(\pi\), and let \(K/L\) be a \(p\)-adic field._
1. _There are equivalences of categories_ \[\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},et} \simeq\operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},et}\simeq\operatorname{Vect}((\mathcal{O}_{K_{\infty}})_{\underline{ \mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}} ]_{(\pi)}^{\wedge})^{\phi=1}\simeq\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K_{ \infty}})\] \[\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},\Gamma_{K},et} \simeq\operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},\Gamma_{K},et}\simeq\operatorname{Vect}((\mathcal{O}_{K})_{\underline{ \mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}} ]_{(\pi)}^{\wedge})^{\phi=1}\simeq\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K}).\] _(Here_ \(W_{L}(K_{\infty}^{\flat})=A_{\inf}(\mathcal{O}_{K_{\infty}})[\frac{1}{\ker \theta}]_{(\pi)}^{\wedge}\) _is the period ring corresponding to the perfect_ \(L\)_-typical prism_ \((A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)\)_.)_
2. _If_ \(M\in\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},et}\) _corresponds to_ \(T\in\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K_{\infty}})\) _under the above equivalence, then_ \[R\Gamma(K_{\infty,et},T)\cong\left(M\stackrel{{\phi-1}}{{ \longrightarrow}}M\right)\] _where the complex on the right is concentrated in degrees_ \(0\) _and_ \(1\)_._
3. _If_ \(M\in\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},\Gamma_{K},et}\) _corresponds to_ \(T\in\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K})\)_, then_ \[R\Gamma(K_{et},T)\cong C_{\operatorname{cont}}^{\bullet}(\Gamma_{K},M)^{\phi=1}\] _where_ \(C_{\operatorname{cont}}^{\bullet}(\Gamma_{K},M)\) _denotes the continuous cochain complex of_ \(\Gamma_{K}\) _with values in_ \(M\)_._
### Explicit reciprocity laws and Iwasawa theory
A key motivation for this work is explicit reciprocity laws in Iwasawa theory. Let \(K_{n}=\mathbb{Q}_{p}(\zeta_{p^{n}})\) and \(K=\mathbb{Q}_{p}\). In the most classical case, Iwasawa's explicit reciprocity law [21] computes, for a system \(u=(u_{n})_{n}\in\varprojlim K_{n}^{\times}\) of \(p\)-power compatible units and \(m\geq 1\), the image of \(u\) under the composition
\[\lambda_{m}:\varprojlim K_{n}^{\times}\stackrel{{\kappa}}{{\longrightarrow}}\varprojlim H^{1}(K_{n},\mathbb{Z}_{p}(1))\cong\varprojlim H^{1}(K_{n},\mathbb{Z}_{p}(k))\stackrel{{\operatorname{Tr}_{K_{n}/K_{m}}}}{{\longrightarrow}}H^{1}(K_{m},\mathbb{Z}_{p}(k))\stackrel{{\exp^{*}}}{{\longrightarrow}}K_{m}\]
where \(\kappa\) is the Kummer map, the isomorphism is a Soule twist1, and \(\exp^{*}\) is the Bloch-Kato dual exponential map [22, II.1.2]. Explicitly,
Footnote 1: Concretely, using the isomorphism \(\varprojlim H^{1}(K_{n},\mathbb{Z}_{p}(1))\cong H^{1}(K,\mathbb{Z}_{p}[\![\Gamma_{K}]\!]\otimes_{\mathbb{Z}_{p}}\mathbb{Z}_{p}(1))\), the Soule twist arises from the isomorphism \(\mathbb{Z}_{p}[\![\Gamma_{K}]\!]\longrightarrow\mathbb{Z}_{p}[\![\Gamma_{K}]\!]\otimes_{\mathbb{Z}_{p}}\mathbb{Z}_{p}(1)\) of \(G_{K}\)-modules given by \(\gamma\mapsto\gamma\otimes\gamma e\) corresponding to a choice of basis \(e\) of \(\mathbb{Z}_{p}(1)\).
\[\lambda_{m}(u)=p^{-m}u_{m}(\operatorname{dlog}\theta_{u})(u_{m}-1)\]
where \(\theta_{u}\in\mathbb{Z}_{p}[\![T]\!]^{\times}\) is the Coleman power series for \(u\) and \(\operatorname{dlog}\theta=\frac{\theta^{\prime}(T)}{\theta(T)}\). The Iwasawa cohomology group \(H^{1}_{Iw}(K_{\infty}/K,\mathbb{Z}_{p}(1)):=\varprojlim H^{1}(K_{n},\mathbb{Z}_{p}(1))\) is important, in part, because it contains as an element the Euler system of cyclotomic units. This formula for \(\lambda_{m}\) thereby allows one to relate this Euler system to zeta values.
More generally, let \(L/\mathbb{Q}_{p}\) be a finite extension with uniformizer \(\pi\), let \(\mathcal{G}\) be a Lubin-Tate formal \(\mathcal{O}_{L}\)-module corresponding to \(\pi\), let \(L_{n}=L(\mathcal{G}[\pi^{n}])\), and let \(T\mathcal{G}\in\operatorname{Rep}_{\mathcal{O}_{L}}(G_{L})\) be the Tate module of \(\mathcal{G}\). Then for each \(m\geq 1\) and \(k\in\mathbb{Z}\) there is a map
\[\lambda_{m,k}:\varprojlim L_{n}^{\times}\stackrel{{\kappa}}{{\to}}H^{1}_{Iw}(L_{\infty}/L,\mathbb{Z}_{p}(1))\cong H^{1}_{Iw}(L_{\infty}/L,T\mathcal{G}^{\otimes-k}(1))\stackrel{{\operatorname{Tr}}}{{\to}}H^{1}(L_{m},T\mathcal{G}^{\otimes-k}(1))\stackrel{{\exp^{*}}}{{\longrightarrow}}L_{m}t^{k}_{\mathcal{G}}t^{-1}_{cycl}\]
where \(t_{\mathcal{G}}\in D_{dR}(T\mathcal{G}^{\otimes-1})\) and \(t_{cycl}\in D_{dR}(\mathcal{O}_{L}(-1))\) are the usual de Rham periods. Then work of Bloch and Kato [9] gives the explicit reciprocity law
\[\lambda_{m,k}(u)=\frac{1}{k!}\pi^{-mk}(\partial_{\mathcal{G}}^{k}\log\theta_{u })(u_{m})t^{k}_{\mathcal{G}}t^{-1}_{cycl}\]
for \(k\geq 1\), where \(\theta_{u}\in\mathcal{O}_{L}[\![T]\!]\) is again a Coleman power series and \(\partial_{\mathcal{G}}(f(T)):=\frac{1}{g(T)}f^{\prime}(T)\) with \(g(T)dT\) being the invariant differential for \(\mathcal{G}\).
Intuitively speaking, for a fixed \(k\geq 1\), the above explicit reciprocity law for \(\lambda_{m,k}\) extracts information from the system \((u_{n})_{n\geq 1}\) related to the special value of a \(p\)-adic \(L\)-function at \(s=k\). On the other hand, work of Perrin-Riou, Colmez, and Cherbonnier [33, 13] in the cyclotomic case \(\mathcal{G}=\mu_{p^{\infty}}\) and Schneider and Venjakob [35] in the general case shows how to interpolate all of the above "little" explicit reciprocity laws into one "big" explicit reciprocity law which sees the entire \(p\)-adic \(L\)-function at once. More precisely, if \(M\in\operatorname{Mod}^{\varphi_{q},et}_{\mathbf{A}_{L}}\) corresponds to \(T\mathcal{G}\in\operatorname{Rep}_{\mathcal{O}_{L}}(G_{L})\) under theorem 1.6, then there is a big dual exponential map [35, SS5]
\[\operatorname{Exp}^{*}:H^{1}_{Iw}(L_{\infty}/L,\mathcal{O}_{L}(1))\stackrel{{ \sim}}{{\longrightarrow}}M^{\psi=1}\]
where \(\psi\) is a certain endomorphism of \(M\). Moreover, we have \(M\cong\Omega^{1}_{\mathcal{O}_{\mathcal{G}}/\mathcal{O}_{L}}\cong\Omega^{1}_ {\mathcal{O}_{L}[\![T]\!]/\mathcal{O}_{L}}\) and the big explicit reciprocity law
\[(\operatorname{Exp}^{*}\circ\kappa)(u)=\operatorname{dlog}\theta_{u}.\]
Intuitively, this shows how to relate a \(p\)-adic \(L\)-function corresponding to a system \((u_{n})_{n}\) of units to a function \(\theta_{u}\in\mathcal{O}_{\mathcal{G}}\cong\mathbf{A}_{L}\cong\mathcal{O}_{L }[\![T]\!]\) on the Lubin-Tate group \(\mathcal{G}\).
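As an orienting example (classical, and not taken from the text above; normalizations vary across references): in the cyclotomic case \(\mathcal{G}=\mu_{p^{\infty}}\), \(L=\mathbb{Q}_{p}\), \(\pi=p\) with \(p\) odd, the system \(u=(\zeta_{p^{n}}-1)_{n\geq 1}\) is norm-compatible, its Coleman power series is \(\theta_{u}(T)=T\), and so the big explicit reciprocity law reads \((\operatorname{Exp}^{*}\circ\kappa)(u)=\operatorname{dlog}\theta_{u}=\frac{dT}{T}\).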
Two ingredients were essential for the above big explicit reciprocity law to be formulated and proved. First, there is a map \(\mathcal{O}_{G}\to\mathbf{A}_{L}\) from the ring of functions on \(\mathcal{G}\) to the period ring for the \(\varphi\)-modules. Second, the period ring \(\mathbf{A}_{L}\) is _imperfect_; indeed, the corresponding perfect period ring \(W_{L}(L^{\flat}_{\infty})\) has \(\Omega^{1}_{W_{L}(L^{\flat}_{\infty})/\mathcal{O}_{L}}=0\), presenting a fundamental obstruction to a big explicit reciprocity law like the one above. Moreover, in [35], \(\psi\) is shown to be related to the endomorphism \(\phi\) of \(\mathbf{A}_{L}\) via Pontryagin duality using an argument that makes use of local Tate duality and a _residue_ pairing
\[\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}}\Omega^{1}_{\mathbf{A}_{L}/\mathcal{O}_ {L}}\stackrel{{\operatorname{res}}}{{\longrightarrow}}\mathcal{O} _{L},\]
which suggests that \(\mathbf{A}_{L}\) being not too much larger than \(\mathcal{O}_{\mathcal{G}}\) is key.
In settings beyond the case of Lubin-Tate formal groups, there are families of little explicit reciprocity laws which lack big explicit reciprocity laws. For instance, Kato's generalized explicit reciprocity law [23], a key technical ingredient to Kato's work [24] on Iwasawa main conjectures for modular forms, is used to relate special values of \(L\)-functions with special values of derivatives of logarithms of _Siegel units_, which are certain functions on the \(p\)-divisible group of an elliptic curve.
The author suspects that the path forward in formulating and proving big explicit reciprocity laws in this setting involves constructing certain imperfect prisms \((A,I)\) over (the ordinary locus of) a modular curve \(X\) such that the \(p\)-divisible group \(\mathcal{E}[p^{\infty}]\) of the universal elliptic curve \(\mathcal{E}\to X\) has a map \(\mathcal{O}_{\mathcal{E}[p^{\infty}]}\to A\). Some partial progress is presented in example 3.26: given an ordinary elliptic curve over a \(p\)-complete ring \(R\) equipped with a compatible system of sections \(\operatorname{Spf}R_{n}\to\ker F^{n}\) of the subgroups \(\ker F^{n}\) over etale \(R\)-algebras, the general constructions given in SS 3.3 produce a map \(\mathcal{O}_{\varinjlim\ker F^{n}}\to W((\varinjlim R_{n})^{\flat})\). (If \((\varinjlim R_{n})^{\wedge}_{(p)}\) is perfectoid, then \((W((\varinjlim R_{n})^{\flat}),\ker\theta)\in R_{\underline{\Delta}}\) is a perfect prism.)
### Overview of the proofs
We briefly outline the key ideas in the proofs of theorems 1.3 and 1.6. When \(X=\operatorname{Spf}R\) for a perfectoid \(\mathcal{O}_{L}\)-algebra \(R\), \(\operatorname{Vect}(R_{\underline{\wedge}_{L}},\mathcal{O}_{\underline{\wedge}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\simeq\operatorname{Mod}^{\varphi_{q},et}_{W_{L}(R[\frac{1}{\pi}]^{\flat})}\), and in this case theorem 1.3 follows from standard arguments (due originally to Katz and Fontaine [25, 15]) relating etale \(\varphi\)-modules and local systems. Theorem 1.3 is then shown in general via a descent argument from the perfectoid case. This crucially relies on the fact that there is a perfection functor \((A,I)\mapsto(A,I)_{\operatorname{perf}}\) which induces an equivalence on the corresponding categories of etale \(\varphi_{q}\)-modules.
**Theorem 1.7**.: _(c.f. [43, theorem 4.6] for the \(\mathbb{Q}_{p}\)-typical case). Let \((A,I)\) be a bounded \(L\)-typical prism with perfection \((A_{\operatorname{perf}},IA_{\operatorname{perf}})\). Then base change induces an equivalence_
\[\operatorname{Mod}^{\phi,et}_{(A,I)} \stackrel{{\sim}}{{\longrightarrow}}\operatorname{Mod }^{\phi,et}_{(A,I)_{\operatorname{perf}}}\] \[M \mapsto M\otimes_{A[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)}}A_{ \operatorname{perf}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)}\]
_between the categories of etale \(\varphi_{q}\)-modules over \((A,I)\) and \((A,I)_{\operatorname{perf}}\)._
The \(X=\operatorname{Spf}\mathcal{O}_{K_{\infty}}\) part of theorem 1.6 follows nearly immediately from theorem 1.3. Intuitively, one would like to conclude the \(X=\operatorname{Spf}\mathcal{O}_{K}\) part by descending along \(Y=\operatorname{Spf}\mathcal{O}_{K_{\infty}}\to X=\operatorname{Spf} \mathcal{O}_{K}\) and picking up a semilinear action of \(\Gamma_{K}=\operatorname{Gal}(K_{\infty}/K)\). However, instead of using this angle of attack, we will use a more delicate descent argument along the Cech nerve \((W_{L}(\mathcal{O}^{\flat}_{K_{\infty}}),\ker\theta)^{\bullet}\) in the perfect prismatic site \((\mathcal{O}_{K})^{\operatorname{perf}}_{\underline{\wedge}_{L}}\). This argument allows us to recover a Laurent \(F\)-crystal \(\mathcal{M}\) over \((\mathcal{O}_{K})_{\underline{\wedge}_{L}}\) from the data of \(M=\mathcal{M}(W_{L}(\mathcal{O}^{\flat}_{K_{\infty}}),\ker\theta)\) and a semilinear action of \(\operatorname{Aut}_{(\mathcal{O}_{K})_{\underline{\wedge}_{L}}}(W_{L}( \mathcal{O}^{\flat}_{K_{\infty}}),\ker\theta)\cong\Gamma_{K}\), and to compute \(R\Gamma((\mathcal{O}_{K})_{\underline{\wedge}_{L}},\mathcal{M})\cong C^{ \bullet}_{\operatorname{cont}}(\Gamma_{K},M)\).
### Structure of the paper
In SS2 we introduce \(\delta_{L}\)-algebras, review ramified Witt vectors, and develop basic results about distinguished elements and perfect \(\delta_{L}\)-algebras. In SS3 we then introduce \(L\)-typical prisms; perfectoid \(\mathcal{O}_{L}\)-algebras, the proof of theorem 1.4, and the perfection functor appear in SS3.2. In SS3.3 we describe two general constructions which - given an \(L\)-typical prism \((A,I)\), a perfectoid \(\mathcal{O}_{L}\)-algebra \(R\), and a \(\phi\)-compatible system of maps \((\iota_{n}:A\to R)_{n}\) - produce a map \((A,I)\to(A_{\inf}(R),\ker\theta)\) to the perfect \(L\)-typical prism corresponding to \(R\); the example 3.26 discussed above, involving constructing a map from a sub-\(p\)-divisible group of the \(p\)-divisible group of an elliptic curve to \(W((\varinjlim R_{n})^{\flat})\), is also situated here.
Starting in SS4, we will take \(\mathcal{G}\) to be a Lubin-Tate formal \(\mathcal{O}_{L}\)-module corresponding to a uniformizer \(\pi\) of \(L\). We explain in SS4.1 how to equip \(\mathcal{O}_{\mathcal{G}}\cong\mathcal{O}_{L}[\![T]\!]\) with ideals \((q_{n}(T))\) which turn it into an \(L\)-typical prism; furthermore, the constructions from SS3.3 allow us to, given a choice of basis \(e\) for the rank one \(\mathcal{O}_{L}\)-module \(T\mathcal{G}\), produce an embedding \((\mathcal{O}_{\mathcal{G}},(q_{n}(T)))\hookrightarrow(W_{L}(\mathcal{O}_{L_{\infty}}^{\flat}),\ker\theta)\) into a perfect prism. Given a \(p\)-adic field \(K/L\), we extend this construction in SS4.2 to give a prism \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\in(\mathcal{O}_{K})_{\underline{\Delta}_{L}}\) with perfection \((W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}),\ker\theta)\). In SS 4.3 we review the basics of the theory of Lubin-Tate \((\varphi_{q},\Gamma)\)-modules and the \(\Gamma_{K}\)-action on \(\mathbf{A}_{K}\). Then SS 4.4 contains discussion of the prismatic logarithm for \(\mathcal{G}\); we included this section because we believed the construction was interesting, but it plays no further role in this paper.
Finally, SS5 is the technical heart of the paper. In SS5.1 we define \(\varphi_{q}\)-modules over \(L\)-typical prisms and prove theorem 1.7. Then SS5.2 defines Laurent \(F\)-crystals and proves theorem 1.3, with theorem 4.13 following in SS5.3.
### Acknowledgements
This work would not have been possible without the support, guidance, and frequent prophetic suggestions of Mark Kisin. I also thank Alexander Petrov for help with various aspects of the prismatic theory, and Daniel Li-Huerta for help with v-descent results for diamonds. Finally, an earlier version of this document contained errors which were kindly pointed out by Kazuhiro Ito, including that my original statement of theorem 1.4 was incorrect; I thank Dr. Ito for identifying these errors and for helping me arrive at a proof for the corrected theorem 1.4.
## 2 \(\delta_{L}\)-algebras and ramified Witt vectors
Recall that a \(\delta\)-ring is a ring \(A\) together with a map \(\delta:A\to A\) of sets satisfying certain properties which guarantee that
\[\phi: A\longrightarrow A\] \[x\mapsto x^{p}+p\delta(x)\]
is a ring homomorphism lifting the Frobenius endomorphism \(x\mapsto x^{p}\) of \(A/p\). In this section, we will recall a mild generalization of the theory of \(\delta\)-rings which applies in the following context.
Let \(L/\mathbb{Q}_{p}\) be a finite extension with ring of integers \(\mathcal{O}_{L}\), uniformizer \(\pi\), and residue field \(\mathcal{O}_{L}/\pi\) of size \(q\). Then a \(\delta_{L}\)-algebra will be an \(\mathcal{O}_{L}\)-algebra \(A\) equipped with a map \(\delta_{L}:A\to A\) of sets satisfying certain properties which guarantee that
\[\phi(x)=x^{q}+\pi\delta_{L}(x)\]
is a ring homomorphism lifting the \(q\)-Frobenius \(\varphi_{q}(x)=x^{q}\) of \(A/\pi\).
**Remark 2.1**.: By a theorem of Wilkerson [41], \(\delta\)-rings are the same as \(p\)-typical \(\lambda\)-rings, a notion generalized by the \(\Lambda\)-rings of Borger [10]. The results of this section are obtained as special cases of Borger's theory of \(\Lambda\)-rings over \(\mathcal{O}_{L}\) in the \(\pi\)-typical setting.
### Basic theory
**Definition 2.2**.:
1. A \(\delta_{L}\)_-algebra_ is an \(\mathcal{O}_{L}\)-algebra \(A\) equipped with a map \(\delta_{L}:A\to A\) of sets satisfying the identities \[\delta_{L}(\alpha) =\frac{\alpha-\alpha^{q}}{\pi} \text{for }\alpha\in\mathcal{O}_{L}\] \[\delta_{L}(xy) =\delta_{L}(x)y^{q}+x^{q}\delta_{L}(y)+\pi\delta_{L}(x)\delta_{ L}(y) \text{for }x,y\in A\] \[\delta_{L}(x+y) =\delta_{L}(x)+\delta_{L}(y)+\frac{x^{q}+y^{q}-(x+y)^{q}}{\pi} \text{for }x,y\in A\] (2.1) where in (2.1) the expression \(\frac{x^{q}+y^{q}-(x+y)^{q}}{\pi}\) is shorthand for \[-\sum_{i=1}^{q-1}\frac{1}{\pi}\binom{q}{i}x^{i}y^{q-i}\] which makes sense even when \(A\) has \(\pi\)-torsion. If \(A\) is an \(\mathcal{O}_{L}\)-algebra then by a \(\delta_{L}\)_-structure_ on \(A\) we mean a choice of map \(\delta_{L}:A\to A\) as above making \(A\) into a \(\delta_{L}\)-algebra.
2. There is an evident category \(\text{Alg}_{\delta_{L}}\) of \(\delta_{L}\)-algebras, with maps being \(\mathcal{O}_{L}\)-algebra maps which commute with the \(\delta_{L}\)-structures.
3. If \(A\) is a \(\delta_{L}\)-algebra, then we have a map \(\phi_{A,\delta_{L}}:A\to A\) given by \(\phi_{A,\delta_{L}}(x)=x^{q}+\pi\delta_{L}(x)\) which lifts the \(q\)-Frobenius \(\varphi_{q}\) on \(A/\pi\). Using the assumed identities on \(\delta_{L}\), one verifies that \(\phi_{A,\delta_{L}}\) is an \(\mathcal{O}_{L}\)-algebra homomorphism. Usually \(A\) and \(\delta_{L}\) will be clear from context and we will simply write \(\phi_{A}\) or \(\phi\) for \(\phi_{A,\delta_{L}}\).
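Since the verification is left implicit above, we record it (a direct check from the identities in definition 2.2):
\[\phi(x+y)=(x+y)^{q}+\pi\delta_{L}(x)+\pi\delta_{L}(y)+\left(x^{q}+y^{q}-(x+y)^{q}\right)=\phi(x)+\phi(y),\]
\[\phi(xy)=(xy)^{q}+\pi\left(\delta_{L}(x)y^{q}+x^{q}\delta_{L}(y)+\pi\delta_{L}(x)\delta_{L}(y)\right)=\left(x^{q}+\pi\delta_{L}(x)\right)\left(y^{q}+\pi\delta_{L}(y)\right)=\phi(x)\phi(y),\]
and \(\phi(\alpha)=\alpha^{q}+\pi\cdot\frac{\alpha-\alpha^{q}}{\pi}=\alpha\) for \(\alpha\in\mathcal{O}_{L}\), so \(\phi\) is indeed an \(\mathcal{O}_{L}\)-algebra endomorphism.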
**Remark 2.3**.:
1. If \(A\) is a \(\pi\)-torsion-free \(\mathcal{O}_{L}\)-algebra and \(\phi\) is an endomorphism lifting \(\varphi_{q}\), then we obtain a \(\delta_{L}\)-structure on \(A\) by \[\delta_{L}(x)=\frac{\phi(x)-x^{q}}{\pi}.\] This is easily seen to give a one-to-one correspondence between \(\delta_{L}\)-structures on \(A\) and lifts of \(\varphi_{q}\) to \(A\). When \(A\) has \(\pi\)-torsion, having a \(\delta_{L}\)-structure is stronger than having a lift of \(\varphi_{q}\).
2. Taking the defining relations of a \(\delta_{L}\)-structure modulo \(\pi\), we see that a \(\delta_{L}\)-structure on an \(\mathcal{O}_{L}\)-algebra \(A\) induces an \(\mathbb{F}_{q}\)-module map \(\delta_{L}:A/\pi\to A/\pi\) such that \(\delta_{L}(\alpha)=0\) for \(\alpha\in\mathbb{F}_{q}\) and we have the analogue \[\delta_{L}(xy)=x^{q}\delta_{L}(y)+y^{q}\delta_{L}(x)\] of the Leibniz rule.
3. The properties defining the map \(\delta_{L}\) evidently depend on the choice of uniformizer \(\pi\), so one might worry that \(\delta_{L}\)-structures on an \(\mathcal{O}_{L}\)-algebra \(A\) depend on the choice of \(\pi\) as well. Fortunately, there is no essential dependence: if \(A\) has a \(\delta_{L}\)-structure with respect to \(\pi\) and \(\pi^{\prime}=u\pi\) for \(u\in\mathcal{O}_{L}^{\times}\) is another uniformizer, then \(\alpha\mapsto u^{-1}\delta_{L}(\alpha)\) is a \(\delta_{L}\)-structure with respect to \(\pi^{\prime}\).
4. See [20, remark 2.2.7] (generalizing [7, remark 2.4]) for an alternative characterization of \(\delta_{L}\)-structures on \(A\) in terms of \(\mathcal{O}_{L}\)-algebra sections of the length \(2\) ramified Witt vectors \(W_{L,2}(A)\). In particular, this characterization immediately implies that the category of \(\delta_{L}\)-algebras admits all limits and colimits, and that they commute with the forgetful functor to \(\mathcal{O}_{L}\)-algebras.
5. The category of \(\delta_{L}\)-algebras is also closed with respect to classical \(I\)-adic completion with respect to an ideal \(I\subseteq A\) containing \(\pi\) (cf. [20, lemma 2.2.10] or the proof of [7, lemma 2.17]).
A key fact about \(\delta_{L}\)-algebras is that the forgetful functor \(\operatorname{Alg}_{\delta_{L}}\to\operatorname{Alg}_{\mathcal{O}_{L}}\) has a right adjoint \(W_{L}\), which is identified with Hazewinkel's ramified Witt vector functor [19]. Explicitly, for \(n\geq 0\), let
\[w_{n}(X_{0},\dots,X_{n})=X_{0}^{q^{n}}+\pi X_{1}^{q^{n-1}}+\dots+\pi^{n-1}X_{ n-1}^{q}+\pi^{n}X_{n}\in\mathcal{O}_{L}[X_{0},\dots,X_{n}]\subseteq\mathcal{O}_{L} [X_{0},X_{1},\dots]\]
be the \(n\)th ghost component polynomial. For any \(\mathcal{O}_{L}\)-algebra \(R\), let \(W_{L}(R)=R^{\mathbb{N}}\) as sets, and let
\[w_{R}:W_{L}(R) \longrightarrow R^{\mathbb{N}}\] \[x=(x_{0},x_{1},\dots) \mapsto(w_{0}(x),w_{1}(x),\dots)\]
be the ghost component map. Since \(w_{R}\) is a bijection when \(R\) is \(\pi\)-torsion free and any \(\mathcal{O}_{L}\)-algebra is a quotient of a free \(\mathcal{O}_{L}\)-algebra, there is a unique choice of \(\mathcal{O}_{L}\)-algebra structure on \(W_{L}(R)\) such that \(w_{R}\) is a map of \(\mathcal{O}_{L}\)-algebras _and_\(W_{L}\) is a functor \(\operatorname{Alg}_{\mathcal{O}_{L}}\to\operatorname{Alg}_{\mathcal{O}_{L}}\); equip \(W_{L}(R)\) with this \(\mathcal{O}_{L}\)-algebra structure. One also checks that the projection map \(W_{L}(R)=R^{\mathbb{N}}\twoheadrightarrow R\) onto the first factor is an \(\mathcal{O}_{L}\)-algebra homomorphism.
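As an aside, this ghost-component characterization is effective: one can solve for the Witt sum polynomials coordinate by coordinate. The following sketch (an illustration only, not from the paper; it assumes the sympy library, takes \(q=3\) as a stand-in residue cardinality, and treats the uniformizer as a formal symbol `pi`) derives the first sum coordinate and checks the defining ghost relation symbolically.

```python
# Minimal sketch (assumptions: sympy installed; q = 3 and `pi` are formal
# stand-ins for the residue cardinality and the uniformizer of O_L).
import sympy as sp

q = 3
pi = sp.symbols('pi')
x0, x1, y0, y1 = sp.symbols('x0 x1 y0 y1')

def w1(a0, a1):
    """Second ghost component: w_1(a) = a0^q + pi * a1."""
    return a0**q + pi * a1

# w_0 is additive, so the zeroth sum coordinate is s0 = x0 + y0; the first
# coordinate s1 is forced by the ghost relation w_1(s) = w_1(x) + w_1(y).
s0 = x0 + y0
s1 = sp.cancel((w1(x0, x1) + w1(y0, y1) - s0**q) / pi)

# Here s1 = x1 + y1 - (3*x0**2*y0 + 3*x0*y0**2)/pi; the binomial coefficients
# binom(q, i), 0 < i < q, are divisible by p and hence by pi in O_L, so s1
# has coefficients in O_L -- the same point as the shorthand in (2.1).
print(sp.expand(s1))
assert sp.simplify(w1(s0, s1) - w1(x0, x1) - w1(y0, y1)) == 0
```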
The above paragraph explains the \(\mathcal{O}_{L}\)-algebra structure on \(W_{L}(R)\); we now explain the \(\delta_{L}\)-structure. In the case that \(R\) is \(\pi\)-torsion-free, \(W_{L}(R)\) is \(\pi\)-torsion-free as well, so giving a \(\delta_{L}\)-structure is the same as giving a lift of \(q\)-Frobenius. This is provided by the canonical Witt vector Frobenius.
**Proposition 2.4**.: _If \(R\) is an \(\mathcal{O}_{L}\)-algebra, then there are endomorphisms \(F_{R}\) and \(V_{R}\) of \(W_{L}(R)\), natural in \(R\), such that for \(x,y\in W_{L}(R)\) we have_
\[F_{R}(x) \equiv x^{q}\mod\pi W_{L}(R),\] \[F_{R}(V_{R}(x)) =\pi x,\] \[V_{R}(xF_{R}(y)) =V_{R}(x)y,\]
_and, writing \(w_{n}\) for the ghost components, the identities_
\[w_{n}(F_{R}(x))=w_{n+1}(x),\qquad w_{n}(V_{R}(x))=\pi w_{n-1}(x)\qquad(n\geq 0,\text{ with }w_{-1}:=0)\]
(2.2)
_hold._
Proof.: This uses the same arguments as for \(p\)-typical Witt vectors; see [34, pg. 14] for details.
In fact, \(W_{L}(R)\) has a \(\delta_{L}\)-structure even when \(R\) is not \(\pi\)-torsion-free.
**Lemma 2.5**.: \(W_{L}\) _extends to a functor \(\operatorname{Alg}_{\mathcal{O}_{L}}\to\operatorname{Alg}_{\delta_{L}}\) which is right adjoint to the forgetful functor. Explicitly, this means that if \(A\) is a \(\delta_{L}\)-algebra then any \(\mathcal{O}_{L}\)-algebra map \(A\to R\) lifts to a unique \(\delta_{L}\)-algebra map \(A\to W_{L}(R)\) whose composite with the projection \(W_{L}(R)\twoheadrightarrow R\) is the given map:_
\[A\longrightarrow W_{L}(R)\twoheadrightarrow R.\]
(2.3)
Proof.: See [10].
We will make use of two distinct sections of \(W_{L}(R)\twoheadrightarrow R\). One is the usual _Teichmuller_ lift \(r\mapsto[r]\), a multiplicative section which exists for any \(\mathcal{O}_{L}\)-algebra \(R\). In the case that \(R\) also has a \(\delta_{L}\)-structure, another section exists which is moreover a \(\delta_{L}\)-algebra map.
**Proposition 2.6**.:
1. _If_ \(R\) _is any_ \(\delta_{L}\)_-algebra, then there is a unique map_ \(s_{R}:R\to W_{L}(R)\) _of_ \(\delta_{L}\)_-algebras which is a section of_ \(W_{L}(R)\twoheadrightarrow R\)_. It satisfies_ \(w_{n}(s_{R}(\alpha))=\phi_{R}^{n}(\alpha)\) _for all_ \(n\geq 0\) _and_ \(\alpha\in R\)_._
2. _If_ \(R\) _is an_ \(\mathcal{O}_{L}\)_-algebra, then the map_ \[[-]:R \longrightarrow W_{L}(R)\] \[r \mapsto(r,0,0,\dots)\] _is a multiplicative section of_ \(W_{L}(R)\twoheadrightarrow R\)_._
Note that if \(R\) is \(\pi\)-torsion-free, then the formula in (1) uniquely determines the map \(s_{R}\).
Proof.: For part (1), \(s_{R}\) is the unit of the adjunction from lemma 2.5 (i.e. apply the lemma to \(\operatorname{id}:R\to R\)). The formula for \(w_{n}(s_{R}(\alpha))\) follows from the first identity in (2.2) and the defining property of \(s_{R}\) as
\[w_{n}(s_{R}(\alpha))=w_{0}(F_{W_{L}(R)}^{n}s_{R}(\alpha))=w_{0}(s_{R}(\phi_{R} ^{n}\alpha))=\phi_{R}^{n}\alpha.\]
Part (2) is clear, as one only needs to check that the formula given defines a multiplicative map. But let us explain the relationship to part (1): let \(R^{\circ}\) denote \(R\) viewed as a multiplicative monoid. Then the free \(\mathcal{O}_{L}\)-algebra \(\mathcal{O}_{L}[R^{\circ}]\) has a lift of \(q\)-Frobenius induced by \(r\mapsto r^{q}\). Thus applying lemma 2.5 to the canonical map \(\mathcal{O}_{L}[R^{\circ}]\to R\) gives a \(\delta_{L}\)-algebra map \(\mathcal{O}_{L}[R^{\circ}]\to W_{L}(R)\), and the Teichmuller map is the composite
\[R^{\circ}\to\mathcal{O}_{L}[R^{\circ}]\to W_{L}(R).\]
To get the formula \([r]=(r,0,0,\dots)\), one uses the same reasoning as in part (1) to show that this formula holds when \(R\) is \(\pi\)-torsion-free, from which it follows in general.
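Let us also record a standard consequence used implicitly in SS3.2 (we state it without proof; see e.g. [34]): if \(R\) is a perfect \(\mathbb{F}_{q}\)-algebra, then \(\pi\cdot(x_{0},x_{1},\dots)=(0,x_{0}^{q},x_{1}^{q},\dots)\) in \(W_{L}(R)\), and consequently every Witt vector has a convergent Teichmuller expansion
\[(x_{0},x_{1},\dots)=\sum_{n\geq 0}[x_{n}^{1/q^{n}}]\pi^{n},\]
which is the form in which the map \(\theta\) of definition 3.7 below is most easily remembered.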
### Distinguished elements and perfect \(\delta_{L}\)-algebras
This section develops results about distinguished elements and perfect \(\delta_{L}\)-algebras analogous to those in [7, SS2.3,SS2.4].
**Definition 2.7**.: Let \(A\) be a \(\delta_{L}\)-algebra. An element \(d\in A\) is _distinguished_ if \(\delta_{L}(d)\) is a unit of \(A\).
**Remark 2.8**.:
1. As \(\delta_{L}(\pi)=1-\pi^{q-1}\), we have that \(\pi\) is distinguished in any \(\delta_{L}\)-algebra.
2. The significance of distinguished elements is that if \((A,I)\) is a \(L\)-typical prism (to be introduced in SS3), then \(I\) is locally generated by distinguished elements (see condition (iii) in the following lemma). As such, we are interested in the case that \(A\) is \(d\)-adically complete; more generally we will assume that \(d\in\operatorname{Rad}(A)\) is in the Jacobson radical of \(A\).
**Lemma 2.9**.: _Let \(A\) be a \(\delta_{L}\)-algebra, and let \(d\in\operatorname{Rad}(A)\). The following are equivalent:_
1. \(d\) _is distinguished._
2. _The ideal_ \((d)\) _contains a distinguished element._
3. \(\pi\in(d^{q},\phi(d))\)_._
4. \(\pi\in(d,\phi(d))\)_._
Proof.: Clearly \((\mathrm{i})\Longrightarrow(\mathrm{ii})\). Conversely, suppose we have \(d^{\prime}=\alpha d\) for some \(\alpha,d^{\prime}\in A\) with \(d^{\prime}\) distinguished. Applying \(\delta_{L}\) and working mod \((\pi,d)\) (using remark 2.3(2) to simplify) we have
\[\delta_{L}(d^{\prime})\equiv\alpha^{q}\delta_{L}(d)\pmod{(\pi,d)},\]
which shows that \(\delta_{L}(d)\) is a unit in \(A/(\pi,d)\). As \(\pi,d\in\operatorname{Rad}(A)\), we have that \(\delta_{L}(d)\in A^{\times}\) as well.
We now show \((\mathrm{i})\Longrightarrow(\mathrm{iii})\Longrightarrow(\mathrm{iv})\Longrightarrow( \mathrm{i})\). The first implication follows directly from the formula \(\phi(d)=d^{q}+\pi\delta_{L}(d)\), and the second implication is clear. For the last implication, suppose that \(\pi=\alpha d+\beta\phi(d)\) for some \(\alpha,\beta\in A\). Applying \(\delta_{L}\) to this formula and working mod \((\pi,d)\) we get
\[\delta_{L}(\pi)\equiv\delta_{L}(d)(\alpha^{q}+\beta^{q}\delta_{L}(d)^{q-1}) \pmod{(\pi,d)}.\]
Then since \(\pi\) is distinguished in any \(\delta_{L}\)-algebra, we conclude that \(\delta_{L}(d)\) is a unit in \(A/(\pi,d)\) and thus in \(A\) as well.
**Definition 2.10**.: A \(\delta_{L}\)-algebra \(A\) is _perfect_ if \(\phi_{A}\) is an isomorphism.
**Lemma 2.11**.: _(See [7, Lemma 2.28].) Let \(A\) be a \(\delta_{L}\)-algebra. Then if \(\alpha\in A\) is \(\pi\)-torsion, we have \(\phi(\alpha)=0\). In particular, if \(A\) is perfect then \(A\) is \(\pi\)-torsion free._
Proof.: Applying \(\delta_{L}\) to \(\pi\alpha=0\) gives
\[0=\pi^{q}\delta_{L}(\alpha)+\delta_{L}(\pi)\alpha^{q}+\pi\delta_{L}(\pi)\delta _{L}(\alpha)=\pi^{q}\delta_{L}(\alpha)+\delta_{L}(\pi)\phi(\alpha).\]
As \(\delta_{L}(\pi)=1-\pi^{q-1}\) is a unit and
\[\pi^{q}\delta_{L}(\alpha)=\phi(\pi^{q-1}\alpha)-\pi^{q-1}\alpha^{q}=0\]
we are done.
**Lemma 2.12**.: _If \(A\) is a perfect and \(\pi\)-adically complete \(\delta_{L}\)-algebra, and \(d\) is distinguished, then \(d\) is a nonzerodivisor._
Proof.: Suppose that \(d\alpha=0\) and suppose towards a contradiction that \(\alpha\neq 0\). Since \(A\) is \(\pi\)-torsionfree by lemma 2.11 and \(\pi\)-adically complete, we can further assume that \(\pi\nmid\alpha\). Applying \(\delta_{L}\) to \(d\alpha=0\) gives
\[\alpha^{q}\delta_{L}(d)+\delta_{L}(\alpha)\phi(d)=0.\]
Multiplying by \(\phi(\alpha)\) and using that \(d\) is distinguished then implies \(\alpha^{q}\phi(\alpha)=0\). Thus \(\alpha^{2q}\equiv 0\pmod{\pi}\). But as \(\phi\) is a bijection, \(\varphi_{q}\) is injective, so \(\pi|\alpha\), a contradiction.
A key fact about perfect \(\delta_{L}\)-algebras is the following.
**Proposition 2.13**.: _(See [7, Corollary 2.31].) The functors \(A\mapsto A/\pi\) and \(R\mapsto W_{L}(R)\) are mutually quasi-inverse equivalences between the category of \(\pi\)-adically complete perfect \(\delta_{L}\)-algebras and the category of perfect \(\mathbb{F}_{q}\)-algebras._
Proof.: By lemma 2.11, the forgetful functor has image in \(\pi\)-torsion free rings. By the vanishing of the cotangent complex \(\mathbb{L}_{R/\mathbb{F}_{q}}\) for a perfect \(\mathbb{F}_{q}\)-algebra \(R\) and deformation theory, there is a unique \(\pi\)-adically complete and \(\pi\)-torsion-free \(\mathcal{O}_{L}\)-algebra \(\tilde{R}\) such that \(\tilde{R}/\pi\cong R\). Since \(R\mapsto\tilde{R}\) is clearly quasi-inverse to \(A\mapsto A/\pi\), it suffices to show that \(\tilde{R}\) is naturally isomorphic to forget\((W_{L}(R))\).
Since \(R\mapsto\tilde{R}\) is a functor, \(\tilde{R}\) comes equipped with a canonical lift of \(q\)-Frobenius and thus by lemma 2.5 a canonical map \(s_{\tilde{R}}:\tilde{R}\to W_{L}(R)\) lifting \(\tilde{R}\twoheadrightarrow\tilde{R}/\pi=R\). By [34, prop. 1.1.18], \(W_{L}(R)\) is \(\pi\)-adically complete, so it suffices to show that \(s_{\tilde{R}}\) induces an isomorphism \(R\to W_{L}(R)/\pi\). But this is clear since an inverse is given by the map \(W_{L}(R)/\pi\to R/\pi=R\) induced by \(W_{L}(R)\twoheadrightarrow R\).
**Corollary 2.14**.: _If \(R\) is a perfect \(\mathbb{F}_{q}\) algebra, then \(W_{L}(R)\cong W(R)\otimes_{W(\mathbb{F}_{q})}\mathcal{O}_{L}\) where \(W\) denotes the \(p\)-typical Witt vectors. In particular \(W_{L}(\mathbb{F}_{q})=\mathcal{O}_{L}\)._
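More generally (a standard identification which we note in passing): \(W_{L}(\mathbb{F}_{q^{n}})\cong W(\mathbb{F}_{q^{n}})\otimes_{W(\mathbb{F}_{q})}\mathcal{O}_{L}\) is the ring of integers of the degree \(n\) unramified extension of \(L\).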
## 3 \(L\)-typical prisms
As before, let \(L/\mathbb{Q}_{p}\) be a finite extension with uniformizer \(\pi\) and residue field \(\mathbb{F}_{q}\). In this section, we introduce \(L\)-typical prisms, which are a mild generalization of prisms obtained
by replacing \(\delta\)-rings with \(\delta_{L}\)-algebras. In SS3.1 we define \(L\)-typical prisms and the \(L\)-typical prismatic site of a formal scheme \(X\) over \(\operatorname{Spf}\mathcal{O}_{L}\).
Prisms, as defined in [7], can be viewed as "deperfections" of perfectoid rings, in the sense that the subcategory of perfect prisms is equivalent to the category of perfectoid rings. Similarly, in SS3.2 we show that the category of \(L\)-typical prisms has a subcategory of perfect \(L\)-typical prisms, which are equivalent to _perfectoid \(\mathcal{O}_{L}\)-algebras_ (i.e. perfectoid rings with an \(\mathcal{O}_{L}\)-algebra structure). We also show that there is a perfection functor for \(L\)-typical prisms.
In SS3.3, we give two constructions which - given an \(L\)-typical prism \((A,I)\), a perfectoid \(\mathcal{O}_{L}\)-algebra \(R\), and a system of \(\phi\)-compatible maps \((\iota_{n}:A\to R)_{n}\) - produce a map \((A,I)\to(A_{\inf}(R),\ker\theta)\) to the perfect \(L\)-typical prism corresponding to \(R\). These constructions will play a crucial role in SS4, where they are used to embed an \(L\)-typical prism coming from a Lubin-Tate formal \(\mathcal{O}_{L}\)-module inside a perfect \(L\)-typical prism.
### Basic theory
**Definition 3.1**.:
1. An \(L\)_-typical prism_ is a pair \((A,I)\) where \(A\) is a \(\delta_{L}\)-algebra and \(I\subseteq A\) is an ideal defining a Cartier divisor on \(\operatorname{Spec}(A)\) such that \(A\) is derived \((\pi,I)\)-complete and \(\pi\in I+\phi_{A}(I)\). A morphism \((A,I)\to(B,J)\) of prisms is a \(\delta_{L}\)-algebra morphism \(f:A\to B\) such that \(f(I)\subseteq J\).
2. An \(L\)-typical prism \((A,I)\) is _perfect_ if \(A\) is a perfect \(\delta_{L}\)-algebra. It is _bounded_ if \(A/I\) has bounded \(\pi^{\infty}\)-torsion, i.e. \(A/I[\pi^{\infty}]=A/I[\pi^{n}]\) for some \(n\geq 0\).
3. If \(X\) is a formal scheme over \(\operatorname{Spf}\mathcal{O}_{L}\) then the (absolute) \(L\)-typical prismatic site \(X_{\underline{\Delta}_{L}}\) has * objects: bounded \(L\)-typical prisms \((A,I)\) together with a map of formal schemes \(\operatorname{Spf}(A/I)\to X\); * morphisms: maps of \(L\)-typical prisms compatible with the structure map to \(X\); * covers: morphisms \((A,I)\to(B,J)\) such that \(A\to B\) is \((\pi,I)\)-completely faithfully flat. If \(X=\operatorname{Spf}(R)\) then we write \(R_{\underline{\Delta}_{L}}\) for \(X_{\underline{\Delta}_{L}}\).
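For orientation, here is a simple example (a Breuil-Kisin-type prism; it continues the computation recorded after lemma 2.9 and is stated for illustration only): take \(A=\mathcal{O}_{L}[\![u]\!]\) with \(\delta_{L}(u)=0\) and \(I=(u-\pi)\). Then \(A\) is \((\pi,u)\)-adically (hence \((\pi,I)\)-adically) complete, \(u-\pi\) is a distinguished nonzerodivisor, so \(\pi\in I+\phi(I)\) by lemma 2.9, and \(A/I\cong\mathcal{O}_{L}\) (via \(u\mapsto\pi\)) is \(\pi\)-torsionfree; thus \((A,I)\) is a bounded \(L\)-typical prism, and together with the identification \(A/I\cong\mathcal{O}_{L}\) it defines an object of \((\mathcal{O}_{L})_{\underline{\Delta}_{L}}\).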
The same definition was independently given in the concurrent work of Ito [20], where \(L\)-typical prisms are called \(\mathcal{O}_{L}\)-prisms in Ito's terminology.
**Remark 3.2**.: For the notions of derived \(I\)-completeness and \(I\)-complete faithful flatness, see [7, SS1.2]. Note that [43] omits the word "faithfully" in the definition of a cover.
As suggested by the definition of the prismatic site, we will only be interested in bounded prisms. In this case, we need not worry about the word "derived" in the definition of an \(L\)-typical prism.
**Lemma 3.3**.: _If \((A,I)\) is a bounded \(L\)-typical prism, then \(A\) is classically \((\pi,I)\) complete._
Proof.: This is the same as in [7, lem. 3.7]. In more detail, we may suppose that \(I=(d)\) for a nonzerodivisor \(d\). Then by the derived \((\pi,d)\)-completeness of \(A\), the fact that \(A/d^{m}\) has bounded \(\pi\)-torsion for all \(m\) (by devissage), and [40, Tag 091X], we have
\[A \cong R\lim_{m}R\lim_{n}(A\otimes_{\mathbb{Z}[d]}^{L}\mathbb{Z}[d ]/(d^{m}))\otimes_{\mathbb{Z}[\pi]}^{L}\mathbb{Z}[\pi]/(\pi^{n})\] \[\cong R\lim_{m}R\lim_{n}A/(d^{m})\otimes_{\mathbb{Z}[\pi]}^{L} \mathbb{Z}[\pi]/(\pi^{n})\cong\lim_{m}\lim_{n}A/(d^{m},\pi^{n})\]
as desired.
If \((A,I)\) is an \(L\)-typical prism with \(I\)_principal_, then lemma 2.9 shows that the condition \(\pi\in(I,\phi(I))\) is equivalent to \(I\) being generated by a distinguished element. Under the weaker assumption that \(I\) is Zariski-locally principal, the condition \(\pi\in(I,\phi(I))\) is equivalent to \(I\) being ind-Zariski-locally generated by a distinguished element. (The 'ind-' is necessary because after passing to a Zariski open, we may no longer have \((\pi,I)\subseteq\operatorname{Rad}(A)\), which necessitates passing to a further localization along \((\pi,I)\); see [7, footnote 8] for more details.)
**Lemma 3.4**.: _Let \(A\) be a \(\delta_{L}\)-algebra and \(I\subseteq A\) a Zariski-locally principal ideal such that \((\pi,I)\subseteq\operatorname{Rad}(A)\). Then the following are equivalent:_
1. \(\pi\in(I^{q},\phi(I))\)_._
2. \(\pi\in(I,\phi(I))\)_._
3. _There is a faithfully flat map of_ \(\delta_{L}\)_-algebras_ \(A\to A^{\prime}\) _with_ \(A^{\prime}\) _an ind-(Zariski localization) of_ \(A\) _such that_ \(IA^{\prime}\) _is generated by a distinguished element_ \(d\) _and_ \((\pi,d)\in\operatorname{Rad}(A^{\prime})\)_._
Proof.: We follow [7, lem. 3.1]. Clearly (i)\(\implies\)(ii). For (ii)\(\implies\)(iii), since \(I\) is locally principal we can select \(f_{1},\dots,f_{n}\in A\) generating the unit ideal in \(A\) such that each \(IA[1/f_{i}]\) is principal. Take \(A^{\prime}=\left(\prod A[1/f_{i}]\right)_{(\pi,I)}\), where the subscript denotes Zariski localization along \(V((\pi,I))\). Then \(A^{\prime}\) has a unique \(\delta_{L}\)-structure by the \(L\)-typical analogue of [7, rmk. 2.16], \(A\to A^{\prime}\) is a faithfully flat map of \(\delta_{L}\)-algebras, and \(I^{\prime}=IA^{\prime}\) is principal with \(\pi\in(I^{\prime},\phi(I^{\prime}))\). By lemma 2.9, any generator of \(I^{\prime}\) is distinguished.
For (iii)\(\implies\)(i), we would like to check that \(\pi=0\) in \(A/(I^{q},\phi(I))\). This can be checked after faithfully flat extension to \(A^{\prime}\), in which case it follows from lemma 2.9.
Even though \(I\) is only assumed locally principal, \(\phi(I)\) is always principal.
**Lemma 3.5**.: _The ideal \(\phi(I)\) is principal and generated by a distinguished element for any \(L\)-typical prism \((A,I)\)._
Proof.: By lemma 3.4, we can pick \(a\in I^{q},b\in\phi(I)\) so that \(\pi=a+b\). We will show that \(b\) generates \(\phi(I)\). This can be checked after passing to the ind-Zariski-localization \(A^{\prime}\) of lemma 3.4. Let \(d\) be a distinguished generator of \(IA^{\prime}\) so that \(a=\alpha d^{q}\) and \(b=\beta\phi(d)\) in \(A^{\prime}\); it suffices to show that \(\beta\) is a unit. Indeed, applying \(\delta_{L}\) to the equation \(\pi=\alpha d^{q}+\beta\phi(d)\) and working mod \((\pi,d)\) gives
\[\delta_{L}(\pi)\equiv\beta^{q}\delta_{L}(d)^{q}\pmod{(\pi,d)},\]
which implies that \(\beta\) is a unit in \(A^{\prime}/(\pi,d)\) and hence in \(A^{\prime}\), as desired.
### Perfect \(L\)-typical prisms and perfectoid \(\mathcal{O}_{L}\)-algebras
It is shown in [7, Theorem 3.10] that the functor \((A,I)\mapsto A/I\) is (one half of) an equivalence of categories between perfect prisms and perfectoid rings. In fact, this can be taken as the definition of a perfectoid ring, as is done in [4, IV]. We take the same perspective here, initially _defining_ perfectoid \(\mathcal{O}_{L}\)-algebras as those \(\mathcal{O}_{L}\) algebras which come from perfect \(L\)-typical prisms. We will later show (theorem 3.18) that one can equivalently define perfectoid \(\mathcal{O}_{L}\)-algebras as perfectoid rings equipped with an \(\mathcal{O}_{L}\)-algebra structure.
**Definition 3.6**.: An \(\mathcal{O}_{L}\)-algebra \(R\) is a _perfectoid \(\mathcal{O}_{L}\)-algebra_ if it is isomorphic to \(A/I\) for some perfect \(L\)-typical prism \((A,I)\).
The functor from perfectoid rings to perfect prisms is \(R\mapsto(A_{\inf}(R),\ker\theta)\). To generalize this functor to the present setting, recall that the _tilt_ of a ring \(R\) is \(R^{\flat}=\varprojlim_{\varphi_{p}}R/p.\) If \(R\) is an \(\mathcal{O}_{L}\)-algebra, then we have an isomorphism of rings
\[R^{\flat}\cong\varprojlim_{\varphi_{q}}R/\pi\]
so that \(R^{\flat}\) is in fact a perfect \(\mathbb{F}_{q}\)-algebra. If \(R\) is moreover \(\pi\)-adically complete then we have an isomorphism of multiplicative monoids \(R^{\flat}\stackrel{{\sim}}{{\rightarrow}}\varprojlim_{x\mapsto x ^{q}}R\); by composing this with projection onto the first factor of the inverse limit, we get a multiplicative map \(\sharp:R^{\flat}\to R\) explicitly given by
\[x^{\sharp}=\lim_{n\rightarrow\infty}\widehat{x_{n}}^{q^{n}}\qquad\text{where }x=( \dots,x_{1},x_{0})\in\varprojlim_{\varphi_{q}}R/\pi=R^{\flat}\]
and where the \(\widehat{x_{n}}\in R\) are arbitrary lifts of the \(x_{n}\in R/\pi\).
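For example, if \(\varpi\in R\) comes equipped with a compatible system of \(q\)-power roots \((\varpi^{1/q^{n}})_{n}\), then the element \(\varpi^{\flat}:=(\dots,\varpi^{1/q}\bmod\pi,\varpi\bmod\pi)\in R^{\flat}\) satisfies \((\varpi^{\flat})^{\sharp}=\varpi\).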
**Definition 3.7**.: If \(R\) is a \(\pi\)-adically complete \(\mathcal{O}_{L}\)-algebra, then let \(A_{\inf}(R)=W_{L}(R^{\flat})\) and \(\theta:A_{\inf}(R)\to R\) be the map given in Witt coordinates by
\[(x_{0},x_{1},\dots)\mapsto\sum_{n\geq 0}\left(x_{n}^{1/q^{n}}\right)^{\sharp} \pi^{n}.\]
By corollary 2.14 we have \(W_{L}(R^{\flat})\cong W(R^{\flat})\otimes_{W(\mathbb{F}_{q})}\mathcal{O}_{L}\), and \(\theta\) is a ring homomorphism coinciding with the base change to \(\mathcal{O}_{L}\) of the usual map \(\theta:W(R^{\flat})\to R\) of \(p\)-adic Hodge theory.
**Remark 3.8**.: If \(L/\mathbb{Q}_{p}\) is unramified and \(\pi=p\), then \(A_{\inf}(R)\) and \(\theta\) coincide with their usual meanings, but this is not the case when \(L/\mathbb{Q}_{p}\) is ramified.
**Lemma 3.9**.: _Let \(R\) be a \(\pi\)-adically complete \(\mathcal{O}_{L}\)-algebra with \(\varphi_{q}:R/\pi\to R/\pi\) surjective._
1. _The map_ \(\theta:A_{\inf}(R)\to R\) _is surjective._
2. \(A_{\inf}(R)\) _is_ \((\pi,\ker\theta)\)_-adically complete._
Proof.: First note that \(A_{\inf}(R)\) is \(\pi\)-adically complete by [34, prop. 1.1.18]. Thus part (1) reduces to showing that \(R^{\flat}\to R/\pi\) is surjective, which follows from the assumption that \(\varphi_{q}\) is surjective.
For (2), using again the \(\pi\)-completeness of \(A_{\inf}(R)\), it suffices to check that \(R^{\flat}\) is complete with respect to the ideal \(J=\ker(R^{\flat}\to R/\pi)\) which is the mod \(\pi\) reduction of \(\ker\theta\). Indeed, we have \(R^{\flat}=\varprojlim_{\varphi_{q}}R/\pi\cong\varprojlim_{n}R^{\flat}/J^{q^{n}}\) via the isomorphisms \(R/\pi=R^{\flat}/J\stackrel{{\varphi_{q}^{n}}}{{\to}}R^{\flat}/J^ {q^{n}}\).
The following properties make perfect \(L\)-typical prisms especially well-behaved.
**Lemma 3.10**.: _Let \((A,I)\) be a perfect \(L\)-typical prism._
1. \(I\) _is principal and generated by a distinguished element._
2. \((A,I)\) _is bounded._
3. \(A/I\) _is_ \(\pi\)_-adically complete._
Proof.: (1) follows from lemma 3.5. Let \(d\in A\) be the distinguished generator of \(I\).
For (2), we will in fact show that \(A/d[\pi^{2}]=A/d[\pi]\). Suppose that \(\alpha\in A/d[\pi^{2}]\) so that there is some \(\beta\in A\) with \(\pi^{2}\alpha=\beta d\). Applying \(\delta_{L}\) and working mod \(\pi\), we get that
\[d^{q}\delta_{L}(\beta)+\beta^{q}\delta_{L}(d)\equiv 0\pmod{\pi}.\]
Multiplying by \(\beta^{q}\) and using that \(\delta_{L}(d)\in A^{\times}\) then gives that \(\pi|\beta^{2q}\). This implies \(\pi|\phi^{2}(\beta)\), so that \(\pi|\beta\) since \(\phi\) is an \(\mathcal{O}_{L}\)-linear isomorphism. Thus we have
\[\pi^{2}\alpha=\pi\beta^{\prime}d\quad\text{for some $\beta^{\prime}\in A$}\]
so that \(d|\pi\alpha\) by lemma 2.11.
(3) follows from [40, Tag 091X] since \(A/d\) is derived \(\pi\)-complete with bounded \(\pi\)-power torsion.
**Proposition 3.11**.: _The functors \((A,I)\mapsto A/I\) and \(R\mapsto(A_{\inf}(R),\ker\theta)\) are mutually quasi-inverse equivalences between the category of perfect \(L\)-typical prisms and the category of perfectoid \(\mathcal{O}_{L}\)-algebras._
Proof.: Let \(R=A/I\) be a perfectoid \(\mathcal{O}_{L}\)-algebra coming from a perfect \(L\)-typical prism \((A,I)\). Since \(R\) is \(\pi\)-adically complete by proposition 3.10 and \(\varphi_{q}:R/\pi\to R/\pi\) is surjective (as it is the mod \((\pi,I)\) reduction of \(\phi:A\to A\)), lemma 3.9(1) implies that \(\theta:A_{\inf}(R)\to R\) is surjective. Thus to prove the proposition, it suffices to show that \(A_{\inf}(R)\) identifies with \(A\) compatibly with the projections \(\theta:A_{\inf}(R)\twoheadrightarrow R\) and \(A\twoheadrightarrow A/I=R\) (thereby identifying \(I\) with \(\ker\theta\)). Since \(A_{\inf}(R)\) and \(A\) are \(\pi\)-adically complete perfect \(\delta_{L}\)-algebras, by proposition 2.13 it suffices to show that \(A/\pi\) identifies with \(R^{\flat}\) compatibly with the maps to \(A/(\pi,I)=R/\pi\). Indeed, since \(A/\pi\) is perfect and \(I\)-adically complete, the maps \(A/\pi\to A/(\pi,I)=R/\pi\), \(x\mapsto\varphi_{q}^{-n}(x)\bmod I\), assemble to an isomorphism \(A/\pi\stackrel{{\sim}}{{\to}}\varprojlim_{\varphi_{q}}R/\pi=R^{\flat}\) compatible with the projections to \(R/\pi\).
**Lemma 3.12**.: _A map \(R\to S\) of perfectoid \(\mathcal{O}_{L}\)-algebras is \(\pi\)-completely (faithfully) flat if and only if the corresponding map \(A_{\inf}(R)\to A_{\inf}(S)\) is \((\pi,\ker\theta)\)-completely (faithfully) flat._
Proof.: It is easy to show that \(A_{\inf}(S)\otimes_{A_{\inf}(R)}^{L}A_{\inf}(R)/\ker\theta_{A_{\inf}(R)}\cong S\) using either the \(L\)-typical analogue of the rigidity result [7, lemma 3.5] or the fact that a distinguished element can only factor as a unit times another distinguished element [7, lemma 2.24]. Thus \(R\to S\) being \(\pi\)-completely (faithfully) flat and \(A_{\inf}(R)\to A_{\inf}(S)\) being \((\pi,\ker\theta)\)-completely (faithfully) flat are both equivalent to \(R/\pi\to S\otimes_{R}^{L}R/\pi\) being (faithfully) flat.
Given an \(L\)-typical prism \((A,I)\) we can form its perfection.
**Definition 3.13**.: If \((A,I)\) is an \(L\)-typical prism, then we write
\[A_{\rm perf}=(\varinjlim_{\phi}A)^{\wedge}_{(\pi,I)}\]
for the (classical) \((\pi,I)\)-completion of the naive perfection \(\varinjlim_{\phi}A\). We call \((A_{\rm perf},IA_{\rm perf})\) the _perfection_ of \((A,I)\).
By remarks 2.3(4)-(5), \(A_{\rm perf}\) is a perfect \(\delta_{L}\)-algebra. We now show that \((A_{\rm perf},IA_{\rm perf})\) is the initial \(L\)-typical prism over \((A,I)\).
**Proposition 3.14**.: _(cf. [7, Lemma 3.9]) Let \((A,I)\) be an \(L\)-typical prism._
1. _The derived_ \((\pi,I)\)_-adic completion of_ \(\varinjlim_{\phi}A\) _coincides with the classical_ \((\pi,I)\)_-adic completion (and thus with_ \(A_{\rm perf}\)_)._
2. _The map_ \((A,I)\to(A_{\rm perf},IA_{\rm perf})\) _is initial among maps from_ \((A,I)\) _to a perfect_ \(L\)_-typical prism._
Proof.: (1) clearly implies (2). To show (1), first note that by construction \(\varinjlim_{\phi}A\) is a perfect \(\delta_{L}\)-algebra. Thus by lemma 2.11 it is \(\pi\)-torsionfree, so that the derived and classical \(\pi\)-adic completions agree. As \(A\to(\varinjlim_{\phi}A)^{\wedge}_{(\pi)}\) factors through \(\phi:A\to A\), lemma 3.5 implies that \(I(\varinjlim_{\phi}A)^{\wedge}_{(\pi)}\) is principal and generated by a distinguished element \(d\). By lemma 2.9 it thus suffices to show that \(d\) is a nonzerodivisor.
For this, suppose that \(fd=0\) for some \(0\neq f\in(\varinjlim_{\phi}A)^{\wedge}_{(\pi)}\); since this ring is \(\pi\)-torsionfree and classically \(\pi\)-adically complete (and thus \(\pi\)-adically separated), we can suppose that \(\pi\nmid f\) by dividing out powers of \(\pi\). Applying \(\delta_{L}\) and working mod \(\pi\) we get
\[f^{q}\delta_{L}(d)+d^{q}\delta_{L}(f)\equiv 0\pmod{\pi}.\]
Multiplying by \(f^{q}\) and using that \(\delta_{L}(d)\) is a unit then shows that \(\pi|f^{2q}\). Thus \(\pi|\phi^{2}(f)\), which implies that \(\pi|f\) since \(\phi\) is an \(\mathcal{O}_{L}\)-linear isomorphism. But this is a contradiction, so \(d\) must be a nonzerodivisor.
Finally, we will show that perfectoid \(\mathcal{O}_{L}\)-algebras coincide with perfectoid rings (in the sense of [6, definition 3.5]) equipped with an \(\mathcal{O}_{L}\)-algebra structure. We begin by establishing a more intrinsic criterion for being a perfectoid \(\mathcal{O}_{L}\)-algebra; indeed the following proposition is the \(L\)-typical version of [4, proposition IV.2.10], in which \(p\) is replaced by \(\pi\) and the Frobenius is replaced by the \(q\)-Frobenius.
**Proposition 3.15**.: _Let \(R\) be an \(\mathcal{O}_{L}\)-algebra. Then \(R\) is a perfectoid \(\mathcal{O}_{L}\)-algebra if and only if_
1. \(R\) _is_ \(\pi\)_-adically complete,_
2. _there exists some_ \(\varpi\in R\) _such that_ \(\varpi^{q}=\pi u\) _for some_ \(u\in R^{\times}\)_,_
3. \(\varphi_{q}:R/\pi\to R/\pi\) _is surjective, and_
4. _the kernel of_ \(\theta:A_{\inf}(R)\to R\) _is principal._
_If \(R\) is assumed \(\pi\)-torsionfree, then the above remains true with (4) replaced by_
1. _if_ \(x\in R[1/\pi]\) _with_ \(x^{q}\in R\)_, then_ \(x\in R\)_._
**Remark 3.16.** Note that once a \(q\)th root of \(\pi u\) as in (2) exists, we get a full \(q\)-power-compatible system of roots \((\varpi^{1/q^{n}})\) by letting \(\varpi^{\flat}\in R^{\flat}\) be any lift of \(\varpi\) along the map \(R^{\flat}\to R/\pi\) (which is surjective by assumption (3)) and then taking \((\varpi^{1/q^{n}})\) to be the image of \(\varpi^{\flat}\) under the bijection \(R^{\flat}\cong\varprojlim_{x\mapsto x^{q}}R\) (which exists by assumption (1)).
_Proof of proposition 3.15._ Suppose that \(R=A/I\) is a perfectoid \(\mathcal{O}_{L}\)-algebra coming from a perfect \(L\)-typical prism \((A,I)\); using proposition 3.11 we can identify \((A,I)\cong(A_{\inf}(R),\ker\theta)\). (1) and (4) follow from lemma 3.10, and (3) follows from the surjectivity of \(\phi:A\to A\). For (2), let \(d\in A_{\inf}(R)\) be a distinguished generator of \(\ker\theta\) (which exists by lemma 3.10). Then we can take \(\varpi=\theta(\phi^{-1}(d))\) since
\[\varpi^{q}=\theta\left(\phi^{-1}(d^{q})\right)=\theta\left(d-\pi\phi^{-1}( \delta_{L}(d))\right)=-\pi\theta(\phi^{-1}(\delta_{L}(d))),\]
with \(u=-\theta(\phi^{-1}(\delta_{L}(d)))\in R^{\times}\).
**Remark 3.17.** We note that the mod \(\pi\) reduction of the element \(\varpi\) constructed above generates the kernel of \(\varphi_{q}:R/\pi\to R/\pi\). Thus the surjective map \(\varphi_{q}:R/\pi\to R/\pi=R/\varpi^{q}\) factors through an isomorphism \(R/\varpi\stackrel{{\sim}}{{\to}}R/\varpi^{q}\). This fact will be used later in the proof.
For the converse, suppose that \(R\) is an \({\cal O}_{L}\)-algebra satisfying (1)-(4); we want to show that \((A_{\inf}(R),\ker\theta)\) is a perfect \(L\)-typical prism. \(A_{\inf}(R)=W_{L}(R^{\flat})\) is a perfect \(\delta_{L}\)-algebra by proposition 2.13 and is \((\pi,\ker\theta)\)-adically complete by lemma 3.9. By assumption \(\ker\theta=(d)\) for some \(d\in A_{\inf}(R)\). Thus by lemma 2.9 it suffices to show that \(d\) is distinguished.
Let \(\varpi,u\in R\) be as in (2), and let \(\omega,v\in A_{\inf}(R)\) be lifts along \(\theta\). Then \(\omega^{q}-\pi v\in\ker\theta\), so we can write
\[\omega^{q}-\pi v=\alpha d\]
for some \(\alpha\in A_{\inf}(R)\). Applying \(\delta_{L}\) to this equation and working mod \((\pi,d)\) (using remark 2.3(2) to simplify) gives
\[-v\delta_{L}(\pi)\equiv\alpha^{q}\delta_{L}(d)\pmod{(\pi,d)}.\]
As \(-v\delta_{L}(\pi)\in(A_{\inf}(R)/(\pi,d))^{\times}\), this shows that \(\delta_{L}(d)\in(A_{\inf}(R)/(\pi,d))^{\times}\) as well, and thus \(\delta_{L}(d)\in A_{\inf}(R)^{\times}\) by \((\pi,d)\)-completeness.
Assume now that \(R\) is \(\pi\)-torsion free. Supposing \(R\) is a perfectoid \(\mathcal{O}_{L}\)-algebra, we prove (4'). Suppose that \(x\in R[1/\pi]\) with \(x^{q}\in R\). Let \(\varpi\in R\) be the element satisfying (2) constructed earlier in this proof; by remark 3.17 we have that the \(q\)-power map \(R/\varpi\to R/\varpi^{q}\) is bijective. Let \(n\geq 0\) be minimal such that \(\varpi^{n}x\in R\) (such an \(n\) exists since \(\varpi^{q}|\pi\)), and suppose towards a contradiction that \(n\geq 1\). Then
\[(\varpi^{n}x)^{q}=\varpi^{nq}x^{q}\in\varpi^{nq}R\subseteq\varpi^{q}R,\]
which implies that \(\varpi^{n}x\in\varpi R\). As \(R\) is \(\varpi\)-torsionfree, this implies that \(\varpi^{n-1}x\in R\), giving the contradiction.
Finally, suppose that \(R\) is a \(\pi\)-torsionfree \(\mathcal{O}_{L}\)-algebra satisfying (1) - (3) and (4'); we will prove (4). Let \(\varpi\in R\) be as in (2), and let \((\varpi^{1/q^{n}})\) be a system of \(q\)-power roots of \(\varpi\), which exists by remark 3.16 (which uses only (1)-(3)). We have that \(\varpi^{1/q^{n}}\bmod\pi\) generates \(\ker(\varphi_{q}^{n}:R/\pi\to R/\pi)\): if \(x\in R\) with \(\pi|x^{q^{n}}\) then the \(q^{n}\)-th power of \(x/\varpi^{1/q^{n}}\in R[1/\pi]\) is in \(R\), so that \(x/\varpi^{1/q^{n}}\in R\) as well by assumption. It follows that the element
\[(\ldots,\overline{\varpi^{1/q^{2}}},\overline{\varpi^{1/q}},\overline{\varpi},0)\in R^{\flat}\]
formed from the \(\bmod\pi\) reductions of the \(\varpi^{1/q^{n}}\) generates \(\ker(R^{\flat}\to R/\pi)=\varprojlim\ker(\varphi_{q}^{n})\). But since \(R\) is \(\pi\)-torsionfree and \(\ker\theta\) is \(\pi\)-adically complete with \(\bmod\pi\) reduction \(\ker(R^{\flat}\to R/\pi)\), this implies that \(\ker\theta\) is principal as well.
**Theorem 3.18**.: _Let \(R\) be a ring. Then \(R\) is a perfectoid \(\mathcal{O}_{L}\)-algebra if and only if \(R\) is a perfectoid ring and an \(\mathcal{O}_{L}\)-algebra._
Proof.: First, suppose that \(R\) is a perfectoid ring and an \(\mathcal{O}_{L}\)-algebra. To show that \(R\) is a perfectoid \(\mathcal{O}_{L}\)-algebra, it suffices to show that \((A_{\inf}(R),\ker\theta)\) is an \(L\)-typical prism (it is automatically perfect by proposition 2.13). This is done in [20, lemma 2.4.3]; we briefly sketch the argument here. First, one shows that \(\ker(\theta:A_{\inf}(R^{\flat})\to R)\) is generated by an element of the form \(\xi=\pi-[\varpi^{\flat}]b\), where \(\varpi\in R\) is such that \(R\) is \(\varpi\)-adically complete and \(\varpi^{p}|p\), the element \(\varpi^{\flat}\in R^{\flat}\) satisfies \((\varpi^{\flat})^{\sharp}=\varpi\), and \(b\in A_{\inf}(R)\). Since any generator of \(\ker(W(R^{\flat})\to R)\) is a nonzerodivisor and \(W(\mathbb{F}_{q})\to\mathcal{O}_{L}\) is flat, any generator of \(\ker\theta\) is a nonzerodivisor. It is easy to show that \(A_{\inf}(R)\) is \((\pi,\xi)\)-adically complete. And \(\xi\) is distinguished as \(\delta_{L}(\xi)\equiv 1-\pi^{q-1}\pmod{(\pi,\xi)}\) is a unit in \(R\).
Conversely, suppose that \(R\) is a perfectoid \(\mathcal{O}_{L}\)-algebra; we want to show that \(R\) is a perfectoid ring. By lemma 3.19 below, we have that \(R\) can be written as a fiber product \(\overline{R}\times_{\overline{S}}S\) where \(\overline{R}\to\overline{S}\) is a surjection of perfect \(\mathbb{F}_{q}\)-algebras, and \(S\) is a \(\pi\)-torsionfree perfectoid \(\mathcal{O}_{L}\)-algebra. Once we show that \(\overline{R}\), \(\overline{S}\), and \(S\) are perfectoid rings, we may conclude that \(R\) is a perfectoid ring as well by [12, proposition 2.1.4]. Thus we may assume that \(R\) is a perfect \(\mathbb{F}_{q}\)-algebra or \(\pi\)-torsionfree. In the former case, the result is clear since perfect \(\mathbb{F}_{p}\)-algebras are perfectoid rings.
Suppose now that \(R\) is \(\pi\)-torsionfree. By [4, proposition IV.2.10] it suffices to show that \(R\) satisfies the "\(p\)-analogues" of properties (1)-(3),(4') in proposition 3.15:
* (1\({}_{\rm p}\)) \(R\) is \(p\)-adically complete,
* (2\({}_{\rm p}\)) there exists some \(\varpi^{\prime}\in R\) such that \((\varpi^{\prime})^{p}=pu^{\prime}\) for some \(u^{\prime}\in R^{\times}\),
* (3\({}_{\rm p}\)) \(\varphi:R/p\to R/p\) is surjective, and
* (4'\({}_{\rm p}\)) if \(x\in R[1/p]\) with \(x^{p}\in R\), then \(x\in R\).
(1\({}_{\rm p}\)) and (4'\({}_{\rm p}\)) follow immediately from (1) and (4'), respectively. Taking \(\varpi,u\in R\) as in (2) and letting \(e\) be the ramification index of \(L/\mathbb{Q}_{p}\), we conclude (2\({}_{\rm p}\)) by taking \(\varpi^{\prime}=\varpi^{qe/p}\) and \(u^{\prime}=u^{e}\). To show (3\({}_{\rm p}\)), it suffices to show the \(q\)-Frobenius \(\varphi_{q}:R/p\to R/p\) is surjective. So let \(\alpha\in R\); we will successively approximate \(\alpha^{1/q}\) modulo higher powers of \(\pi\). Indeed, by (3) there is \(\beta_{1}\in R\) so that \(\beta_{1}^{q}\equiv\alpha\pmod{\pi}\). Thus \(\frac{\alpha-\beta_{1}^{q}}{u\pi}\in R\), where \(u\in R^{\times}\) is as in (2). Again by (3), there is \(\gamma_{1}\in R\) so that \(\gamma_{1}^{q}\equiv\frac{\alpha-\beta_{1}^{q}}{u\pi}\pmod{\pi}\). Then let \(\beta_{2}=\beta_{1}+\gamma_{1}\varpi\) with \(\varpi\) as in (2). We see that \(\beta_{2}^{q}\equiv\beta_{1}^{q}+\gamma_{1}^{q}u\pi\equiv\alpha\pmod{\pi^{2}}\) (so long as the ramification index \(e\geq 2\); if \(e=1\) then \(\pi=p\), so we were already done). Repeating this process, we get for \(1\leq n\leq e\) elements \(\beta_{n}\in R\) satisfying \(\beta_{n}^{q}\equiv\alpha\pmod{\pi^{n}}\). We then get that \(\beta_{e}^{q}\equiv\alpha\pmod{p}\) as desired.
**Lemma 3.19**.: _(cf. [4, proposition IV.3.2].) Let \(R\) be a perfectoid \(\mathcal{O}_{L}\)-algebra. Then the \(\mathcal{O}_{L}\)-algebras_
\[\overline{R}=R/\sqrt{\pi R},\quad S=R/R[\sqrt{\pi R}],\quad\text{and}\quad \overline{S}=S/\sqrt{\pi S}\]
_are perfectoid \(\mathcal{O}_{L}\)-algebras and the commutative square_
\[\begin{array}{ccc}R&\longrightarrow&S\\ \downarrow&&\downarrow\\ \overline{R}&\longrightarrow&\overline{S}\end{array}\]
_is Cartesian._
Proof.: This is proved the same way as in [4], so we only sketch the argument here. As in the proof of proposition 3.15, we can write a distinguished generator of \(\ker(\theta:A_{\inf}(R)\to R)\) as \(d=[a_{0}]-\pi u\) for \(a_{0}\in R^{\flat}\) such that \(R^{\flat}\) is \(a_{0}\)-adically complete and \(a_{0}^{\sharp}=\pi\), and \(u\in A_{\inf}(R)^{\times}\) (concretely \(a_{0}=(\varpi^{\flat})^{q}\) for \(\varpi^{\flat}\) as in remark 3.16). Let \(I=(a_{0}^{1/q^{\infty}})\subseteq R^{\flat}\) and \(J=R^{\flat}[I]\). Then the commutative square
\[\begin{array}{ccc}W_{L}(R^{\flat})&\longrightarrow&W_{L}(R^{\flat}/J)\\ \downarrow&&\downarrow\\ W_{L}(R^{\flat}/I)&\longrightarrow&W_{L}(R^{\flat}/(I+J))\end{array}\]
is a homotopy fibre square by the general result [4, lemma IV.3.1] regarding perfect \(\mathbb{F}_{p}\)-algebras and devissage. Since \(d\) is a nonzerodivisor in all of these perfect \(\delta_{L}\)-algebras by lemma 2.12, the square remains a homotopy fibre square upon application of \(-\otimes_{W_{L}(R^{\flat})}^{L}R\).
It is easy to see that the rings in this square are all perfectoid \(\mathcal{O}_{L}\)-algebras. Finally, one shows that this cartesian square identifies with the one in the statement of the lemma; see [4, IV] for the details.
### Constructing maps of \(L\)-typical prisms
In §4 we will construct inclusions of \(L\)-typical prisms coming from Lubin-Tate formal groups into perfect \(L\)-typical prisms. The construction used can be understood in at least three different ways, one of which is specific to the scenario in §4, and two of which are general constructions for producing maps between \(L\)-typical prisms. Here we explain the two general constructions.
**Construction 3.20**.: Let \(R\) be an \(\mathcal{O}_{L}\)-algebra, and let \(A\) be a \(\delta_{L}\)-algebra with a sequence of \(\phi\)-compatible \(\mathcal{O}_{L}\)-algebra maps \(\iota_{n}:A\to R\) for \(n\geq 0\), i.e. a sequence of maps \(\iota_{n}\) satisfying
\[\iota_{n+1}\circ\phi=\iota_{n}\qquad\text{for all }n\geq 0.\]
We will construct from this data a map \(\iota:A\to W_{L}(R^{\flat})\) of \(\delta_{L}\)-algebras.
Indeed, using that \(\varphi_{q}\) commutes with maps of \(\mathbb{F}_{q}\)-algebras, we can form a map
\[\overline{\iota}:A/\pi \longrightarrow\varprojlim_{\varphi_{q}}R/\pi=R^{\flat}\] \[a \longmapsto(\overline{\iota_{n}}(a))_{n}\]
in characteristic \(p\), where \(\overline{\iota_{n}}:A/\pi\to R/\pi\) denotes the mod \(\pi\) reduction. Then applying the universal property of \(W_{L}\) (lemma 2.5) to the \(\mathcal{O}_{L}\)-algebra map \(A\twoheadrightarrow A/\pi\xrightarrow{\overline{\iota}}R^{\flat}\), we get a \(\delta_{L}\)-algebra map \(\iota:A\to W_{L}(R^{\flat})\). Note that this construction is purely \(\delta_{L}\)-algebraic; it uses nothing from the theory of prisms.
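To spell out why \(\overline{\iota}\) is well-defined, i.e. why \((\overline{\iota_{n}}(a))_{n}\) lies in \(\varprojlim_{\varphi_{q}}R/\pi\) (a one-line check using only the \(\phi\)-compatibility of the \(\iota_{n}\)): for any \(a\in A\),
\[\overline{\iota_{n+1}}(a)^{q}=\overline{\iota_{n+1}}(a^{q})=\overline{\iota_{n+1}}(\phi(a))=\overline{\iota_{n}}(a),\]
where the middle equality holds because \(\phi(a)\equiv a^{q}\pmod{\pi}\).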
**Remark 3.21**.: Intuitively, we think of the system \((\iota_{n})_{n}\) as giving a way to extract \(\phi\)-power roots in the perfect \(\delta_{L}\)-algebra \(W_{L}(R^{\flat})\). More precisely, it's easy to show that \(\phi^{-m}(\iota(a))\) coincides with \(\iota^{\to m}(a)\), where \(\iota^{\to m}\) denotes the map produced by applying the above construction to the right-shifted system \((\iota_{n+m})_{n}\).
If \(R\) is \(\pi\)-adically complete then we additionally have a map \(\theta:W_{L}(R^{\flat})\to R\). The following proposition computes the composite \(A\stackrel{{\iota}}{{\to}}W_{L}(R^{\flat})\stackrel{{ \theta}}{{\to}}R\) (possibly with a \(\phi\)-twist).
**Proposition 3.22**.: _Fix all notation as above, with \(R\) being a \(\pi\)-adically complete \(\mathcal{O}_{L}\)-algebra. Then for any \(n\geq 0\), we have_
\[\theta\circ\phi^{-n}\circ\iota=\iota_{n}.\]
Proof.: By remark 3.21, it suffices to prove this for \(n=0\); thus we will show that \(\theta\circ\iota=\iota_{0}\). The proof is by direct computation. Fixing some \(a\in A\), it suffices to show that \(\theta(\iota(a))\equiv\iota_{0}(a)\pmod{\pi^{k+1}}\) for all \(k\geq 0\).
We can factor the map \(\iota\) as
\[A\stackrel{{ s_{A}}}{{\longrightarrow}}W_{L}(A)\stackrel{{ W_{L}(\overline{\iota})}}{{\longrightarrow}}W_{L}(R^{\flat})\]
where \(s_{A}\) is the section of proposition 2.6(1). Writing \((s_{0},s_{1},\dots)=s_{A}(a)\in W_{L}(A)\), we find
\[\theta(\iota(a))=\sum_{n=0}^{\infty}\left(\overline{\iota}(\overline{s_{n}})^{1/q^{n}}\right)^{\sharp}\pi^{n}=\sum_{n=0}^{\infty}\lim_{m\to\infty}\left(\overline{\iota}(\overline{s_{n}})_{m}\right)^{\wedge,q^{m-n}}\pi^{n}\] \[\equiv\lim_{m\to\infty}\sum_{n=0}^{k}\left(\overline{\iota}(\overline{s_{n}})_{k+m}\right)^{\wedge,q^{k+m-n}}\pi^{n}\pmod{\pi^{k+1}}\] \[=\lim_{m\to\infty}w_{k}\left(\left(\overline{\iota}(\overline{s_{0}})_{k+m}\right)^{\wedge,q^{m}},\dots,\left(\overline{\iota}(\overline{s_{k}})_{k+m}\right)^{\wedge,q^{m}}\right). \tag{3.1}\]
Here we've written \(\overline{s_{n}}\) for the mod \(\pi\) reduction of \(s_{n}\in A\), and for \(r=(\dots,r_{1},r_{0})\in R^{\flat}\), we've written \((r_{n})^{\wedge}\) for an arbitrary lift of \(r_{n}\in R/\pi\) to \(R\). Since
\[(\overline{\iota}(\overline{s_{n}})_{k+m})^{q^{m}}=\overline{\iota}(\overline {s_{n}})_{k}\equiv\iota_{k}(s_{n})\pmod{\pi},\]
lemma 3.23 below allows us to continue (3.1):
\[\theta(\iota(a)) \equiv w_{k}(\iota_{k}(s_{0}),\dots,\iota_{k}(s_{k}))\pmod{\pi^{k+1}}\] \[=\iota_{k}(w_{k}(s_{0},\dots,s_{k}))=\iota_{k}(\phi^{k}(a))= \iota_{0}(a)\]
which is what we wanted. Here we've used that \(\iota_{k}\) is an \(\mathcal{O}_{L}\)-algebra map (and thus commutes with \(w_{k}\)) and the fact that \(w_{k}\circ s_{A}=\phi^{k}\) from the second part of proposition 2.6(1).
**Lemma 3.23**.: _If \(R\) is an \(\mathcal{O}_{L}\)-algebra, \(a_{0},\ldots,a_{k},b_{0},\ldots,b_{k}\in R\), and \(a_{n}\equiv b_{n}\pmod{\pi^{s}}\) for \(n=0,\ldots,k\), then_
\[w_{k}(a_{0},\ldots,a_{k})\equiv w_{k}(b_{0},\ldots,b_{k})\pmod{\pi^{s+k}}.\]
Proof.: See [34, lemma 1.1.2].
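For illustration, here is the case \(k=1\), \(s\geq 1\) (assuming the usual \(L\)-typical ghost component \(w_{1}(a_{0},a_{1})=a_{0}^{q}+\pi a_{1}\)): writing \(a_{0}=b_{0}+\pi^{s}c\),
\[w_{1}(a_{0},a_{1})=(b_{0}+\pi^{s}c)^{q}+\pi a_{1}\equiv b_{0}^{q}+\pi b_{1}=w_{1}(b_{0},b_{1})\pmod{\pi^{s+1}},\]
since every cross term of \((b_{0}+\pi^{s}c)^{q}\) is divisible either by \(p\pi^{s}\) (note \(\pi\mid p\)) or by \(\pi^{2s}\), while \(\pi a_{1}\equiv\pi b_{1}\pmod{\pi^{s+1}}\).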
Proposition 3.22 suggests viewing the map \(\iota:A\to W_{L}(R^{\flat})\) as a map of prisms when doing so makes sense, i.e. when \(R\) is a perfectoid \(\mathcal{O}_{L}\)-algebra and \((A,I)\) is a prism with \(I\subseteq\ker\iota_{0}\). This is the perspective taken in the following construction.
**Construction 3.24**.: Let \((A,I)\) be an \(L\)-typical prism, let \(R\) be a perfectoid \(\mathcal{O}_{L}\)-algebra, and suppose given a map \(\iota_{0}:A/I\to R\). If \((A,I)\) were assumed perfect, then proposition 3.11 would allow us to lift \(\iota_{0}\) into a map \(\iota:(A,I)\to(A_{\inf}(R),\ker\theta)\), but we do not make this assumption. Instead, we further assume given a collection of \(\phi\)-compatible \(\mathcal{O}_{L}\)-algebra maps \(\iota_{n}:A\to R\) as above with \(I\subseteq\ker\iota_{0}\); this will allow us to construct such a lift \(\iota\).
Using the \(\iota_{n}\), we can factor \(\iota_{0}:A/I\to R\) through
\[\left(\varinjlim_{\phi}A\right)/I\stackrel{{(\iota_{n})_{n}}}{{ \longrightarrow}}R,\]
and then, after \(\pi\)-adically completing, through the map of perfectoid \(\mathcal{O}_{L}\)-algebras
\[A_{\mathrm{perf}}/IA_{\mathrm{perf}}=(\varinjlim_{\phi}A)^{\wedge}_{(\pi)}/I \to R.\]
Applying proposition 3.11 to the map \(A_{\mathrm{perf}}/IA_{\mathrm{perf}}\to R\) gives a map of prisms fitting into the following diagram:
\[\begin{array}{ccccc}A&\longrightarrow&A_{\mathrm{perf}}&\longrightarrow&A_{\inf}(R)\\ \downarrow&&\downarrow&&\downarrow{\scriptstyle\theta}\\ A/I&\longrightarrow&A_{\mathrm{perf}}/IA_{\mathrm{perf}}&\longrightarrow&R\end{array}\]
We take \(\iota\) to be the composite along the top row, which is a map of \(L\)-typical prisms \((A,I)\to(A_{\inf}(R),\ker\theta)\) by construction.
**Proposition 3.25**.: _Let \(X\) be an adic space over \(\operatorname{Spf}\mathcal{O}_{L}\), let \((A,I)\in X_{\underline{\mathbb{A}}_{L}}\), and let \(R\) be a perfectoid \(\mathcal{O}_{L}\)-algebra with a structure map \(\operatorname{Spf}R\to X\) over \(\operatorname{Spf}\mathcal{O}_{L}\). Suppose we have a \(\phi\)-compatible direct system of \(\mathcal{O}_{L}\)-algebra maps \(\iota_{n}:A\to R\) such that \(I\subseteq\ker\iota_{0}\) and the map \(\iota_{0}:A/I\to R\) is an \(X\)-morphism. Then there is a morphism_
\[\iota:(A,I)\longrightarrow(W_{L}(R^{\flat}),\ker\theta)\]
_in \(X_{\underline{\mathbb{A}}_{L}}\) reducing to \(\iota_{0}:A/I\to R\). Moreover, the map \(A\to A_{\inf}(R)=W_{L}(R^{\flat})\) of \(\delta_{L}\)-algebras obtained this way coincides with that of construction 3.20._
Proof.: The map \(\iota\) of the proposition is given by construction 3.24; it is immediate that the morphism constructed this way respects the structure maps to \(X\). To show that this coincides with construction 3.20, it suffices to show that the maps \(\iota^{\delta},\iota^{\underline{\Delta}}:A_{\operatorname{perf}}\to W_{L}(R^{\flat})\) induced by constructions 3.20 and 3.24, respectively, coincide. By the definition of the \(A_{\operatorname{inf}}\) functor, \(\iota^{\underline{\Delta}}\) is \(W_{L}\) of the tilt of \(A_{\operatorname{perf}}/I\to R\), so by proposition 2.13 it suffices to show that taking \(\iota^{\delta}\bmod\pi\) gives the tilt
\[(A_{\operatorname{perf}}/I)^{\flat}\to R^{\flat}.\]
This is easy to check using the identification \((A_{\operatorname{perf}}/I)^{\flat}\cong A_{\operatorname{perf}}/\pi\) from the proof of proposition 3.11.
**Example 3.26**.: Let \(R\) be a \(p\)-adically complete ring, let \(E\) be an ordinary elliptic curve over \(R\), and let \(E[p^{\infty}]=\varinjlim E[p^{n}]\) denote the \(p\)-divisible group of \(E\). Then by the theory of the canonical subgroup, there are lifts \(F:E\to E^{(p)}\) of the relative Frobenius \(E/p\to(E/p)^{(p)}\) and \(V:E^{(p)}\to E\) of the Verschiebung with \(VF=[p]\). It follows that we have maps
\[\cdots\longrightarrow\ker F^{3}\xrightarrow{[p]}\ker F^{2}\xrightarrow{[p]} \ker F.\]
Note that \(A=\varprojlim\mathcal{O}_{\ker F^{n}}\) (inverse limit taken with respect to the inclusion maps \(\ker F^{n}\hookrightarrow\ker F^{n+1}\)) has a lift of Frobenius given by \(\phi=[p]^{*}\).
For \(n\geq 1\), suppose \(R_{n}\) are etale \(R\)-algebras with sections \(e_{n}:\operatorname{Spf}R_{n}\to\ker F^{n}\). Then, setting \(R_{\infty}=\bigl{(}\varinjlim R_{n}\bigr{)}_{(p)}^{\wedge}\), we have that the maps
\[\iota_{n}:A\twoheadrightarrow\ker F^{n}\xrightarrow{e_{n}^{*}}R_{n} \hookrightarrow R_{\infty}\]
form a \(\phi\)-compatible system. Thus construction 3.20 gives a map \(A\to W(R_{\infty}^{\flat})\).
## 4 Lubin-Tate \((\varphi_{q},\Gamma)\)-modules
In this section, we introduce the key objects involved in Kisin-Ren's theory of \((\varphi_{q},\Gamma)\)-modules. Pleasantly, much of this theory can be succinctly stated in the prismatic language developed in section 3.
As before, let \(L/\mathbb{Q}_{p}\) be a finite extension with uniformizer \(\pi\), and let \(q=|\mathcal{O}_{L}/\pi|\). Let \(\mathcal{G}\) be a Lubin-Tate formal \(\mathcal{O}_{L}\)-module.
By a _\(p\)-adic field_ \(K/L\) we will mean an algebraic extension such that \(\mathcal{O}_{K}\) is a discrete valuation ring with perfect residue field; equivalently this means that \(K\) has a perfect residue field \(k\) and \(K/W_{L}(k)[1/p]\) is finite. If \(K/L\) is a \(p\)-adic field and \(n\geq 0\) we write
\[K_{n}=K(\mathcal{G}[\pi^{n}])\]
for the extension given by adjoining the \(\pi^{n}\)-torsion points of \(\mathcal{G}\). We also write \(K_{\infty}\) for the \(p\)-adic completion of \(\bigcup K_{n}\) and \(\Gamma_{K}=\operatorname{Gal}(K_{\infty}/K)\). The action of the absolute Galois group
\(G_{K}\) on the free \(\mathcal{O}_{L}\)-rank one Tate module \(T\mathcal{G}\) factors through \(\Gamma_{K}\) and gives an injective character \(\chi_{\mathcal{G}}:\Gamma_{K}\to\mathcal{O}_{L}^{\times}\). If \(K=L\) then by local class field theory \(\chi_{\mathcal{G}}\) is an isomorphism \(\Gamma_{L}\stackrel{{\sim}}{{\to}}\mathcal{O}_{L}^{\times}\).
Throughout this section, we fix once and for all
* a coordinate \(T\) on \(\mathcal{G}\), so that the action of \(\mathcal{O}_{L}\) on \(\mathcal{G}\cong\operatorname{Spf}(\mathcal{O}_{L}[\![T]\!])\) is given by power series \([a](T)\in\mathcal{O}_{L}[\![T]\!]\) for \(a\in\mathcal{O}_{L}\);
* a basis \(e=(e_{n})_{n\geq 0}\) of the free \(\mathcal{O}_{L}\)-module \(T\mathcal{G}\), viewed as a sequence of \(e_{n}\in\mathcal{O}_{\overline{K}}\) such that \([\pi](e_{n})=e_{n-1}\), \(e_{0}=0\), and \(e_{1}\neq 0\).
Note that \([\pi](T)\equiv T^{q}\pmod{\pi}\) and \(K_{n}=K(e_{n})\).
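For orientation, in the cyclotomic case (assuming \(L=\mathbb{Q}_{p}\), so \(\pi=p\), \(q=p\), and \(\mathcal{G}=\widehat{\mathbb{G}}_{m}=\mu_{p^{\infty}}\)) one may take the coordinate with
\[[a](T)=(1+T)^{a}-1\quad(a\in\mathcal{O}_{L}),\qquad e_{n}=\zeta_{p^{n}}-1,\]
so that \([\pi](e_{n})=\zeta_{p^{n-1}}-1=e_{n-1}\), \(e_{0}=0\), and \(K_{n}=K(\zeta_{p^{n}})\).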
In §4.1, we'll see that \(\mathcal{O}_{\mathcal{G}}\otimes_{\mathcal{O}_{L}}W_{L}(k)\cong W_{L}(k)[\![T]\!]\) is an \(L\)-typical prism in \((W_{L}(k))_{\underline{\mathbb{A}}_{L}}\). Here \(W_{L}(k)[\![T]\!]\) carries the \(\delta_{L}\)-structure determined by \(\phi(f)=f^{\phi_{W_{L}(k)}}([\pi](T))\), and for \(n\geq 1\) we set
\[q_{n}(T)=\frac{[\pi^{n}](T)}{[\pi^{n-1}](T)}=\phi^{n-1}\left(\frac{[\pi](T)}{T}\right)\in W_{L}(k)[\![T]\!].\]

### The prisms \(\mathfrak{S}_{K}\)

**Lemma 4.1**.: _For every \(n\geq 1\), the pair \((W_{L}(k)[\![T]\!],(q_{n}(T)))\) is an \(L\)-typical prism._
Proof.: For \((\pi,q_{n}(T))\)-adic completeness use that \(q_{n}(T)\equiv T^{q^{n}-q^{n-1}}\pmod{\pi}\) and the \((\pi,T)\)-adic completeness of \(W_{L}(k)[\![T]\!]\). To see that \(\pi\in(q_{n}(T),\phi(q_{n}(T)))=(q_{n}(T),q_{n+1}(T))\), note that \(q_{1}(T)=\frac{[\pi](T)}{T}\equiv\pi\pmod{T}\) so that
\[\pi=q_{1}(T)+Tf(T)\]
for some \(f(T)\in\mathcal{O}_{L}[\![T]\!]\); applying \(\phi^{n}\) then gives
\[\pi=q_{n+1}(T)+[\pi^{n}](T)f([\pi^{n}](T))=q_{n+1}(T)+q_{n}(T)[\pi^{n-1}](T)f( [\pi^{n}](T))\in(q_{n}(T),q_{n+1}(T)).\]
We now invoke construction 3.20 to produce a map \(W_{L}(k)[\![T]\!]\to W_{L}(\mathcal{O}_{K_{\infty}}^{\flat})\) of \(\delta_{L}\)-algebras. Since we have not yet shown that \(\mathcal{O}_{K_{\infty}}\) is a perfectoid \(\mathcal{O}_{L}\)-algebra, we cannot yet use construction 3.24, but once we have shown that \(\mathcal{O}_{K_{\infty}}\) is perfectoid the two constructions amount to the same thing.
Indeed, apply construction 3.20 with \(A=W_{L}(k)[\![T]\!]\), \(R=\mathcal{O}_{K_{\infty}}\), and \(\iota_{n}:W_{L}(k)[\![T]\!]\to R\) defined by sending \(f(T)\) to \(f^{\phi_{W_{L}(k)}^{-n}}(e_{n})\). Note that the system \((\iota_{n})_{n}\) is \(\phi\)-compatible since
\[\iota_{n+1}(\phi(f))=\iota_{n+1}(f^{\phi_{W_{L}(k)}}([\pi](T)))=f^{\phi_{W_{L }(k)}^{-n}}([\pi](e_{n+1}))=\iota_{n}(f),\]
where we've used that \(\phi\) acts as the identity on \(\mathcal{O}_{L}\) and \([\pi](e_{n+1})=e_{n}\). This gives us a \(\delta_{L}\)-algebra map \(\iota:W_{L}(k)[\![T]\!]\to W_{L}(\mathcal{O}_{K_{\infty}}^{\flat})\) lifting the map \(k[\![T]\!]\to\mathcal{O}_{K_{\infty}}^{\flat}\) given by
\[T\mapsto\overline{\omega}:=(\ldots,\overline{e_{2}},\overline{e_{1}},0)\]
where \(\overline{e_{n}}\in\mathcal{O}_{K_{\infty}}/\pi\) is the mod \(\pi\) reduction of \(e_{n}\in\mathcal{O}_{K_{n}}\) for \(n\geq 0\). Let \(\omega=\iota(T)\in W_{L}(\mathcal{O}_{K_{\infty}}^{\flat})\) denote the given lift of \(\overline{\omega}\), and write \(\mathfrak{S}_{K}=W_{L}(k)[\![\omega]\!]\subseteq W_{L}(\mathcal{O}_{K_{\infty}}^{\flat})\) for the image of \(\iota\). By lemma 4.1, \((\mathfrak{S}_{K},(q_{n}(\omega)))\) is an \(L\)-typical prism for every \(n\geq 1\).
**Remark 4.2**.: When \(n=1\) and \(\mathcal{G}=\mu_{p^{\infty}}\), we see that \(q_{1}(T)=\frac{(1+T)^{p}-1}{T}\). After a change of variables \(T\mapsto T-1\), we thus get the \(q\)-de Rham prism.
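To spell out remark 4.2: substituting \(T\mapsto T-1\) into \(q_{1}(T)=\frac{(1+T)^{p}-1}{T}\) gives
\[\frac{T^{p}-1}{T-1}=1+T+\cdots+T^{p-1},\]
the element usually written \([p]_{q}\) in the \(q\)-de Rham literature (with the \(q\)-deformation variable there playing the role of our \(T\)); more generally, \(q_{n}(T)\) becomes \(\frac{T^{p^{n}}-1}{T^{p^{n-1}}-1}\) after this substitution.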
**Remark 4.3**.: A different choice of coordinate on \(\mathcal{G}\) amounts to changing \(T\) by a unit in \(\mathcal{O}_{L}[\![T]\!]\), and a different choice of basis for the Tate module of \(\mathcal{G}\) corresponds to multiplying \(e\) by a unit in \(\mathcal{O}_{L}\). Hence changing \(T\) and \(e\) results in changing \(\omega\) by a unit but does not change the image \(\mathfrak{S}_{K}\) of \(W_{L}(k)[\![T]\!]\) in \(A_{\inf}(\mathcal{O}_{K_{\infty}})\).
**Lemma 4.4**.: \(\mathcal{O}_{K_{\infty}}\) _is a perfectoid \(\mathcal{O}_{L}\)-algebra, and_
\[(\mathfrak{S}_{K},(q_{n}(\omega)))\stackrel{{\phi^{-n}}}{{ \longrightarrow}}(A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)\]
_is a map of prisms for every \(n\geq 1\)._
Proof.: Note first that by proposition 3.22, we have \(\phi^{-n}(q_{n}(\omega))\in\ker\theta\) since
\[\theta(\phi^{-n}(q_{n}(\omega)))=(\theta\circ\phi^{-n}\circ\iota)(q_{n}(T))= \iota_{n}(q_{n}(T))=\frac{[\pi^{n}](e_{n})}{[\pi^{n-1}](e_{n})}=0.\]
Thus, we'll be done if we show that \((W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}),\ker\theta)\) is a prism (equivalently, that \(\mathcal{O}_{K_{\infty}}\) is a perfectoid \(\mathcal{O}_{L}\)-algebra). One way to proceed would be to use a rigidity result like [7, Lemma 3.6]. Instead, we'll use proposition 3.15; \(\mathcal{O}_{K_{\infty}}\) clearly satisfies conditions (1), (3), and (4'), so it suffices to show that it satisfies condition (2) as well.
Let \(d=\phi^{-n}(q_{n}(\omega))\in\ker\theta\). Following the proof of proposition 3.15, we guess that \(\xi=\theta(\phi^{-1}(d))\in\mathcal{O}_{K_{\infty}}\) satisfies \(\xi^{q}=\pi u\) for a unit \(u\in\mathcal{O}_{K_{\infty}}^{\times}\). Indeed, we have
\[\xi^{q}=\theta(\phi^{-1}(d^{q}))=\theta(d-\pi\delta_{L}(\phi^{-1}(d)))=\pi\theta(-\delta_{L}(\phi^{-1}(d))),\]
and since \(q_{n}(\omega)\in\mathfrak{S}_{K}\) is distinguished, so is \(\phi^{-1}(d)=\phi^{-n-1}(\iota(q_{n}(T)))\in W_{L}(\mathcal{O}_{K_{\infty}}^{\flat})\), so that \(u=\theta(-\delta_{L}(\phi^{-1}(d)))\) is indeed a unit in \(\mathcal{O}_{K_{\infty}}\).
**Remark 4.5**.: Though we will not use this fact, we note that in this setting, there is an analytic way to construct the map \(\iota:W_{L}(k)\llbracket T\rrbracket\to W_{L}(\mathcal{O}_{K_{\infty}}^{\flat})\). Namely, following [29, lemma 1.2], we let \(\hat{\omega}\) be any lift of \(\overline{\omega}=(\ldots,\overline{e_{2}},\overline{e_{1}},0)\in\mathcal{O}_ {K_{\infty}}^{\flat}\), and set
\[\omega=\lim_{n\to\infty}[\pi^{n}](\phi_{W_{L}(\mathcal{O}_{K_{\infty}}^{ \flat})}^{-n}(\hat{\omega})).\]
Then \(\phi(\omega)=[\pi](\omega)\), so that
\[W_{L}(k)\llbracket T\rrbracket \to W_{L}(\mathcal{O}_{K_{\infty}}^{\flat})\] \[T \mapsto\omega\]
is a \(\delta_{L}\)-algebra map lifting the \(\mathcal{O}_{L}\)-algebra map \(W_{L}(k)\llbracket T\rrbracket\to\mathcal{O}_{K_{\infty}}^{\flat}\) via \(T\mapsto\overline{\omega}\), hence it coincides with \(\iota\) by lemma 2.5.
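For the record, the identity \(\phi(\omega)=[\pi](\omega)\) asserted in remark 4.5 follows by reindexing the limit, using the \(\mathcal{O}_{L}\)-linearity and continuity of \(\phi\) and \([\pi]\):
\[\phi(\omega)=\lim_{n\to\infty}[\pi^{n}](\phi^{-(n-1)}(\hat{\omega}))=\lim_{m\to\infty}[\pi^{m+1}](\phi^{-m}(\hat{\omega}))=[\pi]\Big(\lim_{m\to\infty}[\pi^{m}](\phi^{-m}(\hat{\omega}))\Big)=[\pi](\omega).\]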
### Extension to \(\mathbf{A}_{K}^{+}\)
The prisms \((\mathfrak{S}_{K},(q_{n}(\omega)))\) from §4.1 can be viewed as objects in \((W_{L}(k)[e_{n}])_{\underline{\mathbb{A}}_{L}}\). However, if there is ramification in \(K/L\) outside of the ramification in \(L_{\infty}/L\) (i.e. if \(\mathcal{O}_{K}\not\subseteq\bigcup_{n\geq 0}W_{L}(k)[e_{n}]\)) then \((\mathfrak{S}_{K},(q_{n}(\omega)))\) will never be a prism over \(\operatorname{Spf}\mathcal{O}_{K}\). In this section, we will extend \(\mathfrak{S}_{K}\) to a larger sub-\(\delta_{L}\)-algebra \(\mathbf{A}_{K}^{+}\) of \(A_{\inf}(\mathcal{O}_{K_{\infty}})\) which is a prism over \(\operatorname{Spf}\mathcal{O}_{K}\). The key point is that the formation of \(\mathfrak{S}_{K}\) is insensitive to taking ramified extensions of \(K\); we capture ramification coming from the tower \(L_{\infty}/L\) by our choice of \(n\) (since \(\mathfrak{S}_{K}/q_{n}(\omega)\cong W_{L}(k)[e_{n}]\)), but capturing the rest of the ramification in \(K/W_{L}(k)[1/\pi]\) requires Fontaine and Wintenberger's theory of imperfect norm fields.
Let
\[\mathbf{E}_{K}^{+}=\left\{(\alpha_{n})_{n}\in\varprojlim_{\varphi_{q}} \mathcal{O}_{K_{\infty}}/e_{1}=\mathcal{O}_{K_{\infty}}^{\flat}:\alpha_{n}\in \mathcal{O}_{K_{n}}/e_{1}\text{ for }n\gg 0\right\}\subseteq\mathcal{O}_{K_{ \infty}}^{\flat},\]
so that \(\overline{\omega}=(\ldots,\overline{e_{2}},\overline{e_{1}},0)\in\mathbf{E}_{K} ^{+}\). We recall some facts from the theory of norm fields [42].
**Proposition 4.6**.:
1. \(\mathbf{E}_{K}^{+}\) _is a complete discrete valuation ring with fraction field_ \(\mathbf{E}_{K}:=\mathbf{E}_{K}^{+}[1/\overline{\omega}]\subseteq K_{\infty}^{\flat}\)_._
2. _If_ \(K/L\) _is unramified, then_ \(\mathbf{E}_{K}^{+}=k\llbracket\overline{\omega}\rrbracket\)_. In general,_ \(\mathbf{E}_{K}\) _is a totally ramified extension of_ \(\mathbf{E}_{W_{L}(k)[1/\pi]}\) _of degree_ \([K_{n}:W_{L}(k)[e_{n}][1/\pi]]\) _for_ \(n\) _large enough._
3. _The completed perfection_ \((\varinjlim_{\varphi_{q}}\mathbf{E}_{K}^{+})_{(\overline{\omega})}^{\wedge}\) _of_ \(\mathbf{E}_{K}^{+}\) _is_ \(\mathcal{O}_{K_{\infty}}^{\flat}\)_._
4. _There is an equivalence of Galois categories_ \[\left\{\text{finite extensions of }\bigcup_{n\geq 1}L_{n}\text{ in }\overline{L}\right\}\simeq\left\{\text{finite separable extensions of }\mathbf{E}_{L}\text{ in }K_{\infty}^{\flat}\right\}\] _where, given a finite subextension_ \(M/\bigcup_{n\geq 1}L_{n}\) _of_ \(\overline{L}\)_, the functor from the left to the right is given by selecting any finite_ \(M^{\prime}/L\) _with_ \(\bigcup_{n}M_{n}^{\prime}=M\) _and sending_ \(M\) _to_ \(\mathbf{E}_{M^{\prime}}\)_._
We would like to form Cohen rings \(\mathbf{A}_{K}\) for the characteristic \(p\) fields \(\mathbf{E}_{K}\). For \(K=W_{L}(k)[1/\pi]\) unramified over \(L\), we write
\[\mathbf{A}_{K}=\mathfrak{S}_{K}[1/\omega]_{(\pi)}^{\wedge}\subseteq W_{L}(K_{ \infty}^{\flat})\]
for the \(\pi\)-adic completion of \(\mathfrak{S}_{K}[1/\omega]\cong W_{L}(k)\llbracket T\rrbracket[1/T]\). Then \(\mathbf{A}_{K}\) is a complete discrete valuation ring in characteristic zero with uniformizer \(\pi\), and by proposition 4.6 \(\mathbf{A}_{K}\) has residue field \(\mathbf{E}_{K}\). When \(K/L\) is possibly ramified, Hensel's lemma allows us to lift the extension \(\mathbf{E}_{K}\) of \(\mathbf{E}_{W_{L}(k)[1/\pi]}\cong k(\!(T)\!)\) to an unramified extension \(\mathbf{A}_{K}\) of \(\mathbf{A}_{W_{L}(k)[1/\pi]}\cong W_{L}(k)[\![T]\!][1/T]^{\wedge}\) inside of \(W_{L}(K_{\infty}^{\flat})\).
By construction, \(\mathbf{A}_{K}\) is stable under \(\phi_{W_{L}(K_{\infty}^{\flat})}\) (since \(\phi(a)\mod\pi=\overline{a}^{q}\in\mathbf{E}_{K}\) for any \(a\in\mathbf{A}_{K}\)). Thus we set
\[\mathbf{A}_{K}^{+}=\mathbf{A}_{K}\cap W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}),\]
which is \(\phi\)-stable as well. Since \(\mathbf{A}_{K}^{+}\) is \(\pi\)-torsionfree, this gives it a \(\delta_{L}\)-algebra structure. Note that when \(K/L\) is unramified, we have \(\mathbf{A}_{K}^{+}=\mathfrak{S}_{K}\).
**Remark 4.7**.: Instead of forming \(\mathbf{A}_{K}\) by lifting the extension \(\mathbf{E}_{K}/\mathbf{E}_{W_{L}(k)[1/\pi]}\) over \(\mathbf{A}_{W_{L}(k)[1/\pi]}\), we could have instead lifted the extension \(\mathbf{E}_{K}/\mathbf{E}_{L}\) over \(\mathbf{A}_{L}\). This would have amounted to the same thing. We also note that the \(\phi\)-action on \(\mathbf{A}_{K}\) above clearly coincides with the one induced by lifting \(\varphi_{q}:\mathbf{E}_{K}\to\mathbf{E}_{K}\) via Hensel's lemma (and using that \(\mathbf{A}_{W_{L}(k)[1/\pi]}\) is \(\phi\)-stable by construction).
**Remark 4.8**.: Note that \(\mathbf{A}_{K}=\mathbf{A}_{K}^{+}[1/q_{n}(\omega)]_{(\pi)}^{\wedge}\) since \(q_{n}(\omega)\equiv\omega^{q^{n-1}(q-1)}\pmod{\pi}\), so that after \(\pi\)-adically completing, inverting \(\omega\) has the same effect as inverting \(q_{n}(\omega)\).
**Lemma 4.9**.:
1. _If_ \(A\to B\) _is a map of_ \(\pi\)_-adically complete_ \(\pi\)_-torsion free rings with_ \(A\) _noetherian and_ \(A/\pi\to B/\pi\) _is flat, then_ \(A\to B\) _is flat as well._
2. _The maps_ \[\mathfrak{S}_{K}\hookrightarrow\mathbf{A}_{K}^{+},\qquad\mathbf{A}_{K}^{+} \hookrightarrow A_{\inf}(\mathcal{O}_{K_{\infty}}),\quad\text{and}\qquad\phi: \mathbf{A}_{K}^{+}\to\mathbf{A}_{K}^{+}\] _are all faithfully flat._
3. \(\mathfrak{S}_{K}/q_{n}(\omega)\) _and_ \(\mathbf{A}_{K}^{+}/q_{n}(\omega)\) _are_ \(\pi\)_-torsion free._ \((\mathfrak{S}_{K},(q_{n}(\omega)))\) _and_ \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\) _are bounded._
Proof.: Part (1) is [6, remark 4.31] with \(p\) replaced by \(\pi\); the proof remains the same. The flatness in (2) follows from (1) since the mod \(\pi\) reductions of the given maps are
\[k[\![T]\!]\hookrightarrow\mathbf{E}_{K}^{+},\qquad\mathbf{E}_{K}^{+} \hookrightarrow\mathcal{O}_{K_{\infty}}^{\flat},\quad\text{and}\qquad\varphi_{ q}:\mathbf{E}_{K}^{+}\to\mathbf{E}_{K}^{+}\]
which are injective maps from discrete valuation rings to integral domains, hence flat. Faithful flatness follows since \(\overline{\omega}\) is not a unit in \(\mathbf{E}_{K}^{+}\) or \(\mathcal{O}_{K_{\infty}}^{\flat}\). For (3), we have that \(\mathfrak{S}_{K}/q_{n}(\omega)\cong W_{L}(k)[\![T]\!]/q_{n}(T)\cong W_{L}(k)[e _{n}]\) is an integral domain hence \(\pi\)-torsion free. By part (2) we have that \(\mathfrak{S}_{K}/q_{n}(\omega)\to\mathbf{A}_{K}^{+}/q_{n}(\omega)\) is flat, so \(\mathbf{A}_{K}^{+}/q_{n}(\omega)\) is \(\pi\)-torsion free as well.
It follows immediately from lemma 4.1 that \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\) is an \(L\)-typical prism for every \(n\geq 1\). Moreover, just as \(\mathbf{E}_{K}^{+}\) can be viewed as a deperfection of \(\mathcal{O}_{K_{\infty}}^{\flat}\), we have that the prism \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\) can be viewed as a deperfection of the perfect prism \((A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)\).
**Proposition 4.10**.: _Let \((A_{\inf},IA_{\inf})\) be the perfection of \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\) as in proposition 3.14. Then \(A_{\inf}\cong A_{\inf}(\mathcal{O}_{K_{\infty}})\), and the natural map \(\mathbf{A}_{K}^{+}\to A_{\inf}\cong A_{\inf}(\mathcal{O}_{K_{\infty}})\) is the usual inclusion \(\mathbf{A}_{K}^{+}\hookrightarrow A_{\inf}(\mathcal{O}_{K_{\infty}})\)._
Proof.: By proposition 2.13, it suffices to show that \(A_{\inf}/\pi\cong\mathcal{O}_{K_{\infty}}^{\flat}\). Indeed, we have
\[A_{\inf}/\pi=(\varinjlim_{\phi}\mathbf{A}_{K}^{+})_{(\pi,q_{n}(\omega))}^{\wedge}/\pi\cong(\varinjlim_{\varphi_{q}}\mathbf{E}_{K}^{+})_{(\overline{\omega})}^{\wedge}=\mathcal{O}_{K_{\infty}}^{\flat}\]
since \(q_{n}(\omega)\equiv\omega^{q^{n}-q^{n-1}}\pmod{\pi}\) and modding out by \(\pi\) commutes with the colimit and \((\pi,q_{n}(\omega))\)-adic completion.
**Corollary 4.11**.: _For \(n\gg 0\) we have structure maps \(\mathcal{O}_{K}\to\mathbf{A}_{K}^{+}/q_{n}(\omega)\) such that the maps_
\[(\mathbf{A}_{K}^{+},(q_{n}(\omega)))\stackrel{{\phi^{-n}}}{{ \longrightarrow}}(A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)\]
_are morphisms in \((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}\)._
We will give two proofs, the first an abstract argument following [43, prop 2.19] and the second a more concrete argument involving norm fields.
Proof 1.: By proposition 4.10 we have an isomorphism
\[\left(\varinjlim_{\phi}\mathbf{A}_{K}^{+}\right)_{(\pi,q_{1}(\omega))}^{ \wedge}/q_{1}(\omega)\stackrel{{\phi^{-1}}}{{\longrightarrow}}A _{\inf}(\mathcal{O}_{K_{\infty}})/\ker\theta=\mathcal{O}_{K_{\infty}}.\]
This isomorphism can be rewritten as
\[\left(\varinjlim_{\phi}\mathbf{A}_{K}^{+}/q_{n}(\omega)\right)_{(\pi)}^{\wedge}\stackrel{{\sim}}{{\longrightarrow}}\left(\bigcup_{n\geq 1}\mathcal{O}_{K_{n}}\right)_{(\pi)}^{\wedge}.\]
Using that \(\varinjlim_{\phi}\mathbf{A}_{K}^{+}/q_{n}(\omega)\) and \(\bigcup\mathcal{O}_{K_{n}}\) are integral over \(W_{L}(k)\) and that there are no integral extensions between \(\bigcup\mathcal{O}_{K_{n}}\) and its completion \(\mathcal{O}_{K_{\infty}}\) (by Krasner's lemma applied to the Henselian ring \(\bigcup\mathcal{O}_{K_{n}}\)), we conclude that there is a short exact sequence
\[0\longrightarrow\varinjlim_{\phi}\mathbf{A}_{K}^{+}/q_{n}(\omega)\longrightarrow \bigcup\mathcal{O}_{K_{n}}\longrightarrow M\longrightarrow 0\]
with \(M\) \(\pi\)-torsion and \(M_{(\pi)}^{\wedge}=0\). Moreover, since \(\varinjlim\mathbf{A}_{K}^{+}/q_{n}(\omega)\) contains the subring \(\varinjlim\mathfrak{S}_{K}/q_{n}(\omega)\cong\bigcup W_{L}(k)[e_{n}]\) over which \(\bigcup\mathcal{O}_{K_{n}}\) is finite, we have that \(M\) is \(\pi\)-adically complete, so that \(M=M_{(\pi)}^{\wedge}=0\).
Moreover, since \(\mathfrak{S}_{K}/q_{n}(\omega)\stackrel{{\phi}}{{\to}}\mathfrak{ S}_{K}/q_{n+1}(\omega)\) identifies with the inclusion
\[W_{L}(k)[e_{n}]\hookrightarrow W_{L}(k)[e_{n+1}]\]
and \(\mathfrak{S}_{K}\hookrightarrow\mathbf{A}_{K}^{+}\) is flat by lemma 4.9, the transition maps in the direct limit are injective as well. All together, this gives
\[\bigcup_{n\geq 1}\mathbf{A}_{K}^{+}/q_{n}(\omega)\cong\bigcup_{n\geq 1} \mathcal{O}_{K_{n}}\supseteq\mathcal{O}_{K}.\]
As \(\mathcal{O}_{K}\) is finite over \(W_{L}(k)\) and the left-hand side is an increasing union of \(W_{L}(k)\)-modules, we get maps \(\mathcal{O}_{K}\to\mathbf{A}_{K}^{+}/q_{n}(\omega)\) for \(n\gg 0\). These maps commute with \(\phi^{-n}:\mathbf{A}_{K}^{+}\to A_{\inf}(\mathcal{O}_{K_{\infty}})\) by construction.
Proof 2.: To simplify notation, set \(F=W_{L}(k)[1/\pi]\). Let
\[\overline{\omega}_{K}=(\overline{\pi}_{n})_{n}\in\varprojlim_{\overline{\varphi _{q}}}\mathcal{O}_{K_{\infty}}/e_{1}=\mathcal{O}_{K_{\infty}}^{\flat}\]
be a uniformizer of \(\mathbf{E}_{K}^{+}\), so that \(\overline{\pi}_{n}\in\mathcal{O}_{K_{n}}/e_{1}\) for \(n\gg 0\) and \(\mathbf{E}_{K}^{+}=k\llbracket\overline{\omega}_{K}\rrbracket\). Let \(P(W,T)\in k\llbracket W\rrbracket[T]\) be such that \(P(\overline{\omega},T)\in k\llbracket\overline{\omega}\rrbracket[T]=\mathbf{E}_{F}^{+}[T]\) is the minimal polynomial of \(\overline{\omega}_{K}\) over \(\mathbf{E}_{F}\). As explained above, \(\mathbf{E}_{K}/\mathbf{E}_{F}\) is a totally ramified extension of degree \(d=[K_{\infty}:F_{\infty}]\), so \(P(\overline{\omega},T)\) is a degree \(d\) Eisenstein polynomial. Since \(\overline{\omega}=(\overline{e_{n}})_{n}\), it follows that \(P(\overline{e_{n}},\overline{\pi}_{n})=0\in\mathcal{O}_{K_{n}}/e_{1}\) for \(n\gg 0\).
Let \(\hat{P}(W,T)\in\mathcal{O}_{F}\llbracket W\rrbracket[T]\) be a lift of \(P\). Using an argument involving Lang's refinement of Hensel's lemma, it is shown in [42, 3.2.5] (or see also [11, pf of prop 13.4.4]) that \(\hat{P}(e_{n},T)\) has \(d\) distinct roots \(\{\pi_{n,1},\ldots,\pi_{n,d}\}\) in \(\mathcal{O}_{\overline{K}}\); one of these roots, call it \(\pi_{n}\), is a lift of \(\overline{\pi}_{n}\). Moreover, using that the roots of \(P(\overline{\omega},T)\) in \(\mathcal{O}_{K_{\infty}}^{\flat}\) are distinct, one can show that \(\pi_{n,1},\ldots,\pi_{n,d}\) are distinct mod \(e_{1}\) for \(n\gg 0\). On the other hand, since \(\pi_{n+1}^{q}\equiv\pi_{n}\pmod{e_{1}}\) for \(n\gg 0\), Krasner's lemma shows that for \(n\gg 0\) we have \(\pi_{n}\in F_{n}(\pi_{n+1})\).
Now, set \(K_{n}^{\prime}=F_{n}(\pi_{n})\). For some \(N\gg 0\) and all \(n\geq N\), we have that \(K_{n}^{\prime}\subseteq K_{n+1}^{\prime}\). Comparing degrees in the resulting diagram of field extensions, we find that \(K_{n+1}^{\prime}=K_{n}^{\prime}F_{n+1}\), and thus that \(K_{n}^{\prime}=K_{N}^{\prime}F_{n}\) for all \(n\geq N\). We also have that \(\mathbf{E}_{K_{N}^{\prime}}=\mathbf{E}_{K}\), since they are both degree \(d=[K_{\infty}:F_{\infty}]=[K_{N,\infty}^{\prime}:F_{\infty}]\) extensions of \(\mathbf{E}_{F}\) and \(\mathbf{E}_{K}=k((\overline{\omega}_{K}))\subseteq\mathbf{E}_{K_{N}^{\prime}}\) by construction. Thus by proposition 4.6(4), we get that \(\bigcup_{n}K_{N,n}^{\prime}=\bigcup_{n}K_{n}\), so that for \(n\gg 0\), we have \(K\subseteq K_{N,n}^{\prime}\). Hence for \(n\gg 0\),
\[\mathcal{O}_{K}\subseteq\mathcal{O}_{K_{N,n}^{\prime}}=\mathcal{O}_{F}[e_{n}][\pi_{n}]=\mathcal{O}_{F}\llbracket\omega\rrbracket[T]/(q_{n}(\omega),\hat{P}(\omega,T))=\mathbf{A}_{K}^{+}/(q_{n}(\omega))\]
as desired.
Using the formula \(\theta(\phi^{-n}(\omega))=e_{n}\) (which follows from proposition 3.22), we can trace the inclusion above through the map \(\phi^{-n}:\mathbf{A}_{K}/(q_{n}(\omega))\to A_{\inf}(\mathcal{O}_{K_{\infty}}) /\ker\theta\) to find that it coincides with \(\mathcal{O}_{K}\subseteq\mathcal{O}_{K_{\infty}}\cong A_{\inf}(\mathcal{O}_{ K_{\infty}})/\ker\theta\).
### \(\Gamma_{K}\)-actions and etale \((\varphi_{q},\Gamma)\)-modules
In this section we summarize the main results in the theory of Lubin-Tate \((\varphi_{q},\Gamma)\)-modules. These results will be recovered as special cases of the results in §5.
**Definition 4.12**.: _A \(\varphi_{q}\)-module over \(\mathbf{A}_{K}\) is a finite flat \(\mathbf{A}_{K}\)-module \(M\) equipped with a \(\phi_{\mathbf{A}_{K}}\)-semilinear endomorphism \(\phi_{M}:M\to M\). It is called etale if the \(\mathbf{A}_{K}\)-linear map_
\[\phi^{*}M:=\mathbf{A}_{K}\otimes_{\phi,\mathbf{A}_{K}}M \longrightarrow M\] \[a\otimes m \mapsto a\phi_{M}(m)\]
_is an isomorphism. When equipped with \(\mathbf{A}_{K}\)-module maps that commute with the \(\phi_{M}\)'s, these form categories \(\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q}}\) and \(\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},et}\). We similarly define \(\varphi_{q}\)-modules over \(W_{L}(K_{\infty}^{\flat})\) and the categories \(\operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q}}\) and \(\operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},et}\)._
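For a minimal illustration (a hypothetical rank-one example, not drawn from the sources cited below): if \(M=\mathbf{A}_{K}\cdot e\) with \(\phi_{M}(e)=ue\) for some \(u\in\mathbf{A}_{K}^{\times}\), then the linearization
\[\phi^{*}M\longrightarrow M,\qquad 1\otimes e\longmapsto ue\]
is an isomorphism, so \(M\) is etale; if instead \(\phi_{M}(e)=\pi e\), then \(M\) is a \(\varphi_{q}\)-module which is not etale, since \(\pi\) is not a unit in the discrete valuation ring \(\mathbf{A}_{K}\).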
By a result of Kisin-Ren [29] (and Fontaine [15] in the cyclotomic case), we have that etale \(\varphi_{q}\)-modules are equivalent to the category \(\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K_{\infty}})\) of continuous finite free \(G_{K_{\infty}}=\operatorname{Gal}(\overline{K}/K_{\infty})\)-representations over \(\mathcal{O}_{L}\). In more detail, proposition 4.6(4) implies that \(\mathbf{E}:=\bigcup_{K/L}\mathbf{E}_{K}\) is the separable closure of \(\mathbf{E}_{L}\), and that \(\operatorname{Gal}(\mathbf{E}/\mathbf{E}_{K})=G_{K_{\infty}}\). It follows that \(\mathbf{A}:=\left(\bigcup_{K/L}\mathbf{A}_{K}\right)^{\wedge}\) is the completion of the maximal unramified extension of \(\mathbf{A}_{L}\); \(\mathbf{A}\) thus inherits a \(\operatorname{Gal}(\mathbf{E}/\mathbf{E}_{L})=G_{L_{\infty}}\)-action with \(\mathbf{A}^{G_{K_{\infty}}}=\mathbf{A}_{K}\). Moreover, \(\mathbf{A}\subseteq W_{L}(\overline{K}^{\flat})\) has a \(\phi\)-action. The key theorem is as follows.
**Theorem 4.13**.: _(cf. [29, Theorem 1.6]) The functors_
\[M\longmapsto(\mathbf{A}\otimes_{\mathbf{A}_{K}}M)^{\phi=1}\qquad\text{and}\qquad T\longmapsto(\mathbf{A}\otimes_{\mathcal{O}_{L}}T)^{G_{K_{\infty}}}\]
_form an equivalence of exact tensor categories \(\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},et}\simeq\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K_{\infty}})\)._
We make two observations about Theorem 4.13. First, that the base of the \(\varphi_{q}\)-modules is \(\mathbf{A}_{K}\). In fact, this is a red herring: base change induces an equivalence of categories
\[\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},et} \stackrel{{\sim}}{{\longrightarrow}}\operatorname{Mod }_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},et}\] \[M \mapsto M\otimes_{\mathbf{A}_{K}}W_{L}(K_{\infty}^{\flat})\]
so that the theorem would remain true with \(W_{L}(K_{\infty}^{\flat})\) replacing \(\mathbf{A}_{K}\) (and \(W_{L}(\overline{K}^{\flat})\) replacing \(\mathbf{A}\)). (This result is due to Fontaine [15] in the cyclotomic case, but as far as the author is aware has not yet appeared in the literature in general; it will follow from proposition 5.4 below.)
Our second observation is that we would like to descend the equivalence to the full category \(\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K})\) of continuous finite free \(G_{K}\)-representations over \(\mathcal{O}_{L}\). Indeed, this is not hard to do, and involves picking up a semilinear action of \(\Gamma_{K}=\operatorname{Gal}(K_{\infty}/K)\). Before stating the result, we first explain the \(\Gamma_{K}\) actions on the rings \(\mathfrak{S}_{K}\), \(\mathbf{A}_{K}\), and \(W_{L}(K_{\infty}^{\flat})\).
Equip \(W_{L}(k)\llbracket T\rrbracket\) with the \(\Gamma_{K}\)-action where \(\sigma\in\Gamma_{K}\) acts by \(f(T)\mapsto f([\chi_{\mathcal{G}}(\sigma)](T))\), and equip \(W_{L}(\mathcal{O}_{K_{\infty}}^{\flat})\) with the natural \(\Gamma_{K}\)-action (coming from the \(\Gamma_{K}\)-action on \(K_{\infty}^{\flat}\) and the functoriality of \(W_{L}\)). By the definition of the Lubin-Tate character \(\chi_{\mathcal{G}}\) we have
\[[\chi_{\mathcal{G}}(\sigma)](\overline{\omega})=(\overline{e_{n}^{\sigma}})_{ n}=\overline{\omega}^{\sigma}\]
so that \(W_{L}(k)\llbracket T\rrbracket\to k\llbracket T\rrbracket\stackrel{{\overline{\iota}}}{{\to}}\mathcal{O}_{K_{\infty}}^{\flat}\) is \(\Gamma_{K}\)-equivariant. Thus \(\iota\) is \(\Gamma_{K}\)-equivariant as well by naturality, and the \(\Gamma_{K}\)-actions on \(\mathfrak{S}_{K}\) induced by \(\iota\) and \(W_{L}(K_{\infty}^{\flat})\) coincide. By the uniqueness of lifts given by Hensel's lemma, we further have that the \(\Gamma_{K}\)-action on \(\mathbf{A}_{K}\) induced by viewing it as a subring of \(W_{L}(K_{\infty}^{\flat})\) coincides with the \(\Gamma_{K}\)-action defined by lifting the \(\Gamma_{K}\)-action on \(\mathbf{E}_{K}\). Note also that all of these \(\Gamma_{K}\)-actions commute with the \(\phi\)-actions, because this is true for \(W_{L}(K_{\infty}^{\flat})\). This can also be seen directly for \(W_{L}(k)\llbracket T\rrbracket\) using properties of Lubin-Tate formal \(\mathcal{O}_{L}\)-modules:
\[\phi(f)^{\sigma}(T)=f([\pi\chi_{\mathcal{G}}(\sigma)](T))=f([\chi_{\mathcal{ G}}(\sigma)\pi](T))=\phi(f^{\sigma})(T).\]
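For instance, in the cyclotomic case \(\mathcal{G}=\mu_{p^{\infty}}\) with the coordinate \([a](T)=(1+T)^{a}-1\) (so that \(\chi_{\mathcal{G}}\) is the cyclotomic character \(\chi_{\mathrm{cyc}}\)), the action above reads
\[f(T)\longmapsto f\big((1+T)^{\chi_{\mathrm{cyc}}(\sigma)}-1\big),\qquad\sigma\in\Gamma_{K},\]
which is the familiar \(\Gamma_{K}\)-action on \(\mathbb{Z}_{p}[\![T]\!]\) from classical \((\varphi,\Gamma)\)-module theory.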
We can also view \(\Gamma_{K}\) as acting on the corresponding prisms.
**Proposition 4.14**.: \(\Gamma_{K}\) _acts via automorphisms on the \(L\)-typical prisms \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\) and \((A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)\). Moreover, we have that_
\[\operatorname{Aut}_{(\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}}(\mathbf{A}_{K}^{+},(q_{n}(\omega)))\cong\operatorname{Aut}_{(\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}}(A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)\cong\Gamma_{K}\]
_if \(n\) is large enough that \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\in(\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}\)._
Proof.: Since any \(\sigma\in\Gamma_{K}\) commutes with \(\phi\), we know that \(\sigma\) gives a map of \(\delta_{L}\)-algebras. Additionally, since \(([\pi^{m}](\omega))^{\sigma}=[\chi_{\mathcal{G}}(\sigma)]([\pi^{m}](\omega))\) for all \(m\geq 0\) and
\[[\chi_{\mathcal{G}}(\sigma)](T)=\chi_{\mathcal{G}}(\sigma)T+\text{ higher order terms}=T\cdot(\text{unit in }\mathcal{O}_{L}[\![T]\!]),\]
we see that \(q_{n}(\omega)^{\sigma}=\frac{[\chi_{\mathcal{G}}(\sigma)]([\pi^{n}](\omega))}{[\chi_{\mathcal{G}}(\sigma)]([\pi^{n-1}](\omega))}\) differs from \(q_{n}(\omega)\) by a unit. Hence any \(\sigma\in\Gamma_{K}\) preserves \((q_{n}(\omega))\), and thus gives an automorphism of \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\). If \(n\) is large enough that \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\in(\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}\) then \(\sigma\) respects the structure map \(\mathcal{O}_{K}\to\mathbf{A}_{K}^{+}/q_{n}(\omega)\) as well, so that
\[\Gamma_{K}\hookrightarrow\operatorname{Aut}_{(\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}}(\mathbf{A}_{K}^{+},(q_{n}(\omega))).\]
Moreover, we see that any automorphism of \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\) is automatically \((\pi,q_{n}(\omega))\)-adically continuous and \(\phi\)-equivariant, hence extends to an automorphism of the perfection \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))_{\mathrm{perf}}\cong(A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)\) by proposition 4.10. But by proposition 3.11, we have that
\[\operatorname{Aut}_{(\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}}(A_{\inf}( \mathcal{O}_{K_{\infty}}),\ker\theta)\cong\operatorname{Aut}(\mathcal{O}_{K_{ \infty}}/\mathcal{O}_{K})=\Gamma_{K}.\]
Thus we've shown
\[\Gamma_{K}\hookrightarrow\operatorname{Aut}_{(\mathcal{O}_{K})_{\underline{ \mathbb{A}}_{L}}}(\mathbf{A}_{K}^{+},(q_{n}(\omega)))\hookrightarrow \operatorname{Aut}_{(\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}}(A_{\inf}( \mathcal{O}_{K_{\infty}}),\ker\theta)\cong\Gamma_{K}\]
which gives the result.
Descending from \(\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K_{\infty}})\) to \(\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K})\) involves picking up a \(\Gamma_{K}\)-action.
**Definition 4.15**.: A _\((\varphi_{q},\Gamma)\)-module over \(\mathbf{A}_{K}\)_ is a \(\varphi_{q}\)-module \(M\) over \(\mathbf{A}_{K}\) with a semilinear \(\Gamma_{K}\)-action which commutes with the \(\phi\)-action. It is _etale_ if \(M\) is etale as a \(\varphi_{q}\) module. These form categories \(\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},\Gamma}\) and \(\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},\Gamma,et}\). We similarly define \((\varphi_{q},\Gamma)\)-modules over \(W_{L}(K_{\infty}^{\flat})\).
The equivalence of Theorem 4.13 extends to \((\varphi_{q},\Gamma)\)-modules. So in summary, we have the following inclusions and equivalences among exact tensor categories:
\[\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K})\simeq\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},\Gamma,et}\simeq\operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},\Gamma,et},\qquad\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K_{\infty}})\simeq\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},et}\simeq\operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},et}.\]
### The prismatic logarithm for \(\mathfrak{S}_{L}\)
For convenience, throughout this section let \((A,I)=(\mathbf{A}_{L}^{+},(q_{n}(\omega)))=(\mathfrak{S}_{L},(q_{n}(\omega)))\cong(\mathcal{O}_{L}[\![T]\!],(q_{n}(T)))\) be the prism of §4.1. We will construct a map \(\log_{\underline{\Delta}}\) from a certain subset \(I_{\phi=[\pi]}\) of \(I\) to the Breuil-Kisin twist \(A\{1\}\) of \(A\). Heuristically, we can think of \(\log_{\underline{\Delta}}\) as being given by \(\log_{\underline{\Delta}}(u)=``\lim_{n\to\infty}\frac{[\pi^{n}](u)}{\pi^{n}}"\). We will further see that \(\log_{\underline{\Delta}}\) is \(\mathcal{O}_{L}\)-linear, where \(I_{\phi=[\pi]}\) is viewed as an \(\mathcal{O}_{L}\)-module via the Lubin-Tate formal group law \(\mathcal{G}\).
**Remark 4.16**.: In the cyclotomic case \(\mathcal{G}=\mu_{p^{\infty}}\), our \(\log_{\underline{\Delta}}\) coincides with the map \(u\mapsto\log_{\underline{\Delta}}(1+u)\) of [5, §2]. In that setting, \(\log_{\underline{\Delta}}(1+u)=``\lim_{n\to\infty}\frac{(1+u)^{p^{n}}-1}{p^{n}}"\), which is analogous to the classical formula \(\log(1+x)=\lim_{\alpha\to 0}\frac{(1+x)^{\alpha}-1}{\alpha}\).
For this paragraph only, let \((A,I)\) be an arbitrary bounded \(L\)-typical prism. Informally, we define
\[A\{1\}=\bigotimes_{n=0}^{\infty}(\phi^{n})^{*}I.\]
More precisely, for \(n\geq 1\) set \(I_{n}\) to be the product \(\prod_{i=0}^{n-1}\phi^{i}(I)\) as an ideal of \(A\). Note that \(I_{n}\equiv I^{\frac{q^{n}-1}{q-1}}\pmod{\pi}\). Thus, since \(A\) is bounded and \((\pi,I)\)-adically complete we have
\(\operatorname{Pic}(A)\simeq\varprojlim_{n}\operatorname{Pic}(A/I_{n})\), and we let \(A\{1\}\in\operatorname{Pic}(A)\) correspond to \(((\phi^{n})^{*}I\otimes_{A}A/I_{n})_{n\geq 0}\). See [14, §4.9] for additional details, or [5, §2] for a more explicit construction bootstrapping from the case where \(A/I\) is \(\pi\)-torsion free.
Taking \((A,I)=(\mathfrak{S}_{L},(q_{n}(\omega)))\) once more, we also give a more explicit definition. We can define \(A\{1\}\) by
\[A\{1\}=\varprojlim_{\cdot\frac{1}{\pi}}I_{m}/I_{m}^{2}.\]
Here
\[I_{m}=(q_{n}(\omega)q_{n+1}(\omega)\cdots q_{n+m-1}(\omega))=\left(\frac{[\pi^ {n+m-1}](\omega)}{[\pi^{n-1}](\omega)}\right)\]
and the transition maps \(I_{m+1}/I_{m+1}^{2}\to I_{m}/I_{m}^{2}\) are quotienting by \(I_{m}^{2}\) followed by dividing by \(\pi\); these are well-defined and surjective as
\[\frac{[\pi^{n+m}](\omega)}{[\pi^{n-1}](\omega)}\equiv\pi\frac{[\pi^{n+m-1}]( \omega)}{[\pi^{n-1}](\omega)}\mod\left(\frac{[\pi^{n+m-1}](\omega)}{[\pi^{n-1} ](\omega)}\right)^{2}\]
since \([\pi^{n+m}](\omega)=[\pi]\left([\pi^{n+m-1}](\omega)\right)\) and \([\pi](T)=\pi T+(\text{higher order terms})\).
**Lemma 4.17**.:
1. \(A/I_{m}\) _is_ \(\pi\)_-torsionfree for all_ \(m\geq 1\)_._
2. _We have_ \(I_{m}=\bigcap_{i=0}^{m-1}\phi^{i}(I)=\bigcap_{i=0}^{m-1}\left(q_{n+i}(\omega)\right)\)_._
Proof.: We prove part (1) by induction. The result is clear for \(m=1\), and for \(m\geq 2\) we have an exact sequence
\[0\longrightarrow I_{m}\otimes_{A}A/\phi^{m}(I)\cong I_{m}/I_{m+1} \longrightarrow A/I_{m+1}\longrightarrow A/I_{m}\longrightarrow 0\]
where the first and third terms are \(\pi\)-torsion free.
For part (2) we follow [5, lemmas 2.2.8, 2.2.9]. First, we show that the natural map \(f:\phi^{m}(I)/I_{m+1}\to A/I_{m}\) is injective. As above, using the identity \([\pi](T)=\pi T+(\text{higher order terms})\) one shows that \(f\) has image \((\pi,I_{m})/I_{m}=\pi\cdot(A/I_{m})\); since \(A/I_{m}\) is \(\pi\)-torsionfree by part (1), \(f\) therefore factors as \(f=\pi f_{0}\). We show that \(f_{0}\) is an isomorphism; as the domain and codomain are invertible \(A/I_{m}\) modules, it suffices to show surjectivity. One shows by induction over \(m\geq 1\) that if \(\alpha\in I\) then \(f_{0}(\phi^{m}(\alpha))\mod(\pi,I)\) is a unit in \(A/(\pi,I)\). Then by \((\pi,I)\)-adic completeness and the inclusion \(I_{m}\subseteq(\pi,I)\) we conclude that the image of \(I\overset{\phi^{m}}{\to}\phi^{m}(I)/I_{m+1}\overset{f_{0}}{\to}A/I_{m}\) is the unit ideal as desired.
We now prove the statement in the lemma by induction on \(m\geq 0\), with \(m=0\) being interpreted as the equality (1) = (1) of unit ideals. For \(m\geq 1\), let \(\alpha\in\bigcap_{i=0}^{m}\phi^{i}(I)\). By induction, we have \(\alpha\in I_{m}\cap\phi^{m}(I)\). Thus \(\alpha\) is in the kernel of \(\phi^{m}(I)\to A/I_{m}\), which factors as
\[\phi^{m}(I)\to\phi^{m}(I)/I_{m+1}\overset{f}{\to}A/I_{m}.\]
Since we showed that \(f\) is injective, we have that \(\alpha\in I_{m+1}\) as desired.
We now define \(\log_{\underline{\Delta}}\). Let \(I_{\phi=[\pi]}\) denote the subset of \(\alpha\in I\) such that \(\phi(\alpha)=[\pi](\alpha)\). For example, we have that \([\pi^{n}](\omega)\in I_{\phi=[\pi]}\) since \(\phi([\pi^{n}](\omega))=[\pi^{n}]([\pi](\omega))=[\pi]([\pi^{n}](\omega))\).
**Lemma 4.18**.: _If \(\alpha\in I_{\phi=[\pi]}\) and \(m\geq 1\) then \([\pi^{m}](\alpha)\in I_{m+1}\) and \([\pi^{m}](\alpha)\equiv\pi\cdot[\pi^{m-1}](\alpha)\pmod{I_{m}^{2}}\)._
Proof.: The second part of the lemma is clear from \([\pi](T)=\pi T+(\text{higher order terms})\). For the first part, for each \(0\leq i\leq m\) we have \([\pi^{m}](\alpha)=[\pi^{m-i}]([\pi^{i}](\alpha))\in\phi^{i}(I)\). Thus \([\pi^{m}](\alpha)\in I_{m+1}\) by lemma 4.17(2).
**Definition 4.19**.: Let \(\log_{\underline{\Delta}}:I_{\phi=[\pi]}\to\mathfrak{S}_{L}\{1\}\) be defined by
\[\log_{\underline{\Delta}}(\alpha)=([\pi^{m-1}](\alpha))_{m\geq 1}=(\phi^{m-1}(\alpha))_{m\geq 1}\in\varprojlim_{\cdot\frac{1}{\pi}}I_{m}/I_{m}^{2}=\mathfrak{S}_{L}\{1\}.\]
Recall that the Lubin-Tate formal \(\mathcal{O}_{L}\)-module \(\mathcal{G}\) comes with a formal group law \(X+_{\mathcal{G}}Y\in\mathcal{O}_{L}[\![X,Y]\!]\) satisfying
\[X+_{\mathcal{G}}Y =X+Y+(\text{degree }\geq 2\text{ terms}) \tag{4.1}\] \[[a](X+_{\mathcal{G}}Y) =[a](X)+_{\mathcal{G}}[a](Y)\qquad\text{for }a\in\mathcal{O}_{L}. \tag{4.2}\]
This second condition with \(a=\pi\) implies that if \(\alpha,\beta\in I_{\phi=[\pi]}\) then \(\alpha+_{\mathcal{G}}\beta\in I_{\phi=[\pi]}\) as well. Similarly, we have that if \(\alpha\in I_{\phi=[\pi]}\) and \(a\in\mathcal{O}_{L}\) then \([a](\alpha)\in I_{\phi=[\pi]}\). Thus \(I_{\phi=[\pi]}\) can be viewed as an \(\mathcal{O}_{L}\)-module. We show that \(\log_{\underline{\Delta}}\) is an \(\mathcal{O}_{L}\)-module homomorphism.
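As a concrete sanity check of (4.1) and (4.2): in the cyclotomic specialization (that is, assuming \(L=\mathbb{Q}_{p}\), \(\pi=p\) and \(\mathcal{G}\) the multiplicative formal group, which is a Lubin-Tate formal group for this choice of uniformizer), one has

\[X+_{\mathcal{G}}Y=X+Y+XY,\qquad[p](T)=(1+T)^{p}-1=pT+\binom{p}{2}T^{2}+\cdots+T^{p},\]

so both conditions can be read off directly, and \([p](T)\equiv T^{p}\bmod p\) as required.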
**Proposition 4.20**.: _For \(\alpha,\beta\in I_{\phi=[\pi]}\) and \(a\in\mathcal{O}_{L}\) we have \(\log_{\underline{\Delta}}(\alpha+_{\mathcal{G}}\beta)=\log_{\underline{\Delta }}(\alpha)+\log_{\underline{\Delta}}(\beta)\) and \(\log_{\underline{\Delta}}([a](\alpha))=a\log_{\underline{\Delta}}(\alpha)\)._
Proof.: We have
\[\log_{\underline{\Delta}}(\alpha+_{\mathcal{G}}\beta) =([\pi^{m-1}](\alpha+_{\mathcal{G}}\beta))_{m\geq 1}=([\pi^{m-1}]( \alpha)+_{\mathcal{G}}[\pi^{m-1}](\beta))_{m\geq 1}\] \[=([\pi^{m-1}](\alpha)+[\pi^{m-1}](\beta))_{m\geq 1}=\log_{ \underline{\Delta}}(\alpha)+\log_{\underline{\Delta}}(\beta)\]
where the penultimate equality uses that \(X+_{\mathcal{G}}Y=X+Y+(\text{degree }\geq 2\text{ terms})\) and \([\pi^{m-1}](\alpha),[\pi^{m-1}](\beta)\in I_{m}\) by lemma 4.18. The identity \(\log_{\underline{\Delta}}([a](\alpha))=a\log_{\underline{\Delta}}(\alpha)\) is shown similarly.
**Remark 4.21**.: Recall that \(\mathfrak{S}_{L}\) was defined by applying construction 3.20 to an element \(e\in T\mathcal{G}\) of the Tate module of \(\mathcal{G}\); this gave a map \(\iota:\mathcal{O}_{L}[\![T]\!]\to W_{L}(\mathcal{O}_{L_{\infty}}^{\flat})\) with image \(\mathfrak{S}_{L}\) and the element \(\omega:=\iota(T)\). As in remark 4.3, applying the same construction with the element \(e^{\prime}=ae\) for some \(a\in\mathcal{O}_{L}\) results in the element \(\omega^{\prime}=[a](\omega)\), still in \(\mathfrak{S}_{L}\). We thus get a natural \(\mathcal{O}_{L}\)-module map
\[\rho: T\mathcal{G}\to I_{\phi=[\pi]}\] \[ae\mapsto[a\pi^{n}](\omega)\]
and by composition an \(\mathcal{O}_{L}\)-module map \(T\mathcal{G}\to\mathfrak{S}_{L}\{1\}\).
## 5 Laurent \(F\)-crystals
In this section we introduce etale \(\varphi_{q}\)-modules over \(L\)-typical prisms and Laurent \(F\)-crystals, and we prove theorem 1.3. In §5.1 we show that the equivalence \(\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},et}\simeq\operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},et}\) is in fact a special case of an equivalence
\[\operatorname{Mod}_{(A,I)}^{\varphi_{q},et}\simeq\operatorname{Mod}_{(A,I)_{ \operatorname{perf}}}^{\varphi_{q},et}\]
between categories of etale \(\varphi_{q}\)-modules which make sense for any \(L\)-typical prism \((A,I)\). In §5.2, we define Laurent \(F\)-crystals in the \(L\)-typical prismatic setting; these are objects which serve as relativizations of etale \(\varphi_{q}\)-modules over a base formal scheme \(X/\mathcal{O}_{L}\). We go on to show that the category of Laurent \(F\)-crystals over \(X\) is equivalent to the category of lisse local systems on the adic generic fiber \(X_{\eta}\) with coefficients in \(\mathcal{O}_{L}\). Finally, in §5.3 we use this theory to recover the Kisin-Ren equivalence between Lubin-Tate \((\varphi_{q},\Gamma)\)-modules and continuous \(G_{K}\) representations over \(\mathcal{O}_{L}\).
### Etale \(\varphi_{q}\)-modules over \(L\)-typical prisms
Given a \(p\)-adic field \(K/L\) and a Lubin-Tate formal \(\mathcal{O}_{L}\)-module, we described in §4 prisms \((\mathbf{A}_{K}^{+},(q_{n}(\omega)))\) with perfection \((A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)\). We also saw that the categories of etale \(\varphi_{q}\)-modules over \(\mathbf{A}_{K}=\mathbf{A}_{K}^{+}[1/q_{n}(\omega)]_{(\pi)}^{\wedge}\) and \(W_{L}(K_{\infty}^{\flat})=A_{\inf}(\mathcal{O}_{K_{\infty}})[1/\ker\theta]_{(\pi)}^{\wedge}\) were equivalent. In fact, this reflects a general fact about categories of \(\varphi_{q}\)-modules over \(L\)-typical prisms, which we prove here.
The definition of \(\varphi_{q}\)-modules in this setting is as follows.
**Definition 5.1**.:
1. Let \(\mathcal{A}\) be a ring together with a ring homomorphism \(\varphi:\mathcal{A}\to\mathcal{A}\). An etale \(\varphi\)-module over \(\mathcal{A}\) is a finite projective \(\mathcal{A}\)-module \(M\) equipped with an isomorphism \[\varphi_{M}:\varphi^{*}M:=\mathcal{A}\otimes_{\varphi,\mathcal{A}}M\stackrel{{ \sim}}{{\longrightarrow}}M.\] This gives us a \(\varphi\)-semilinear map \(M\to M\) via \(m\mapsto\varphi_{M}(1\otimes m)\); we will abuse notation and write \(\varphi_{M}\) also for this map. With morphisms the \(\mathcal{A}\)-module homomorphisms commuting with the \(\varphi_{M}\)'s, etale \(\varphi\)-modules over \(\mathcal{A}\) form a category \(\operatorname{Mod}_{\mathcal{A}}^{\varphi,et}\).
2. Let \((A,I)\) be a bounded \(L\)-typical prism. Then an etale \(\varphi_{q}\)-module over \((A,I)\) is an etale \(\varphi=\phi_{\mathcal{A}}\)-module over \(\mathcal{A}=A[\frac{1}{I}]_{(\pi)}^{\wedge}\) in the sense of (1). In other words, it is a finite projective \(\mathcal{A}\)-module \(M\) with an isomorphism \(\varphi_{M}:\varphi^{*}M\stackrel{{\sim}}{{\rightarrow}}M\) (which we also view as a \(\varphi\)-semilinear endomorphism of \(M\)). We denote the resulting category by \(\operatorname{Mod}_{(A,I)}^{\varphi_{q},et}=\operatorname{Mod}_{\mathcal{A}}^ {\phi_{\mathcal{A}},et}\).
3. For the corresponding category of derived objects, let \(D_{\rm perf}({\cal A})\) denote the category of perfect complexes in modules over the ring \({\cal A}\), i.e. objects in the derived category of \({\cal A}\)-modules quasi-isomorphic to a bounded complex of finite projective \({\cal A}\)-modules. If \({\cal A}\) has an endomorphism \(\varphi\) then we write \(D_{\rm perf}({\cal A})^{\varphi=1}\) for the category of pairs \((E,\varphi_{E})\) where \(E\in D_{\rm perf}({\cal A})\) and \(\varphi_{E}:\varphi^{*}E\stackrel{{\sim}}{{\to}}E\).
On the representation-theory side, the appropriate generalization of \(G_{K_{\infty}}\)-representations on finite free \({\mathbb{Z}}_{p}\)-modules is \({\cal O}_{L}\)-local systems on \({\rm Spec}({\cal A}/\pi)\). Recall that this means the following.
**Definition 5.2**.: (c.f. [38, definition 8.1].) Let \(X\) be a scheme, formal scheme, or adic space, and denote by \(X_{et}\) the etale site of \(X\).
1. For \(n\geq 1\), an \({\cal O}_{L}/\pi^{n}\)-local system on \(X_{et}\) is a sheaf of flat \({\cal O}_{L}/\pi^{n}\)-modules on \(X_{et}\) which is locally a constant sheaf associated to a finitely generated \({\cal O}_{L}/\pi^{n}\)-module. We denote this category by \({\rm Loc}_{{\cal O}_{L}/\pi^{n}}(X)\).
2. An \({\cal O}_{L}\)-local system on \(X_{et}\) is an inverse system \(({\mathbb{L}}_{n})_{n\geq 1}\) of \({\cal O}_{L}/\pi^{n}\)-local systems on \(X_{et}\) in which the transition maps induce isomorphisms \({\mathbb{L}}_{n+1}/\pi^{n}\stackrel{{\sim}}{{\to}}{\mathbb{L}}_{n}\). We denote this category by \({\rm Loc}_{{\cal O}_{L}}(X)\). This identifies with the category of lisse \(\hat{\cal O}_{L}\)-sheaves on the pro-etale site \(X_{proet}\).
3. Let \(D^{b}_{lisse}(X,{\cal O}_{L})\) be the subcategory of the derived category of \(\hat{\cal O}_{L}\)-modules on \(X_{proet}\) spanned by objects \(T\) which are locally bounded, derived \(\pi\)-complete, and have \(H^{i}(X_{proet},T/\pi)\) locally constant with finitely generated stalks.
When \(X={\rm Spec}\,R\) is affine, we simplify notation by writing \({\rm Loc}_{{\cal O}_{L}}(R)\) for \({\rm Loc}_{{\cal O}_{L}}({\rm Spec}\,R)\) and similarly for \(D^{b}_{lisse}\).
**Remark 5.3**.: For a field \(K\) we have equivalences \({\rm Loc}_{{\cal O}_{L}/\pi^{n}}(K)\cong{\rm Rep}_{{\cal O}_{L}/\pi^{n}}(G_{K})\) and \({\rm Loc}_{{\cal O}_{L}}(K)\cong{\rm Rep}_{{\cal O}_{L}}(G_{K})\) with the categories of continuous \(G_{K}\)-representations on finite free \({\cal O}_{L}/\pi^{n}\)- or \({\cal O}_{L}\)-modules.
The main result of this section is as follows.
**Proposition 5.4**.: _Let \((A,I)\) be a bounded \(L\)-typical prism. Let \((A_{\rm perf},IA_{\rm perf})\) be the perfection of \((A,I)\) as in proposition 3.14. Then base change gives an equivalence_
\[{\rm Mod}^{\varphi_{q},et}_{(A,I)} \longrightarrow{\rm Mod}^{\varphi_{q},et}_{(A_{\rm perf},IA_{\rm perf })}\] \[M \mapsto A_{\rm perf}[\tfrac{1}{I}]^{\wedge}\otimes_{A[\tfrac{1}{I}]^{ \wedge}}M.\]
_Both of these categories are in turn equivalent to \({\rm Loc}_{{\cal O}_{L}}(A[\tfrac{1}{I}]/\pi)\). We similarly have equivalences_
\[D_{\rm perf}(A[\tfrac{1}{I}]^{\wedge}_{(\pi)})^{\phi=1}\simeq D_{\rm perf}(A_{ \rm perf}[\tfrac{1}{I}]^{\wedge}_{(\pi)})^{\phi=1}\simeq D^{b}_{lisse}(A[\tfrac {1}{I}]/\pi,{\cal O}_{L}).\]
**Remark 5.5**.: If \((A,I)=(W_{L}(R^{\flat}),\ker\theta)\) is a perfect \(L\)-typical prism, then the equivalence of the theorem is given by
\[\operatorname{Mod}_{(A,I)}^{\varphi_{q},et} \simeq\operatorname{Loc}_{\mathcal{O}_{L}}(R[\tfrac{1}{\pi}])\] \[M \mapsto\left(R[\tfrac{1}{\pi}]_{et}\ni S\mapsto\left(W_{L}(S^{ \flat})\otimes_{W_{L}(R[\tfrac{1}{\pi}]^{\flat})}M/\pi^{n}\right)^{\phi=1} \right)_{n\geq 1}.\]
where \((-)^{\phi=1}\) denotes taking fixed points for \(\phi=\phi_{W_{L}(S^{\flat})}\otimes\phi_{M}\). The same formula holds for the derived categories, with the tensor replaced by \(\otimes^{L}\) and with the inverse system replaced with \(R\operatorname{lim}\) of the inverse system.
**Remark 5.6**.: Theorem 4.13 follows from proposition 5.4: taking \((A,I)=(\mathbf{A}_{K}^{+},(q_{n}(\omega)))\), we get
\[\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},et}\simeq\operatorname{Mod}_ {W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},et}\simeq\operatorname{Loc}_{ \mathcal{O}_{L}}(K_{\infty}^{\flat})\simeq\operatorname{Rep}_{\mathcal{O}_{L} }(G_{K_{\infty}}).\]
We will discuss this point further in §5.3.
The key input to the proof of proposition 5.4 is the following comparison between \(\pi\)-torsion \(\varphi_{q}\)-modules and \(\mathbb{F}_{q}\)-local systems.
**Lemma 5.7**.: _Let \(R\) be an \(\mathbb{F}_{q}\)-algebra. Then there is an equivalence of categories_

\[\operatorname{Mod}_{R}^{\varphi_{q},et} \simeq\operatorname{Loc}_{\mathbb{F}_{q}}(R)\] \[M \mapsto(R_{et}\ni S\mapsto S\otimes_{R}M)^{\varphi_{q}=1},\]

_with inverse given by \(T\mapsto(\mathcal{O}_{R,et}\otimes_{\mathbb{F}_{q}}T)(R)\)._
_The corresponding derived statement \(D_{\operatorname{perf}}(R)^{\varphi_{q}=1}\simeq D^{b}_{lisse}(R,\mathbb{F}_{q})\) also holds._
Proof.: Using the same argument as in [8, proposition 3.6], we reduce the derived statement to the statement \(\operatorname{Mod}_{R}^{\varphi_{q},et}\simeq\operatorname{Loc}_{\mathbb{F}_{q}}(R)\). But this is well known and due originally to Katz [25, proposition 4.1.1].
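To illustrate the dictionary in the simplest nontrivial case (a standard example, recorded here only for orientation and not taken from [25] verbatim), let \(M=R\cdot e\) be free of rank one with \(\varphi_{M}(e)=ae\) for a unit \(a\in R^{\times}\). For an etale \(R\)-algebra \(S\), the \(\varphi_{q}\)-fixed points of \(S\otimes_{R}M\) are the solutions of

\[as^{q}=s,\qquad s\in S.\]

The polynomial \(aX^{q}-X\) is additive and separable (its derivative is \(-1\)), so \(\operatorname{Spec}R[X]/(aX^{q}-X)\) is finite etale of degree \(q\) over \(\operatorname{Spec}R\) and its solution functor is, etale-locally, an \(\mathbb{F}_{q}\)-vector space of dimension one; this is the rank one \(\mathbb{F}_{q}\)-local system corresponding to \(M\), and for \(a=1\) one recovers the constant sheaf \(\mathbb{F}_{q}\).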
Proof of proposition 5.4.: We explain the proof for \(\operatorname{Mod}_{(A,I)}^{\varphi_{q},et}\), with the derived version being identical. First, we show that the base change functor is an equivalence. By the \(\pi\)-adic completeness of \(A[\tfrac{1}{I}]^{\wedge}_{(\pi)}\) and devissage, we reduce to the \(\pi\)-torsion case, i.e. to showing that base change gives an equivalence
\[\operatorname{Mod}_{A[\tfrac{1}{I}]/\pi}^{\varphi_{q},et}\xrightarrow{\ \sim\ }\operatorname{Mod}_{A_{\operatorname{perf}}[\tfrac{1}{I}]/\pi}^{\varphi_{q},et}.\]
Applying lemma 5.7 with \(R=A[\tfrac{1}{I}]/\pi\), we are reduced to showing that base change gives an equivalence
\[\operatorname{Loc}_{\mathbb{F}_{q}}(A/\pi[\tfrac{1}{I}])\simeq\operatorname{ Loc}_{\mathbb{F}_{q}}(A_{\operatorname{perf}}/\pi[\tfrac{1}{I}]).\]
As \(I\) is a Cartier divisor, we may assume that \(I\) is generated by a nonzerodivisor \(d\in A\). Then this equivalence holds since the maps
\[A/\pi[\tfrac{1}{d}]\longrightarrow(\varinjlim_{\varphi_{q}}A/\pi)[\tfrac{1}{d}] \longrightarrow(\varinjlim_{\varphi_{q}}A/\pi)^{\wedge}_{(d)}[\tfrac{1}{d}]\]
induce equivalences of etale sites (the first by topological invariance of the etale site and the second by [17, proposition 5.4.53]).
For the identification with \(\operatorname{Loc}_{\mathcal{O}_{L}}(A[\tfrac{1}{I}])\), note that as \(A_{\operatorname{perf}}[\tfrac{1}{I}]^{\wedge}_{(\pi)}\) is a \(\pi\)-adically complete perfect \(\delta_{L}\)-algebra, we have \(A_{\operatorname{perf}}[\tfrac{1}{I}]^{\wedge}=W_{L}(A_{\operatorname{perf}}[ \tfrac{1}{I}]/\pi)\) by proposition 2.13. Thus by \(\pi\)-adic completeness and lemma 5.7 we get
\[\operatorname{Mod}^{\varphi_{q},et}_{(A_{\operatorname{perf}},I)}\simeq \operatorname{Loc}_{\mathcal{O}_{L}}(A_{\operatorname{perf}}[\tfrac{1}{I}]/\pi)\]
which identifies in turn with \(\operatorname{Loc}_{\mathcal{O}_{L}}(A[\tfrac{1}{I}]/\pi)\) by the same argument as above.
As a corollary of proposition 5.4, we get that the equivalence \(D_{\operatorname{perf}}(A[\tfrac{1}{I}]^{\wedge}_{(\pi)})^{\phi=1}\simeq D_{ \operatorname{perf}}(A_{\operatorname{perf}}[\tfrac{1}{I}]^{\wedge}_{(\pi)})^{ \phi=1}\) also holds "on the level of objects."
**Corollary 5.8**.: _Let \((A,I)\) be a bounded \(L\)-typical prism, and let \(M\in D_{\operatorname{perf}}(A[\tfrac{1}{I}]^{\wedge}_{(\pi)})^{\phi=1}\). Then the canonical map_
\[M^{\phi=1}\longrightarrow(A_{\operatorname{perf}}[\tfrac{1}{I}]^{\wedge}_{( \pi)}\otimes_{A[\tfrac{1}{I}]^{\wedge}_{(\pi)}}M)^{\phi=1}\]
_is an isomorphism._
Proof.: Our proof will follow [18, lemma 6.3]. First we recall how \(M^{\phi=1}\) is defined. In general, let \(B\) be a ring with an endomorphism \(\varphi\) and let \(B[F]\) be the noncommutative polynomial ring with relation \(Fb=\varphi(b)F\). Then we get a fully faithful embedding \(D_{\operatorname{perf}}(B)^{\varphi=1}\hookrightarrow D(B[F])\) into the derived category by sending \((N,\varphi_{N}:\varphi^{*}N\stackrel{{\sim}}{{\to}}N)\in D_{\operatorname{perf}}(B)^{\varphi=1}\) to the \(B[F]\)-module \(N\) with \(F\)-action given by \(N\to(\varphi_{N})_{*}N\) (this is the usual way of seeing an element of \(D_{\operatorname{perf}}(B)^{\varphi=1}\) as a \(B\)-module with a \(\varphi\)-semilinear endomorphism). Then \(N^{\varphi=1}\) is defined by
\[N^{\varphi=1}:=R\mathrm{Hom}(B[F]/(1-F)B[F],N).\]
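For instance (a standard unraveling, recorded here for the reader's convenience), right multiplication by \(1-F\) is a map of left \(B[F]\)-modules and gives a two-term free resolution of the quotient, so for \(N\) a module placed in degree \(0\) the definition evaluates to the two-term complex

\[N^{\varphi=1}\simeq\left(N\xrightarrow{\ 1-\varphi_{N}\ }N\right)\]

concentrated in degrees \(0\) and \(1\); up to a sign this is the description \(\operatorname{Cone}(\varphi_{N}-1)[-1]\) used later in the proof of theorem 5.16.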
Thus, setting \(\mathcal{A}=A[\tfrac{1}{I}]^{\wedge}_{(\pi)}\) and \(\mathcal{A}_{\operatorname{perf}}=A_{\operatorname{perf}}[\tfrac{1}{I}]^{ \wedge}_{(\pi)}\) to simplify notation, our goal is to show that
\[R\mathrm{Hom}(\mathcal{A}[F]/(1-F),M)\longrightarrow R\mathrm{Hom}( \mathcal{A}_{\operatorname{perf}}[F]/(1-F),\mathcal{A}_{\operatorname{perf}} \otimes_{\mathcal{A}}M)\]
is an isomorphism. As this can be checked on cohomology and \(D_{\operatorname{perf}}(\mathcal{A})\) is closed under shifting, it thus suffices to show that
\[\mathrm{Hom}(\mathcal{A}[F]/(1-F),M)\longrightarrow\mathrm{Hom}(\mathcal{A}_ {\operatorname{perf}}[F]/(1-F),\mathcal{A}_{\operatorname{perf}}\otimes_{ \mathcal{A}}M)\]
is an isomorphism. But, up to the fully faithful embedding, the hom-set on the right is obtained from the one on the left by applying the functor \(M\mapsto\mathcal{A}_{\operatorname{perf}}\otimes_{\mathcal{A}}M\), which is an equivalence by proposition 5.4. Thus the hom-sets are isomorphic, as desired.
### Laurent \(F\)-crystals
For a bounded formal scheme \(X\) adic over \(\operatorname{Spf}\mathcal{O}_{L}\), denote by \(\mathcal{O}_{\underline{\mathbb{A}}}\) the presheaf \((A,I)\mapsto A\) on the \(L\)-typical prismatic site \(X_{\underline{\mathbb{A}}_{L}}\). By \((\pi,I)\)-completely faithfully flat descent (see [7, corollary 3.12]), \(\mathcal{O}_{\underline{\mathbb{A}}}\) is a sheaf, which we take as the structure sheaf for \(X_{\underline{\mathbb{A}}_{L}}\). It has a natural endomorphism \(\phi\) lifting \(\varphi_{q}\) on \(\mathcal{O}_{\underline{\mathbb{A}}}/\pi\) and an ideal sheaf \(\mathcal{I}\subseteq\mathcal{O}_{\underline{\mathbb{A}}}\) given by \((A,I)\mapsto I\). We will also make use of the sheaf \(\mathcal{O}_{\underline{\mathbb{A}},\operatorname{perf}}\), which sends \((A,I)\mapsto A_{\operatorname{perf}}\).
Denote by \(\mathcal{O}_{\underline{\mathbb{A}}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)}\) the \(\pi\)-adic completion of the localization of \(\mathcal{O}_{\underline{\mathbb{A}}}\) away from \(\mathcal{I}\) (i.e. locally inverting a generator of \(\mathcal{I}\); recall that if \((A,I)\) is a prism then \(I\) is a Cartier divisor hence locally principal).
**Definition 5.9**.: Let \(X\) be a bounded formal scheme adic over \(\operatorname{Spf}\mathcal{O}_{L}\).
1. A Laurent \(F\)-crystal is a finite locally free \(\mathcal{O}_{\underline{\mathbb{A}}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)}\)-module \(\mathcal{M}\) over \(X_{\underline{\mathbb{A}}_{L}}\) equipped with an isomorphism \[F:\phi^{*}\mathcal{M}\stackrel{{\sim}}{{\longrightarrow}}\mathcal{M}.\] As before, we abusively write \(\phi_{\mathcal{M}}:\mathcal{M}\to\mathcal{M}\) also for the resulting \(\phi\)-semilinear endomorphism.
2. Writing \(\operatorname{Vect}(\mathcal{X},\mathcal{O})\) for the category of vector bundles on a ringed topos \((\mathcal{X},\mathcal{O})\), we can describe the category of Laurent \(F\)-crystals over \(X_{\underline{\mathbb{A}}_{L}}\) as \(\operatorname{Vect}(X_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\), the category of \(\phi\)-fixed objects of \(\operatorname{Vect}(X_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})\).
3. Similarly, write \(D_{\operatorname{perf}}(\mathcal{X},\mathcal{O})\) for the category of perfect complexes on \((\mathcal{X},\mathcal{O})\), i.e. objects \(E\) in the derived category of \(\mathcal{O}\)-modules over \(\mathcal{X}\) such that there is a cover \(\{U_{i}\}\) of \(\mathcal{X}\) with each \(E|_{U_{i}}\) a perfect complex of \(\mathcal{O}(U_{i})\)-modules. Let \(D_{\operatorname{perf}}(\mathcal{X},\mathcal{O})^{\phi=1}\) denote the corresponding category of \(\phi\)-fixed objects.
Given a Laurent \(F\)-crystal \(\mathcal{M}\) and an object \((A,I)\in X_{\underline{\mathbb{A}}_{L}}\), we have that \(\mathcal{M}(A,I)\in\operatorname{Mod}^{\varphi_{q},et}_{(A,I)}\) is an etale \(\varphi_{q}\)-module. We further have the following.
**Lemma 5.10**.: _There is an equivalence_
\[\operatorname{Vect}(X_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{ \underline{\mathbb{A}}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1} \stackrel{{\sim}}{{\longrightarrow}}\lim_{(A,I)\in X_{\underline{ \mathbb{A}}_{L}}}\operatorname{Mod}^{\varphi_{q},et}_{(A,I)}\] \[\mathcal{M}\mapsto(\mathcal{M}(A,I))_{(A,I)\in X_{\underline{ \mathbb{A}}_{L}}}.\]
_Similarly \(D_{\operatorname{perf}}(X_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{ \underline{\mathbb{A}}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1} \simeq\lim_{(A,I)\in X_{\underline{\mathbb{A}}_{L}}}D_{\operatorname{perf}}(A[ \frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\). A similar result holds with \(\mathcal{O}_{\underline{\mathbb{A}},\operatorname{perf}}\) replacing \(\mathcal{O}_{\underline{\mathbb{A}}}\) (and \(\operatorname{Mod}^{\varphi_{q},et}_{(A,I)_{\operatorname{perf}}}\) replacing \(\operatorname{Mod}^{\varphi_{q},et}_{(A,I)}\))._
Proof.: The proof is the same as [8, proposition 2.7]: one can reduce via devissage to the \(\pi\)-torsion case, where the result follows from the descent results in [32, theorem 5.8].
We regard Laurent \(F\)-crystals as (geometrically) relativizing etale \(\varphi_{q}\)-modules over the base formal scheme \(X\). We then have the following analogues of proposition 5.4 and corollary 5.8 (except without the local systems, which will appear in theorem 5.15).
**Theorem 5.11**.: _Let \(X\) be a bounded formal scheme adic over \(\operatorname{Spf}\mathcal{O}_{L}\)._
1. _Base change induces an equivalence of categories_ \[\operatorname{Vect}(X_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{ \underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1} \stackrel{{\sim}}{{\longrightarrow}}\operatorname{Vect}(X_{ \underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}},\operatorname {perf}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\] \[\mathcal{M} \mapsto\mathcal{O}_{\underline{\mathbb{A}},\operatorname{perf}}[ \tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)}\otimes_{\mathcal{O}_{\underline{ \mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)}}\mathcal{M}\] _and the same holds with_ \(D_{\operatorname{perf}}\) _replacing_ \(\operatorname{Vect}\)_._
2. _For_ \(\mathcal{M}\in D_{\operatorname{perf}}(X_{\underline{\mathbb{A}}_{L}}, \mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)} )^{\phi=1}\)_, the canonical map_ \[\mathcal{M}^{\phi=1}\longrightarrow(\mathcal{O}_{\underline{\mathbb{A}}, \operatorname{perf}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)}\otimes_{ \mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)} }\mathcal{M})^{\phi=1}\] _is an isomorphism._
Proof.: For part (1), we have the following commutative diagram:

\[\begin{array}{ccc}\operatorname{Vect}(X_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}&\longrightarrow&\operatorname{Vect}(X_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}},\operatorname{perf}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\\ \downarrow&&\downarrow\\ \lim_{(A,I)\in X_{\underline{\mathbb{A}}_{L}}}\operatorname{Mod}^{\varphi_{q},et}_{(A,I)}&\longrightarrow&\lim_{(A,I)\in X_{\underline{\mathbb{A}}_{L}}}\operatorname{Mod}^{\varphi_{q},et}_{(A,I)_{\operatorname{perf}}}\end{array}\]
By lemma 5.10 the vertical arrows are equivalences of categories, and the bottom horizontal arrow is an equivalence by proposition 5.4. The same holds replacing \(\operatorname{Vect}\) with \(D_{\operatorname{perf}}\) and \(\operatorname{Mod}^{\varphi_{q},et}_{(A,I)}\) with \(D_{\operatorname{perf}}(A[\tfrac{1}{I}]^{\wedge}_{(\pi)})^{\phi=1}\). For part (2), we can again check on individual prisms \((A,I)\in X_{\underline{\mathbb{A}}_{L}}\), in which case the result follows from corollary 5.8.
Let \(X_{\underline{\mathbb{A}}_{L}}^{\operatorname{perf}}\) denote the subsite of \(X_{\underline{\mathbb{A}}_{L}}\) consisting of perfect \(L\)-typical prisms.
**Corollary 5.12**.: _For \(X\) a bounded formal scheme adic over \(\operatorname{Spf}\mathcal{O}_{L}\) we have_
\[\operatorname{Vect}(X_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{ \mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\simeq\lim_{(A,I)\in X^{\operatorname{perf}}_{\underline{\mathbb{A}}_{L}}}\operatorname{Mod }^{\varphi_{q},et}_{(A,I)}\]
_and similarly for \(D_{\operatorname{perf}}\)._
Proof.: This follows from theorem 5.11(1), lemma 5.10, and the fact that for \(\mathcal{M}\in D_{\operatorname{perf}}(X_{\underline{\mathbb{A}}_{L}}, \mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi) })^{\phi=1}\) and \((A,I)\in X_{\underline{\mathbb{A}}_{L}}\) we have
\[\mathcal{M}((A,I)_{\operatorname{perf}})\cong A_{\operatorname{perf}}[\tfrac{1 }{I}]^{\wedge}_{(\pi)}\otimes_{A[\tfrac{1}{I}]^{\wedge}_{(\pi)}}\mathcal{M}(A,I).\]
We now globalize the relationship between etale \(\varphi_{q}\)-modules and local systems from proposition 5.4. We've essentially already shown this in the case that \(X=\operatorname{Spf}(R)\) for a perfectoid \(\mathcal{O}_{L}\)-algebra:
**Proposition 5.13**.: _If \(R\) is a perfectoid \(\mathcal{O}_{L}\)-algebra, there are equivalences_
\[\operatorname{Vect}(R_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{ \mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\simeq \operatorname{Mod}^{\varphi_{q},et}_{(A_{\inf}(R),\ker\theta)}\simeq \operatorname{Loc}_{\mathcal{O}_{L}}(R[\tfrac{1}{\pi}])\]
_and \(D_{\operatorname{perf}}(R_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline {\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\simeq D_{ \operatorname{perf}}(W_{L}(R[\tfrac{1}{\pi}]^{\flat}))^{\phi=1}\simeq D^{b}_{ \operatorname{lisse}}(R[\tfrac{1}{\pi}],\mathcal{O}_{L})\)._
Proof.: By proposition 3.14(2), \(R^{\operatorname{perf}}_{\underline{\mathbb{A}}_{L}}\) has an initial object \((A_{\inf}(R),\ker\theta)\). By corollary 5.12 and proposition 5.4, we then have that
\[\operatorname{Vect}(R_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\simeq\operatorname{Loc}_{\mathcal{O}_{L}}(A_{\inf}(R)[\tfrac{1}{\ker\theta}]/\pi).\]
In the proof of proposition 3.15, we showed that \(\ker\theta\) has a generator of the form \(d=[a_{0}]-\pi u\) for \(a_{0}\in R^{\flat}\) such that \(R^{\flat}\) is \(a_{0}\)-adically complete and \(a_{0}^{\sharp}=\pi\). Thus \(A_{\inf}(R)[\tfrac{1}{\ker\theta}]/\pi\cong R^{\flat}[\tfrac{1}{a_{0}}]\), and we conclude by the tilting equivalence.
**Corollary 5.14**.: _If \(R\) is perfectoid and \(\mathcal{M}\in D_{\operatorname{perf}}(R_{\underline{\mathbb{A}}_{L}}, \mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi) })^{\phi=1}\) corresponds to \(T\in D^{b}_{\operatorname{lisse}}(R[\tfrac{1}{\pi}]_{et},\mathcal{O}_{L})\) under the equivalence of proposition 5.13, then there is an isomorphism_
\[R\Gamma(R_{\underline{\mathbb{A}}_{L}},\mathcal{M})^{\phi=1}\cong R\Gamma(R[\tfrac{1}{\pi}]_{proet},T).\]
Proof.: Since the map \(D_{\operatorname{perf}}(R_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline {\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\to D_{ \operatorname{perf}}(W_{L}(R[\tfrac{1}{\pi}]^{\flat}))^{\phi=1}\) is given by \(\mathcal{M}\mapsto R\Gamma(R_{\underline{\mathbb{A}}_{L}},\mathcal{M})\), this follows from the description of the map \(D_{\operatorname{perf}}(W_{L}(R[\tfrac{1}{\pi}]^{\flat}))^{\phi=1}\to D^{b}_{ \operatorname{lisse}}(R[\tfrac{1}{\pi}],\mathcal{O}_{L})\) given in remark 5.5.
The following theorem globalizes proposition 5.13 by descent from the affine perfectoid case. This generalizes [8, cor 3.8]. More specifically, we will use v-descent: by [39, §15], \(X_{\eta}\) can be viewed as a locally spatial diamond, so that the categories \(\operatorname{Loc}_{\mathcal{O}_{L}}(X_{\eta})\) and \(D^{b}_{lisse}(X_{\eta},\mathcal{O}_{L})\) satisfy v-descent with respect to v-covers of \(X_{\eta}\) (i.e. covers by surjective maps of v-sheaves; see [31] and especially [31, theorem 3.11] for the relationship between local systems on the diamondification of \(X_{\eta}\) and pro-etale local systems on \(X_{\eta}\)). By [39, lemma 15.3], any analytic adic space has a v-cover by generic fibers of perfectoid rings; by theorem 3.18 this also gives a v-cover by perfectoid \(\mathcal{O}_{L}\)-algebras.
For now this globalization will result in losing the etale \(\varphi_{q}\)-module part of the result; that part will be restored in the special case \(X=\operatorname{Spf}\mathcal{O}_{K}\) in §5.3.
**Theorem 5.15**.: _Let \(X\) be a formal scheme adic over \(\operatorname{Spf}\mathcal{O}_{L}\) with adic generic fiber \(X_{\eta}\) over \(\operatorname{Spa}(L,\mathcal{O}_{L})\)._
1. _There are equivalences of categories_ \[\operatorname{Vect}(X_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1} \simeq\operatorname{Loc}_{\mathcal{O}_{L}}(X_{\eta}),\qquad\text{and}\] \[D_{\operatorname{perf}}(X_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1} \simeq D^{b}_{lisse}(X_{\eta},\mathcal{O}_{L}).\]
2. _Let_ \(\mathcal{M}\in D_{\operatorname{perf}}(X_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\) _and_ \(T\in D^{b}_{lisse}(X_{\eta},\mathcal{O}_{L})\) _correspond under the above equivalence. Then there is an isomorphism_ \[R\Gamma(X_{\underline{\mathbb{A}}_{L}},\mathcal{M})^{\phi=1}\cong R\Gamma(X_{\eta,proet},T).\]
Note that if \(\pi=0\) on \(X\), then the theorem is trivial: \(X_{\eta}=\operatorname{Spf}0\) and \(\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)}=0\) (since \(\mathcal{I}\) is the ideal sheaf generated by \(\pi\)).
Proof.: For part (1), we have
\[\operatorname{Vect}(X_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1} \simeq\lim_{(A,I)\in X_{\underline{\mathbb{A}}_{L}}^{\operatorname{perf}}}\operatorname{Mod}_{(A,I)}^{\varphi_{q},et}\] \[\simeq\lim_{\begin{subarray}{c}\operatorname{Spf}R\to X\\ R\text{ perf }\mathcal{O}_{L}\text{-alg}\end{subarray}}\operatorname{Loc}_{\mathcal{O}_{L}}(R[1/\pi])\] \[\simeq\operatorname{Loc}_{\mathcal{O}_{L}}(X_{\eta})\]
where the first equivalence is corollary 5.12, the second is proposition 5.13 and proposition 3.11, and the final equivalence is by v-descent. The same argument works for the derived categories.
The proof of part (2) is formally identical:
\[R\Gamma(X_{\underline{\mathbb{A}}_{L}},\mathcal{M})^{\phi=1} \cong\lim_{(A,I)\in X_{\underline{\mathbb{A}}_{L}}^{\mathrm{perf}}}R\Gamma((X/A)_{\underline{\mathbb{A}}_{L}},\mathcal{M})^{\phi=1}\] \[\cong\lim_{\begin{subarray}{c}\mathrm{Spf}\,R\to X\\ R\ \mathrm{perf}\ \mathcal{O}_{L}\text{-alg}\end{subarray}}R\Gamma(R[\tfrac{1}{\pi}]_{proet},T)\] \[\cong R\Gamma(X_{\eta,proet},T)\]
where \((X/A)_{\underline{\mathbb{A}}_{L}}\) denotes the relative prismatic site, whose objects are the \((B,J)\in X_{\underline{\mathbb{A}}_{L}}\) equipped with a map from \((A,I)\) compatible with the maps \(\mathrm{Spf}(A/I),\mathrm{Spf}(B/J)\to X\), and we're now using corollary 5.14 instead of proposition 5.13.
### Lubin-Tate etale \((\varphi_{q},\Gamma)\)-modules and Laurent \(F\)-crystals
Let \(K/L\) be a \(p\)-adic field. Recall that work of Kisin-Ren [29] gives an equivalence between Lubin-Tate etale \((\varphi_{q},\Gamma_{K})\)-modules over \(\mathbf{A}_{K}\) and continuous \(G_{K}\)-representations over \(\mathcal{O}_{L}\), together with an equivalence between etale \(\varphi_{q}\)-modules over \(\mathbf{A}_{K}\) and continuous \(G_{K_{\infty}}\)-representations.
In this section, we show that theorem 5.15(1) specializes to the latter equivalence when \(X=\mathrm{Spf}(\mathcal{O}_{K_{\infty}})\) and to the former when \(X=\mathrm{Spf}(\mathcal{O}_{K})\). We'll further find that the comparison morphism in theorem 5.15(2) recovers the results on \(\varphi_{q}\)-Herr complexes from [30, theorem A].
We begin with the case \(X=\mathrm{Spf}(\mathcal{O}_{K_{\infty}})\).
**Theorem 5.16**.: _Let \(K/L\) be a \(p\)-adic field._
1. _There are equivalences of categories_ \[\mathrm{Mod}^{\varphi_{q},et}_{\mathbf{A}_{K}}\simeq\mathrm{Mod}^{\varphi_{q},et}_{W_{L}(K_{\infty}^{\flat})}\simeq\mathrm{Vect}((\mathcal{O}_{K_{\infty}})_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\simeq\mathrm{Rep}_{\mathcal{O}_{L}}(G_{K_{\infty}}).\] _For the derived category, we similarly have_ \[D_{\mathrm{perf}}(\mathbf{A}_{K})^{\phi=1}\simeq D_{\mathrm{perf}}(W_{L}(K_{\infty}^{\flat}))^{\phi=1}\simeq D_{\mathrm{perf}}((\mathcal{O}_{K_{\infty}})_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\simeq D^{b}_{lisse}(K_{\infty,proet},\mathcal{O}_{L}).\]
2. _For_ \(T\in\mathrm{Rep}_{\mathcal{O}_{L}}(G_{K_{\infty}})\) _corresponding to_ \(M\in\mathrm{Mod}^{\varphi_{q},et}_{\mathbf{A}_{K}}\) _or_ \(\mathrm{Mod}^{\varphi_{q},et}_{W_{L}(K_{\infty}^{\flat})}\) _under the equivalence from (1), we have that_ \(R\Gamma(K_{\infty,proet},T)\) _is isomorphic to the complex_ \[M\stackrel{{\phi-1}}{{\longrightarrow}}M\] _concentrated in degrees_ \(0\) _and_ \(1\)_._
Proof.: By proposition 3.14, \((\mathcal{O}_{K_{\infty}})_{\underline{\mathbb{A}}_{L}}^{\mathrm{perf}}\) has an initial object given by \((W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}),\ker\theta)\). Thus by corollary 5.12 we get the equivalence \(\mathrm{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},et}\simeq\mathrm{Vect}(( \mathcal{O}_{K_{\infty}})_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline {\mathbb{A}}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\). Then proposition 5.4 and theorem 5.15 give the first part of part (1). The argument for the derived categories is identical.
For part (2) note that, viewing \(M\) as a complex concentrated in degree \(0\), we have
\[M^{\phi=1}:=\mathrm{Cone}(\phi_{M}-1)[-1]=\left(M\stackrel{{\phi -1}}{{\longrightarrow}}M\right).\]
Thus by corollary 5.8, it suffices to prove part (2) for \(M\in\mathrm{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},et}\) corresponding to \(T\). Letting \(\mathcal{M}\in\mathrm{Vect}((\mathcal{O}_{K_{\infty}})_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\frac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\) correspond to \(T\) and \(M\), we have by theorem 5.15(2) that \(R\Gamma((\mathcal{O}_{K_{\infty}})_{\underline{\mathbb{A}}_{L}},\mathcal{M})^{\phi=1}\cong R\Gamma(K_{\infty,proet},T)\). Thus it suffices to show that \(R\Gamma((\mathcal{O}_{K_{\infty}})_{\underline{\mathbb{A}}_{L}},\mathcal{M})\cong M\); this is given by the following lemma.
**Lemma 5.17**.: _If \(\mathcal{M}\in\mathrm{Vect}((\mathcal{O}_{K_{\infty}})_{\underline{\mathbb{A} }_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\frac{1}{\mathcal{I}}]^{\wedge}_{ (\pi)})^{\phi=1}\) then_
\[R\Gamma((\mathcal{O}_{K_{\infty}})_{\underline{\mathbb{A}}_{L}},\mathcal{M}) \cong\Gamma((\mathcal{O}_{K_{\infty}})_{\underline{\mathbb{A}}_{L}},\mathcal{M}).\]
Proof.: We want to show that \(H^{i}((\mathcal{O}_{K_{\infty}})_{\underline{\mathbb{A}}_{L}},\mathcal{M})=0\) for \(i\geq 1\). Indeed, by derived \(\pi\)-completeness and derived Nakayama [40, Tag 0G1U], it suffices to show this upon replacing \(\mathcal{M}\) with \(\mathcal{M}/\pi\). By corollary 5.12 we can compute cohomology on the site \((\mathcal{O}_{K_{\infty}})_{\underline{\mathbb{A}}_{L}}^{\mathrm{perf}}\), which identifies with the category of perfectoid \(\mathcal{O}_{L}\)-algebras over \(\mathcal{O}_{K_{\infty}}\) by proposition 3.11. Under this identification, \(\mathcal{M}/\pi\) is the sheaf which sends a perfectoid \(\mathcal{O}_{L}\)-algebra \(S\) over \(\mathcal{O}_{K_{\infty}}\) to
\[\mathcal{M}(A_{\inf}(S),\ker\theta)/\pi=S[\frac{1}{\pi}]^{\flat}\otimes_{K_{ \infty}^{\flat}}\mathcal{M}(A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)/\pi.\]
Thus it suffices to show that the sheaf which sends a perfectoid \(\mathcal{O}_{L}\)-algebra \(S\) over \(\mathcal{O}_{K_{\infty}}\) to \(S[\frac{1}{\pi}]^{\flat}\) has vanishing higher cohomology. But, via the tilting equivalence, this is just the basic fact about Galois cohomology that \(H^{i}(K_{\infty}^{\flat},\overline{K}^{\flat})=0\) for \(i\geq 1\).
Naively, we might hope to deduce the corresponding result for \(X=\mathrm{Spf}\,\mathcal{O}_{K}\) by descent along \(\mathrm{Spf}\,\mathcal{O}_{K_{\infty}}\to\mathrm{Spf}\,\mathcal{O}_{K}\). However, instead of using this angle of attack, we will use a more delicate descent argument along the Cech nerve \((W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}),\ker\theta)^{\bullet}\) in the perfect prismatic site \((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}^{\mathrm{perf}}\). This approach, which is the same as the one in [43, proof of theorem 5.2], allows us to recover a Laurent \(F\)-crystal \(\mathcal{M}\) over \((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}\) from the data of \(\mathcal{M}(W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}),\ker\theta)\) and a semilinear action of \(\mathrm{Aut}_{(\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}}(W_{L}(\mathcal{ O}_{K_{\infty}}^{\flat}),\ker\theta)\cong\Gamma_{K}\) (by proposition 4.14).
**Lemma 5.18**.: \((A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)\) _is a cover of the final object of the topos \(\mathrm{Shv}((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}})\)._
Proof.: We want to show that for any \((A,I)\in(\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}\), there is a cover \((B,J)\) of \((A,I)\) with a map \((A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)\to(B,J)\). Let \((A_{\inf}(R),\ker\theta)=(A,I)_{\operatorname{perf}}\), using proposition 3.11. As \(\mathcal{O}_{K}\to\mathcal{O}_{K_{\infty}}\) is \(\pi\)-completely faithfully flat, so is \(R\to S:=R\hat{\otimes}_{\mathcal{O}_{K}}^{L}\mathcal{O}_{K_{\infty}}\), where \(S\) is the derived \(\pi\)-completion of the derived tensor product. Using the same argument as in [4, IV, proposition 2.11], we have that \(S\) is a perfectoid \(\mathcal{O}_{L}\)-algebra. Thus by proposition 3.14 and lemma 3.12, we have that the composite
\[(A,I)\to(A_{\inf}(R),\ker\theta)\to(A_{\inf}(S),\ker\theta)\]
is a cover in \((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}\). But also from the map \(\mathcal{O}_{K_{\infty}}\to S\), we get a morphism \((A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)\to(A_{\inf}(S),\ker\theta)\) as desired.
**Lemma 5.19**.: _Let \(n\geq 1\) and let_
\[(B,J)=(A_{\inf}(\mathcal{O}_{K_{\infty}}),\ker\theta)^{(n+1)}:=(A_{\inf}( \mathcal{O}_{K_{\infty}}),\ker\theta)\times\cdots\times(A_{\inf}(\mathcal{O}_ {K_{\infty}}),\ker\theta)\]
_be the \((n+1)\)-times iterated self-product in \((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}^{\operatorname{perf}}\). Then_
\[B =\operatorname{Hom}_{\operatorname{cont}}(\Gamma_{K}^{n},W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}))\qquad\text{and}\] \[B[\tfrac{1}{J}]_{(\pi)}^{\wedge} =\operatorname{Hom}_{\operatorname{cont}}(\Gamma_{K}^{n},W_{L}(K_{\infty}^{\flat})).\]
Proof.: We first compute \(B\). By proposition 3.11, we are interested in the self-product of \(\mathcal{O}_{K_{\infty}}\) in the category of perfectoid \(\mathcal{O}_{L}\)-algebras over \(\mathcal{O}_{K}\). To compute this, let \(U=\varprojlim\operatorname{Spa}(K_{m},\mathcal{O}_{K_{m}})\) be the element of the pro-etale site \(X_{proet}\) for \(X=\operatorname{Spa}(K,\mathcal{O}_{K})\). By [38, lemma 4.10], the self-product we are looking for can be computed as \(H^{0}(U^{(n+1)},\hat{\mathcal{O}}_{X}^{+})\) where \(U^{(n+1)}=U\times_{X}\cdots\times_{X}U\). As \(U\to X\) is Galois with Galois group \(\Gamma_{K}\), we have \(U^{(n+1)}=U\times\Gamma_{K}^{n}\) where \(\Gamma_{K}^{n}\) is viewed in \(X_{proet}\) as a profinite set with trivial Galois action (cf. [38, proof of lemma 5.6]). But then [38, theorem 4.9] and [38, lemma 3.16] imply that
\[H^{0}(U\times\Gamma_{K}^{n},\hat{\mathcal{O}}_{X}^{+})=\operatorname{Hom}_{ \operatorname{cont}}(\Gamma_{K}^{n},H^{0}(U,\hat{\mathcal{O}}_{X}^{+}))= \operatorname{Hom}_{\operatorname{cont}}(\Gamma_{K}^{n},\mathcal{O}_{K_{ \infty}}).\]
It is easy to verify that tilting and taking \(W_{L}(-)\) commute with \(\operatorname{Hom}_{\operatorname{cont}}(\Gamma_{K}^{n},-)\), giving the first part of the result.
Since \(B/J\) is a perfectoid \(\mathcal{O}_{L}\)-algebra, we have \(B[\tfrac{1}{J}]_{(\pi)}^{\wedge}=W_{L}(B/J[\tfrac{1}{\pi}]^{\flat})\). We thus have
\[B[\tfrac{1}{J}]_{(\pi)}^{\wedge}=W_{L}\left(\operatorname{Hom}_{\operatorname{ cont}}(\Gamma_{K}^{n},\mathcal{O}_{K_{\infty}})[\tfrac{1}{\pi}]^{\flat}\right)= \operatorname{Hom}_{\operatorname{cont}}(\Gamma_{K}^{n},W_{L}(K_{\infty}^{ \flat}))\]
as desired.
**Theorem 5.20**.: _Let \(K/L\) be a \(p\)-adic field._
1. _There are equivalences of categories_ \[\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},\Gamma_{K},et}\simeq \operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},\Gamma_{K},et}\simeq \operatorname{Vect}((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}},\mathcal{O} _{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1} \simeq\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K})\] _and similarly for the corresponding derived categories._
2. _Let_ \(T\in\operatorname{Rep}_{\mathcal{O}_{L}}(G_{K})\) _correspond to_ \(M\in\operatorname{Mod}_{\mathbf{A}_{K}}^{\varphi_{q},\Gamma_{K},et}\) _or_ \(\operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},\Gamma_{K},et}\) _under the equivalence from (1). Let_ \(C_{\operatorname{cont}}^{\bullet}(\Gamma_{K},M)\) _denote the continuous cochain complex of_ \(\Gamma_{K}\) _with values in_ \(M\)_. Then_ \(R\Gamma(K_{proet},T)\) _is isomorphic to_ \(C_{\operatorname{cont}}^{\bullet}(\Gamma_{K},M)^{\phi=1}\)_._
Proof of theorem 5.20.: The first and last equivalences in the theorem follow from proposition 5.4 and theorem 5.15, so we focus on the equivalence \(\operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},\Gamma_{K},et}\simeq\operatorname{Vect}((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\). Since \((W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}),\ker\theta)\) is a cover of the final object \(*\) of \(\operatorname{Shv}((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}})\) by lemma 5.18, we have that

\[\operatorname{Vect}((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\simeq\varprojlim\left(\operatorname{Mod}^{\varphi_{q},et}_{(W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}),\ker\theta)}\rightrightarrows\operatorname{Mod}^{\varphi_{q},et}_{(W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}),\ker\theta)^{(2)}}\cdots\right)\]
where \((W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}),\ker\theta)^{(2)}:=(W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}),\ker\theta)\times_{*}(W_{L}(\mathcal{O}_{K_{\infty}}^{\flat}),\ker\theta)\) denotes the self product in \((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}}^{\operatorname{perf}}\) (here we have used a general fact about recovering a vector bundle from its value on the Cech nerve of a cover of the final object; see [7, footnote 10] or [43, §3] for more details). By lemma 5.19 we then get
\[\operatorname{Vect}((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\simeq\varprojlim\left(\operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},et}\rightrightarrows\operatorname{Mod}_{\operatorname{Hom}_{\operatorname{cont}}(\Gamma_{K},W_{L}(K_{\infty}^{\flat}))}^{\varphi_{q},et}\cdots\right).\]
By the same argument as for usual Galois descent, this identifies \(\operatorname{Vect}((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}},\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{(\pi)})^{\phi=1}\) with the category of etale \(\varphi_{q}\)-modules over \(W_{L}(K_{\infty}^{\flat})\) equipped with a semilinear action of \(\Gamma_{K}\) commuting with \(\phi\). But this is exactly the definition of the category \(\operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},\Gamma_{K},et}\), giving part (1).
For part (2), we can again focus on the case \(M\in\operatorname{Mod}_{W_{L}(K_{\infty}^{\flat})}^{\varphi_{q},\Gamma_{K},et}\) by corollary 5.8. For \(\mathcal{M}\in\operatorname{Vect}((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L} },\mathcal{O}_{\underline{\mathbb{A}}}[\tfrac{1}{\mathcal{I}}]^{\wedge}_{( \pi)})^{\phi=1}\) corresponding to \(T\) and \(M\), we get by the same computation as above that \(R\Gamma((\mathcal{O}_{K})_{\underline{\mathbb{A}}_{L}},\mathcal{M})\simeq C_{ \operatorname{cont}}^{\bullet}(\Gamma_{K},M)\). We then conclude by theorem 5.15(2).
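Concretely (a standard unwinding of notation, recorded here only for the reader's convenience), with the convention \(M^{\phi=1}=\operatorname{Cone}(\phi_{M}-1)[-1]\) recalled in the proof of theorem 5.16, part (2) of theorem 5.20 says that \(R\Gamma(K_{proet},T)\) is computed by the total complex of the double complex

\[C_{\operatorname{cont}}^{\bullet}(\Gamma_{K},M)\xrightarrow{\ \phi-1\ }C_{\operatorname{cont}}^{\bullet}(\Gamma_{K},M),\]

which is how one recovers the \(\varphi_{q}\)-Herr complexes of [30, theorem A] mentioned at the start of this subsection.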
|
2306.02186 | Rigorous derivation of weakly dispersive shallow water models with large
amplitude topography variations | We derive rigorously from the water waves equations new irrotational shallow
water models for the propagation of surface waves in the case of uneven
topography in horizontal dimensions one and two. The systems are made to
capture the possible change in the waves' propagation, which can occur in the
case of large amplitude topography. The main contribution of this work is the
construction of new multi-scale shallow water approximations of the
Dirichlet-Neumann operator. We prove that the precision of these approximations
is given at the order $O(\mu \varepsilon)$, $O(\mu\varepsilon +\mu^2\beta^2)$
and $O(\mu^2\varepsilon+\mu \varepsilon \beta+ \mu^2\beta^2)$. Here $\mu$,
$\varepsilon$, and $\beta$ denote respectively the shallow water parameter, the
nonlinear parameter, and the bathymetry parameter. From these approximations,
we derive models with the same precision as the ones above. The model with
precision $O(\mu \varepsilon)$ is coupled with an elliptic problem, while the
other models do not present this inconvenience. | Louis Emerald, Martin Oen Paulsen | 2023-06-03T19:45:48Z | http://arxiv.org/abs/2306.02186v2 | Rigorous derivation of weakly dispersive shallow water models with large amplitude topography variations
###### Abstract.
We derive new irrotational shallow water models for the propagation of surface waves in the case of strongly varying topography. We expect that such models can prove useful when studying the propagation of waves above obstacles, a situation where there can be a change of behavior, with the waves passing from a long wave regime to a weakly non-linear one. To that purpose, we construct multi-scale approximations of the Dirichlet-Neumann operator. Then, we make use of them to rigorously derive models of the Whitham type which are precise at the order \(O(\mu\varepsilon+\mu^{2}\beta^{2})\) or \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\). Here \(\mu\), \(\varepsilon\), and \(\beta\) denote the shallow water parameter, the nonlinear parameter, and the bathymetry parameter.
Key words and phrases: Rigorous derivation, shallow water models, multi-scale expansion, Dirichlet-Neumann operator, pseudo-differential operators. 2010 Mathematics Subject Classification: Primary: 76B15; 35Q35; 35C20
## 1. Introduction
### Motivations
This work is a continuation of the earlier paper [12] where the author rigorously derived full dispersion models, with a flat bottom, from the water waves equations in the shallow water regime with non-trivial order of precision. More specifically, let \(\mu:=\frac{H_{0}^{2}}{L^{2}}\) be the shallow water parameter and \(\varepsilon:=\frac{a_{\text{surf}}}{H_{0}}\) be the nonlinear parameter, where \(H_{0},L\) and \(a_{\text{surf}}\) are characteristic quantities of the system under study, with \(H_{0}\) the characteristic water depth, \(L\) the characteristic wavelength in the longitudinal direction and \(a_{\text{surf}}\) is the characteristic surface amplitude. Setting \(\mu,\varepsilon\in[0,1]\), the author derived, from the water waves equations, a class of Whitham-Boussinesq systems with an order of precision \(O(\mu\varepsilon)\) and a class of Whitham-Green-Naghdi systems with an order of precision \(O(\mu^{2}\varepsilon)\). These orders of precision make them good approximations of the water waves equations in both the shallow water regime, for which \(\mu\ll 1\), and the weakly non-linear regime, for which \(\varepsilon\ll 1\). This makes them good candidates to study situations where there is a change of behavior in the propagation of the waves. But these situations occur only when the bottom is not flat, so that one has to consider the effects of the topography in the analysis.
In the flat bottom case, the reader can consult [12] for a historical overview of the full dispersion models. However, for the sake of completeness, we should also mention recent works on the local well-posedness of such models: a large class of Whitham-Boussinesq systems with flat bottom has been proven [13, 18] to be well-posed on an existence time of order \(O(\frac{1}{\varepsilon})\).
In [11], the author extended the work in [12] to the case of a variable bottom, the latter being characterized by the bathymetry parameter \(\beta:=\frac{a_{\text{bott}}}{H_{0}}\), where \(a_{\text{bott}}\) is the characteristic amplitude of the bathymetry. The author derived a class of Whitham-Boussinesq systems with a precision of \(O(\mu\varepsilon+\mu\beta)\) with respect to the water waves equations, and a class of Whitham-Green-Naghdi systems with a precision of \(O(\mu^{2}\varepsilon+\mu^{2}\beta)\). For a particular Whitham-Green-Naghdi system, local well-posedness with a time of existence of order \(O(\frac{1}{\max\{\varepsilon,\beta\}})\) was proved in horizontal dimension \(1\) in [14].
In a recent paper [7], the authors considered three types of Whitham-Boussinesq models to simulate the Dingemans experiments [10]. In these experiments, the author considers the propagation of waves above obstacles, a situation where a long wave (\(\varepsilon=O(\mu)\) and \(\mu\ll 1\)) passes above an obstacle. This induces the creation of high harmonics which are then freely released (see also [4, 8]), so that at the end the wave becomes weakly nonlinear (\(\varepsilon\ll 1\) but \(\mu\) not necessarily small). The derivation of their models is based on the work of Craig _et al._ [9], where the authors construct an approximation of the Dirichlet-Neumann operator around \(\varepsilon=0\). A formal argument presented in [7] shows that their models have an order of precision \(O(\mu\varepsilon+\varepsilon^{2})\) with respect to the water waves equations. A drawback is that their models depend on the inversion of a pseudo-differential operator, which seems to create instabilities in the simulations. Moreover, if one inverts a pseudo-differential operator, it is not clear how one could quantify the error of approximation in the Sobolev spaces uniformly in the parameters \(\mu\), \(\varepsilon\) and \(\beta\), and then make the derivation rigorous with the correct order of precision.
In the present paper, we propose a new method to extend the results of [12, 11] to variable bottoms. We construct new approximations of the Dirichlet-Neumann operator at the order of precision \(O(\mu\varepsilon+\mu^{2}\beta^{2})\) or \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\), and quantify the error in the Sobolev spaces, uniformly in \(\mu,\varepsilon\) and \(\beta\). With these approximations, we derive new Whitham-Boussinesq systems with an order of precision \(O(\mu\varepsilon+\mu^{2}\beta^{2})\) with respect to the water waves equations. In addition, we derive new Whitham-Green-Naghdi systems with the order of precision \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\) with respect to the water waves equations. We also show how to get Hamiltonian versions of these systems in some cases. We emphasize the fact that the orders of precision are non-trivial in terms of the bathymetry parameter: in the case of a large variable bottom, these models have a larger range of validity when compared to the models derived in [11]. Furthermore, the systems we derive do not depend on the inversion of a pseudo-differential operator. Another advantage is that they have a quasi-linear hyperbolic structure similar to that of the full dispersion models in the flat bottom case. In light of the recent works [6, 5], we therefore expect to be able to prove the local well-posedness of these models with a time of existence of order \(O(\frac{1}{\varepsilon})\); this will be an objective for future work.
**Notations 1.1**.:
* _Let_ \(\mathrm{Id}\) _be the_ \(d\times d\) _identity matrix, and take_ \(\mathbf{0}=(0,0)^{T}\) _if_ \(d=2\)_,_ \(\mathbf{0}=0\) _if_ \(d=1\)_. Then we define the_ \((d+1)\times(d+1)\) _matrix_ \(I^{\mu}\) _by_ \[I^{\mu}=\begin{pmatrix}\sqrt{\mu}\mathrm{Id}&\mathbf{0}\\ \mathbf{0}^{T}&1\end{pmatrix}.\]
* _We define the_ \(d\)_-dimensional Laplace operator by_ \[\Delta_{X}=\begin{cases}\partial_{x}^{2}&\text{when}&d=1\\ \partial_{x}^{2}+\partial_{y}^{2}&\text{when}&d=2.\end{cases}\]
* _We define the_ \((d+1)\)_-dimensional scaled gradient by_ \[\nabla_{X,z}^{\mu}=I^{\mu}\nabla_{X,z}=\begin{cases}(\sqrt{\mu}\partial_{x}, \partial_{z})^{T}&\text{when}&d=1\\ (\sqrt{\mu}\partial_{x},\sqrt{\mu}\partial_{y},\partial_{z})^{T}&\text{when}&d= 2,\end{cases}\]
_and we introduce the scaled Laplace operator_ \[\Delta^{\mu}_{X,z}=\nabla^{\mu}_{X,z}\cdot\nabla^{\mu}_{X,z}=\mu\Delta_{X}+ \partial^{2}_{z}.\]
* _Let_ \(f:\mathbb{R}^{d}\to\mathbb{R}\) _be a tempered distribution, let_ \(\hat{f}\) _or_ \(\mathcal{F}f\) _be its Fourier transform and_ \(\mathcal{F}^{-1}f\) _be its inverse Fourier transform._
* _For any_ \(s\in\mathbb{R}\) _we call the multiplier_ \(\mathrm{J}^{s}=(1+|\mathrm{D}|^{2})^{\frac{s}{2}}=\langle\mathrm{D}\rangle^{s}\) _the Bessel potential of order_ \(-s\)_._
* _The Sobolev space_ \(H^{s}(\mathbb{R}^{d})\) _is equivalent to the weighted_ \(L^{2}-\)_space with_ \(|f|_{H^{s}}=|\mathrm{J}^{s}f|_{L^{2}}\)_._
* _For any_ \(s\geq 1\) _we will denote_ \(\dot{H}^{s}(\mathbb{R}^{d})\) _the Beppo Levi space with_ \(|f|_{\dot{H}^{s}}=|\mathrm{J}^{s-1}\nabla_{X}f|_{L^{2}}\)_._
* _Let_ \(\Omega\subset\mathbb{R}^{d+1}\)_. For any_ \(k\in\mathbb{N}\)_, we define the space_ \(H^{k,0}(\Omega)\) _with norm_ \[\|f\|_{H^{k,0}(\Omega)}^{2}=\sum\limits_{|\gamma|\leq k}\int_{\Omega}|\partial ^{\gamma}_{X}f(X,z)|^{2}\,\mathrm{d}z\mathrm{d}X,\] _and similarly, for_ \(l\in\mathbb{N}\) _such that_ \(l\leq k\)_, we define the space_ \(H^{k,l}(\Omega)\) _with norm_ \(\|f\|_{H^{k,l}(\Omega)}=\sum\limits_{j=0}^{l}\|\partial^{j}_{z}f\|_{H^{k-j,0}( \Omega)}\)_._
* _We say that_ \(f\) _is a Schwartz function_ \(\mathscr{S}(\mathbb{R}^{d})\)_, if_ \(f\in C^{\infty}(\mathbb{R}^{d})\) _and satisfies for all_ \(\alpha,\beta\in\mathbb{N}^{d}\)_,_ \[\sup_{X\in\mathbb{R}^{d}}|X^{\alpha}\partial^{\beta}_{X}f|<\infty.\]
* _If_ \(A\) _and_ \(B\) _are two operators, then we denote the commutator between them to be_ \([A,B]=AB-BA\)_._
* _We let_ \(c\) _denote a positive constant independent of_ \(\mu,\varepsilon,\beta\) _that may change from line to line. Also, as a shorthand, we use the notation_ \(a\lesssim b\) _to mean_ \(a\leq c\,b\)_._
* _Let_ \(t_{0}>\frac{d}{2}\)_,_ \(s\geq 0\)_,_ \(h_{\min},h_{b,\min}\in(0,1)\)_. Then for_ \(\zeta,b,\nabla_{X}\psi\) _sufficiently regular and_ \(C(\cdot)\) _a positive, non-decreasing function of its argument, we define the constants_ \[M_{0}=C(\frac{1}{h_{\min}},\frac{1}{h_{b,\min}},|\zeta|_{H^{t_{0}} },|b|_{H^{t_{0}}})\] \[M(s)=C(M_{0},|\zeta|_{H^{\max\{t_{0}+2,s\}}},|b|_{H^{\max\{t_{0}+ 2,s\}}})\] \[N(s)=C(M(s),|\nabla_{X}\psi|_{H^{s}}).\]
### The consistency problem and main results
Throughout this paper, \(d\) will be the dimension of the horizontal variable, denoted \(X\in\mathbb{R}^{d}\). The reference model of our study is the water waves equations, written under the Zakharov-Craig-Sulem formulation:
\[\begin{cases}\partial_{t}\zeta-\frac{1}{\mu}\mathcal{G}^{\mu}[\varepsilon\zeta,\beta b]\psi=0\\ \partial_{t}\psi+\zeta+\frac{\varepsilon}{2}|\nabla_{X}\psi|^{2}-\frac{ \mu\varepsilon}{2}\frac{(\frac{1}{\mu}\mathcal{G}^{\mu}[\varepsilon\zeta, \beta b]\psi+\varepsilon\nabla_{X}\zeta\cdot\nabla_{X}\psi)^{2}}{1+ \varepsilon^{2}\mu|\nabla_{X}\zeta|^{2}}=0.\end{cases} \tag{1.1}\]
Here the free surface elevation is the graph of \(\zeta(t,X)\), which is a function of time \(t\) and horizontal space \(X\in\mathbb{R}^{d}\). The bottom elevation is the graph of \(b(X)\), which is a time-independent function. The function \(\psi(t,X)\) is the trace at the surface of the velocity potential, and \(\mathcal{G}^{\mu}\) is the Dirichlet-to-Neumann operator defined later in Definition 1.3. Moreover, every variable and function in (1.1) has been nondimensionalized using the characteristic physical parameters \(H_{0}\), \(a_{\mathrm{surf}}\), \(a_{\mathrm{bott}}\), and \(L\).
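As a classical consistency check (standard, and stated here only for orientation), linearizing (1.1) around the rest state \(\zeta=\psi=0\) with a flat bottom \(b=0\) gives \(\partial_{t}\zeta-\frac{1}{\mu}\mathcal{G}^{\mu}[0,0]\psi=0\) and \(\partial_{t}\psi+\zeta=0\). Plane waves \(e^{i(\xi\cdot X-\omega t)}\) then obey the full dispersion relation

\[\omega^{2}=\frac{\tanh(\sqrt{\mu}|\xi|)}{\sqrt{\mu}}|\xi|,\]

using the flat-strip formula \(\mathcal{G}^{\mu}[0,0]=\sqrt{\mu}|\mathrm{D}|\tanh(\sqrt{\mu}|\mathrm{D}|)\) recalled after Definition 1.4 below.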
Throughout the paper, we will always make the following fundamental assumption:
**Definition 1.2** (Non-cavitation condition).: _Let \(\varepsilon\in[0,1]\), \(\beta\in[0,1]\) and \(s>\frac{d}{2}\). Let also \(b\in C^{\infty}_{c}(\mathbb{R}^{d})\) be a smooth function with compact support, and take \(\zeta\in H^{s}(\mathbb{R}^{d})\). We say \(\zeta\) and \(b\) satisfy the "non-cavitation condition" if there exists \(h_{\text{min}}\in(0,1)\) such that_
\[h:=1+\varepsilon\zeta(X)-\beta b(X)\geq h_{\text{min}},\quad\text{for all}\ \ X\in\mathbb{R}^{d}. \tag{1.2}\]
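For illustration, condition (1.2) is straightforward to check numerically; the following minimal sketch (with made-up Gaussian profiles for \(\zeta\) and \(b\), and an arbitrary threshold \(h_{\text{min}}\)) evaluates \(h\) on a one-dimensional grid:

```python
import numpy as np

# Pointwise check of the non-cavitation condition (1.2):
# h = 1 + eps*zeta - beta*b >= h_min on a one-dimensional grid.
x = np.linspace(-20.0, 20.0, 2001)
eps, beta = 0.3, 0.8

zeta = -0.5 * np.exp(-x**2)              # surface depression above the obstacle
b = 0.9 * np.exp(-((x / 2.0) ** 2))      # submerged bump

h = 1.0 + eps * zeta - beta * b
h_min = 0.05
print(h.min(), bool(h.min() >= h_min))   # min(h) = 0.13 here, attained at x = 0
```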
Under the non-cavitation condition, we may define the Dirichlet-Neumann operator by [16]:
**Definition 1.3**.: _Let \(t_{0}>\frac{d}{2}\), \(\psi\in\dot{H}^{\frac{3}{2}}(\mathbb{R}^{d})\), \(b\in C^{\infty}_{c}(\mathbb{R}^{d})\), and \(\zeta\in H^{t_{0}+1}(\mathbb{R}^{d})\) be such that (1.2) is satisfied. Let \(\Phi\) be the unique solution in \(\dot{H}^{2}(\Omega_{t})\) of the boundary value problem_
\[\begin{cases}\Delta^{\mu}_{X,z}\Phi=0&\text{in}\quad\Omega_{t}:=\{(X,z)\in \mathbb{R}^{d+1},-1+\beta b(X)<z<\varepsilon\zeta(X)\}\\ \partial_{n_{b}}\Phi=0&\text{on}\quad z=-1+\beta b(X)\\ \Phi=\psi&\text{on}\quad z=\varepsilon\zeta(t,X),\end{cases} \tag{1.3}\]
_where_
\[\partial_{n_{b}}=\mathbf{n}_{b}\cdot I^{\mu}\nabla^{\mu}_{X,z},\qquad\mathbf{ n}_{b}=\frac{1}{\sqrt{1+\beta^{2}|\nabla_{X}b|^{2}}}\begin{pmatrix}-\beta \nabla_{X}b\\ 1\end{pmatrix},\]
_then \(\mathcal{G}^{\mu}[\varepsilon\zeta,\beta b]\psi\in H^{\frac{1}{2}}(\mathbb{R }^{d})\) is defined by_
\[\mathcal{G}^{\mu}[\varepsilon\zeta,\beta b]\psi=(\partial_{z}\Phi-\mu \varepsilon\nabla_{X}\zeta\cdot\nabla_{X}\Phi)_{|_{z=\varepsilon\zeta}}. \tag{1.4}\]
For convenience, it is easier to work with the vertical average of the horizontal component of the velocity. We make the following definition using Proposition 3.35 in [16].
**Definition 1.4**.: _Let \(t_{0}>\frac{d}{2}\), \(\psi\in\dot{H}^{\frac{3}{2}}(\mathbb{R}^{d})\), \(b\in C^{\infty}_{c}(\mathbb{R}^{d})\), and \(\zeta\in H^{t_{0}+1}(\mathbb{R}^{d})\) such that (1.2) is satisfied. Let \(\Phi\in\dot{H}^{2}(\Omega_{t})\) be the solution of (1.3), then we define the operator:_
\[\overline{V}^{\mu}[\varepsilon\zeta,\beta b]\psi=\frac{1}{h}\int_{-1+\beta b} ^{\varepsilon\zeta}\nabla_{X}\Phi\,\mathrm{d}z, \tag{1.5}\]
_and the following relation holds,_
\[\mathcal{G}^{\mu}[\varepsilon\zeta,\beta b]\psi=-\mu\nabla_{X}\cdot(h \overline{V}^{\mu}[\varepsilon\zeta,\beta b]\psi). \tag{1.6}\]
_Throughout this paper, we will denote \(\overline{V}^{\mu}[\varepsilon\zeta,\beta b]\psi\) by \(\overline{V}\) when no confusion is possible._
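As a sanity check of Definitions 1.3 and 1.4 (the classical flat-strip computation, see e.g. [16]; we record it here since it motivates the multipliers of Definition 1.6 below), assume \(\zeta=0\) and \(b=0\). The solution of (1.3) on the strip \(\{-1<z<0\}\) is \(\Phi=\cosh((z+1)\sqrt{\mu}|\mathrm{D}|)\mathrm{sech}(\sqrt{\mu}|\mathrm{D}|)\psi\), and one computes directly

\[\mathcal{G}^{\mu}[0,0]\psi=\sqrt{\mu}|\mathrm{D}|\tanh(\sqrt{\mu}|\mathrm{D}|)\psi,\qquad\overline{V}^{\mu}[0,0]\psi=\frac{\tanh(\sqrt{\mu}|\mathrm{D}|)}{\sqrt{\mu}|\mathrm{D}|}\nabla_{X}\psi,\]

so that relation (1.6) reduces to the Fourier identity \(\sqrt{\mu}|\xi|\tanh(\sqrt{\mu}|\xi|)=\mu|\xi|^{2}\cdot\frac{\tanh(\sqrt{\mu}|\xi|)}{\sqrt{\mu}|\xi|}\).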
In order to write the main results of this paper, we need to define two types of differential operators. The first type is the Fourier multipliers.
**Definition 1.5**.: _Let \(u:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a tempered distribution, and let \(\widehat{u}\) be its Fourier transform. Let \(F:\mathbb{R}^{d}\to\mathbb{R}\) be a smooth function with polynomial decay. Then the Fourier multiplier associated with \(F(\xi)\) is denoted \(\mathrm{F}(\mathrm{D})\) (denoted \(\mathrm{F}\) when no confusion is possible) and defined by the formula:_
\[\widehat{\mathrm{F}(\mathrm{D})u}(\xi)=F(\xi)\widehat{u}(\xi).\]
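Numerically, such operators are diagonal in frequency and can be applied with a discrete Fourier transform. The following minimal Python sketch (our own illustration, on a periodic truncation of \(\mathbb{R}\) with arbitrary grid parameters) applies \(\mathrm{F}(\mathrm{D})\) for the choice \(F=\mathrm{F}_{1}\) introduced in Definition 1.6 below:

```python
import numpy as np

# Minimal sketch (illustration only): apply the Fourier multiplier F(D) of
# Definition 1.5 on a periodic truncation of R, with F = F_1 from Definition 1.6.
N, L, mu = 256, 20.0, 0.1                          # grid size, half-length, shallowness
x = np.linspace(-L, L, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=2 * L / N)    # discrete frequency grid

y = np.sqrt(mu) * np.abs(xi)
F = np.ones_like(y)                                # F_1(0) = 1 (removable singularity)
F[y != 0] = np.tanh(y[y != 0]) / y[y != 0]

u = np.exp(-x ** 2)                                # sample profile
Fu = np.real(np.fft.ifft(F * np.fft.fft(u)))       # realizes hat(F(D)u) = F(xi) * hat(u)
```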
**Definition 1.6**.: _Let \(\mathrm{F}_{0}\) be a Fourier multiplier depending on the transverse variable:_
\[\mathrm{F}_{0}u(X)=\mathcal{F}^{-1}\Big{(}\frac{\cosh((z+1)\sqrt{\mu}|\xi|)}{ \cosh(\sqrt{\mu}|\xi|)}\hat{u}(\xi)\Big{)}(X),\]
_for \(z\in[-2,0]\). We also define the four Fourier multipliers \(\mathrm{F}_{1}\), \(\mathrm{F}_{2}\), \(\mathrm{F}_{3}\) and \(\mathrm{F}_{4}\) by the expressions:_
\[\mathrm{F}_{1}=\frac{\tanh{(\sqrt{\mu}|\mathrm{D}|)}}{\sqrt{\mu}|\mathrm{D}|},\quad\mathrm{F}_{2}=\frac{3}{\mu|\mathrm{D}|^{2}}(1-\mathrm{F}_{1}),\quad\mathrm{F}_{3}=\mathrm{sech}(\sqrt{\mu}|\mathrm{D}|),\quad\mathrm{F}_{4}=\frac{2}{\mu|\mathrm{D}|^{2}}(1-\mathrm{F}_{3}).\]
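For orientation, we record the routine Taylor expansions of these multipliers in the long-wave regime \(\sqrt{\mu}|\xi|\ll 1\) (an elementary computation added for the reader's convenience):

\[\mathrm{F}_{1}=1-\frac{\mu|\mathrm{D}|^{2}}{3}+O(\mu^{2}),\quad\mathrm{F}_{2}=1-\frac{2\mu|\mathrm{D}|^{2}}{5}+O(\mu^{2}),\quad\mathrm{F}_{3}=1-\frac{\mu|\mathrm{D}|^{2}}{2}+O(\mu^{2}),\quad\mathrm{F}_{4}=1-\frac{5\mu|\mathrm{D}|^{2}}{12}+O(\mu^{2}).\]

In particular, all four operators are \(O(\mu)\) perturbations of the identity; this is the sense in which the systems below retain full dispersion while staying close to their classical counterparts.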
Next, we would like to define operators of the form
\[\mathcal{L}[X,D]u(X):=\mathcal{F}^{-1}\big{(}L(X,\xi)\hat{u}(\xi)\big{)}(X), \tag{1.7}\]
where \(L\) is a smooth function in a particular symbol class given in the next definition.
**Definition 1.7**.: _Let \(d=1,2\) and \(m\in\mathbb{R}\). We say \(L\in S^{m}\) is a symbol of order \(m\) if \(L(X,\xi)\) is in \(C^{\infty}(\mathbb{R}^{d}\times\mathbb{R}^{d})\) and satisfies_

\[\forall\alpha\in\mathbb{N}^{d},\quad\forall\gamma\in\mathbb{N}^{d},\quad\sup_{(X,\xi)\in\mathbb{R}^{d}\times\mathbb{R}^{d}}\langle\xi\rangle^{-(m-|\gamma|)}|\partial_{X}^{\alpha}\partial_{\xi}^{\gamma}L(X,\xi)|<\infty.\]
_We also introduce the seminorm_
\[\mathcal{M}_{m}(L)=\sup_{|\alpha|\leq\lceil\frac{d}{2}\rceil+1}\sup_{|\gamma|\leq\lceil\frac{d}{2}\rceil+1}\sup_{(X,\xi)\in\mathbb{R}^{d}\times\mathbb{R}^{d}}\Big{\{}\langle\xi\rangle^{-(m-|\gamma|)}|\partial_{X}^{\alpha}\partial_{\xi}^{\gamma}L(X,\xi)|\Big{\}}. \tag{1.8}\]
The next result allows us to justify the formula (1.7) for functions \(u\) in Sobolev spaces.
**Theorem 1.8**.: _Let \(d=1,2\), \(s\geq 0\), and \(L\in S^{m}\). Then formula (1.7) defines a bounded pseudo-differential operator of order \(m\) from \(H^{s+m}(\mathbb{R}^{d})\) to \(H^{s}(\mathbb{R}^{d})\) and satisfies_
\[|\mathcal{L}[X,D]u|_{H^{s}}\leq\mathcal{M}_{m}(L)|u|_{H^{s+m}}. \tag{1.9}\]
We refer to [2] for this result, where the constant is given implicitly in the proof (see also [17, 1]). We will define operators of interest under the assumption:
**Assumption/Definition 1.9**.: _Let \(d=1,2\) and \(\beta\in[0,1]\). Throughout this paper, we will always assume that the bathymetry \(\beta b\in C^{\infty}_{c}(\mathbb{R}^{d})\) satisfies the following: There exists \(b_{\max}\in(0,1)\) such that_
\[\beta|b(X)|\leq b_{\max}<1,\quad\text{for all}\ \ X\in\mathbb{R}^{d}. \tag{1.10}\]
_We also define the water depth at the rest state \(h_{b}:=1-\beta b(X)\). As a consequence of (1.10), there exists a constant \(h_{b,\min}\in(0,1)\) such that_
\[0<h_{b,\min}\leq h_{b}. \tag{1.11}\]
Condition (1.11) expresses that the bottom topography stays submerged below the still water level. We may now define the pseudo-differential operators that will play an important role in deriving new models that allow for large amplitude topography variations.
**Definition/Proposition 1.10**.: _Let \(\mu,\beta\in[0,1]\), \(d=1,2\), \(s\geq 0\) and \(b\in C^{\infty}_{c}(\mathbb{R}^{d})\) such that (1.10) is satisfied. We define the following pseudo-differential operators of order zero, bounded uniformly with respect to \(\mu\) and \(\beta\) in \(H^{s}(\mathbb{R}^{d})\):_
\[\mathcal{L}^{\mu}_{1}[\beta b] =-\frac{1}{\beta}\sinh{(\beta b(X)\sqrt{\mu}|\mathrm{D}|)}\mathrm{ sech}(\sqrt{\mu}|\mathrm{D}|)\frac{1}{\sqrt{\mu}|\mathrm{D}|}\] \[\mathcal{L}^{\mu}_{2}[\beta b] =-(\mathcal{L}^{\mu}_{1}[\beta b]+b)\frac{1}{\mu|\mathrm{D}|^{2}}\] \[\mathcal{L}^{\mu}_{3}[\beta b] =-\big{(}\cosh(\beta b(X)\sqrt{\mu}|\mathrm{D}|)\mathrm{sech}( \sqrt{\mu}|\mathrm{D}|)-1\big{)}\frac{1}{\mu|\mathrm{D}|^{2}}.\]
_Moreover, for \(u\in\mathscr{S}(\mathbb{R}^{d})\) we have the following estimates_
\[|\mathcal{L}_{1}^{\mu}[\beta b]u|_{H^{s}} \leq M(s)|u|_{H^{s}}. \tag{1.12}\] \[|\mathcal{L}_{2}^{\mu}[\beta b]u|_{H^{s}} \leq M(s)|u|_{H^{s}}\] (1.13) \[|\mathcal{L}_{3}^{\mu}[\beta b]u|_{H^{s}} \leq M(s)|u|_{H^{s}}\] (1.14) \[|\mathcal{L}_{1}^{\mu}[\beta b]u+bu|_{H^{s}} \leq\mu M(s)|u|_{H^{s+2}}\] (1.15) \[|\mathcal{L}_{1}^{\mu}[\beta b]u-(-b-\frac{\mu\beta^{2}}{6}b^{3}| \mathrm{D}|^{2})\mathrm{F}_{3}u|_{H^{s}} \leq\mu^{2}\beta^{4}M(s)|u|_{H^{s+4}}\] (1.16) \[|\mathcal{L}_{2}^{\mu}[\beta b]u-(-\frac{1}{2}b\mathrm{F}_{4}+ \frac{\beta^{2}}{6}b^{3}\mathrm{F}_{3})u|_{H^{s}} \leq\mu\beta^{4}M(s)|u|_{H^{s+2}}. \tag{1.17}\]
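At a formal level, estimates (1.15)-(1.17) are Taylor expansions of the symbols in the small quantity \(\beta b(X)\sqrt{\mu}|\xi|\); we sketch the computation behind (1.16) (the rigorous version, in the symbol classes of Definition 1.7, is given in Appendix A). Since \(\sinh(y)=y+\frac{y^{3}}{6}+O(y^{5})\),

\[-\frac{1}{\beta}\sinh\big{(}\beta b(X)\sqrt{\mu}|\xi|\big{)}\frac{1}{\sqrt{\mu}|\xi|}=-b(X)-\frac{\mu\beta^{2}}{6}b(X)^{3}|\xi|^{2}+O\big{(}\mu^{2}\beta^{4}|\xi|^{4}\big{)},\]

and multiplying by \(\mathrm{sech}(\sqrt{\mu}|\xi|)\) identifies \(\mathcal{L}_{1}^{\mu}[\beta b]\) with \(-(b+\frac{\mu\beta^{2}}{6}b^{3}|\mathrm{D}|^{2})\mathrm{F}_{3}\) up to the error in (1.16). Keeping only the leading term \(-b\), and using \(\mathrm{F}_{3}=1+O(\mu)\), gives (1.15).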
**Remark 1.11**.: _Under assumption (1.10) the operators \(\mathcal{L}_{1}^{\mu}\), \(\mathcal{L}_{2}^{\mu}\), and \(\mathcal{L}_{3}^{\mu}\) are "classical pseudo-differential operators of order zero". The details of the proof are given in Appendix A, Subsection A.1._
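To make these operators concrete, the following Python sketch (our own numerical illustration, with \(d=1\), a periodic truncation of \(\mathbb{R}\), and arbitrary parameter choices) applies \(\mathcal{L}_{1}^{\mu}[\beta b]\) through the quantization (1.7) and compares it with the Fourier multiplier \(-b\mathrm{F}_{3}\); by (1.15)-(1.16) the discrepancy should be of size \(O(\mu\beta^{2})\):

```python
import numpy as np

# Numerical sketch (illustration only, d = 1, periodic truncation of R):
# apply L_1^mu[beta*b] through the quantization (1.7) and compare with the
# Fourier-multiplier approximation -b F_3 suggested by (1.15)-(1.16).
N, L = 512, 40.0
mu, beta = 0.1, 0.3
x = np.linspace(-L, L, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=2 * L / N)

b = np.exp(-x ** 2)                  # smooth, rapidly decaying bathymetry
u = np.exp(-(x - 1.0) ** 2)          # test function
u_hat = np.fft.fft(u)

def L1_symbol(bx, k):
    """Symbol of L_1^mu[beta b]; its value at xi = 0 is the limit -b(X)."""
    y = np.sqrt(mu) * np.abs(k)
    with np.errstate(divide="ignore", invalid="ignore"):
        s = -np.sinh(beta * bx * y) / (beta * y * np.cosh(y))
    return np.where(y == 0.0, -bx, s)

# X-dependent symbol: evaluate the inverse transform point by point (O(N^2)).
k_idx = np.arange(N)
L1u = np.array([
    np.real(np.mean(L1_symbol(b[j], xi) * u_hat
                    * np.exp(2.0 * np.pi * 1j * k_idx * j / N)))
    for j in range(N)
])

F3u = np.real(np.fft.ifft(u_hat / np.cosh(np.sqrt(mu) * np.abs(xi))))
print("max |L1 u + b F3 u| =", np.abs(L1u + b * F3u).max())   # expect O(mu * beta^2)
```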
**Proposition 1.12**.: _Let \(d=1,2\), \(t_{0}>\frac{d}{2}\) and \(s\geq 0\). Let also \(b\in C_{c}^{\infty}(\mathbb{R}^{d})\) and \(\zeta\in H^{\max\{t_{0}+2,s+3\}}(\mathbb{R}^{d})\) such that (1.2) and (1.10) are satisfied. From the previously defined operators, we have the following approximations of the Dirichlet-Neumann operator:_
\[\frac{1}{\mu}\mathcal{G}_{0}\psi =-\mathrm{F}_{1}\Delta_{X}\psi-\beta(1+\frac{\mu}{2}\mathrm{F}_{4 }\Delta_{X})\nabla_{X}\cdot\big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X} \psi\big{)}-\varepsilon\nabla_{X}\cdot\big{(}\zeta\mathrm{F}_{1}\nabla_{X} \psi\big{)}\] \[\qquad+\frac{\mu\beta^{2}}{2}\nabla_{X}\cdot\big{(}\mathcal{B}[ \beta b]\nabla_{X}\psi\big{)},\]
_and_
\[\frac{1}{\mu}\mathcal{G}_{1}\psi =-\nabla_{X}\cdot(h\nabla_{X}\psi)-\frac{\mu}{3}\Delta_{X}\Big{(} \frac{h^{3}}{h_{b}^{3}}\mathrm{F}_{2}\Delta_{X}\psi\Big{)}-\mu\beta\Delta_{X} \big{(}\mathcal{L}_{2}^{\mu}[\beta b]\Delta_{X}\psi\big{)}\] \[\qquad-\frac{\mu\beta}{2}\mathrm{F}_{4}\Delta_{X}\nabla_{X}\cdot \big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}+\frac{\mu\beta^{2 }}{2}\nabla_{X}\cdot\big{(}\mathcal{B}[\beta b]\nabla_{X}\psi\big{)},\]
_where_
\[\mathcal{B}[\beta b]\nabla_{X}\psi =b\nabla_{X}(\nabla_{X}\cdot(b\nabla_{X}\psi)) \tag{1.18}\] \[\qquad+h_{b}\nabla_{X}\big{(}b\nabla_{X}\cdot(b\nabla_{X}\psi) \big{)}+2h_{b}(\nabla_{X}b)\nabla_{X}\cdot(b\nabla_{X}\psi).\]
_Moreover, we have the following estimates on the Dirichlet-Neumann operator_
\[\frac{1}{\mu}|\mathcal{G}^{\mu}\psi-\mathcal{G}_{0}\psi|_{H^{s}} \leq(\mu\varepsilon+\mu^{2}\beta^{2})M(s+3)|\nabla_{X}\psi|_{H^{s+5}} \tag{1.19}\] \[\frac{1}{\mu}|\mathcal{G}^{\mu}\psi-\mathcal{G}_{1}\psi|_{H^{s}} \leq(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})M(s+ 3)|\nabla_{X}\psi|_{H^{s+5}}. \tag{1.20}\]
Proposition 1.12 is the key result from which we will derive our new models. However, before presenting these models, we need to define the notion of consistency of the water waves equations (1.1).
**Definition 1.13** (Consistency).: _Let \(\mu,\varepsilon,\beta\in[0,1]\). We denote by (A) an asymptotic model of the following form:_
\[\mathrm{(A)}\quad\begin{cases}\partial_{t}\zeta+\mathcal{N}_{1}(\zeta,b,\psi)=0 \\ \partial_{t}(\mathcal{T}[\zeta,b]\psi)+\mathcal{N}_{2}(\zeta,b,\psi)=0,\end{cases}\]
_where \(\mathcal{T}\) is a linear operator with respect to \(\psi\) and possibly nonlinear with respect to \(\zeta\) and \(b\), while \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\) are possibly nonlinear operators._
_We say that the water waves equations are consistent at order \(O(\sum\mu^{k}\varepsilon^{l}\beta^{m})\) with (A) if there exist \(n\in\mathbb{N}\) and a universal constant \(T>0\) such that, for any \(s\geq 0\), every solution \((\zeta,\psi)\in C([0,\frac{T}{\varepsilon}];H^{s+n}(\mathbb{R}^{d})\times\dot{H}^{s+n}(\mathbb{R}^{d}))\) of the water waves equations (1.1) satisfies, for all \(t\in[0,\frac{T}{\varepsilon}]\),_
\[\begin{cases}\partial_{t}\zeta+\mathcal{N}_{1}(\zeta,b,\psi)=\big{(}\sum\mu^{ k}\varepsilon^{l}\beta^{m}\big{)}R_{1}\\ \partial_{t}(\mathcal{T}[\zeta,b]\psi)+\mathcal{N}_{2}(\zeta,b,\psi)=\big{(} \sum\mu^{k}\varepsilon^{l}\beta^{m}\big{)}R_{2},\end{cases}\]
_where \(|R_{i}|_{H^{s}}\leq N(s+n)\) for all \(t\in[0,\frac{T}{\varepsilon}]\) with \(i=1,2\)._
We should note that the existence time for solutions of the water waves equations is proved to be of order \(O(\frac{1}{\max\{\varepsilon,\beta\}})\), uniformly with respect to \(\mu\) [3]. However, it was proved that when one includes surface tension with a strength of the same order as the shallow water parameter \(\mu\), the existence time is improved and becomes of order \(O(\frac{1}{\varepsilon})\) [6]. This result allows for large bathymetric variations in the presence of surface tension. For the sake of clarity, we will omit the surface tension in this paper, but one could easily add it to every model in this work without changing the results. With this in mind, we may now state our consistency results.
**Theorem 1.14**.: _Let \(\mathrm{F}_{1}\) and \(\mathrm{F}_{4}\) be the two Fourier multipliers given in Definition 1.6, and let \(\mathcal{L}_{1}^{\mu}\) be given in Definition 1.10. Then for any \(\mu\in(0,1]\), \(\varepsilon\in[0,1]\), and \(\beta\in[0,1]\) the water waves equations (1.1) are consistent, in the sense of Definition 1.13 with \(n=5\), at order \(O(\mu\varepsilon+\mu^{2}\beta^{2})\) with the Whitham-Boussinesq system:_
\[\begin{cases}\partial_{t}\zeta+\mathrm{F}_{1}\Delta_{X}\psi+\beta(1+\frac{\mu }{2}\mathrm{F}_{4}\Delta_{X})\nabla_{X}\cdot(\mathcal{L}_{1}^{\mu}[\beta b] \nabla_{X}\psi)\\ \hskip 113.811024pt+\varepsilon\mathrm{G}_{1}\nabla_{X}\cdot(\zeta\mathrm{G}_ {2}\nabla_{X}\psi)-\frac{\mu\beta^{2}}{2}\nabla_{X}\cdot(\mathcal{B}[\beta b ]\nabla_{X}\psi)=0\\ \partial_{t}\psi+\zeta+\frac{\varepsilon}{2}(\mathrm{G}_{1}\nabla_{X}\psi) \cdot(\mathrm{G}_{2}\nabla_{X}\psi)=0,\end{cases} \tag{1.21}\]
_where_
\[\mathcal{B}[\beta b]\nabla_{X}\psi=b\nabla_{X}(\nabla_{X}\cdot(b\nabla_{X} \psi))+h_{b}\nabla_{X}\big{(}b\nabla_{X}\cdot(b\nabla_{X}\psi)\big{)}+2h_{b}( \nabla_{X}b)\nabla_{X}\cdot(b\nabla_{X}\psi),\]
_and \(\mathrm{G}_{1},\mathrm{G}_{2}\) are any Fourier multipliers such that for any \(s\geq 0\) and \(u\in H^{s+2}(\mathbb{R}^{d})\), we have_
\[|(\mathrm{G}_{j}-1)u|_{H^{s}}\lesssim\mu|u|_{H^{s+2}}.\]
**Remark 1.15**.:
* _Taking_ \(\beta=0\) _in (_3.3_), we get the class of Whitham-Boussinesq systems derived rigorously in_ _[_12_]_ _with a precision_ \(O(\mu\varepsilon)\)_. These systems were rigorously justified on a time scale of order_ \(O(\frac{1}{\varepsilon})\) _under an additional decay constraint on the Fourier multipliers_ \(\mathrm{G}_{1}\) _and_ \(\mathrm{G}_{2}\) _(see_ _[_13_]_ _for more information)._
* _In the case_ \(\mathrm{G}_{1}=\mathrm{G}_{2}=\mathrm{Id}\)_, (_3.3_) is believed to be ill-posed_ _[_15_]_ _in the case_ \(\beta=0\)_, unless one includes surface tension_ _[_18_]__. Alternatively, one can exploit the regularizing effect provided by the multipliers_ \(\mathrm{G}_{j}\)__[_13_]__._
* _Neglecting terms of order_ \(O(\mu\varepsilon+\mu\beta)\) _and approximating_ \(\mathcal{L}_{1}^{\mu}[\beta b]\) _with estimate (_1.15_), we arrive at the same models derived in_ _[_11_]__._
One can replace the pseudo-differential operator in (3.3) using estimate (1.16). Indeed, we have the following result:
**Corollary 1.16**.: _Under the same assumptions as in Theorem 1.14, we can take_
\[\mathcal{L}_{1}^{\mu}[\beta b]\bullet=-(b+\frac{\mu\beta^{2}}{6}b^{3}|\mathrm{D}|^{2})\mathrm{F}_{3}\bullet,\]
_in system (3.3) and keep the precision \(O(\mu\varepsilon+\mu^{2}\beta^{2})\)._
We also derive a Whitham-Boussinesq system in the variables \((\zeta,\overline{V})\):
**Theorem 1.17**.: _Let \(\mathrm{F}_{1}\) and \(\mathrm{F}_{4}\) be the two Fourier multipliers given in Definition 1.6, and let \(\mathcal{L}_{1}^{\mu}\) be given in Definition 1.10. Then for any \(\mu\in(0,1]\), \(\varepsilon\in[0,1]\), and \(\beta\in[0,1]\) the water waves equations (1.1) are consistent, in the sense of Definition 1.13 with \(n=6\), at order \(O(\mu\varepsilon+\mu^{2}\beta^{2})\) with the Whitham-Boussinesq system:_
\[\begin{cases}\partial_{t}\zeta+\nabla_{X}\cdot(h\overline{V})=0\\ \partial_{t}\overline{V}+\mathcal{T}_{0}^{\mu}[\beta b,\varepsilon\zeta] \nabla_{X}\zeta+\frac{\varepsilon}{2}\nabla_{X}|\overline{V}|^{2}=\mathbf{0}, \end{cases} \tag{1.22}\]
_where_
\[h\mathcal{T}_{0}^{\mu}[\beta b,\varepsilon\zeta]\bullet =\mathrm{F}_{1}\bullet+\beta\mathcal{L}_{1}^{\mu}[\beta b]\bullet +\varepsilon\zeta\mathrm{F}_{1}\bullet+\frac{\mu\beta}{2}h_{b}\nabla_{X} \mathrm{F}_{4}\nabla_{X}\cdot\big{(}\mathcal{L}_{1}^{\mu}[\beta b]\bullet \big{)}\] \[\quad-\frac{\mu\beta^{2}}{2}h_{b}\nabla_{X}\big{(}b\nabla_{X} \cdot(b\bullet)\big{)}-\mu\beta^{2}h_{b}(\nabla_{X}b)\nabla_{X}\cdot(b\bullet).\]
**Remark 1.18**.:
* _The first equation in (_1.22_) is exact and is a formulation of the conservation of mass._
* _Taking_ \(\beta=0\) _in (_1.22_), we get the class of Whitham-Boussinesq systems derived rigorously in_ _[_18_]_ _with a precision_ \(O(\mu\varepsilon)\)_._
**Corollary 1.19**.: _Under the same assumptions as in Theorem 1.17, we can take_
\[h\mathcal{T}_{0}^{\mu}[\beta b,\varepsilon\zeta]\bullet =h\mathrm{F}_{1}\bullet+\beta b(\mathrm{F}_{1}-\mathrm{F}_{3}) \bullet+\frac{\mu\beta^{3}}{6}b^{3}|\mathrm{D}|^{2}\mathrm{F}_{3}\bullet-\frac {\mu\beta}{2}h_{b}\nabla_{X}\mathrm{F}_{4}\nabla_{X}\cdot(b\bullet)\] \[\quad-\frac{\mu\beta^{2}}{2}h_{b}\nabla_{X}\big{(}b\nabla_{X} \cdot(b\bullet)\big{)}-\mu\beta^{2}h_{b}(\nabla_{X}b)\nabla_{X}\cdot(b\bullet),\]
_in system (1.22) and keep the precision \(O(\mu\varepsilon+\mu^{2}\beta^{2})\)._
The next two results concern full dispersion Green-Naghdi systems.
**Theorem 1.20**.: _Let \(\mathrm{F}_{2}\) and \(\mathrm{F}_{4}\) be the two Fourier multipliers given in Definition 1.6, and let \(\mathcal{L}_{2}^{\mu}\) be given in Definition 1.10. Then for any \(\mu\in(0,1]\), \(\varepsilon\in[0,1]\), and \(\beta\in[0,1]\) the water waves equations (1.1) are consistent, in the sense of Definition 1.13 with \(n=5\), at order \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\) with the Whitham-Green-Naghdi system:_
\[\begin{cases}\partial_{t}\zeta+\nabla_{X}\cdot(h\mathcal{T}_{1}^{\mu}[\beta b,\varepsilon\zeta]\nabla_{X}\psi)-\frac{\mu\beta^{2}}{2}\nabla_{X}\cdot \big{(}\mathcal{B}[\beta b]\nabla_{X}\psi\big{)}=0\\ \partial_{t}\psi+\zeta+\frac{\varepsilon}{2}|\nabla_{X}\psi|^{2}-\frac{\mu \varepsilon}{2}h^{2}(\sqrt{\mathrm{F}_{2}}\Delta_{X}\psi)^{2}=0,\end{cases} \tag{1.23}\]
_where_
\[\mathcal{B}[\beta b]\bullet =b\nabla_{X}(\nabla_{X}\cdot(b\bullet))+h_{b}\nabla_{X}\big{(}b \nabla_{X}\cdot(b\bullet)\big{)}+2h_{b}(\nabla_{X}b)\nabla_{X}\cdot(b\bullet),\]
_and_
\[\mathcal{T}_{1}^{\mu}[\beta b,\varepsilon\zeta]\bullet =\mathrm{Id}+\frac{\mu}{3h}\nabla_{X}\sqrt{\mathrm{F}_{2}}\Big{(} \frac{h^{3}}{h_{b}^{3}}\sqrt{\mathrm{F}_{2}}\nabla_{X}\cdot\bullet\Big{)}+ \frac{\mu\beta}{h}\nabla_{X}\Big{(}\mathcal{L}_{2}^{\mu}[\beta b]\nabla_{X} \cdot\bullet\Big{)}\] \[\quad+\frac{\mu\beta^{2}}{2h}\mathrm{F}_{4}\nabla_{X}\nabla_{X} \cdot\big{(}\mathcal{L}_{1}^{\mu}[\beta b]\bullet\big{)},\]
_and \(\sqrt{\mathrm{F}_{2}}\) is the square root of \(\mathrm{F}_{2}\)._
**Remark 1.21**.:
* _System (_1.23_) was first derived in_ _[_12_]_ _in the case_ \(\beta=0\)_._
* _In_ _[_11_]__, the author derived a Whitham-Green-Naghdi system with an order of precision given by_ \(O(\mu^{2}\varepsilon+\mu^{2}\beta)\)_._
Again, we can simplify the system using Proposition 1.10 to obtain a system only depending on Fourier multipliers.
**Corollary 1.22**.: _Under the same assumptions as in Theorem 1.20, we can take_
\[\mathcal{L}_{1}^{\mu}[\beta b]\bullet=-b\mathrm{F}_{3}\bullet,\]
_and_
\[\mathcal{L}_{2}^{\mu}[\beta b]\bullet=-\frac{1}{2}b\mathrm{F}_{4}\bullet+ \frac{\beta^{2}}{6}b^{3}\mathrm{F}_{3}\bullet,\]
_in system (1.23) keeping the precision \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\)._
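The precision claims in Corollaries 1.16, 1.19, 1.22, and 1.25 below can be tracked term by term; we record the bookkeeping for Corollary 1.22 (a short verification added for the reader). In (1.23), \(\mathcal{L}_{1}^{\mu}\) enters \(\mathcal{T}_{1}^{\mu}\) with prefactor \(\frac{\mu\beta^{2}}{2}\) and \(\mathcal{L}_{2}^{\mu}\) with prefactor \(\mu\beta\), so by (1.15)-(1.17) the substitutions cost

\[\frac{\mu\beta^{2}}{2}\big{(}\mathcal{L}_{1}^{\mu}[\beta b]+b\mathrm{F}_{3}\big{)}=O(\mu^{2}\beta^{4}),\qquad\mu\beta\Big{(}\mathcal{L}_{2}^{\mu}[\beta b]+\frac{1}{2}b\mathrm{F}_{4}-\frac{\beta^{2}}{6}b^{3}\mathrm{F}_{3}\Big{)}=O(\mu^{2}\beta^{5}),\]

as operators between Sobolev spaces (with the derivative losses of (1.16)-(1.17)); both errors are dominated by the \(O(\mu^{2}\beta^{2})\) already present in the precision.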
Several generalizations can now be made, where the next system is chosen to mimic some of the properties of the classical Green-Naghdi systems:
**Theorem 1.23**.: _Let \(\mathrm{F}_{2}\) and \(\mathrm{F}_{4}\) be the two Fourier multipliers given in Definition 1.6, and let \(\mathcal{L}_{1}^{\mu}\) and \(\mathcal{L}_{2}^{\mu}\) be given in Definition 1.10. Then for any \(\mu\in(0,1]\), \(\varepsilon\in[0,1]\), and \(\beta\in[0,1]\) the water waves equations (1.1) are consistent, in the sense of Definition 1.13 with \(n=6\), at order \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\) with the Whitham-Green-Naghdi system:_
\[\begin{cases}\partial_{t}\zeta+\nabla_{X}\cdot(h\overline{V})=0,\\ \partial_{t}(\mathcal{I}^{\mu}[h]\overline{V})+\mathcal{I}^{\mu}[h]\mathcal{T }_{2}^{\mu}[\beta b,h]\nabla_{X}\zeta+\frac{\varepsilon}{2}\nabla_{X}\big{(} |\overline{V}|^{2}\big{)}+\mu\varepsilon\nabla_{X}\mathcal{R}_{1}^{\mu}[ \beta b,h,\overline{V}]=\mathbf{0},\end{cases} \tag{1.24}\]
_where \(\overline{V}\) is defined by (1.5),_
\[\mathcal{I}^{\mu}[h]\bullet=\mathrm{Id}-\frac{\mu}{3h}\sqrt{ \mathrm{F}_{2}}\nabla_{X}\Big{(}h^{3}\sqrt{\mathrm{F}_{2}}\nabla_{X}\cdot \bullet\Big{)}, \tag{1.25}\]
\[\mathcal{T}_{2}^{\mu}[\beta b,\varepsilon\zeta]\bullet =\mathrm{Id}+\frac{\mu}{3h}\sqrt{\mathrm{F}_{2}}\nabla_{X}\Big{(} \frac{h^{3}}{h_{b}^{3}}\sqrt{\mathrm{F}_{2}}\nabla_{X}\cdot\bullet\Big{)}+ \frac{\mu\beta}{h}\nabla_{X}\Big{(}\mathcal{L}_{2}^{\mu}[\beta b]\nabla_{X} \cdot\bullet\Big{)}\] \[\quad+\frac{\mu\beta h_{b}}{2h}\nabla_{X}\mathrm{F}_{4}\nabla_{X} \cdot\big{(}\mathcal{L}_{1}^{\mu}[\beta b]\bullet\big{)}-\frac{\mu\beta^{2}h_ {b}}{2h}\nabla_{X}\big{(}b\nabla_{X}\cdot(b\bullet)\big{)}-\frac{\mu\beta^{2} h_{b}}{h}(\nabla_{X}b)\nabla_{X}\cdot(b\bullet),\]
_and_
\[\mathcal{R}_{1}^{\mu}[\beta b,h,\overline{V}]=-\frac{h^{2}}{2}( \nabla_{X}\cdot\overline{V})^{2}-\frac{1}{3h}\big{(}\nabla_{X}(h^{3}\nabla_{X} \cdot\overline{V})\big{)}\cdot\overline{V}-\frac{1}{2}h^{3}\Delta_{X}(| \overline{V}|^{2})+\frac{1}{6h}h^{3}\Delta_{X}(|\overline{V}|^{2}). \tag{1.26}\]
**Remark 1.24**.:
* _As for the classical Green-Naghdi system, we observe that the first equation is a formulation of mass conservation._
* _The system depends on the elliptic operator_ \(h\mathcal{I}^{\mu}[h]\) _and is similar to the systems derived in_ _[_16, 12, 11_]_ _in that sense._
* _The presence of the term_ \(\mathcal{I}^{\mu}[h]\mathcal{T}_{2}^{\mu}[\beta b,h]\nabla_{X}\zeta\) _in the second equation makes it quite unique. Note that one may simplify it, but we chose to keep it in this form because, in the study of the local well-posedness theory, one would apply the inverse of the elliptic operator_ \(h\mathcal{I}^{\mu}[h]\) _to the equation._
**Corollary 1.25**.: _Under the same assumptions as in Theorem 1.23, we can take_
\[\mathcal{L}_{1}^{\mu}[\beta b]\bullet=-b\mathrm{F}_{3}\bullet,\]
_and_
\[\mathcal{L}_{2}^{\mu}[\beta b]\bullet=-\frac{1}{2}b\mathrm{F}_{4}\bullet+\frac{ \beta^{2}}{6}b^{3}\mathrm{F}_{3}\bullet,\]
_in system (4.5) keeping the precision \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\)._
### Outline
The paper is organized as follows. In Section 2, we set out to prove Proposition 1.12. We start in Subsection 2.1 by transforming the elliptic problem (1.3) so that its domain is time-independent. Then we use this new formulation to perform multi-scale expansions. In particular, in Subsections 2.2 and 2.3, we make several expansions of the velocity potential in terms of \(\mu\), \(\varepsilon\) and \(\beta\). From these expansions, we approximate the vertically averaged horizontal velocity \(\overline{V}\) in Subsection 2.4, from which the proof of Proposition 1.12 is deduced in Subsection 2.5. Section 3 is dedicated to the proofs of Theorem 1.14 and Theorem 1.17. We also formally derive a Hamiltonian Whitham-Boussinesq system. In Section 4 we prove Theorem 1.20 and Theorem 1.23. Lastly, the appendix is composed of three subsections. Subsection A.1 is dedicated to the proof of Proposition 1.10. In the last two Subsections A.2 and A.3, we state and prove technical tools.
## 2. Asymptotic expansions of the Dirichlet-Neumann operator
In this section, we perform expansions of the Dirichlet-Neumann operator with errors of order \(O(\mu\varepsilon+\mu^{2}\beta^{2})\) and \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\). The standard approach to deriving asymptotic models is to approximate the velocity potential \(\Phi\), which in turn gives an approximation of (1.4). Classically, one straightens the fluid domain to work on a flat strip, where approximations are easily made. However, if we straighten the bottom, a factor of \(\beta\) appears that yields approximations of the form \(O(\mu\varepsilon+\mu\beta)\) in the case of Whitham-Boussinesq systems and \(O(\mu^{2}\varepsilon+\mu^{2}\beta)\) in the case of Whitham-Green-Naghdi systems (see [11] for the derivation of such models).
### The transformed Laplace equation
Motivated by the previous discussion, we make a change of variable that only straightens the top of the fluid domain.
**Definition 2.1**.: _Let \(s>\frac{d}{2}+1\), \(b\in C_{c}^{\infty}(\mathbb{R}^{d})\), and \(\zeta\in H^{s}(\mathbb{R}^{d})\) be such that assumptions (1.2) and (1.10) are satisfied. We define the time-dependent diffeomorphism mapping the domain_
\[\mathcal{S}_{b}:=\{(X,z)\in\mathbb{R}^{d+1}:-1+\beta b\leq z\leq 0\},\]
_onto the water domain \(\Omega_{t}\) through_
\[\Sigma_{b}:\begin{cases}\mathcal{S}_{b}&\longrightarrow\quad\Omega_{t}\\ (X,z)&\mapsto\quad(X,z+\sigma(X,z))\end{cases}\]
_with_
\[\sigma(X,z)=\frac{\varepsilon\zeta(X)}{1-\beta b(X)}z+\varepsilon\zeta(X). \tag{2.1}\]
**Remark 2.2**.: _The map given in Definition 2.1 is a diffeomorphism. Indeed, by computing the Jacobian matrix, we find that_
\[J_{\Sigma_{b}}=\begin{pmatrix}\mathrm{Id}&\mathbf{0}\\ (\nabla_{X}\sigma)^{T}&1+\partial_{z}\sigma\end{pmatrix},\]
_where_
\[1+\partial_{z}\sigma=\frac{h}{h_{b}}.\]
_Therefore, under the non-cavitation condition as stated in Definition 1.2, we have a non-zero determinant:_
\[|J_{\Sigma_{b}}|\geq\frac{h_{\min}}{1+\beta|b|_{L^{\infty}}}.\]
The next result shows that the properties of solutions of the boundary problem (1.3) can be obtained from the study of an equivalent elliptic boundary value problem defined on \(\mathcal{S}_{b}\).
**Proposition 2.3**.: _Let \(\phi_{b}=\Phi\circ\Sigma_{b}\) where the map \(\Sigma_{b}\) is given in Definition 2.1. Then under the assumptions of Definition 1.3 we have that \(\Phi\) is a (variational, classical) solution of (1.3) if and only if \(\phi_{b}\) is a (variational, classical) solution of_
\[\begin{cases}\nabla^{\mu}_{X,z}\cdot P(\Sigma_{b})\nabla^{\mu}_{X,z}\phi_{b}=0\quad\text{in}\quad\mathcal{S}_{b}\\ \phi_{b}|_{z=0}=\psi,\quad\partial^{P_{b}}_{n_{b}}\phi_{b}|_{z=-h_{b}}=0,\end{cases} \tag{2.2}\]
_where the matrix \(P(\Sigma_{b})\) is given by_
\[P(\Sigma_{b})=|J_{\Sigma_{b}}|(I^{\mu})^{-1}J_{\Sigma_{b}}^{-1}(I^{\mu})^{2}( J_{\Sigma_{b}}^{-1})^{T}(I^{\mu})^{-1}, \tag{2.3}\]
_and the Neumann condition reads_
\[\partial^{P_{b}}_{n_{b}}\phi_{b}|_{z=-h_{b}}=\mathbf{n}_{b}\cdot I^{\mu}P( \Sigma_{b})\nabla^{\mu}_{X,z}\phi_{b}|_{z=-h_{b}}. \tag{2.4}\]
_Moreover, the matrix \(P(\Sigma_{b})\) is coercive, i.e. there exists \(c>0\) such that for all \(Y\in\mathbb{R}^{d+1}\) and any \((X,z)\in\mathcal{S}_{b}\) there holds,_
\[P(\Sigma_{b})Y\cdot Y\geq c|Y|^{2}. \tag{2.5}\]
**Remark 2.4**.: _We may compute the inverse Jacobian matrix \(J_{\Sigma_{b}}^{-1}\) so that using the expression (2.3) for \(P(\Sigma_{b})\), we find_

\[P(\Sigma_{b})=\begin{pmatrix}(1+\partial_{z}\sigma)\mathrm{Id}&-\sqrt{\mu}\nabla_{X}\sigma\\ -\sqrt{\mu}(\nabla_{X}\sigma)^{T}&\frac{1+\mu|\nabla_{X}\sigma|^{2}}{1+\partial_{z}\sigma}\end{pmatrix}.\]
_Now, since \(\sigma\) is given by (2.1) we find that_
\[1+\partial_{z}\sigma=1+\frac{\varepsilon\zeta}{h_{b}}=\frac{h}{h_{b}},\]
_and_
\[P(\Sigma_{b})=\begin{pmatrix}\frac{h}{h_{b}}\mathrm{Id}&-\sqrt{\mu}\nabla_{X} \sigma\\ -\sqrt{\mu}(\nabla_{X}\sigma)^{T}&\frac{h_{b}+\mu h_{b}|\nabla_{X}\sigma|^{2}} {h}\end{pmatrix},\]
_where_
\[\nabla_{X}\sigma=\varepsilon\nabla_{X}\Big{(}\frac{\zeta}{h_{b}}\Big{)}z+ \varepsilon\nabla_{X}\zeta. \tag{2.6}\]
Proof.: The fact that \(\nabla^{\mu}_{X,z}\cdot P(\Sigma_{b})\nabla^{\mu}_{X,z}\phi_{b}=0\) in \(\mathcal{S}_{b}\) and that \(P(\Sigma_{b})\) satisfies (2.5) is classical and we simply refer to [16], Proposition 2.25 and Lemma 2.26.
To verify the Neumann condition, we first use the chain rule to make the observation
\[\nabla^{\mu}_{X,z}\phi_{b}=I^{\mu}(J_{\Sigma_{b}})^{T}(I^{\mu})^{-1}(\nabla^{ \mu}_{X,z}\Phi)\circ\Sigma_{b}. \tag{2.7}\]
Then by (2.4), we get that
\[\partial^{P_{b}}_{n_{b}}\phi_{b} =|J_{\Sigma_{b}}|\mathbf{n}_{b}\cdot(J_{\Sigma_{b}})^{-1}I^{\mu}( \nabla^{\mu}_{X,z}\Phi)\circ\Sigma_{b}\] \[=\mathbf{n}_{b}\cdot I^{\mu}\begin{pmatrix}(1+\partial_{z}\sigma) \mathrm{Id}&\mathbf{0}\\ -(\nabla_{X}\sigma)^{T}&1\end{pmatrix}(\nabla^{\mu}_{X,z}\Phi)\circ\Sigma_{b}\] \[=\mathbf{n}_{b}\cdot I^{\mu}(\nabla^{\mu}_{X,z}\Phi)\circ\Sigma _{b}+\mathbf{n}_{b}\cdot I^{\mu}\begin{pmatrix}\partial_{z}\sigma\mathrm{Id}& \mathbf{0}\\ -(\nabla_{X}\sigma)^{T}&0\end{pmatrix}(\nabla^{\mu}_{X,z}\Phi)\circ\Sigma_{b}.\]
Now use (1.3) together with the fact that at \(z=-h_{b}\),
\[0=\partial_{n_{b}}\Phi|_{-h_{b}}=\mathbf{n}_{b}\cdot I^{\mu}(\nabla^{\mu}_{X,z }\Phi)\circ\Sigma_{b}.\]
Therefore, we are left with the expression
\[\partial^{P_{b}}_{n_{b}}\phi_{b}|_{z=-h_{b}} =\frac{1}{|\mathbf{n}_{b}|}\begin{pmatrix}-\beta\nabla_{X}b\\ 1\end{pmatrix}\cdot\begin{pmatrix}\mu\partial_{z}\sigma(\nabla_{X}\Phi)|_{z=-h _{b}}\\ -\mu\nabla_{X}\sigma\cdot(\nabla_{X}\Phi)|_{z=-h_{b}}\end{pmatrix}\] \[=-\frac{\mu}{|\mathbf{n}_{b}|}(\beta\partial_{z}\sigma\nabla_{X}b +\nabla_{X}\sigma)\cdot(\nabla_{X}\Phi)\big{|}_{z=-h_{b}}.\]
But \(\partial_{z}\sigma|_{z=-h_{b}}=\frac{\varepsilon\zeta}{h_{b}}\), \(\nabla_{X}\sigma|_{z=-h_{b}}=-\beta\frac{\varepsilon\zeta}{h_{b}}\nabla_{X}b\), and thus the proof is complete.
In the next subsection, we will make expansions of \(\phi_{b}\) and then use expression (1.5) to approximate the Dirichlet-Neumann operator. But first, we must relate the definition of \(\overline{V}^{\mu}[\varepsilon\zeta,\beta b]\psi\) to the new velocity potential on \(\mathcal{S}_{b}\).
**Proposition 2.5**.: _Let \(\Sigma_{b}\) be given in Definition 2.1. Then under the assumptions of Definition 1.4, the operator (1.5) is equivalent to the following formulation_
\[\overline{V}^{\mu}[\varepsilon\zeta,\beta b]\psi=\frac{1}{h}\int_{-1+\beta b} ^{0}\big{[}\frac{h}{h_{b}}\nabla_{X}\phi_{b}-(\varepsilon\nabla_{X}\Big{(} \frac{\zeta}{h_{b}}\Big{)}z+\varepsilon\nabla_{X}\zeta)\partial_{z}\phi_{b} \big{]}\,\mathrm{d}z, \tag{2.8}\]
_where \(\phi_{b}=\Phi\circ\Sigma_{b}\)._
Proof.: We use the new variables defined by the mapping \(\Sigma_{b}\) and the observation that
\[(\nabla_{X}\Phi)\circ\Sigma_{b}=\begin{pmatrix}\mathrm{Id}&\mathbf{0}\\ \mathbf{0}^{T}&0\end{pmatrix}(J_{\Sigma_{b}}^{-1})^{T}\nabla_{X,z}\phi_{b},\]
to get
\[\overline{V}^{\mu}[\varepsilon\zeta,\beta b]\psi =\frac{1}{h}\int_{-1+\beta b}^{0}(\nabla_{X}\Phi)\circ\Sigma_{b} \left|J_{\Sigma_{b}}\right|\mathrm{d}z\] \[=\frac{1}{h}\int_{-1+\beta b}^{0}\big{[}\frac{h}{h_{b}}\nabla_{X} \phi_{b}-\nabla_{X}\sigma\partial_{z}\phi_{b}\big{]}\,\mathrm{d}z.\]
Then using (2.1), we obtain the result.
### Multi-scale expansions
In order to make expansions of \(\phi_{b}\) we first make several observations on how to decompose system (2.2).
**Observation 2.6**.: _We can decompose the elliptic operator given in Remark 2.4 into:_
\[\frac{h}{h_{b}}\nabla_{X,z}^{\mu}\cdot P(\Sigma_{b})\nabla_{X,z}^{\mu}\phi_{b}= \Delta_{X,z}^{\mu}\phi_{b}+\mu\varepsilon A[\nabla_{X},\partial_{z}]\phi_{b},\]
_where_
\[A[\nabla_{X},\partial_{z}]\phi_{b} =\frac{\zeta}{h_{b}}\Delta_{X}\phi_{b}+\frac{h}{h_{b}}\nabla_{X} \cdot\big{(}\frac{\zeta}{h_{b}}\nabla_{X}\phi_{b}\big{)}-\frac{h}{h_{b}}\nabla _{X}\cdot\big{(}\frac{1}{\varepsilon}\nabla_{X}\sigma\partial_{z}\phi_{b} \big{)}\] \[-\frac{h}{h_{b}}\partial_{z}\big{(}\frac{1}{\varepsilon}\nabla_{X }\sigma\cdot\nabla_{X}\phi_{b}\big{)}+\partial_{z}\big{(}\frac{1}{\varepsilon }|\nabla_{X}\sigma|^{2}\partial_{z}\phi_{b}\big{)}.\]
_We may simplify this expression by using formula (2.6) for \(\nabla_{X}\sigma\) to get that_
\[A[\nabla_{X},\partial_{z}]\phi_{b} =\frac{\zeta}{h_{b}}(1+\frac{h}{h_{b}})\Delta_{X}\phi_{b}-\frac{h }{h_{b}}\nabla_{X}\zeta\cdot\nabla_{X}\partial_{z}\phi_{b} \tag{2.9}\] \[-\frac{h}{h_{b}}\nabla_{X}\cdot\big{(}\frac{1}{\varepsilon}\nabla _{X}\sigma\partial_{z}\phi_{b}\big{)}+\partial_{z}\big{(}\frac{1}{\varepsilon }|\nabla_{X}\sigma|^{2}\partial_{z}\phi_{b}\big{)}.\]
_In this formula, we emphasize the terms that do not contain \(\partial_{z}\phi_{b}\). This is because these are the leading terms in the approximations that are performed below._
**Observation 2.7**.: _Similarly, we can also decompose the Neumann condition into_
\[\frac{h}{h_{b}}|\mathbf{n}_{b}|\partial_{n_{b}}^{P_{b}}\phi_{b}|_ {z=-h_{b}} =[\partial_{z}\phi_{b}-\mu\beta\frac{h}{h_{b}}\nabla_{X}b\cdot \nabla_{X}\phi_{b}-\mu\beta^{2}\varepsilon\zeta\frac{|\nabla_{X}b|^{2}}{h_{b }}\partial_{z}\phi_{b}]|_{z=-h_{b}}\] \[=[\partial_{z}\phi_{b}-\mu\beta\nabla_{X}b\cdot\nabla_{X}\phi_{b} ]|_{z=-h_{b}}+\mu\varepsilon\beta B[\nabla_{X},\partial_{z}]\phi_{b}|_{z=-h_{ b}}.\]
_where_
\[B[\nabla_{X},\partial_{z}]\phi_{b}=-\frac{\zeta}{h_{b}}\nabla_{X}b\cdot\nabla _{X}\phi_{b}-\beta\zeta\frac{|\nabla_{X}b|^{2}}{h_{b}}\partial_{z}\phi_{b}.\]
To summarize the observations, we now have that \(\phi_{b}\) solves
\[\begin{cases}\Delta_{X,z}^{\mu}\phi_{b}=-\mu\varepsilon A[\nabla_{X},\partial _{z}]\phi_{b}\quad\text{in}\quad\mathcal{S}_{b}\\ \phi_{b}|_{z=0}=\psi,\quad[\partial_{z}\phi_{b}-\mu\beta\nabla_{X}b\cdot\nabla _{X}\phi_{b}]|_{z=-h_{b}}=\mu\varepsilon\beta B[\nabla_{X},\partial_{z}]\phi_ {b}|_{z=-h_{b}}.\end{cases} \tag{2.10}\]
**Remark 2.8**.: _In [9], the strategy is to solve (2.10) first in the case \(\varepsilon=0\), where the solution is defined in terms of the inverse of a pseudo-differential operator. If we add the parameters \(\mu\) and \(\beta\) then, in dimension one, this operator is given by_
\[\mathcal{L}^{\mu}[\beta b]=-\cosh\left((-1+\beta b(X))\sqrt{\mu}D\right)^{-1} \sinh\left(\beta b(X)\sqrt{\mu}D\right)\mathrm{sech}(\sqrt{\mu}D). \tag{2.11}\]
_Formally, in dimension one, they obtain the first order approximation:_
\[\mathcal{G}_{0}=\sqrt{\mu}D\tanh(\sqrt{\mu}D)+\sqrt{\mu}D\mathcal{L}^{\mu}[ \beta b].\]
_At higher order they obtain the expansion of \(\mathcal{G}^{\mu}\) given on the form_
\[\frac{1}{\mu}\mathcal{G}^{\mu}=\frac{1}{\mu}\sum_{j=0}^{n}\varepsilon^{j} \mathcal{G}_{j}+O(\varepsilon^{n+1}),\]
_where \(\mathcal{G}_{j}\) is defined recursively for \(j\geq 0\) and coincides with the classical expansion for small amplitude waves when \(\beta=0\)_ _[_9_]_ _(see also_ _[_16_]_ _where the approximation is proved with Sobolev bounds when \(\beta=0\)). In this paper, our approach allows us to decouple the parameters \(\mu\), \(\varepsilon\) and \(\beta\), writing expansions of the Dirichlet-Neumann operator which do not include the inversion of a pseudo-differential operator._
### Multi-scale expansions of the velocity potential \(\phi_{b}\)
We will now use (2.10) to make multi-scale expansions of \(\phi_{b}\). But first, we state an important result to justify the procedure.
**Proposition 2.9**.: _Let \(d=1,2\), \(t_{0}>\frac{d}{2}\), and \(k\in\mathbb{N}\). Let \(b\in C_{c}^{\infty}(\mathbb{R}^{d})\) and \(\zeta\in H^{\max\{t_{0}+2,k+1\}}(\mathbb{R}^{d})\) such that (1.2) and (1.10) are satisfied. Let also \(f\in H^{k,k}(\mathcal{S}_{b})\) and \(g\in H^{k}(\mathbb{R}^{d})\) be two given functions. Then the boundary value problem_
\[\begin{cases}\nabla_{X,z}^{\mu}\cdot P(\Sigma_{b})\nabla_{X,z}^{\mu}u=f\quad \text{in}\quad\mathcal{S}_{b}\\ u|_{z=0}=0,\quad\partial_{n_{b}}^{P_{b}}u|_{z=-1+\beta b}=g,\end{cases} \tag{2.12}\]
_admits a unique solution \(u\in H^{k+1,0}(\mathcal{S}_{b})\). Moreover, the solution satisfies the estimate_
\[\|\nabla_{X,z}^{\mu}u\|_{H^{k,0}(\mathcal{S}_{b})}\leq M(k+1)(|g|_{H^{k}}+\sum _{j=0}^{k}\|f\|_{H^{k-j,j}(\mathcal{S}_{b})}). \tag{2.13}\]
The proof of Proposition 2.9 is similar to that of Proposition 4.5 in [11] and is postponed to Appendix A, Subsection A.2, to ease the presentation. We may now use this result to construct \(\phi_{0}=\phi_{b}+O(\mu(\varepsilon+\beta))\) by solving the first part of the "straightened" Laplace problem with an explicit error of order \(O(\mu\beta)\) and an additional error of \(O(\mu\varepsilon)\).
**Proposition 2.10**.: _Let \(d=1,2\), \(t_{0}>\frac{d}{2}\), and \(k\in\mathbb{N}\). Let \(\psi\in\dot{H}^{k+3}(\mathbb{R}^{d})\). Let also \(b\in C_{c}^{\infty}(\mathbb{R}^{d})\) and \(\zeta\in H^{\max\{t_{0}+2,k+2\}}(\mathbb{R}^{d})\) such that (1.2) and (1.10) are satisfied. If \(\phi_{0}\) satisfies the following Laplace problem:_
\[\begin{cases}\Delta_{X,z}^{\mu}\phi_{0}=0\ \ \text{in}\ \ \mathcal{S}_{b},\\ \phi_{0}|_{z=0}=\psi,\ \ \big{[}\partial_{z}\phi_{0}-\mu\beta\nabla_{X}b \cdot\nabla_{X}\phi_{0}\big{]}\big{|}_{z=-1+\beta b}=\mu\beta\nabla_{X}\cdot \mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi,\end{cases} \tag{2.14}\]
_where_
\[\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi=-\frac{1}{\beta}\sinh\big{(}\beta b(X)\sqrt{\mu}|\mathrm{D}|\big{)}\mathrm{sech}(\sqrt{\mu}|\mathrm{D}|)\frac{1}{\sqrt{\mu}|\mathrm{D}|}\nabla_{X}\psi,\]
_then for \(z\in[-1+\beta b,0]\) its expression is given by_
\[\phi_{0}=\frac{\cosh\big{(}(z+1)\sqrt{\mu}|\mathrm{D}|\big{)}}{\cosh\big{(} \sqrt{\mu}|\mathrm{D}|\big{)}}\psi=\mathrm{F}_{0}\psi. \tag{2.15}\]
_Moreover, the solution satisfies the estimate_
\[\|\nabla_{X,z}^{\mu}(\phi_{b}-\phi_{0})\|_{H^{k,0}(\mathcal{S}_{b})}\leq\mu( \varepsilon+\beta)M(k+2)|\nabla_{X}\psi|_{H^{k+2}}. \tag{2.16}\]
Proof.: Since \(\phi_{0}\) is given by the solution of the Laplace problem when the bottom is flat, we only need to verify the boundary condition at the bottom. In fact, we have that
LHS : \[=\big{[}\partial_{z}\phi_{0}-\mu\beta\nabla_{X}b\cdot\nabla_{X} \phi_{0}\big{]}\big{|}_{z=-1+\beta b}\] \[=\mathcal{F}^{-1}\Big{(}\sqrt{\mu}|\xi|\sinh\big{(}(z+1)\sqrt{ \mu}|\xi|\big{)}\mathrm{sech}(\sqrt{\mu}|\xi|)\widehat{\psi}(\xi)\Big{)}(X) \big{|}_{z=-1+\beta b(X)}\] \[\quad\ -\mathcal{F}^{-1}\Big{(}\mu\beta\nabla_{X}b(X)\cdot i\xi \cosh\big{(}(z+1)\sqrt{\mu}|\xi|\big{)}\mathrm{sech}(\sqrt{\mu}|\xi|)\widehat {\psi}(\xi)\Big{)}(X)\big{|}_{z=-1+\beta b(X)}\] \[=-\sqrt{\mu}\nabla_{X}\cdot\big{(}\sinh\big{(}\beta b(X)\sqrt{ \mu}|\mathrm{D}|\big{)}\mathrm{sech}(\sqrt{\mu}|\mathrm{D}|\big{)}\frac{1}{| \mathrm{D}|}\nabla_{X}\psi\big{)}.\]
The next step is to prove that \(\phi_{0}\) approximates \(\phi_{b}\) with a precision of \(O(\mu(\varepsilon+\beta))\). To that end, we first note that \(u=\phi_{b}-\phi_{0}\) solves the elliptic problem (2.12) with
\[f=-\mu\varepsilon\frac{h_{b}}{h}A[\nabla_{X},\partial_{z}]\phi_{0},\]
\[g=\mu\beta\frac{h_{b}}{h|\mathbf{n}_{b}|}\big{(}\nabla_{X}\cdot\big{(}\mathcal{L}_{ 1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}+\varepsilon B[\nabla_{X},\partial_{z}] \phi_{0}\big{)}|_{z=-1+\beta b},\]
where the expressions of \(f\) and \(g\) are deduced from the decompositions of Observations 2.6 and 2.7 and the construction of \(\phi_{0}\). Moreover, since \(-h_{b}(X)>-2\) (see (1.10)), we can extend the definition of \(\phi_{0}\) to the domain \(\mathcal{S}:=\mathbb{R}^{d}\times[-2,0]\). For any \((X,z)\in\mathcal{S}\), we write
\[\phi_{0}=\frac{\cosh\left((z+1)\sqrt{\mu}|\mathrm{D}|\right)}{\cosh\left( \sqrt{\mu}|\mathrm{D}|\right)}\psi.\]
This extension is a Fourier multiplier depending on \(z\), and we can use the estimates in Proposition A.4 together with the fact that \(A[\nabla_{X},\partial_{z}]\bullet\), given by (2.9), only depends on functions of \(X\) and is polynomial in \(z\). Thus, combining the elliptic estimate (2.13) with (1.12), the non-cavitation conditions (1.2), (1.10), the product estimates for \(H^{k}(\mathbb{R}^{d})\) given by (A.9) and (A.10), we obtain that
\[\|\nabla_{X,z}^{\mu}u\|_{H^{k,0}(\mathcal{S}_{b})} \leq\mu\varepsilon M(k+1)\|\frac{h_{b}}{h}A[\nabla_{X},\partial_{z}]\phi_{0}\|_{H^{k,0}(\mathcal{S}_{b})}\] \[\quad+\mu\beta M(k+1)\big{|}\frac{\zeta}{h}\big{|}_{H^{k+2}}(|\nabla_{X}\cdot\big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}|_{H^{k}}+\big{|}B[\nabla_{X},\partial_{z}]\phi_{0}|_{z=-h_{b}}\big{|}_{H^{k}})\] \[\leq\mu(\varepsilon+\beta)M(k+2)\|\frac{h_{b}}{h}A[\nabla_{X},\partial_{z}]\phi_{0}\|_{H^{k,0}(\mathcal{S})}+\mu(\varepsilon+\beta)M(k+2)|\nabla\psi|_{H^{k+1}}\] \[\lesssim\mu(\varepsilon+\beta)M(k+2)|\nabla\psi|_{H^{k+2}}.\]
**Remark 2.11**.: _The source term \(\mu\beta\nabla_{X}\cdot\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\) in the Neumann condition of (2.14) is chosen so that the solution \(\phi_{0}\) of the system does not depend on the inverse of a pseudo-differential operator. Indeed, any other source term in the Neumann condition would induce the dependence of the solution on operators of this kind._
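For orientation, one can verify (2.15) on a single Fourier mode (an elementary check, added for the reader): if \(\widehat{\psi}\) is concentrated at the frequency \(\xi\), then

\[\Delta_{X,z}^{\mu}\phi_{0}=\big{(}\mu\Delta_{X}+\partial_{z}^{2}\big{)}\frac{\cosh\big{(}(z+1)\sqrt{\mu}|\xi|\big{)}}{\cosh\big{(}\sqrt{\mu}|\xi|\big{)}}\widehat{\psi}(\xi)e^{i\xi\cdot X}=\big{(}-\mu|\xi|^{2}+\mu|\xi|^{2}\big{)}\phi_{0}=0,\]

the surface condition \(\phi_{0}|_{z=0}=\psi\) is immediate, and the bottom condition is exactly the computation carried out in the proof above.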
We now construct the next order approximation by canceling the error of order \(O(\mu\beta)\). But first, we make an observation on the problem that needs to be solved.
**Observation 2.12**.: _To make the next order approximation \(\phi_{1}\) such that \(\phi_{b}=\phi_{0}+\mu\beta\phi_{1}+O(\mu(\varepsilon+\mu\beta^{2}))\), we solve the problem_
\[\begin{cases}\Delta_{X,z}^{\mu}\phi_{1}=\mu\beta F\ \ \text{in}\ \ \mathcal{S}_{b},\\ \phi_{1}|_{z=0}=0,\ \ \partial_{z}\phi_{1}|_{z=-1+\beta b}=-\nabla_{X}\cdot \big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)},\end{cases}\]
_where F is to be chosen and satisfies_
\[\|F\|_{H^{k,k}(\mathcal{S}_{b})}\leq M(k+2)|\nabla_{X}\psi|_{H^{k+2}}, \tag{2.17}\]
_so that formally_
\[\begin{cases}\frac{h}{h_{b}}\nabla_{X,z}\cdot P(\Sigma_{b})\nabla_{X,z}(\phi_ {0}+\mu\beta\phi_{1})=O(\mu\varepsilon+\mu^{2}\beta^{2})\ \ \text{in}\ \ \mathcal{S}_{b},\\ (\phi_{0}+\mu\beta\phi_{1})|_{z=0}=\psi,\ \ \frac{h}{h_{b}}\partial_{n_{b}}^{P_{b}}( \phi_{0}+\mu\beta\phi_{1})|_{z=-1+\beta b}=O(\mu\varepsilon+\mu^{2}\beta^{2}). \end{cases}\]
_Moreover, the presence of the source term \(\mu\beta F\) is motivated by the fact that the boundary conditions require a function of the form_
\[\phi_{1}=-h_{b}\frac{\sinh(\frac{z}{h_{b}}\sqrt{\mu}|\mathrm{D}|)}{\cosh(\sqrt{ \mu}|\mathrm{D}|)}\frac{1}{\sqrt{\mu}|\mathrm{D}|}\nabla_{X}\cdot\big{(} \mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)},\]
_for \(-h_{b}\leq z\leq 0\). Indeed, if we let \(G=\nabla_{X}\cdot\big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}\), then_
\[\partial_{z}\phi_{1}|_{z=-h_{b}} =-\mathcal{F}^{-1}\Big{(}\frac{\cosh(\frac{z}{h_{b}(X)}\sqrt{\mu} |\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\hat{G}(\xi)\Big{)}(X)|_{z=-h_{b}(X)}\] \[=-G(X).\]
_Now, let us compute the Laplace operator. To do so, we introduce the notation_
\[T_{1}(z)[X,\mathrm{D}]\bullet=\mathcal{F}^{-1}\Big{(}\frac{\sinh(\frac{z}{h_{ b}(X)}\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\hat{\bullet}\Big{)}(X),\]
_and_
\[T_{2}(z)[X,\mathrm{D}]\bullet=\mathcal{F}^{-1}\Big{(}\frac{\cosh(\frac{z}{h_{ b}(X)}\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\hat{\bullet}\Big{)}(X).\]
_Using the identity \(\Delta_{X}=-|\mathrm{D}|^{2}\), we observe that_
\[\partial_{z}^{2}\phi_{1}=\frac{\mu}{h_{b}}T_{1}(z)[X,\mathrm{D}]\frac{\Delta _{X}}{\sqrt{\mu}|\mathrm{D}|}G.\]
_Similarly, after some computations we find_
\[\mu\Delta_{X}\phi_{1} =-\mu h_{b}T_{1}(z)[X,\mathrm{D}]\frac{\Delta_{X}}{\sqrt{\mu}| \mathrm{D}|}G+\mu[h_{b}T_{1}(z)[X,D]\frac{1}{\sqrt{\mu}|D|},\Delta]G\] \[=-\mu T_{1}(z)[X,\mathrm{D}]\frac{\Delta_{X}}{\sqrt{\mu}|\mathrm{ D}|}G+\mu[h_{b}T_{1}(z)[X,D]\frac{1}{\sqrt{\mu}|D|},\Delta]G+\mu\beta bT_{1}(z)[X,\mathrm{D}]\frac{\Delta_{X}}{\sqrt{\mu}|\mathrm{D}|}G.\]
_We define \(\tilde{F}\) by_
\[\tilde{F}=\mu\big{[}h_{b}T_{1}(z)[X,D]\frac{1}{\sqrt{\mu}|D|},\Delta\big{]}G+ \mu\beta bT_{1}(z)[X,\mathrm{D}]\frac{\Delta_{X}}{\sqrt{\mu}|\mathrm{D}|}G\]
_where \(\mu\big{[}h_{b}T_{1}(z)[X,D]\frac{1}{\sqrt{\mu}|D|},\Delta\big{]}G=O(\mu\beta)\) by direct calculation. From this expression, we identify \(F\) by_
\[\Delta_{X,z}^{\mu}\phi_{1} =\mu\beta\tilde{F}+\mu(\frac{1}{h_{b}}-1)T_{1}(z)[X,\mathrm{D}] \frac{\Delta_{X}}{\sqrt{\mu}|\mathrm{D}|}G\] \[=\mu\beta\tilde{F}+\frac{\mu\beta b}{h_{b}}T_{1}(z)[X,\mathrm{D}] \frac{\Delta_{X}}{\sqrt{\mu}|\mathrm{D}|}G\] \[=\mu\beta F.\]
_The estimate (2.17) on \(F\) is a consequence of the boundedness of \(T_{1}\) and \(T_{2}\) for \(z\in[-h_{b},0]\), given by Proposition A.5, while we estimate \(\mathcal{L}_{1}^{\mu}\) in \(H^{k+2}(\mathbb{R}^{d})\) by Proposition 1.10 with inequality (1.12)._
We summarize these observations in the next Proposition.
**Proposition 2.13**.: _Let \(d=1,2\), \(t_{0}>\frac{d}{2}\), and \(k\in\mathbb{N}\). Let \(\psi\in\dot{H}^{k+4}(\mathbb{R}^{d})\). Let also \(b\in C_{c}^{\infty}(\mathbb{R}^{d})\) and \(\zeta\in H^{\max\{t_{0}+2,k+2\}}(\mathbb{R}^{d})\) such that (1.2) and (1.10) are satisfied. Then the function \(\phi_{1}\) given by_
\[\phi_{1}=-h_{b}\frac{\sinh(\frac{z}{h_{b}}\sqrt{\mu}|\mathrm{D}|)}{\cosh(\sqrt {\mu}|\mathrm{D}|)}\frac{1}{\sqrt{\mu}|\mathrm{D}|}\nabla_{X}\cdot\big{(} \mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}, \tag{2.18}\]
_satisfies_
\[\begin{cases}\Delta_{X,z}^{\mu}\phi_{1}=\mu\beta F\ \ \text{in}\ \ \mathcal{S}_{b},\\ \phi_{1}|_{z=0}=0,\ \ \partial_{z}\phi_{1}|_{z=-1+\beta b}=-\nabla_{X}\cdot \big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)},\end{cases} \tag{2.19}\]
_where \(F\in H^{k,k}(\mathcal{S}_{b})\) is such that_
\[\|F\|_{H^{k,k}(\mathcal{S}_{b})}\leq M(k+2)|\nabla_{X}\psi|_{H^{k+2}}, \tag{2.20}\]
_and_
\[\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi=-\frac{1}{\beta}\sinh \left(\beta b(X)\sqrt{\mu}|\mathrm{D}|\right)\mathrm{sech}(\sqrt{\mu}|\mathrm{ D}|)\frac{1}{\sqrt{\mu}|\mathrm{D}|}\nabla_{X}\psi.\]
_Moreover, for \(\phi_{b}\) satisfying (2.2) and \(\phi_{0}\) given by (2.15) there holds,_
\[\|\nabla_{X,z}^{\mu}(\phi_{b}-(\phi_{0}+\mu\beta\phi_{1}))\|_{H^{k,0}( \mathcal{S}_{b})}\lesssim(\mu\varepsilon+\mu^{2}\beta^{2})M(k+2)|\nabla\psi|_ {H^{k+3}}. \tag{2.21}\]
Proof.: By construction of \(\phi_{1}\) given by (2.18), we know there exists an \(F\) such that (2.20) is satisfied. Now, let us prove (2.21). First, observe that the function
\[u=\phi_{b}-(\phi_{0}+\mu\beta\phi_{1})\]
solves
\[\frac{h}{h_{b}}\nabla_{X,z}^{\mu}P(\Sigma_{b})\nabla_{X,z}^{\mu}u =-\mu\varepsilon A[\nabla_{X},\partial_{z}]\phi_{0}-\mu^{2} \varepsilon\beta A[\nabla_{X},\partial_{z}]\phi_{1}-\mu^{2}\beta^{2}F\] \[=:f.\]
Moreover, at \(z=-h_{b}\), we have the Neumann condition
\[\frac{h}{h_{b}}|\mathbf{n}_{b}|\partial_{n_{b}}^{P_{b}}u =\partial_{z}\phi_{0}-\mu\beta\nabla_{X}b\cdot\nabla_{X}\phi_{0}+\mu\varepsilon\beta B[\nabla_{X},\partial_{z}]\phi_{0}+\mu\beta\partial_{z}\phi_{1}-\mu^{2}\beta^{2}\nabla_{X}b\cdot\nabla_{X}\phi_{1}+\mu^{2}\varepsilon\beta^{2}B[\nabla_{X},\partial_{z}]\phi_{1}\] \[=\mu\varepsilon\beta B[\nabla_{X},\partial_{z}]\phi_{0}-\mu^{2}\beta^{2}\nabla_{X}b\cdot\nabla_{X}\phi_{1}+\mu^{2}\varepsilon\beta^{2}B[\nabla_{X},\partial_{z}]\phi_{1}\] \[=:g.\]
Estimating each term, noting that \(A[\nabla_{X},\partial_{z}]\) is a differential operator of order two and \(B[\nabla_{X},\partial_{z}]\) is of order one, while the error due to \(F\) is given by construction, we obtain that
\[\|\nabla_{X,z}^{\mu}u\|_{H^{k,0}(\mathcal{S}_{b})}\leq\mu(\varepsilon+ \varepsilon\beta+\mu\beta^{2})M(k+2)|\nabla\psi|_{H^{k+2}}.\]
**Observation 2.14**.: _We now construct an approximation of \(\phi_{b}\) to the order \(O(\mu(\mu\varepsilon+\varepsilon\beta+\mu\beta^{2}))\). To do so, we add a term of order \(\mu\varepsilon\) in the approximation of \(\phi_{b}\) in order to cancel the terms of order \(\mu\varepsilon\). In particular, we consider \(\phi_{2}\) solution of the problem_
\[\begin{cases}\partial_{z}^{2}\phi_{2}=-\frac{\zeta}{h_{b}}\big{(}1+\frac{h}{h_ {b}}\big{)}\Delta_{X}\psi\ \ \mathrm{in}\ \ \mathcal{S}_{b},\\ \phi_{2}|_{z=0}=0,\ \ \partial_{z}\phi_{2}|_{z=-1+\beta b}=0.\end{cases}\]
_Indeed, if we use the decomposition given by Observations (2.7) and (2.6), and the definitions of \(\phi_{0}\) and \(\phi_{1}\), we get:_
\[\frac{h}{h_{b}}\nabla_{X,z}^{\mu}\cdot P(\Sigma_{b})\nabla_{X,z}^{\mu}(\phi_{ b}-\phi_{0}-\mu\beta\phi_{1}-\mu\varepsilon\phi_{2})=-\mu\varepsilon\partial_{z}^{2 }\phi_{2}-\mu\varepsilon A[\nabla_{X},\partial_{z}]\phi_{0}+O(\mu^{2} \varepsilon),\]
_and_
\[\frac{h}{h_{b}}|\mathbf{n}_{b}|\partial_{n_{b}}^{P_{b}}(\phi_{b}-\phi_{0}-\mu \beta\phi_{1}-\mu\varepsilon\phi_{2})|_{z=-h_{b}}=-\mu\varepsilon\partial_{z} \phi_{2}|_{z=-h_{b}}+O(\mu(\mu\varepsilon+\varepsilon\beta+\mu\beta^{2})).\]
_Moreover, using the estimates in Proposition A.4 with \(t_{0}>\frac{d}{2}\), one can deduce from the definition of \(A[\nabla_{X},\partial_{z}]\bullet\), given by (2.9), that_
\[\begin{split}\operatorname{LHS}:&=\|A[\nabla_{X}, \partial_{z}]\phi_{0}-\frac{\zeta}{h_{b}}\big{(}1+\frac{h}{h_{b}}\big{)}\Delta_ {X}\psi\|_{H^{k,0}(\mathcal{S}_{b})}\\ &\lesssim\big{|}\frac{\zeta}{h_{b}}(1+\frac{h}{h_{b}})\big{|}_{H^ {\max(t_{0},k)}}\|\Delta_{X}(\phi_{0}-\psi)\|_{H^{k,0}(\mathcal{S})}\\ &\quad+\big{|}\frac{h}{h_{b}}\big{|}_{H^{\max(t_{0},k)}}|\nabla_{ X}\zeta|_{H^{\max(t_{0},k)}}\|\nabla_{X}\partial_{z}\phi_{0}\|_{H^{k,0}(\mathcal{S})} \\ &\quad+\|\frac{h}{h_{b}}\nabla_{X}\cdot\big{(}\frac{1}{\varepsilon }\nabla_{X}\sigma\partial_{z}\phi_{0}\big{)}\|_{H^{k,0}(\mathcal{S})}+\| \partial_{z}\big{(}\frac{1}{\varepsilon}|\nabla_{X}\sigma|^{2}\partial_{z} \phi_{0}\big{)}\|_{H^{k,0}(\mathcal{S})}\\ &\leq\mu M(k+2)|\nabla_{X}\psi|_{H^{k+3}},\end{split}\]
_for any \(k\in\mathbb{N}\)._
With this observation in mind, we can write the following result.
**Proposition 2.15**.: _Let \(d=1,2\), \(t_{0}>\frac{d}{2}\) and \(k\in\mathbb{N}\). Let \(\psi\in\dot{H}^{k+4}(\mathbb{R}^{d})\). Let also \(b\in C_{c}^{\infty}(\mathbb{R}^{d})\) and \(\zeta\in H^{\max\{t_{0}+2,k+2\}}(\mathbb{R}^{d})\) such that (1.2) and (1.10) are satisfied. If \(\phi_{2}\) satisfies the following Laplace problem_
\[\begin{cases}\partial_{z}^{2}\phi_{2}=-\frac{\zeta}{h_{b}}\big{(}1+\frac{h}{h_ {b}}\big{)}\Delta_{X}\psi\ \ \text{in}\ \ \mathcal{S}_{b},\\ \phi_{2}|_{z=0}=0,\ \ \partial_{z}\phi_{2}|_{z=-1+\beta b}=0.\end{cases}\]
_Then its expression is given by:_
\[\phi_{2}=-(\frac{z^{2}}{2}+h_{b}z)\frac{\zeta}{h_{b}}\big{(}1+\frac{h}{h_{b}} \big{)}\Delta_{X}\psi.\]
_Moreover, for \(\phi_{b}\) satisfying (2.2), \(\phi_{0}\) given by (2.15) and \(\phi_{1}\) given by (2.18), there holds_

\[\|\nabla_{X,z}^{\mu}(\phi_{b}-(\phi_{0}+\mu\beta\phi_{1}+\mu\varepsilon\phi_{2}))\|_{H^{k,0}(\mathcal{S}_{b})}\leq\mu(\mu\varepsilon+\varepsilon\beta+\mu\beta^{2})M(k+2)|\nabla_{X}\psi|_{H^{k+3}}.\]
Proof.: The function \(\phi_{2}\) satisfies a simple ODE, which is solved by integrating twice in \(z\):
\[\phi_{2}=\int_{z}^{0}\int_{-1+\beta b}^{z^{\prime}}\frac{\zeta}{h_{b}}\big{(} 1+\frac{h}{h_{b}}\big{)}\Delta_{X}\psi\;\mathrm{d}z^{\prime\prime}\mathrm{d}z ^{\prime}=-(\frac{z^{2}}{2}+h_{b}z)\frac{\zeta}{h_{b}}\big{(}1+\frac{h}{h_{b}} \big{)}\Delta_{X}\psi.\]
Then, by construction, we have that \(u=\phi_{b}-(\phi_{0}+\mu\beta\phi_{1}+\mu\varepsilon\phi_{2})\) satisfies
\[\begin{cases}\frac{h}{h_{b}}\nabla_{X,z}^{\mu}\cdot P(\Sigma_{b})\nabla_{X,z}^ {\mu}u=f\ \ \ \text{in}\ \ \ \mathcal{S}_{b}\\ u|_{z=0}=0,\ \ \ \frac{h}{h_{b}}|n_{b}|\partial_{n_{b}}^{P_{b}}u|_{z=-h_{b}}=g, \end{cases} \tag{2.22}\]
with
\[\begin{split} f&=-\mu\varepsilon[A[\nabla_{X},\partial_{z }]\phi_{0}-\frac{\zeta}{h_{b}}(1+\frac{h}{h_{b}})\Delta_{X}\psi]+\mu^{2}\beta^{ 2}F-\mu^{2}\varepsilon\beta A[\nabla_{X},\partial_{z}]\phi_{1}\\ &\quad-\mu^{2}\varepsilon(\Delta_{X}\phi_{2}+\varepsilon A[\nabla_{X}, \partial_{z}]\phi_{2}),\end{split}\]
and
\[\begin{split} g&=-\mu\varepsilon\beta B[\nabla_{X}, \partial_{z}]\phi_{0}|_{z=-h_{b}}+\mu^{2}\beta^{2}\nabla_{X}b\cdot\nabla_{X} \phi_{1}|_{z=-h_{b}}-\mu^{2}\varepsilon\beta^{2}B[\nabla_{X},\partial_{z}]\phi_ {1}|_{z=-h_{b}}\\ &\quad+\mu^{2}\varepsilon\beta\nabla_{X}b\cdot\nabla_{X}\phi_{2} |_{z=-h_{b}}-\mu^{2}\varepsilon^{2}\beta B[\nabla_{X},\partial_{z}]\phi_{2} |_{z=-h_{b}}.\end{split}\]
Then we use the elliptic estimate (2.13) to get that
\[\|\nabla_{X,z}^{\mu}u\|_{H^{k,0}(\mathcal{S}_{b})}\leq\mu(\mu\varepsilon+ \varepsilon\beta+\mu\beta^{2})M(k+1)(|g|_{H^{k}}+\sum_{j=0}^{k}\|f\|_{H^{k-j,j}( \mathcal{S}_{b})}),\]
and with the usual product estimates for \(H^{k}(\mathbb{R}^{d})\), combined with Observation 2.14, Proposition 2.13, and the fact that \(\phi_{2}\) is polynomial in \(z\), we get
\[\|\nabla_{X,z}^{\mu}u\|_{H^{k,0}(\mathcal{S}_{b})}\leq\mu(\mu\varepsilon+\varepsilon\beta+\mu\beta^{2})M(k+2)|\nabla_{X}\psi|_{H^{k+3}}.\]
We will now make two observations that will further simplify the presentation.
**Observation 2.16**.: _We may use Plancherel's identity and the Taylor series expansions:_
\[\cosh(x) =1+x^{2}\int_{0}^{1}\cosh(tx)(1-t)\:dt\] \[\frac{1}{\cosh(x)} =1+x^{2}\int_{0}^{1}\Big{(}\frac{\tanh(tx)^{2}}{\cosh(tx)}-\frac{1}{\cosh(tx)^{3}}\Big{)}(1-t)\:dt,\]
_for \(x\in[0,1]\), to deduce that_
\[\|(\phi_{0}-\psi)-\mu(\frac{z^{2}}{2}+z)|\mathrm{D}|^{2}\psi\|_{H^{k,0}( \mathcal{S}_{b})}\lesssim\mu^{2}||\mathrm{D}|^{4}\psi|_{H^{k}}\leq\mu^{2}| \nabla_{X}\psi|_{H^{k+3}}, \tag{2.23}\]
_with \(z\in(-h_{b},0)\) and assumption (1.10) on \(\beta b(X)\)._
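Multiplying the two expansions of Observation 2.16 (a short computation recorded for the reader's convenience) gives, with \(x=\sqrt{\mu}|\xi|\),

\[\frac{\cosh((z+1)x)}{\cosh(x)}=1+\frac{(z+1)^{2}-1}{2}x^{2}+O(x^{4})=1+\mu\Big{(}\frac{z^{2}}{2}+z\Big{)}|\xi|^{2}+O(\mu^{2}|\xi|^{4}),\]

which is exactly the leading-order term subtracted from \(\phi_{0}-\psi\) in (2.23).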
**Observation 2.17**.: _From the second-order expansions given by the previous Observation 2.16 we have_
\[\phi_{0}-\psi=\mu(\frac{z^{2}}{2}+z)|\mathrm{D}|^{2}\psi+\mu^{2}z^{2}R, \tag{2.24}\]
_where \(R\) is some generic function satisfying the estimate_
\[|R|_{H^{k}}\leq M(k)|\nabla_{X}\psi|_{H^{k+3}}. \tag{2.25}\]
_It allows us to approximate the quantity \(\phi_{0}+\mu\varepsilon\phi_{2}\):_
\[\phi_{0}+\mu\varepsilon\phi_{2} =\phi_{0}+\mu(\frac{z^{2}}{2}+h_{b}z)\frac{\varepsilon\zeta}{h_{b }}(1+\frac{h}{h_{b}})|\mathrm{D}|^{2}\psi\] \[=\phi_{0}+(\phi_{0}-\psi)\big{(}\frac{h}{h_{b}}-1\big{)}\big{(} \frac{h}{h_{b}}+1\big{)}+\mu(\mu\varepsilon+\varepsilon\beta)R\] \[=\phi_{0}+(\phi_{0}-\psi)\big{(}\frac{h^{2}}{h_{b}^{2}}-1\big{)}+ \mu(\mu\varepsilon+\varepsilon\beta)R\] \[=\psi+\frac{h^{2}}{h_{b}^{2}}(\phi_{0}-\psi)+\mu(\mu\varepsilon+ \varepsilon\beta)R.\]
We can make the formal computations in Observation 2.17 rigorous.
**Proposition 2.18**.: _Let \(d=1,2\), \(t_{0}>\frac{d}{2}\) and \(k\in\mathbb{N}\) such that \(k\geq t_{0}+1\). Let \(\psi\in\dot{H}^{k+4}(\mathbb{R}^{d})\). Let also \(b\in C_{c}^{\infty}(\mathbb{R}^{d})\) and \(\zeta\in H^{k+3}(\mathbb{R}^{d})\) such that (1.2) and (1.10) are satisfied. Lastly, let \(\phi_{\mathrm{app}}\) be defined by_
\[\phi_{\mathrm{app}}=\psi+\Big{(}\frac{h}{h_{b}}\Big{)}^{2}(\phi_{0}-\psi)+\mu \beta\phi_{1}, \tag{2.26}\]
_with \(\phi_{1}\) given by (2.18). Then for \(\phi_{b}\) satisfying (2.2) there holds,_
\[\|\nabla_{X,z}^{\mu}(\phi_{b}-\phi_{\mathrm{app}})\|_{H^{k,0}(\mathcal{S}_{b})}\lesssim\mu(\mu\varepsilon+\varepsilon\beta+\mu\beta^{2})M(k+2)|\nabla\psi|_{H^{k+3}}. \tag{2.27}\]
Proof.: We first use Proposition 2.15 to get the estimate
\[\|\nabla^{\mu}_{X,z}(\phi_{b}-\phi_{\mathrm{app}})\|_{H^{k,0}(\mathcal{S}_{b})} \lesssim\mu(\mu\varepsilon+\varepsilon\beta+\mu\beta^{2})M(k+2)|\nabla\psi|_{H^{k+3}}\] \[\quad+\|\nabla^{\mu}_{X,z}(\phi_{0}+\mu\beta\phi_{1}+\mu\varepsilon\phi_{2}-\phi_{\mathrm{app}})\|_{H^{k,0}(\mathcal{S}_{b})}.\]
Making the same approximations as in Observation 2.17 will complete the proof. In particular, accounting for the loss of derivatives given by (2.25) yields,
\[\|\nabla^{\mu}_{X,z}(\phi_{0}+\mu\beta\phi_{1}+\mu\varepsilon\phi_{2}-\phi_{ \mathrm{app}})\|_{H^{k,0}(\mathcal{S})}\lesssim\mu(\mu\varepsilon+\varepsilon \beta)M(k+1)|\nabla\psi|_{H^{k+3}}.\]
Gathering these estimates concludes the proof.
### Multi-scale expansions of \(\overline{V}\)
In this subsection we will use the expression of \(\phi_{0}\), \(\phi_{1}\), and \(\phi_{\mathrm{app}}\) to construct approximations of \(\overline{V}\). The main result is given in the following proposition.
**Proposition 2.19**.: _Let \(d=1,2\), \(t_{0}>\frac{d}{2}\) and \(s\geq 0\). Let \(b\in C^{\infty}_{c}(\mathbb{R}^{d})\) and \(\zeta\in H^{\max\{t_{0}+2,s+3\}}(\mathbb{R}^{d})\) be such that (1.2) and (1.10) are satisfied. Let \(\mathcal{L}^{\mu}_{1}[\beta b]\) and \(\mathcal{L}^{\mu}_{2}[\beta b]\) be two pseudo-differential operators defined by_
\[\mathcal{L}^{\mu}_{1}[\beta b] =-\frac{1}{\beta}\sinh\big{(}\beta b(X)\sqrt{\mu}|\mathrm{D}| \big{)}\mathrm{sech}(\sqrt{\mu}|\mathrm{D}|)\frac{1}{\sqrt{\mu}|\mathrm{D}|}\] \[\mathcal{L}^{\mu}_{2}[\beta b] =-(\mathcal{L}^{\mu}_{1}[\beta b]+b)\frac{1}{\mu|\mathrm{D}|^{2}}.\]
_Let also \(\mathrm{F}_{1}\), \(\mathrm{F}_{2}\), \(\mathrm{F}_{3}\), and \(\mathrm{F}_{4}\) be four Fourier multipliers defined by_
\[\mathrm{F}_{1}=\frac{\tanh\big{(}\sqrt{\mu}|\mathrm{D}|\big{)}}{\sqrt{\mu}|\mathrm{D}|},\quad\mathrm{F}_{2}=\frac{3}{\mu|\mathrm{D}|^{2}}(1-\mathrm{F}_{1}),\quad\mathrm{F}_{3}=\mathrm{sech}(\sqrt{\mu}|\mathrm{D}|),\quad\mathrm{F}_{4}=\frac{2}{\mu|\mathrm{D}|^{2}}(1-\mathrm{F}_{3}).\]
_Let \(\psi\in\dot{H}^{s+4}(\mathbb{R}^{d})\) and consider the approximation:_
\[\overline{V}_{0} =\frac{1}{h_{b}}\mathrm{F}_{1}\nabla_{X}\psi+\frac{\beta}{h_{b}} \mathcal{L}^{\mu}_{1}[\beta b]\nabla_{X}\psi+\frac{\mu\beta}{2}\nabla_{X} \mathrm{F}_{4}\nabla_{X}\cdot\big{(}\mathcal{L}^{\mu}_{1}[\beta b]\nabla_{X} \psi\big{)} \tag{2.28}\] \[\quad-\frac{\mu\beta^{2}}{2}\nabla_{X}\big{(}b\nabla_{X}\cdot(b \nabla_{X}\psi)\big{)}-\mu\beta^{2}(\nabla_{X}b)\nabla_{X}\cdot(b\nabla_{X} \psi).\]
_Then for \(\overline{V}\) defined by (1.5), there holds_
\[|\overline{V}-\overline{V}_{0}|_{H^{s}}\leq(\mu\varepsilon+\mu^{2}\beta^{2}) M(s+3)|\nabla_{X}\psi|_{H^{s+4}}. \tag{2.29}\]
_Furthermore, let \(\psi\in\dot{H}^{s+5}(\mathbb{R}^{d})\) and consider the approximation:_
\[\overline{V}_{\mathrm{app}} =\nabla_{X}\psi+\frac{\mu}{3h}\nabla_{X}\Big{(}\frac{h^{3}}{h_{b}^{3}}\mathrm{F}_{2}\Delta_{X}\psi\Big{)}+\frac{\mu\beta}{h}\nabla_{X}\Big{(}\frac{h^{3}}{h_{b}^{3}}\mathcal{L}^{\mu}_{2}[\beta b]\Delta_{X}\psi\Big{)}+\frac{\mu\beta}{2}\nabla_{X}\mathrm{F}_{4}\nabla_{X}\cdot\big{(}\mathcal{L}^{\mu}_{1}[\beta b]\nabla_{X}\psi\big{)} \tag{2.30}\] \[\quad-\frac{\mu\beta^{2}}{2}\nabla_{X}\big{(}b\nabla_{X}\cdot(b\nabla_{X}\psi)\big{)}-\mu\beta^{2}(\nabla_{X}b)\nabla_{X}\cdot(b\nabla_{X}\psi).\]
_Then there holds_
\[|\overline{V}-\overline{V}_{\mathrm{app}}|_{H^{s}}\leq(\mu^{2}\varepsilon+\mu \varepsilon\beta+\mu^{2}\beta^{2})M(s+3)|\nabla_{X}\psi|_{H^{s+4}}. \tag{2.31}\]
Proof.: We give the proof in four steps.
Step 1. _Construction of \(\overline{V}_{0}\)._ To construct \(\overline{V}_{0}\), we use the solution \(\phi_{0}\) given by (2.15), the solution \(\phi_{1}\) given by (2.18), and formula (2.8), formally discarding terms of order \(\mu\varepsilon\), to get that
\[h_{b}\overline{V}_{0} =\int_{-1+\beta b(X)}^{0}\nabla_{X}\phi_{0}\,\mathrm{d}z+\mu\beta \int_{-1+\beta b(X)}^{0}\nabla_{X}\phi_{1}\,\mathrm{d}z\] \[=I_{1}+I_{2}.\]
Then by direct computations, we get
\[I_{1} =\mathcal{F}^{-1}\Big{(}\int_{-1+\beta b(X)}^{0}\cosh\left((z+1) \sqrt{\mu}|\xi|\right)\mathrm{sech}(\sqrt{\mu}|\xi|)\;i\xi\hat{\psi}(\xi)\; \mathrm{d}z\Big{)}(X)\] \[=\frac{\tanh\left(\sqrt{\mu}|\mathrm{D}|\right)}{\sqrt{\mu}| \mathrm{D}|}\nabla_{X}\psi-\sinh\left(\beta b(X)\sqrt{\mu}|\mathrm{D}|\right) \mathrm{sech}(\sqrt{\mu}|\mathrm{D}|)\frac{1}{\sqrt{\mu}|\mathrm{D}|}\nabla_{ X}\psi.\]
For \(I_{2}\), we simplify the notation by defining \(G=\nabla_{X}\cdot\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\) and then make the observation
\[\mu\beta\int_{-h_{b}}^{0}\nabla_{X}\phi_{1}(X,z)\mathrm{d}z=\mu\beta h_{b}\int _{-1}^{0}(\nabla_{X}\phi_{1})(X,h_{b}z)\mathrm{d}z.\]
Then by the chain rule, we have the relation
\[\nabla_{X}(\phi_{1}(X,h_{b}z))=(\nabla_{X}\phi_{1})(X,h_{b}z)-\beta\nabla_{X}b (\partial_{z}\phi_{1})(X,h_{b}z),\]
and
\[\partial_{z}(\phi_{1}(X,h_{b}z))=h_{b}(\partial_{z}\phi_{1})(X,h_{b}z),\]
from which we obtain
\[I_{2} =\mu\beta\int_{-h_{b}}^{0}\nabla_{X}\phi_{1}(X,z)\mathrm{d}z\] \[=\mu\beta h_{b}\int_{-1}^{0}\nabla_{X}(\phi_{1}(X,h_{b}z))\mathrm{ d}z+\mu\beta^{2}(\nabla_{X}b)\int_{-1}^{0}\partial_{z}(\phi_{1}(X,h_{b}z)) \mathrm{d}z\] \[=\mu\beta h_{b}\nabla_{X}\big{(}h_{b}(1-\mathrm{sech}(\sqrt{\mu} |\mathrm{D}|))\frac{1}{\mu|\mathrm{D}|^{2}}G\big{)}-\mu\beta^{2}(\nabla_{X}b) h_{b}\frac{\tanh(\sqrt{\mu}|\mathrm{D}|)}{\sqrt{\mu}|\mathrm{D}|}G.\]
Adding these computations yields,
\[\overline{V}_{0} =\frac{1}{h_{b}}\mathrm{F}_{1}\nabla_{X}\psi+\frac{\beta}{h_{b}} \mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi+\frac{\mu\beta}{2}\nabla_{X}(h_{b }\mathrm{F}_{4}G)-\mu\beta^{2}(\nabla_{X}b)\mathrm{F}_{1}G\] \[=\frac{1}{h_{b}}\mathrm{F}_{1}\nabla_{X}\psi+\frac{\beta}{h_{b}} \mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi+\frac{\mu\beta}{2}\nabla_{X} \mathrm{F}_{4}G-\frac{\mu\beta^{2}}{2}\nabla_{X}(bG)-\mu\beta^{2}(\nabla_{X}b)G\] \[\quad+\mu^{2}\beta^{2}(R_{1}+R_{2}),\]
where \(R_{1}\) is given by
\[R_{1}=-\frac{1}{2\mu}\nabla_{X}\big{(}b(\mathrm{F}_{4}-1)G\big{)}\]
and \(R_{2}\) is given by
\[R_{2}=-\frac{\nabla_{X}b}{\mu}(\mathrm{F}_{1}-1)G.\]
Then, using the estimates in Proposition A.7 and the Sobolev embedding, we obtain that \(R_{1}\) satisfies
\[|R_{1}|_{H^{k}} \leq\frac{1}{\mu}M(k+1)|(\mathrm{F}_{4}-1)G|_{H^{k+1}}\] \[\leq M(k+1)|G|_{H^{k+3}}.\]
Then use the definition of \(G\) and (1.12) to deduce that
\[|R_{1}|_{H^{k}} \leq M(k+1)|\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi|_{H^{k+4}}\] \[\leq M(k+1)|\nabla_{X}\psi|_{H^{k+4}}. \tag{2.32}\]
The same estimate also holds for \(R_{2}\). To conclude this step, we use (1.15) to approximate \(G=\nabla_{X}\cdot(b\nabla_{X}\psi)+\mu R_{3}\), where \(|R_{3}|_{H^{k}}\leq M(k+1)|\nabla_{X}\psi|_{H^{k+4}}\). Then we have constructed the approximation:
\[\overline{V}_{0} =\int_{-1+\beta b(X)}^{0}\nabla_{X}(\phi_{0}+\mu\beta\phi_{1})\, \mathrm{d}z+\mu^{2}\beta^{2}R_{4}\] \[=\frac{1}{h_{b}}\mathrm{F}_{1}\nabla_{X}\psi+\frac{\beta}{h_{b}} \mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi+\frac{\mu\beta}{2}\nabla_{X} \mathrm{F}_{4}\nabla_{X}\cdot\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\] \[\quad-\frac{\mu\beta^{2}}{2}\nabla_{X}\big{(}b\nabla_{X}\cdot(b \nabla_{X}\psi)\big{)}-\mu\beta^{2}(\nabla_{X}b)\nabla_{X}\cdot(b\nabla_{X} \psi),\]
for some \(R_{4}\) satisfying (2.32).
Step 2. We will now prove the estimate on \(\overline{V}-\overline{V}_{0}\) for \(k\in\mathbb{N}\), and then use interpolation for \(s\geq 0\). First, define the approximation
\[\phi_{\mathrm{app}}^{1}=\phi_{0}+\mu\beta\phi_{1},\]
and let \(R\) be the function \(R_{4}\) constructed in the previous step, satisfying estimate (2.32). Then we have that
\[|\overline{V}-\overline{V}_{0}|_{H^{k}}=\Big{|}\int_{-1+\beta b}^{0}\big{[} \frac{1}{h_{b}}\nabla_{X}(\phi_{b}-\phi_{\mathrm{app}}^{1})-\frac{1}{h}( \varepsilon\nabla_{X}\Big{(}\frac{\zeta}{h_{b}}\Big{)}z+\varepsilon\nabla_{X} \zeta)\partial_{z}\phi_{b}\big{]}\,\mathrm{d}z\Big{|}_{H^{k}}+\mu^{2}\beta^{2}| R|_{H^{k}}.\]
Now, since \(h_{b}\) and \(h\) are functions of \(X\) alone and satisfy (1.2) and (1.10), we can use (A.9), (A.10), and (2.32) to get that
\[|\overline{V}-\overline{V}_{0}|_{H^{k}} \lesssim\Big{|}\frac{1}{h_{b}}\int_{-1+\beta b}^{0}\nabla_{X}( \phi_{b}-\phi_{\mathrm{app}}^{1})\,\mathrm{d}z\Big{|}_{H^{k}}+\varepsilon \Big{|}\frac{1}{h}\nabla_{X}\Big{(}\frac{\zeta}{h_{b}}\Big{)}\int_{-1+\beta b}^ {0}z\partial_{z}\phi_{b}\,\mathrm{d}z\Big{|}_{H^{k}}\] \[\quad+\varepsilon\Big{|}\frac{1}{h}\nabla_{X}\zeta\int_{-1+\beta b }^{0}\partial_{z}\phi_{b}\,\mathrm{d}z\Big{|}_{H^{k}}+\mu^{2}\beta^{2}|R|_{H^{ k}}\] \[\leq M(k)\|\nabla_{X,z}^{\mu}(\phi_{b}-\phi_{\mathrm{app}}^{1})\|_{H ^{k+1,0}(\mathcal{S}_{b})}+\varepsilon M(k+1)\|\partial_{z}\phi_{b}\|_{H^{k,0} (\mathcal{S}_{b})}\] \[\quad+M(k)\sum_{j=1}^{k}\|\nabla_{X,z}^{\mu}\partial_{z}^{j-1}( \phi_{b}-\phi_{\mathrm{app}}^{1})\|_{H^{k-j+1,0}(\mathcal{S}_{b})}+\varepsilon M (k+1)\sum_{j=1}^{k}\|\partial_{z}^{j+1}\phi_{b}\|_{H^{k-j,0}(\mathcal{S}_{b})}\] \[\quad+\mu^{2}\beta^{2}M(k+1)|\nabla_{X}\psi|_{H^{k+4}}\] \[=II_{1}+II_{2}+II_{3}+II_{4}+II_{5}.\]
We will now estimate each term. To estimate \(II_{1}\), we apply (2.21) to get that
\[II_{1}\leq\mu(\varepsilon+\varepsilon\beta+\mu\beta^{2})M(k+3)|\nabla_{X}\psi|_ {H^{k+4}}.\]
To estimate \(II_{2}\), we use Proposition A.4 to see that \(|\partial_{z}\phi_{0}|_{H^{k}}\lesssim\mu|\nabla_{X}\psi|_{H^{k+1}}\) and combine it with (2.16) to get the estimate,
\[II_{2} \leq\varepsilon M(k+1)(\|\nabla_{X,z}^{\mu}(\phi_{b}-\phi_{0})\|_{H^{k+1,0}}+\|\partial_{z}\phi_{0}\|_{H^{k,0}})\] \[\leq\mu\varepsilon M(k+3)|\nabla_{X}\psi|_{H^{k+3}}.\]
Finally, we deal with \(II_{3}\) and \(II_{4}\). To that end, we need to trade the \(\partial_{z}\)-derivatives for derivatives in the horizontal variable by relating the functions through an elliptic problem. We introduce the notation
\[f\sim g\iff f(X,z)=r(X)g(X,z), \tag{2.33}\]
with \(r\in H^{k}(\mathbb{R}^{d})\) such that \(|r|_{H^{k}}\leq M(k+1)\). Then, by construction, we have from (2.9) that
\[(1+\mu|\nabla_{X}\sigma|^{2})\partial_{z}^{2}\phi_{b} =-\mu\Delta_{X}\phi_{b}-\mu\varepsilon(A[\nabla_{X},\partial_{z} ]\phi_{b}-\frac{1}{\varepsilon}|\nabla_{X}\sigma|^{2}\partial_{z}^{2}\phi_{b})\] \[=:-\mu\Delta_{X}\phi_{b}-\mu\varepsilon\tilde{A}[\nabla_{X}, \partial_{z}]\phi_{b},\]
where \(\nabla_{X}\sigma\) is given by (2.6) and is of the form
\[\nabla_{X}\sigma\sim\varepsilon(1+z),\]
while \(\tilde{A}[\nabla_{X},\partial_{z}]\) is of the form
\[\tilde{A}[\nabla_{X},\partial_{z}]\phi_{b}\sim\Delta_{X}\phi_{b}+(1+z)\nabla_{ X}f\cdot\nabla_{X}\partial_{z}\phi_{b}+z\partial_{z}\phi_{b},\]
for some function \(f\in H^{k+3}(\mathbb{R}^{d})\). Similarly, for \(\phi_{\text{app}}^{1}=\phi_{0}+\mu\beta\phi_{1}\) defined by (2.14) and (2.19), we have the relation
\[(1+\mu|\nabla_{X}\sigma|^{2})\partial_{z}^{2}(\phi_{b}-\phi_{ \text{app}}^{1}) =\mu\Delta_{X}(\phi_{b}-\phi_{\text{app}}^{1})-\mu\varepsilon\tilde {A}[\nabla_{X},\partial_{z}](\phi_{b}-\phi_{\text{app}}^{1})-\mu\varepsilon \tilde{A}[\nabla_{X},\partial_{z}]\phi_{\text{app}}^{1}\] \[\quad-\mu|\nabla_{X}\sigma|^{2}\partial_{z}^{2}\phi_{\text{app} }^{1}+\mu^{2}\beta^{2}F.\]
where \(F\) is some function satisfying (2.20) and is absorbed into the remainder. Consequently, we can trade two derivatives in \(z\) for \(\Delta_{X}\), \(\nabla_{X}\partial_{z}\), and \(\partial_{z}\). From that point, we can deduce that for \(k\geq 3\), we have
\[\partial_{z}^{k}(\phi_{b}-\phi_{\text{app}}^{1})\sim\mu\sum_{\gamma\in\mathbb{N }^{d}\;|\gamma|\leq k-1}\partial_{X}^{\gamma}\partial_{z}\big{(}(\phi_{b}-\phi_ {\text{app}}^{1})-\varepsilon\phi_{\text{app}}^{1}\big{)}+\sum_{j=1}^{k}\mu \varepsilon^{2}\partial_{z}^{j}\phi_{\text{app}}^{1},\]
From this relation, we control the residual terms \(r(X)\) in (2.33) with the product estimate (A.9), and then combine it with (2.21) and (A.6) to get
\[II_{3} \leq M(k+1)(\|\nabla_{X,z}^{\mu}(\phi_{b}-\phi_{\text{app}}^{1}) \|_{H^{k+1,0}(\mathcal{S}_{b})}+\mu\varepsilon|\nabla_{X}\psi|_{H^{k+1}})\] \[\leq\mu(\varepsilon+\varepsilon\beta+\mu\beta^{2})M(k+3)|\nabla_{ X}\psi|_{H^{k+3}}.\]
To conclude, we estimate \(II_{4}\). Since a factor of \(\varepsilon\) already appears, we only need to introduce \(\phi_{0}\), and we obtain
\[II_{4} =\varepsilon M(k+1)\sum_{j=1}^{k}\Big{(}\|\partial_{z}^{j+1}( \phi_{b}-\phi_{0})\|_{H^{k-j,0}(\mathcal{S}_{b})}+\|\partial_{z}^{j+1}\phi_{0} \|_{H^{k-j,0}(\mathcal{S}_{b})}\Big{)}\] \[\leq\varepsilon(\mu\varepsilon+\mu\beta+\mu)M(k+3)|\nabla_{X}\psi |_{H^{k+3}}.\]
Step 3. _Construction of \(\overline{V}_{\rm app}\)_. The next step is to construct \(\overline{V}_{\rm app}\) by replacing \(\phi_{b}\) with \(\phi_{\rm app}\) in (2.8):
\[\overline{V}_{\rm app}=\int_{-1+\beta b}^{0}\big{[}\frac{1}{h_{b}}\nabla_{X}\phi _{\rm app}-\frac{1}{h}(\varepsilon\nabla_{X}\Big{(}\frac{\zeta}{h_{b}}\Big{)}z +\varepsilon\nabla_{X}\zeta)\partial_{z}\phi_{\rm app}\big{]}\;{\rm d}z. \tag{2.34}\]
Then using (2.26), we obtain that
\[\overline{V}_{\rm app} =\int_{-1+\beta b}^{0}\frac{1}{h_{b}}\nabla_{X}\psi\,{\rm d}z+ \int_{-1+\beta b}^{0}\frac{1}{h_{b}}\nabla_{X}\Big{(}\frac{h^{2}}{h_{b}^{2}}( \phi_{0}-\psi)\Big{)}\;{\rm d}z\] \[\quad+\mu\beta\frac{1}{h_{b}}\int_{-1+\beta b}^{0}\nabla_{X}\phi _{1}{\rm d}z-\mu\varepsilon\beta\int_{-1+\beta b}^{0}\frac{1}{h}(z\nabla_{X} \big{(}\frac{\zeta}{h_{b}})+\nabla_{X}\zeta)\partial_{z}\phi_{1}\;{\rm d}z\] \[=III_{1}+III_{2}+III_{3}+III_{4}+III_{5}.\]
Clearly, \(III_{1}=\nabla_{X}\psi\) and to compute \(III_{2}+III_{3}\) we use formula (2.15) for \(\phi_{0}\):
\[III_{2}+III_{3} =\frac{1}{h}\nabla_{X}\Big{(}\frac{h^{3}}{h_{b}^{3}}\frac{\tanh{( \sqrt{\mu}|{\rm D}|)}}{\sqrt{\mu}|{\rm D}|}\psi\Big{)}\] \[\quad-\frac{1}{h}\nabla_{X}\Big{(}\frac{h^{3}}{h_{b}^{3}}\Big{(} \sinh{(\beta b(X)\sqrt{\mu}|{\rm D}|)}{\rm sech}(\sqrt{\mu}|{\rm D}|)\frac{1} {\sqrt{\mu}|{\rm D}|}\psi-(-1+\beta b)\psi\Big{)}\Big{)}\] \[\quad-\varepsilon\beta\zeta h\frac{\nabla_{X}b}{h_{b}^{3}}\Big{(} \cosh{(\beta b(X)\sqrt{\mu}|{\rm D}|)}{\rm sech}(\sqrt{\mu}|{\rm D}|)-1\Big{)}\psi\] \[=\frac{\mu}{3h}\nabla_{X}\Big{(}\frac{h^{3}}{h_{b}^{3}}\frac{3}{ \mu|{\rm D}|^{2}}\Big{(}1-\frac{\tanh{(\sqrt{\mu}|{\rm D}|)}}{\sqrt{\mu}|{\rm D }|}\Big{)}\Delta_{X}\psi\Big{)}\] \[\quad-\frac{1}{h}\nabla_{X}\Big{(}\frac{h^{3}}{h_{b}^{3}}\Big{(} \sinh{(\beta b(X)\sqrt{\mu}|{\rm D}|)}{\rm sech}(\sqrt{\mu}|{\rm D}|)\frac{1} {\sqrt{\mu}|{\rm D}|}\psi-\beta b\psi\Big{)}\Big{)}+\mu\varepsilon\beta R_{5},\]
where \(R_{5}\) is given by
\[R_{5}=-\zeta h\frac{\nabla_{X}b}{h_{b}^{3}}\Big{(}\cosh{(\beta b(X)\sqrt{\mu}| {\rm D}|)}{\rm sech}(\sqrt{\mu}|{\rm D}|)-1\Big{)}\frac{1}{\mu|{\rm D}|^{2}} \Delta_{X}\psi.\]
Moreover, using the algebra property of the Sobolev spaces (A.9), (A.10), and estimate (1.14), we have that
\[|R_{5}|_{H^{k}}\leq M(k+1)|\nabla_{X}\psi|_{H^{k+1}}. \tag{2.35}\]
Next, we see that \(III_{4}\) is already treated in Step 1 and satisfies:
\[III_{4} =\frac{\mu\beta}{2}\nabla_{X}{\rm F}_{4}\nabla_{X}\cdot\mathcal{ L}_{1}^{\mu}[\beta b]\nabla_{X}\psi-\frac{\mu\beta^{2}}{2}\nabla_{X}\big{(}b \nabla_{X}\cdot(b\nabla_{X}\psi)\big{)}-\mu\beta^{2}(\nabla_{X}b)\nabla_{X} \cdot(b\nabla_{X}\psi)\] \[\quad+\mu^{2}\beta^{2}R_{6},\]
for some function \(R_{6}\) satisfying (2.32). Lastly, for the term \(III_{5}\), we use integration by parts to find the expressions
\[III_{5} =\mu\varepsilon\beta\int_{-1+\beta b}^{0}\frac{1}{h}(z\nabla_{X}( \frac{\zeta}{h_{b}})+\nabla_{X}\zeta)\partial_{z}\phi_{1}\,\mathrm{d}z\] \[=-\mu\varepsilon\beta\frac{h_{b}^{2}}{h}\nabla_{X}(\frac{\zeta}{ h_{b}})\Big{(}\frac{1}{2}\mathrm{F}_{4}\nabla_{X}\cdot(\mathcal{L}_{1}^{\mu}[ \beta b]\nabla_{X}\psi)+\frac{\tanh(\sqrt{\mu}|\mathrm{D}|)}{\sqrt{\mu}| \mathrm{D}|}\nabla_{X}\cdot(\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi)\Big{)}\] \[\quad-\mu\varepsilon\beta\frac{h_{b}}{h}\nabla_{X}\zeta\frac{ \tanh(\sqrt{\mu}|\mathrm{D}|)}{\sqrt{\mu}|\mathrm{D}|}\nabla_{X}\cdot( \mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi)\]
The multipliers are bounded on \(H^{k}(\mathbb{R}^{d})\), and combining this with Proposition 1.10 we get that
\[|III_{5}|_{H^{k}}\leq\mu\varepsilon\beta M(k+1)|\nabla_{X}\psi|_{H^{k+1}}.\]
Adding these identities in the definition of \(\overline{V}_{\mathrm{app}}\) we get that
\[\overline{V}_{\mathrm{app}} =\int_{-1+\beta b}^{0}\big{[}\frac{1}{h_{b}}\nabla_{X}\phi_{ \mathrm{app}}-\frac{1}{h}(z\varepsilon\nabla_{X}(\frac{\zeta}{h_{b}})+ \varepsilon\nabla_{X}\zeta)\partial_{z}\phi_{\mathrm{app}}\big{]}\,\mathrm{d}z+ \mu\varepsilon\beta R_{7}\] \[=\nabla_{X}\psi+\frac{\mu}{h}\nabla_{X}\Big{(}\frac{h^{3}}{h_{b} ^{3}}\mathrm{F}_{2}\psi\Big{)}+\frac{\mu\beta}{h}\nabla_{X}\Big{(}\frac{h^{3}} {h_{b}^{3}}\mathcal{L}_{2}^{\mu}[\beta b]\psi\Big{)}+\frac{\mu\beta}{2}\nabla_{ X}\mathrm{F}_{4}\nabla_{X}\cdot\big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X} \psi\big{)}\] \[\quad-\frac{\mu\beta^{2}}{2}b\nabla_{X}\big{(}\nabla_{X}\cdot(b \nabla_{X}\psi)\big{)}-\frac{\mu\beta^{2}}{2}(\nabla_{X}b)\nabla_{X}\cdot(b \nabla_{X}\psi),\]
where \(R_{7}\) is some generic function satisfying \(|R_{7}|_{H^{k}}\leq M(k+1)|\nabla_{X}\psi|_{H^{k+4}}\).
Step 4. _Proof of (2.31)._ We use the definition (2.34) of \(\overline{V}_{\mathrm{app}}\) and (A.8) to identify the terms
\[|\overline{V}-\overline{V}_{\mathrm{app}}|_{H^{k}} =\Big{|}\int_{-1+\beta b}^{0}\big{[}\frac{1}{h_{b}}\nabla_{X}( \phi_{b}-\phi_{\mathrm{app}})-\frac{1}{h}(\varepsilon\nabla_{X}\Big{(}\frac{ \zeta}{h_{b}}\Big{)}z+\varepsilon\nabla_{X}\zeta)\partial_{z}(\phi_{b}-\phi_{ \mathrm{app}})\big{]}\,\mathrm{d}z\Big{|}_{H^{k}}\] \[\leq M(k)\|\nabla_{X,z}^{\mu}(\phi_{b}-\phi_{\mathrm{app}})\|_{H^ {k+1,0}(\mathcal{S}_{b})}+M(k+1)\|\partial_{z}(\phi_{b}-\phi_{\mathrm{app}}) \|_{H^{k,0}(\mathcal{S}_{b})}\] \[\quad+M(k)\sum_{j=1}^{k}\|\nabla_{X,z}^{\mu}\partial_{z}^{j-1}( \phi_{b}-\phi_{\mathrm{app}})\|_{H^{k-j+1,0}(\mathcal{S}_{b})}\] \[\quad+\varepsilon M(k+1)\sum_{j=1}^{k}\|\partial_{z}^{j+1}(\phi_{b }-\phi_{\mathrm{app}})\|_{H^{k-j,0}(\mathcal{S}_{b})}\] \[=IV_{1}+IV_{2}+IV_{3}+IV_{4}.\]
For the two first terms we use estimate (2.27) to get that
\[IV_{1}+IV_{2}\leq(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})M(k +3)|\nabla_{X}\psi|_{H^{k+4}}.\]
For the estimate of \(IV_{3}\) and \(IV_{4}\), we will use the same ideas that we used for \(II_{3}\) and \(II_{4}\). We first note that we only need to work with
\[\phi_{\mathrm{app}}^{2}:=\phi_{0}+\mu\beta\phi_{1}+\mu\varepsilon\phi_{2},\]
constructed in Propositions 2.10, 2.13 and 2.15. Indeed, by Observation 2.17 we used the approximation (2.24), which depends polynomially on \(z\), so formula (2.26) is related to \(\phi_{\mathrm{app}}^{2}\) through the relation
\[\partial_{z}^{k}(\phi_{\mathrm{app}}^{2}-\phi_{\mathrm{app}})\sim\mu(\mu \varepsilon+\varepsilon\beta)\partial_{z}^{k}(z^{2}R), \tag{2.36}\]
for \(k\geq 1\) and where \(R=R(X)\) satisfies (2.25). Then by definition of \(\phi_{0}\), \(\phi_{1}\), and \(\phi_{2}\) we have that
\[\partial_{z}^{2}(\phi_{b}-\phi_{\text{app}}^{2}) =-\mu\Delta_{X}(\phi_{b}-\phi_{0}-\mu\beta\phi_{1})-\mu\varepsilon A [\nabla_{X},\partial_{z}](\phi_{b}-\phi_{0})-\mu\varepsilon(A[\nabla_{X}, \partial_{z}]\phi_{0}+\partial_{z}^{2}\phi_{2})\] \[=-\mu\Delta_{X}(\phi_{b}-\phi_{0}-\mu\beta\phi_{1})-\mu\varepsilon \tilde{A}[\nabla_{X},\partial_{z}](\phi_{b}-\phi_{0})-\mu\varepsilon(A[\nabla_{ X},\partial_{z}]\phi_{0}+\partial_{z}^{2}\phi_{2})\] \[\quad-\mu|\nabla_{X}\sigma|^{2}\partial_{z}^{2}(\phi_{b}-\phi_{0} -\mu\beta\phi_{1}-\mu\varepsilon\phi_{2}))-\mu^{2}\beta|\nabla_{X}\sigma|^{2 }\partial_{z}^{2}\phi_{1}+\mu^{2}\varepsilon|\nabla_{X}\sigma|^{2}\partial_{z }^{2}\phi_{2},\]
so that
\[(1+\mu|\nabla_{X}\sigma|^{2})\partial_{z}^{2}(\phi_{b}-\phi_{ \text{app}}^{2}) =-\mu\Delta_{X}(\phi_{b}-\phi_{0}-\mu\beta\phi_{1})-\mu\varepsilon \tilde{A}[\nabla_{X},\partial_{z}](\phi_{b}-\phi_{0})\] \[\quad-\mu\varepsilon(A[\nabla_{X},\partial_{z}]\phi_{0}+\partial_ {z}^{2}\phi_{2})-\mu^{2}|\nabla_{X}\sigma|^{2}\partial_{z}^{2}(\beta\phi_{1}+ \varepsilon\partial_{z}^{2}\phi_{2}).\]
Here the derivatives of \(\phi_{1}\) are bounded using Proposition A.5, and by the definition of \(\sigma\), given by (2.6), we have that
\[\mu^{2}\beta|\nabla_{X}\sigma|^{2}\partial_{z}^{2}\phi_{1}\sim\mu^{2} \varepsilon^{2}\beta\partial_{z}^{2}\phi_{1}.\]
Moreover, since \(\phi_{2}\) is polynomial in \(z\), we can use the notation (2.33) above to write the last term as
\[\mu^{2}\varepsilon|\nabla_{X}\sigma|^{2}\partial_{z}^{2}\phi_{2}\sim\mu^{2} \varepsilon^{3}(1+z+z^{2}).\]
Also, we see from Observation 2.14 that
\[\mu\varepsilon(A[\nabla_{X},\partial_{z}]\phi_{0}+\partial_{z}^{2}\phi_{2}) \sim\mu\varepsilon\Delta_{X}(\phi_{0}-\psi)+\mu\varepsilon(1+z)\nabla_{X}f \cdot\nabla_{X}\partial_{z}\phi_{0}+\mu\varepsilon z\partial_{z}\phi_{0},\]
for some \(f\in H^{k+3}(\mathbb{R}^{d})\). Then arguing as in Step 2, we get the induction relation for \(k\geq 3\):
\[\partial_{z}^{k}(\phi_{b}-\phi_{\text{app}}^{2}) \sim\mu\sum_{\gamma\in\mathbb{N}^{d}\;|\gamma|\leq k-1}\partial_{ X}^{\gamma}\partial_{z}\Big{(}(\phi_{b}-\phi_{\text{app}}^{1})+\varepsilon( \phi_{b}-\phi_{0})\Big{)}\] \[\quad+\mu\varepsilon\sum_{j=1}^{k-2}\partial_{z}^{j}\big{(}\Delta _{X}\phi_{0}+\nabla_{X}f\cdot\nabla_{X}\partial_{z}\phi_{0}+\partial_{z}\phi_ {0}\big{)}+\mu^{2}\varepsilon^{2}\beta\sum_{j=1}^{k}\partial_{z}^{j}\phi_{1}.\]
As a result, we use these estimates with the product estimate (A.9), (A.4), and (A.5) to obtain the bound
\[\sum_{j=1}^{k}\|\partial_{z}^{j+1}(\phi_{b}-\phi_{\text{app}}^{2} )\|_{H^{k-j}(\mathcal{S}_{b})} \lesssim\mu M(k+1)\Big{(}\|\partial_{z}(\phi_{b}-\phi_{\text{app}}^ {1})\|_{H^{k,0}(\mathcal{S}_{b})}+\varepsilon\|\partial_{z}(\phi_{b}-\phi_{0} )\|_{H^{k,0}(\mathcal{S}_{b})}\] \[+\mu^{2}\varepsilon|\nabla_{X}\psi|_{H^{k+1}}+\mu^{2}\varepsilon^ {2}\beta|\nabla_{X}\cdot\mathcal{L}_{1}^{\mu}[\beta b]\nabla\psi|_{H^{k+1}} \Big{)},\]
from which the estimate on \(IV_{4}\) follows by (1.12), the relation (2.36) with estimate (2.25), and then (2.2):
\[IV_{4} \leq M(k)\sum_{j=1}^{k}\|\nabla_{X,z}^{\mu}\partial_{z}^{j-1}( \phi_{b}-\phi_{\text{app}}^{2})\|_{H^{k-j+1,0}(\mathcal{S}_{b})}+\mu(\mu \varepsilon+\varepsilon\beta)|\nabla_{X}\psi|_{H^{k+4}}\] \[\leq\mu(\mu\varepsilon+\varepsilon\beta)M(k+2)|\nabla_{X}\psi|_{H^ {k+4}}.\]
The same estimate holds for \(IV_{3}\), which completes the proof.
### Multi-scale expansions of \(\mathcal{G}^{\mu}\)
In this section, we give the expansions of the Dirichlet-Neumann operator. We will use that \(\mathcal{G}^{\mu}\) is directly related to \(\overline{V}\) through (1.4) and (2.8). In particular, we have the following result:
**Proposition 2.20**.: _Under the assumptions of Proposition 2.19, we can define the approximations_
\[\frac{1}{\mu}\mathcal{G}_{0}\psi =-\mathrm{F}_{1}\Delta_{X}\psi-\beta(1+\frac{\mu}{2}\mathrm{F}_{4 }\Delta_{X})\nabla_{X}\cdot\big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X} \psi\big{)}-\varepsilon\nabla_{X}\cdot\big{(}\zeta\mathrm{F}_{1}\nabla_{X} \psi\big{)}\] \[\quad+\frac{\mu\beta^{2}}{2}\nabla_{X}\cdot\big{(}\mathcal{B}[ \beta b]\nabla_{X}\psi\big{)},\]
_and_
\[\frac{1}{\mu}\mathcal{G}_{1}\psi =-\nabla_{X}\cdot(h\nabla_{X}\psi)-\frac{\mu}{3}\Delta_{X}\Big{(} \frac{h^{3}}{h_{b}^{3}}\mathrm{F}_{2}\Delta_{X}\psi\Big{)}-\mu\beta\Delta_{X} \big{(}\mathcal{L}_{2}^{\mu}[\beta b]\Delta_{X}\psi\big{)} \tag{2.37}\] \[\quad-\frac{\mu\beta}{2}\mathrm{F}_{4}\Delta_{X}\nabla_{X}\cdot \big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}+\frac{\mu\beta^{2} }{2}\nabla_{X}\cdot\big{(}\mathcal{B}[\beta b]\nabla_{X}\psi\big{)}, \tag{2.38}\]
_where_
\[\mathcal{B}[\beta b]\nabla_{X}\psi =b\nabla_{X}(\nabla_{X}\cdot(b\nabla_{X}\psi)) \tag{2.39}\] \[\quad+h_{b}\nabla_{X}\big{(}b\nabla_{X}\cdot(b\nabla_{X}\psi))+2 h_{b}(\nabla_{X}b)\nabla_{X}\cdot(b\nabla_{X}\psi).\]
_Moreover, we have the following estimates on the Dirichlet-Neumann operator_
\[\frac{1}{\mu}|\mathcal{G}^{\mu}\psi-\mathcal{G}_{0}\psi|_{H^{s}} \leq(\mu\varepsilon+\mu^{2}\beta^{2})M(s+3)|\nabla_{X}\psi|_{H^{s+ 5}} \tag{2.40}\] \[\frac{1}{\mu}|\mathcal{G}^{\mu}\psi-\mathcal{G}_{1}\psi|_{H^{s}} \leq(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})M(s+ 3)|\nabla_{X}\psi|_{H^{s+5}}. \tag{2.41}\]
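Before giving the proof, it may help to compare the two precisions on a concrete scaling: in a long-wave regime with, say, \(\varepsilon=O(\mu)\) and \(\beta=O(\sqrt{\mu})\), the right-hand sides of (2.40) and (2.41) are of size

\[\mu\varepsilon+\mu^{2}\beta^{2}=O(\mu^{2}),\qquad\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2}=O(\mu^{5/2}),\]

so that \(\mathcal{G}_{1}\) improves on \(\mathcal{G}_{0}\) by half an order in \(\mu\). These two accuracies are the ones used to derive the Whitham-Boussinesq systems of Section 3 and the Whitham-Green-Naghdi systems of Section 4, respectively.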
Proof.: To prove inequality (2.40), we introduce a generic function \(R\) such that
\[|R|_{H^{s}}\leq M(s+3)|\nabla_{X}\psi|_{H^{s+5}}. \tag{2.42}\]
Then note that the first two terms in \(\mathcal{G}_{0}\) are obtained from the first two terms in \(\overline{V}_{0}\). Indeed, let \(G=\nabla_{X}\cdot\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\) and use formula (2.28) to observe that
\[\frac{1}{\mu}\mathcal{G}_{0} =-\nabla_{X}\cdot(h\overline{V}_{0})\] \[=-\nabla_{X}\cdot\Big{(}\frac{h}{h_{b}}\mathrm{F}_{1}\nabla_{X} \psi\Big{)}-\beta\nabla_{X}\cdot\Big{(}\frac{h}{h_{b}}\mathcal{L}_{1}^{\mu}[ \beta b]\nabla_{X}\psi\Big{)}\] \[\quad-\frac{\mu\beta}{2}\nabla_{X}\cdot(h\mathrm{F}_{4}\nabla_{X }G)+\frac{\mu\beta^{2}}{2}\nabla_{X}\cdot\Big{(}h\big{(}\nabla_{X}\big{(}b \nabla_{X}\cdot(b\nabla_{X}\psi)\big{)}+2(\nabla_{X}b)\nabla_{X}\cdot(b \nabla_{X}\psi)\big{)}\Big{)}\] \[=\mathrm{RHS}_{1}+\mathrm{RHS}_{2}+\mathrm{RHS}_{3},\]
where
\[\mathrm{RHS}_{1}: =-\nabla_{X}\cdot\Big{(}\frac{h}{h_{b}}\mathrm{F}_{1}\nabla_{X} \psi\Big{)}-\beta\nabla_{X}\cdot\Big{(}\frac{h}{h_{b}}\mathcal{L}_{1}^{\mu}[ \beta b]\nabla_{X}\psi\Big{)}\] \[=-\mathrm{F}_{1}\Delta_{X}\psi-\varepsilon\nabla_{X}\cdot\big{(} \frac{\zeta}{h_{b}}\mathrm{F}_{1}\nabla_{X}\psi\big{)}-\beta\nabla_{X}\cdot \big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}-\varepsilon\beta \nabla_{X}\cdot\big{(}\frac{\zeta}{h_{b}}\mathcal{L}_{1}^{\mu}[\beta b]\nabla _{X}\psi\big{)},\]
and we use (1.15) to get the approximation
\[\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi=-\beta b\nabla_{X}\psi+\mu R.\]
Then we obtain that
\[\text{RHS}_{1} =-\text{F}_{1}\Delta_{X}\psi-\beta\nabla_{X}\cdot\big{(}\mathcal{L}_ {1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}-\varepsilon\nabla_{X}\cdot\big{(}\frac{ \zeta}{h_{b}}\text{F}_{1}\nabla_{X}\psi\big{)}+\varepsilon\nabla_{X}\cdot\big{(} \frac{\zeta}{h_{b}}\beta b\nabla_{X}\psi\big{)}+\mu\varepsilon R\] \[=-\text{F}_{1}\Delta_{X}\psi-\beta\nabla_{X}\cdot\big{(}\mathcal{ L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}-\varepsilon\nabla_{X}\cdot\big{(} \zeta\text{F}_{1}\nabla_{X}\psi\big{)}+\mu\varepsilon R.\]
For the remaining three terms, we first note that
\[\text{RHS}_{2} :=-\frac{\mu\beta}{2}\nabla_{X}\cdot(h\text{F}_{4}\nabla_{X}G)\] \[=-\frac{\mu\beta}{2}\text{F}_{4}\Delta_{X}G+\frac{\mu\beta^{2}} {2}\nabla_{X}\cdot(b\nabla_{X}G)+\frac{\mu\varepsilon\beta}{2}R_{1}+\frac{\mu ^{2}\beta^{2}}{2}R_{2},\]
where \(R_{1}\) is given by
\[R_{1}=-\nabla_{X}\cdot(\zeta\text{F}_{4}\nabla_{X}G),\]
and \(R_{2}\) is given by
\[R_{2}=\frac{1}{\mu}\nabla_{X}\cdot(b(\text{F}_{4}-1)\nabla_{X}G).\]
Using the estimates in Proposition A.7 and (1.15) allows us to absorb \(R_{1}\) and \(R_{2}\) into the remainder \(R\) satisfying (2.42). Moreover, since \(G=\nabla_{X}\cdot\big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}\), we obtain
\[\text{RHS}_{2} =-\frac{\mu\beta}{2}\text{F}_{4}\Delta_{X}\nabla_{X}\cdot\big{(} \mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}+\frac{\mu\beta^{2}}{2} \nabla_{X}\cdot\big{(}b\nabla_{X}(\nabla_{X}\cdot(b\nabla_{X}\psi))\big{)}\] \[\quad+(\mu\varepsilon\beta+\mu^{2}\beta^{2})R.\]
Finally, we identify the remaining terms with those in \(\nabla_{X}\cdot\big{(}\mathcal{B}[\beta b]\nabla_{X}\psi\big{)}\) using (2.39), and conclude by (2.29) that
\[\frac{1}{\mu}|\mathcal{G}^{\mu}\psi-\mathcal{G}_{0}\psi|_{H^{s}} =|\nabla_{X}\cdot(h(\overline{V}-\overline{V}_{0}))|_{H^{s}}\] \[\leq(\mu\varepsilon+\mu^{2}\beta^{2})M(s+3)|\nabla_{X}\psi|_{H^{s+5}}.\]
The proof of inequality (2.41) is similar, where we first use formula (2.30) to get that
\[\frac{1}{\mu}\mathcal{G}_{1}\psi =-\nabla_{X}\cdot(h\overline{V}_{\text{app}})\] \[=-\nabla_{X}\cdot(h\nabla_{X}\psi)-\mu\Delta_{X}\Big{(}\frac{h^{ 3}}{h^{3}_{b}}\text{F}_{2}\psi\Big{)}-\mu\beta\Delta_{X}\Big{(}\frac{h^{3}}{h^ {3}_{b}}\mathcal{L}_{2}^{\mu}[\beta b]\psi\Big{)}\] \[\quad+\frac{\mu\beta^{2}}{2}\nabla_{X}\cdot\Big{(}h(b\nabla_{X} \big{(}\nabla_{X}\cdot(b\nabla_{X}\psi)\big{)}+(\nabla_{X}b)\nabla_{X}\cdot( b\nabla_{X}\psi))\Big{)}.\]
Then using the same arguments as for \(\mathcal{G}_{0}\), for the last three terms, we know there is a function \(R\) such that
\[\frac{1}{\mu}\mathcal{G}_{1}\psi =-\nabla_{X}\cdot(h\nabla_{X}\psi)-\mu\Delta_{X}\Big{(}\frac{h^{3} }{h^{3}_{b}}\text{F}_{2}\psi\Big{)}-\mu\beta\Delta_{X}\Big{(}\frac{h^{3}}{h^ {3}_{b}}\mathcal{L}_{2}^{\mu}[\beta b]\psi\Big{)}\] \[\quad-\frac{\mu\beta}{2}\text{F}_{4}\Delta_{X}\nabla_{X}\cdot \big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}+\frac{\mu\beta^{2}} {2}\nabla_{X}\cdot\big{(}\mathcal{B}[\beta b]\nabla_{X}\psi\big{)}+(\mu \varepsilon\beta+\mu^{2}\beta^{2})R,\]
where \(R\) satisfies (2.42). Thus, we only need to use (1.13) to write
\[\mu\beta\mathcal{L}_{2}^{\mu}[\beta b]=\mu\beta R, \tag{2.43}\]
and combine it with the observation \(\frac{h^{3}}{h^{3}_{b}}-1=\varepsilon R\), allowing us to neglect the term
\[\mu\beta\Delta_{X}\Big{(}(\frac{h^{3}}{h^{3}_{b}}-1)\mathcal{L}^{\mu}_{2}[\beta b ]\Delta_{X}\psi\Big{)}=\mu\varepsilon\beta R.\]
By estimate (2.31) we conclude that (2.41) holds true.
## 3. Derivation of Whitham-Boussinesq systems with bathymetry
In this section, we derive a family of full dispersion Boussinesq systems in the shallow water regime with precision \(O(\mu\varepsilon+\mu^{2}\beta^{2})\). Since the precision is of higher order in \(\beta\), these systems can handle large amplitude topography variations. The first result of this section reads:
**Theorem 3.1**.: _Let \(\mathrm{F}_{1}\) and \(\mathrm{F}_{4}\) be the two Fourier multipliers given in Definition 1.6, and let \(\mathcal{L}^{\mu}_{1}\) be given in Definition 1.10. Then for any \(\mu\in(0,1]\), \(\varepsilon\in[0,1]\), and \(\beta\in[0,1]\) the water waves equations (1.1) are consistent, in the sense of Definition 1.13 with \(n=5\), at order \(O(\mu\varepsilon+\mu^{2}\beta^{2})\) with the Whitham-Boussinesq system:_
\[\begin{cases}\partial_{t}\zeta+\mathrm{F}_{1}\Delta_{X}\psi+\beta(1+\frac{\mu }{2}\mathrm{F}_{4}\Delta_{X})\nabla_{X}\cdot(\mathcal{L}^{\mu}_{1}[\beta b] \nabla_{X}\psi)\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad+\varepsilon\mathrm{G}_{1}\nabla_{X }\cdot(\zeta\mathrm{G}_{2}\nabla_{X}\psi)-\frac{\mu\beta^{2}}{2}\nabla_{X} \cdot\big{(}\mathcal{B}[\beta b]\nabla_{X}\psi\big{)}=0\\ \partial_{t}\psi+\zeta+\frac{\varepsilon}{2}(\mathrm{G}_{1}\nabla_{X}\psi) \cdot(\mathrm{G}_{2}\nabla_{X}\psi)=0,\end{cases} \tag{3.1}\]
_where_
\[\mathcal{B}[\beta b]\bullet=b\nabla_{X}(\nabla_{X}\cdot(b\bullet))+h_{b}\nabla _{X}\big{(}b\nabla_{X}\cdot(b\bullet)\big{)}+2h_{b}(\nabla_{X}b)\nabla_{X} \cdot(b\bullet),\]
_and \(\mathrm{G}_{1},\mathrm{G}_{2}\) are any Fourier multipliers such that for any \(s\geq 0\) and \(u\in H^{s+2}(\mathbb{R}^{d})\), we have_
\[|(\mathrm{G}_{j}-1)u|_{H^{s}}\lesssim\mu|u|_{H^{s+2}}.\]
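Before the proof, we record two elementary observations. First, the condition on \(\mathrm{G}_{j}\) is satisfied by \(\mathrm{G}_{j}=\mathrm{Id}\) and, for instance, by \(\mathrm{G}_{j}=\mathrm{F}_{1}\): since \(0\leq 1-\tanh(x)/x\leq x^{2}/3\) for all \(x\geq 0\), the symbol of \(\mathrm{F}_{1}-1\) is bounded by \(\frac{\mu}{3}|\xi|^{2}\), whence \(|(\mathrm{F}_{1}-1)u|_{H^{s}}\leq\frac{\mu}{3}|u|_{H^{s+2}}\). Second, over a flat bottom \(b\equiv 0\) we have \(\mathcal{L}_{1}^{\mu}[0]=0\) and \(\mathcal{B}[0]=0\), so that (3.1) reduces to the flat-bottom Whitham-Boussinesq family

\[\begin{cases}\partial_{t}\zeta+\mathrm{F}_{1}\Delta_{X}\psi+\varepsilon\mathrm{G}_{1}\nabla_{X}\cdot(\zeta\mathrm{G}_{2}\nabla_{X}\psi)=0\\ \partial_{t}\psi+\zeta+\frac{\varepsilon}{2}(\mathrm{G}_{1}\nabla_{X}\psi)\cdot(\mathrm{G}_{2}\nabla_{X}\psi)=0.\end{cases}\]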
Proof.: To start, we replace the Dirichlet-Neumann operator using (A.5) and its expansion given by (2.40); discarding all the terms of order \(O(\mu\varepsilon+\mu^{2}\beta^{2})\) in the water waves equations (1.1) then yields
\[\begin{cases}\partial_{t}\zeta+\mathrm{F}_{1}\Delta_{X}\psi+\beta(1+\frac{\mu }{2}\mathrm{F}_{4}\Delta_{X})\nabla_{X}\cdot\big{(}\mathcal{L}^{\mu}_{1}[\beta b ]\nabla_{X}\psi\big{)}+\varepsilon\nabla_{X}\cdot\big{(}\zeta\mathrm{F}_{1} \nabla_{X}\psi\big{)}\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\frac{\mu\beta^{2}} {2}\nabla_{X}\cdot\big{(}\mathcal{B}[\beta b]\nabla_{X}\psi\big{)}=(\mu \varepsilon+\mu^{2}\beta^{2})R,\\ \partial_{t}\psi+\zeta+\frac{\varepsilon}{2}|\nabla_{X}\psi|^{2}=\mu \varepsilon R.\end{cases}\]
where we introduced a generic function \(R\) such that
\[|R|_{H^{s}}\leq M(s+3)|\nabla_{X}\psi|_{H^{s+5}}. \tag{3.2}\]
To complete the proof, we use the assumption on \(\mathrm{G}_{j}\) whenever a factor of \(\varepsilon\) appears, and then apply estimate (2.40) up to a remainder \(R\) satisfying (3.2).
The next result concerns a Whitham-Boussinesq system for which the first equation is exact and where the unknowns are given in terms of \((\zeta,\overline{V})\).
**Theorem 3.2**.: _Let \(\mathrm{F}_{1}\) and \(\mathrm{F}_{4}\) be the two Fourier multipliers given in Definition 1.6, and let \(\mathcal{L}^{\mu}_{1}\) be given in Definition 1.10. Then for any \(\mu\in(0,1]\), \(\varepsilon\in[0,1]\), and \(\beta\in[0,1]\) the
water waves equations (1.1) are consistent, in the sense of Definition 1.13 with \(n=6\), at order \(O(\mu\varepsilon+\mu^{2}\beta^{2})\) with the Whitham-Boussinesq system:_
\[\begin{cases}\partial_{t}\zeta+\nabla_{X}\cdot(h\overline{V})=0\\ \partial_{t}\overline{V}+\mathcal{T}_{0}^{\mu}[\beta b,\varepsilon\zeta]\nabla _{X}\zeta+\frac{\varepsilon}{2}\nabla_{X}|\overline{V}|^{2}=\mathbf{0},\end{cases} \tag{3.3}\]
_where_
\[h\mathcal{T}_{0}^{\mu}[\beta b,\varepsilon\zeta]\bullet =\mathrm{F}_{1}\bullet+\beta\mathcal{L}_{1}^{\mu}[\beta b]\bullet +\varepsilon\zeta\mathrm{F}_{1}\bullet+\frac{\mu\beta}{2}h_{b}\nabla_{X} \mathrm{F}_{4}\nabla_{X}\cdot\left(\mathcal{L}_{1}^{\mu}[\beta b]\bullet \right)\] \[\quad-\frac{\mu\beta^{2}}{2}h_{b}\nabla_{X}\big{(}b\nabla_{X} \cdot(b\bullet)\big{)}-\mu\beta^{2}h_{b}(\nabla_{X}b)\nabla_{X}\cdot(b\bullet).\]
Proof.: The first equation is exact by identity (1.6), and so we only work with the second equation of (1.1). However, using Theorem 1.14 we can work directly on the second equation of (3.3) in the case \(\mathrm{G}_{1}=\mathrm{G}_{2}=\mathrm{Id}\). Also, since we will take the gradient of \(\psi\), we need to increase the regularity of our remainder function. In particular, let \(R\) be a generic function such that
\[|R|_{H^{s}}\leq M(s+3)|\nabla_{X}\psi|_{H^{s+6}}.\]
Then by (2.28) there holds
\[h\overline{V} =\frac{h}{h_{b}}\mathrm{F}_{1}\nabla_{X}\psi+\beta\frac{h}{h_{b} }\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi+\frac{\mu\beta}{2}h_{b}\nabla_{X }\mathrm{F}_{4}\nabla_{X}\cdot\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\] \[\quad-\frac{\mu\beta^{2}}{2}h_{b}\nabla_{X}\big{(}b\nabla_{X} \cdot(b\nabla_{X}\psi)\big{)}-\mu\beta^{2}h_{b}(\nabla_{X}b)\nabla_{X}\cdot(b \nabla_{X}\psi)+(\mu\varepsilon+\mu^{2}\varepsilon^{2})R.\]
Moreover, by (1.10) and (A.7) we make the observation
\[\frac{h}{h_{b}}\Big{(}\mathrm{F}_{1}\nabla_{X}\psi+\beta\mathcal{L}_{1}^{\mu} [\beta b]\nabla_{X}\psi\Big{)}=\mathrm{F}_{1}\nabla_{X}\psi+\beta\mathcal{L}_ {1}^{\mu}[\beta b]\nabla_{X}\psi+\varepsilon\zeta\mathrm{F}_{1}\nabla_{X} \psi+\mu\varepsilon R,\]
so that
\[h\overline{V}=h\mathcal{T}_{0}^{\mu}[\beta b,\varepsilon\zeta]\nabla_{X}\psi +\mu\varepsilon R.\]
From this expression, we can use the first equation to see that \(\partial_{t}h=-\varepsilon\nabla_{X}\cdot(h\overline{V})\), and then use (A.7) with the relation \(\nabla_{X}\psi=\overline{V}+\mu R\) to get that:
\[h\partial_{t}\overline{V} =(\partial_{t}h)\big{(}\mathrm{F}_{1}\nabla_{X}\psi-\overline{V} \big{)}+h\mathcal{T}_{0}^{\mu}[\beta b,\varepsilon\zeta]\nabla_{X}\partial_{t}\psi\] \[=h\mathcal{T}_{0}^{\mu}[\beta b,\varepsilon\zeta]\nabla_{X} \partial_{t}\psi.\]
We may now use this relation in the second equation of (3.3), where we apply the gradient and \(\mathcal{T}_{0}^{\mu}[\beta b,\varepsilon\zeta]\) to obtain that
\[h\partial_{t}\overline{V}+h\mathcal{T}_{0}^{\mu}[\beta b,\varepsilon\zeta] \nabla_{X}\zeta+\frac{\varepsilon}{2}h\mathcal{T}_{0}^{\mu}[\beta b, \varepsilon\zeta]\nabla_{X}|\overline{V}|^{2}=(\mu\varepsilon+\mu^{2}\beta^{2 })R.\]
Then we conclude from the fact that \(\mathcal{T}_{0}^{\mu}[\beta b,\varepsilon\zeta]=\mathrm{Id}+\mu R\).
#### 3.0.1. Hamiltonian structure
We end this section by briefly commenting on the Hamiltonian structure of Whitham-Boussinesq systems with bathymetry. Recall the Hamiltonian of the water waves equations (1.1) [19]:
\[H(\zeta,\psi)=\frac{1}{2}\int_{\mathbb{R}^{d}}\zeta^{2}\;\mathrm{d}X+\frac{1}{2 \mu}\int_{\mathbb{R}^{d}}\psi\mathcal{G}^{\mu}\psi\;\mathrm{d}X, \tag{3.4}\]
with \(H(\zeta,\psi)\) satisfying the system
\[\begin{cases}\partial_{t}\zeta&=\delta_{\psi}H\\ \partial_{t}\psi&=-\delta_{\zeta}H,\end{cases} \tag{3.5}\]
where \(\delta_{\psi}\) and \(\delta_{\zeta}\) are functional derivatives. Then replacing the Dirichlet-Neumann operator in (3.4) with its approximation (2.40) we observe that
\[H(\zeta,\psi) =\frac{1}{2}\int_{\mathbb{R}^{d}}\zeta^{2}\,\mathrm{d}X+\frac{1}{ 2}\int_{\mathbb{R}^{d}}\mathrm{F}_{1}\nabla_{X}\psi\cdot\nabla_{X}\psi\, \mathrm{d}X \tag{3.6}\] \[\quad+\frac{\varepsilon}{2}\int_{\mathbb{R}^{d}}\zeta\mathrm{G} \nabla_{X}\psi\cdot\mathrm{G}\nabla_{X}\psi\,\mathrm{d}X+\frac{\beta}{2}\int_ {\mathbb{R}^{d}}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\cdot\nabla_{X} \psi\,\mathrm{d}X\] \[\quad+\frac{\mu\beta}{4}\int_{\mathbb{R}^{d}}\mathrm{F}_{4} \Delta_{X}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\cdot\nabla_{X}\psi\, \mathrm{d}X-\frac{\mu\beta^{2}}{4}\int_{\mathbb{R}^{d}}\nabla_{X}\psi\cdot \mathcal{B}[\beta b]\nabla_{X}\psi\,\mathrm{d}X\] \[\quad+O(\mu\varepsilon+\mu^{2}\beta^{2}),\]
for some Fourier multiplier \(\mathrm{G}\) of the form \(\mathrm{G}=1+O(\mu)\).
Now, to compute the functional derivatives in system (3.5), we note that the Fourier multipliers that appear are self-adjoint. For the pseudo-differential operator of order zero, \(\mathcal{L}_{1}^{\mu}\), one could use the fact that it admits an adjoint; however, a simpler approach is to approximate it by (1.16), which gives
\[\mathcal{L}_{1}^{\mu}[\beta b]=-b\mathrm{F}_{3}-\frac{\mu\beta^{2}}{6}b^{3}| \mathrm{D}|^{2}\mathrm{F}_{3}+O(\mu^{2}\beta^{4}).\]
Using this, we get
\[\mathrm{RHS}_{1}: =\frac{\beta}{2}\int_{\mathbb{R}^{d}}\mathcal{L}_{1}^{\mu}[\beta b ]\nabla_{X}\psi\cdot\nabla_{X}\psi\,\mathrm{d}X\] \[=-\frac{\beta}{2}\int_{\mathbb{R}^{d}}b\mathrm{F}_{3}\nabla_{X} \psi\cdot\nabla_{X}\psi\,\mathrm{d}X+\frac{\mu\beta^{3}}{12}\int_{\mathbb{R}^ {d}}b^{3}\Delta_{X}\mathrm{F}_{3}\nabla_{X}\psi\cdot\nabla_{X}\psi\,\mathrm{d }X+O(\mu^{2}\beta^{5}),\]
and
\[\mathrm{RHS}_{2}: =\frac{\mu\beta}{4}\int_{\mathbb{R}^{d}}\mathrm{F}_{4}\Delta_{X} \mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\cdot\nabla_{X}\psi\,\mathrm{d}X\] \[=-\frac{\mu\beta}{4}\int_{\mathbb{R}^{d}}\mathrm{F}_{4}\Delta_{X} (b\nabla_{X}\psi)\cdot\nabla_{X}\psi\,\mathrm{d}X+O(\mu^{2}\beta^{3}).\]
In particular, the first equation in (3.5) is given by
\[\delta_{\psi}H=-\mathrm{F}_{1}\Delta_{X}\psi+\nabla_{X}\cdot(\mathcal{A}^{\mu} [\beta b]\nabla_{X}\psi)-\varepsilon\mathrm{G}\nabla_{X}\cdot(\zeta\mathrm{G} \nabla_{X}\psi)+\frac{\mu\beta^{2}}{4}\nabla_{X}\cdot\Big{(}(\mathcal{B}[ \beta b]+\mathcal{B}[\beta b]^{*})\nabla_{X}\psi\Big{)},\]
where
\[\mathcal{A}^{\mu}[\beta b]\bullet=\frac{\beta}{2}\big{(}\mathrm{F}_{3}(b \bullet)+b\mathrm{F}_{3}\bullet\big{)}+\frac{\mu\beta}{2}\big{(}\mathrm{F}_{4 }\Delta_{X}(b\bullet)+b\mathrm{F}_{4}\Delta_{X}\bullet\big{)}-\frac{\mu\beta^{ 3}}{12}\big{(}b^{3}\Delta_{X}\mathrm{F}_{3}\bullet+\Delta_{X}\mathrm{F}_{3}(b^ {3}\bullet)\Big{)},\]
and where \(\mathcal{B}[\beta b]^{*}\) stands for the adjoint of \(\mathcal{B}[\beta b]\) and reads
\[\mathcal{B}[\beta b]^{*}\nabla_{X}\psi=b\nabla_{X}\nabla_{X}\cdot(b\nabla_{X} \psi)+b\nabla_{X}(b\nabla_{X}\cdot(h_{b}\nabla_{X}\psi))+2b\nabla_{X}(h_{b} \nabla_{X}b\cdot\nabla_{X}\psi).\]
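As a quick illustration of how these functional derivatives are computed, consider the quadratic term \(\frac{1}{2}\int_{\mathbb{R}^{d}}\mathrm{F}_{1}\nabla_{X}\psi\cdot\nabla_{X}\psi\,\mathrm{d}X\) in (3.6): for a test function \(\varphi\), the self-adjointness of \(\mathrm{F}_{1}\) and an integration by parts give

\[\frac{\mathrm{d}}{\mathrm{d}\tau}\Big{|}_{\tau=0}\frac{1}{2}\int_{\mathbb{R}^{d}}\mathrm{F}_{1}\nabla_{X}(\psi+\tau\varphi)\cdot\nabla_{X}(\psi+\tau\varphi)\,\mathrm{d}X=\int_{\mathbb{R}^{d}}\mathrm{F}_{1}\nabla_{X}\psi\cdot\nabla_{X}\varphi\,\mathrm{d}X=-\int_{\mathbb{R}^{d}}(\mathrm{F}_{1}\Delta_{X}\psi)\varphi\,\mathrm{d}X,\]

which accounts for the first term \(-\mathrm{F}_{1}\Delta_{X}\psi\) in \(\delta_{\psi}H\) above; the remaining terms are obtained in the same way.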
Similarly for the second equation:
\[\delta_{\zeta}H=-\zeta-\frac{\varepsilon}{2}|\mathrm{G}\nabla_{X}\psi|^{2}.\]
Then using (3.5), we arrive at the following system
\[\begin{cases}\partial_{t}\zeta+\mathrm{F}_{1}\Delta_{X}\psi-\nabla_{X}\cdot \bigl{(}\mathcal{A}^{\mu}[\beta b]\nabla_{X}\psi\bigr{)}+\varepsilon\mathrm{G} \nabla_{X}\cdot(\zeta\mathrm{G}\nabla_{X}\psi)\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad-\frac{\mu\beta^{2}}{4}\nabla_{X} \cdot\Bigl{(}\bigl{(}\mathcal{B}[\beta b]+\mathcal{B}[\beta b]^{*}\bigr{)} \nabla_{X}\psi\Bigr{)}=0\\ \partial_{t}\psi+\zeta+\frac{\varepsilon}{2}|\mathrm{G}\nabla_{X}\psi|^{2}=0, \end{cases} \tag{3.7}\]
where its Hamiltonian reads:
\[H(\zeta,\psi) =\frac{1}{2}\int_{\mathbb{R}^{d}}\zeta^{2}\,\mathrm{d}X+\frac{1}{ 2}\int_{\mathbb{R}^{d}}\mathrm{F}_{1}\nabla_{X}\psi\cdot\nabla_{X}\psi\, \mathrm{d}X\] \[\quad+\frac{\varepsilon}{2}\int_{\mathbb{R}^{d}}\zeta\mathrm{G} \nabla_{X}\psi\cdot\mathrm{G}\nabla_{X}\psi\,\mathrm{d}X-\frac{\beta}{2}\int_ {\mathbb{R}^{d}}(1+\frac{\mu}{2}\mathrm{F}_{4}\Delta_{X})b\mathrm{F}_{3} \nabla_{X}\psi\cdot\nabla_{X}\psi\,\mathrm{d}X\] \[\quad-\frac{\mu\beta^{2}}{4}\int_{\mathbb{R}^{d}}\mathcal{B}[ \beta b]\nabla_{X}\psi\cdot\nabla_{X}\psi\,\mathrm{d}X,\]
and is preserved by smooth solutions of (3.7).
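We also note the dispersive properties of (3.7): linearizing around the rest state over a flat bottom (discarding the \(\varepsilon\)-terms and taking \(b\equiv 0\)) leaves \(\partial_{t}\zeta+\mathrm{F}_{1}\Delta_{X}\psi=0\) and \(\partial_{t}\psi+\zeta=0\), so that \(\partial_{t}^{2}\zeta=\mathrm{F}_{1}\Delta_{X}\zeta\), and plane waves \(e^{i(\xi\cdot X-\omega t)}\) satisfy

\[\omega^{2}(\xi)=|\xi|^{2}\frac{\tanh(\sqrt{\mu}|\xi|)}{\sqrt{\mu}|\xi|}=\frac{|\xi|\tanh(\sqrt{\mu}|\xi|)}{\sqrt{\mu}},\]

which is the dispersion relation of the full water waves equations; this is the sense in which these systems are of full dispersion type.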
**Remark 3.3**.: _If we neglect terms of order \(O(\mu\varepsilon+\mu\beta)\), using \(\mathrm{F}_{3}=1+O(\mu)\), we obtain the system derived in [11]._
## 4. Derivation of Whitham-Green-Naghdi systems with bathymetry
In this section, we derive full dispersion Green-Naghdi systems in the shallow water regime with precision \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\). The following Whitham-Green-Naghdi system may be derived from the water waves equations:
**Theorem 4.1**.: _Let \(\mathrm{F}_{2}\) and \(\mathrm{F}_{4}\) be the two Fourier multipliers given in Definition 1.6, and let \(\mathcal{L}_{2}^{\mu}\) be given in Definition 1.10. Then for any \(\mu\in(0,1]\), \(\varepsilon\in[0,1]\), and \(\beta\in[0,1]\) the water waves equations (1.1) are consistent, in the sense of Definition 1.13 with \(n=5\), at order \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\) with the Whitham-Green-Naghdi system:_
\[\begin{cases}\partial_{t}\zeta+\nabla_{X}\cdot(h\mathcal{T}_{1}^{\mu}[\beta b,\varepsilon\zeta]\nabla_{X}\psi)-\frac{\mu\beta^{2}}{2}\nabla_{X}\cdot \bigl{(}\mathcal{B}[\beta b]\nabla_{X}\psi\bigr{)}=0\\ \partial_{t}\psi+\zeta+\frac{\varepsilon}{2}|\nabla_{X}\psi|^{2}-\frac{\mu \varepsilon}{2}h^{2}(\sqrt{\mathrm{F}_{2}}\Delta_{X}\psi)^{2}=0,\end{cases} \tag{4.1}\]
_where_
\[\mathcal{B}[\beta b]\bullet=b\nabla_{X}(\nabla_{X}\cdot(b\bullet))+h_{b} \nabla_{X}\bigl{(}b\nabla_{X}\cdot(b\bullet)\bigr{)}+2h_{b}(\nabla_{X}b) \nabla_{X}\cdot(b\bullet),\]
_and_
\[\mathcal{T}_{1}^{\mu}[\beta b,\varepsilon\zeta]\bullet =\mathrm{Id}+\frac{\mu}{3h}\nabla_{X}\sqrt{\mathrm{F}_{2}}\Bigl{(} \frac{h^{3}}{h^{3}_{b}}\sqrt{\mathrm{F}_{2}}\nabla_{X}\cdot\bullet\Bigr{)}+ \frac{\mu\beta}{h}\nabla_{X}\Bigl{(}\mathcal{L}_{2}^{\mu}[\beta b]\nabla_{X} \cdot\bullet\Bigr{)}\] \[\quad+\frac{\mu\beta}{2h}\mathrm{F}_{4}\nabla_{X}\nabla_{X}\cdot \bigl{(}\mathcal{L}_{1}^{\mu}[\beta b]\bullet\bigr{)},\]
_and \(\sqrt{\mathrm{F}_{2}}\) is the square root of \(\mathrm{F}_{2}\)._
Proof.: We see that the first equation can be deduced by trading the Dirichlet-Neumann operator for its approximation (2.41). Indeed, we obtain that
\[\partial_{t}\zeta+\nabla_{X}\cdot(h\nabla_{X}\psi) +\frac{\mu}{3}\Delta_{X}\Big{(}\frac{h^{3}}{h^{3}_{b}}\mathrm{F}_{2}\Delta_{X}\psi\Big{)}+\mu\beta\Delta_{X}\big{(}\mathcal{L}_{2}^{\mu}[\beta b]\Delta_{X}\psi\big{)}\] \[+\frac{\mu\beta}{2}\mathrm{F}_{4}\Delta_{X}\nabla_{X}\cdot\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi-\frac{\mu\beta^{2}}{2}\nabla_{X}\cdot\big{(}\mathcal{B}[\beta b]\nabla_{X}\psi\big{)}=(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})R,\]
where we introduce a generic function \(R\) such that
\[|R|_{H^{s}}\leq M(s+3)|\nabla_{X}\psi|_{H^{s+5}}. \tag{4.2}\]
Therefore we need to approximate the term
\[\frac{\mu}{3}\Delta_{X}\Big{(}\frac{h^{3}}{h_{b}^{3}}\mathrm{F}_{2}\Delta_{X}\psi \Big{)}\]
at order \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta)\). Indeed, using (A.7) to say \(\mathrm{F}_{2}=1+\mu R\) and \(\sqrt{\mathrm{F}_{2}}=1+\mu R\), we obtain
\[\frac{\mu}{3}\Delta_{X}\Big{(}(\frac{h^{3}}{h_{b}^{3}}-1)\mathrm{F}_{2}\Delta_ {X}\psi\Big{)}=\frac{\mu}{3}\Delta_{X}\sqrt{\mathrm{F}_{2}}\Big{(}(\frac{h^{3 }}{h_{b}^{3}}-1)\sqrt{\mathrm{F}_{2}}\Delta_{X}\psi\Big{)}+\mu^{2}\varepsilon R.\]
Gathering these observations yields
\[\frac{\mu}{3}\Delta_{X}\Big{(}\frac{h^{3}}{h_{b}^{3}}\mathrm{F}_{2}\Delta_{X} \psi\Big{)}=\frac{\mu}{3}\Delta_{X}\sqrt{\mathrm{F}_{2}}\Big{(}\frac{h^{3}}{h_ {b}^{3}}\sqrt{\mathrm{F}_{2}}\Delta_{X}\psi\Big{)}+\mu^{2}\varepsilon R. \tag{4.3}\]
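Let us also justify the passage from \(\mathrm{F}_{2}=1+\mu R\) to \(\sqrt{\mathrm{F}_{2}}=1+\mu R\) at the level of symbols. Assuming, as the explicit formulas of Section 2 suggest, that the symbol \(f\) of \(\mathrm{F}_{2}\) is nonnegative, one has

\[|\sqrt{f(\xi)}-1|=\frac{|f(\xi)-1|}{\sqrt{f(\xi)}+1}\leq|f(\xi)-1|,\]

so any bound of the form \(|f-1|\lesssim\mu\langle\xi\rangle^{2}\) is inherited by \(\sqrt{f}\).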
For the second equation, we use (A.5) to make the observation
\[\frac{(\frac{1}{\mu}\mathcal{G}^{\mu}[\varepsilon\zeta,\beta b] \psi+\varepsilon\nabla_{X}\zeta\cdot\nabla_{X}\psi)^{2}}{1+\varepsilon^{2}\mu |\nabla_{X}\zeta|^{2}}-\Big{(}\frac{1}{\mu}\mathcal{G}^{\mu}[\varepsilon\zeta, \beta b]\psi+\varepsilon\nabla_{X}\zeta\cdot\nabla_{X}\psi\Big{)}^{2}\] \[=\frac{\mu\varepsilon^{2}|\nabla_{X}\zeta|^{2}(\frac{1}{\mu} \mathcal{G}^{\mu}[\varepsilon\zeta,\beta b]\psi+\varepsilon\nabla_{X}\zeta \cdot\nabla_{X}\psi)^{2}}{1+\varepsilon^{2}\mu|\nabla_{X}\zeta|^{2}}\] \[=\mu\varepsilon^{2}R.\]
This means that we only need to approximate
\[\Big{(}\frac{1}{\mu}\mathcal{G}^{\mu}[\varepsilon\zeta,\beta b]\psi+ \varepsilon\nabla_{X}\zeta\cdot\nabla_{X}\psi\Big{)}^{2},\]
at order \(O(\mu)\). In particular, we use (A.5) to simplify the second equation in the water waves equations (1.1) to get that
\[\partial_{t}\psi+\zeta+\frac{\varepsilon}{2}|\nabla_{X}\psi|^{2}-\frac{\mu \varepsilon}{2}\Big{(}\frac{1}{\mu}\mathcal{G}^{\mu}[\varepsilon\zeta,\beta b ]\psi+\varepsilon\nabla_{X}\zeta\cdot\nabla_{X}\psi\Big{)}^{2}=\mu^{2} \varepsilon R. \tag{4.4}\]
Then using (A.5) we have that
\[\frac{1}{\mu}\mathcal{G}^{\mu}[\varepsilon\zeta,\beta b]\psi =-\nabla_{X}\cdot(h\nabla_{X}\psi)+\mu R\] \[=-h\Delta_{X}\psi-\varepsilon\nabla_{X}\zeta\cdot\nabla_{X}\psi+ (\mu+\beta)R,\]
and we may use this expression to simplify (4.4), where we again use that \(\sqrt{\mathrm{F}_{2}}=1+\mu R\). Thus, we conclude the proof of this theorem with estimate (A.5) up to a remainder \(R\) satisfying (4.2).
One may also derive a system with unknowns \((\zeta,\overline{V})\) instead of \((\zeta,\psi)\), for which the first equation is exact. The new system reads:
**Theorem 4.2**.: _Let \(\mathrm{F}_{2}\) and \(\mathrm{F}_{4}\) be the two Fourier multipliers given in Definition 1.6, let \(\mathcal{L}_{1}^{\mu}\) and \(\mathcal{L}_{2}^{\mu}\) be given in Definition 1.10. Then for any \(\mu\in(0,1]\), \(\varepsilon\in[0,1]\), and \(\beta\in[0,1]\)
the water waves equations (1.1) are consistent, in the sense of Definition 1.13 with \(n=6\), at order \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\) with the Whitham-Green-Naghdi system:_
\[\begin{cases}\partial_{t}\zeta+\nabla_{X}\cdot(h\overline{V})=0,\\ \partial_{t}(\mathcal{I}^{\mu}[h]\overline{V})+\mathcal{I}[h]\mathcal{T}_{2}^ {\mu}[\beta b,h]\nabla_{X}\zeta+\frac{\varepsilon}{2}\nabla_{X}\big{(}| \overline{V}|^{2}\big{)}+\mu\varepsilon\nabla_{X}\mathcal{R}_{1}^{\mu}[\beta b,h,\overline{V}]=\mathbf{0},\end{cases} \tag{4.5}\]
_where \(\overline{V}\) is defined by (1.5),_
\[\mathcal{I}^{\mu}[h]\bullet=\mathrm{Id}-\frac{\mu}{3h}\sqrt{\mathrm{F}_{2}} \nabla_{X}\Big{(}h^{3}\sqrt{\mathrm{F}_{2}}\nabla_{X}\cdot\bullet\Big{)}, \tag{4.6}\]
\[\mathcal{T}_{2}^{\mu}[\beta b,\varepsilon\zeta]\bullet =\mathrm{Id}+\frac{\mu}{3h}\sqrt{\mathrm{F}_{2}}\nabla_{X}\Big{(} \frac{h^{3}}{h_{b}^{3}}\sqrt{\mathrm{F}_{2}}\nabla_{X}\cdot\bullet\Big{)}+ \frac{\mu\beta}{h}\nabla_{X}\Big{(}\mathcal{L}_{2}^{\mu}[\beta b]\nabla_{X} \cdot\bullet\Big{)}\] \[+\frac{\mu\beta h_{b}}{2h}\nabla_{X}\mathrm{F}_{4}\nabla_{X} \cdot\big{(}\mathcal{L}_{1}^{\mu}[\beta b]\bullet\big{)}-\frac{\mu\beta^{2}h_ {b}}{2h}\nabla_{X}\big{(}b\nabla_{X}\cdot(b\bullet)\big{)}-\frac{\mu\beta^{2}h _{b}}{h}(\nabla_{X}b)\nabla_{X}\cdot(b\bullet),\]
_and_
\[\mathcal{R}_{1}^{\mu}[\beta b,h,\overline{V}]=-\frac{h^{2}}{2}(\nabla_{X} \cdot\overline{V})^{2}-\frac{1}{3h}\big{(}\nabla_{X}(h^{3}\nabla_{X}\cdot \overline{V})\big{)}\cdot\overline{V}-\frac{1}{2}h^{3}\Delta_{X}(|\overline{V }|^{2})+\frac{1}{6h}h^{3}\Delta_{X}(|\overline{V}|^{2}). \tag{4.7}\]
Proof.: The first equation is exact, so we only need to work on the second equation. Also, since \(\overline{V}\) is related to the gradient of \(\psi\), we need to increase the regularity of the remainder function \(R\). In particular, we introduce the function \(R\) such that
\[|R|_{H^{s}}\leq M(s+3)|\nabla_{X}\psi|_{H^{s+6}}. \tag{4.8}\]
Then, from formula (2.30), estimate (2.43), and the argument in the previous proof, we know that
\[h\overline{V} =h\nabla_{X}\psi+\frac{\mu}{3}\nabla_{X}\sqrt{\mathrm{F}_{2}} \Big{(}\frac{h^{3}}{h_{b}^{3}}\sqrt{\mathrm{F}_{2}}\Delta_{X}\psi\Big{)}+\mu \beta\nabla_{X}\Big{(}\mathcal{L}_{2}^{\mu}[\beta b]\Delta_{X}\psi\Big{)} \tag{4.9}\] \[\quad+\frac{\mu\beta h_{b}}{2}\nabla_{X}\mathrm{F}_{4}\nabla_{X} \cdot\big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}-\frac{\mu\beta ^{2}h_{b}}{2}\nabla_{X}\big{(}b\nabla_{X}\cdot(b\nabla_{X}\psi)\big{)}\] \[\quad-\mu\beta^{2}h_{b}(\nabla_{X}b)\nabla_{X}\cdot(b\nabla_{X} \psi)+(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})R.\]
Differentiating this equality in time and using the definition of \(\mathcal{T}_{2}^{\mu}[\beta b,\varepsilon\zeta]\bullet\), we obtain the relation
\[h\partial_{t}\overline{V} =\partial_{t}h(\nabla_{X}\psi-\overline{V})+h\mathcal{T}_{2}^{\mu }[\beta b,\varepsilon\zeta]\nabla_{X}\partial_{t}\psi+\frac{\mu}{3}\nabla_{X} \sqrt{\mathrm{F}_{2}}\Big{(}\partial_{t}\Big{(}\frac{h^{3}}{h_{b}^{3}}\Big{)} \sqrt{\mathrm{F}_{2}}\Delta_{X}\psi\Big{)} \tag{4.10}\] \[\quad+(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})R.\]
Moreover, noting that \(\frac{1}{h_{b}}=1+\beta R\) we can deduce that
\[\frac{\mu}{3}\nabla_{X}\sqrt{\mathrm{F}_{2}}\Big{(}\partial_{t}\Big{(}\frac{h^ {3}}{h_{b}^{3}}\Big{)}\sqrt{\mathrm{F}_{2}}\Delta_{X}\psi\Big{)}=-\mu \varepsilon\nabla_{X}\sqrt{\mathrm{F}_{2}}\big{(}h^{2}\big{(}\nabla_{X}\cdot(h \overline{V})\big{)}\sqrt{\mathrm{F}_{2}}\Delta_{X}\psi\big{)}+\mu \varepsilon\beta R,\]
and using the first equation of system (4.5) we have that (4.10) is approximated by
\[h\mathcal{T}_{2}^{\mu}[\beta b,\varepsilon\zeta]\nabla_{X} \partial_{t}\psi =h\partial_{t}\overline{V}+\varepsilon\big{(}\nabla_{X}\cdot(h \overline{V})\big{)}(\nabla_{X}\psi-\overline{V}) \tag{4.11}\] \[\quad+\mu\varepsilon\nabla_{X}\sqrt{\mathrm{F}_{2}}\big{(}h^{2} \big{(}\nabla_{X}\cdot(h\overline{V})\big{)}\sqrt{\mathrm{F}_{2}}\Delta_{X} \psi\big{)}+(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})R.\]
To conclude, we simply need to approximate \(\nabla_{X}\psi\) by \(\overline{V}\), where we use (2.30) to get the classical approximation:
\[\begin{cases}\nabla_{X}\psi=\overline{V}+O(\mu)\\ \nabla_{X}\psi=\overline{V}-\frac{\mu}{3h}\nabla_{X}(h^{3}\nabla_{X}\cdot \overline{V})+\mu^{2}R.\end{cases} \tag{4.12}\]
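The second line of (4.12) can be read off from (4.9); in the flat-bottom case \(b\equiv 0\), for instance, (4.9) together with \(\sqrt{\mathrm{F}_{2}}=1+\mu R\) gives \(h\overline{V}=h\nabla_{X}\psi+\frac{\mu}{3}\nabla_{X}(h^{3}\Delta_{X}\psi)+\mu^{2}R\), and substituting \(\nabla_{X}\psi=\overline{V}+\mu R\) in the \(O(\mu)\) term yields

\[\nabla_{X}\psi=\overline{V}-\frac{\mu}{3h}\nabla_{X}(h^{3}\nabla_{X}\cdot\overline{V})+\mu^{2}R,\]

which is the classical approximation of the Green-Naghdi setting.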
Furthermore, using (4.12) and (4.11) we obtain that
\[h\mathcal{T}_{2}^{\mu}[\beta b,\varepsilon\zeta]\partial_{t} \nabla_{X}\psi =h\partial_{t}\overline{V}-\frac{\mu\varepsilon}{3h}\big{(}\nabla _{X}\cdot(h\overline{V})\big{)}\nabla_{X}(h^{3}\nabla_{X}\cdot\overline{V}) \tag{4.13}\] \[\quad+\mu\varepsilon\nabla_{X}\sqrt{\mathrm{F}_{2}}\big{(}h^{2} \big{(}\nabla_{X}\cdot(h\overline{V})\big{)}\sqrt{\mathrm{F}_{2}}\nabla_{X} \cdot\overline{V}\big{)}\] \[\quad+(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})R.\]
We will now simplify the second equation of the water waves system (1.1) at order \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\). Using Theorem 1.20 allows us to work with the second equation of (4.1). We first use (4.12) to deduce that
\[|\nabla_{X}\psi|^{2}=|\overline{V}|^{2}-\frac{2\mu}{3h}\big{(}\nabla_{X}(h^{3 }\nabla_{X}\cdot\overline{V})\big{)}\cdot\overline{V}+\mu^{2}R.\]
With this relation, we may apply the gradient to the second equation of (4.1), and then apply the operator \(\mathcal{T}_{2}^{\mu}[\beta b,\varepsilon\zeta]\bullet\), using the approximation (4.12), and discarding all the terms of order \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\) to get
\[\mathcal{T}_{2}^{\mu}[\beta b,\varepsilon\zeta]\nabla_{X} \partial_{t}\psi+\mathcal{T}_{2}^{\mu}[\beta b,\varepsilon\zeta]\nabla_{X} \zeta+\frac{\varepsilon}{2}\mathcal{T}_{2}^{\mu}[\beta b,\varepsilon\zeta] \Big{(}\nabla_{X}(|\overline{V}|^{2}-\frac{2\mu}{3h}\big{(}\nabla_{X}(h^{3} \nabla_{X}\cdot\overline{V})\big{)}\cdot\overline{V})\Big{)}\] \[\quad-\frac{\mu\varepsilon}{2}h^{2}|\nabla_{X}\overline{V}|^{2}=( \mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})R.\]
Then we apply (4.13), neglecting \(\mathrm{F}_{2}\) whenever terms carry a factor \(\mu\varepsilon\), together with the observation
\[\mathcal{T}_{2}^{\mu}[\beta b,\varepsilon\zeta]\bullet=\mathrm{Id}+\frac{\mu }{3h}\nabla_{X}\Big{(}h^{3}\nabla_{X}\cdot\bullet\Big{)}+\mu\beta R,\]
to deduce that
\[\partial_{t}\overline{V}+ \mathcal{T}_{2}^{\mu}[\beta b,\varepsilon\zeta]\nabla_{X}\zeta+\frac{\varepsilon}{2}\nabla_{X}(|\overline{V}|^{2})-\frac{\mu\varepsilon}{3h^{2}}\big{(}\nabla_{X}\cdot(h\overline{V})\big{)}\nabla_{X}(h^{3}\nabla_{X}\cdot\overline{V}) \tag{4.14}\] \[+\frac{\mu\varepsilon}{h}\nabla_{X}\big{(}h^{2}\big{(}\nabla_{X}\cdot(h\overline{V})\big{)}\nabla_{X}\cdot\overline{V}\big{)}+\frac{\mu\varepsilon}{3h}\nabla_{X}\big{(}h^{3}\Delta_{X}(|\overline{V}|^{2})\big{)}\] \[-\frac{\mu\varepsilon}{3}\nabla_{X}\Big{(}\frac{1}{h}\big{(}\nabla_{X}(h^{3}\nabla_{X}\cdot\overline{V})\big{)}\cdot\overline{V}\Big{)}-\frac{\mu\varepsilon}{2}\nabla_{X}\big{(}h^{2}|\nabla_{X}\overline{V}|^{2}\big{)}\] \[=(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})R.\]
Now, using \(\sqrt{\mathrm{F}_{2}}=1+\mu R\) and \(\partial_{t}h=-\varepsilon\nabla_{X}\cdot(h\overline{V})\), we remark that
\[\partial_{t}(\overline{V}-\frac{\mu}{3h}\sqrt{\mathrm{F}_{2}} \nabla_{X}(h^{3}\sqrt{\mathrm{F}_{2}}\nabla_{X}\cdot\overline{V}))\] \[=\partial_{t}\overline{V}-\frac{\mu}{3h}\sqrt{\mathrm{F}_{2}} \nabla_{X}(h^{3}\sqrt{\mathrm{F}_{2}}\nabla_{X}\cdot\partial_{t}\overline{V})- \frac{\mu\varepsilon}{3h^{2}}\nabla_{X}\cdot(h\overline{V})\nabla_{X}(h^{3} \nabla_{X}\cdot\overline{V})\] \[\quad+\frac{\mu\varepsilon}{h}\nabla_{X}\big{(}h^{2}\nabla_{X} \cdot(h\overline{V})F_{2}\nabla_{X}\cdot\overline{V}\big{)}\]
So that, from (4.14), discarding all the terms of order \(O(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})\), we get
\[\partial_{t}(\overline{V}-\frac{\mu}{3h}\sqrt{\mathrm{F}_{2}}\nabla_{X}(h^{3}\sqrt{\mathrm{F}_{2}}\nabla_{X}\cdot\overline{V}))\] \[= -\mathcal{T}_{2}^{\mu}[\beta b,h]\nabla_{X}\zeta-\frac{\varepsilon}{2}\nabla_{X}\big{(}|\overline{V}|^{2}\big{)}-\frac{\mu\varepsilon}{3h}\nabla_{X}\Big{(}h^{3}\Delta_{X}(|\overline{V}|^{2})\Big{)}+\frac{\mu\varepsilon}{2}\nabla_{X}(h^{2}(\nabla_{X}\cdot\overline{V})^{2})\] \[+\frac{\mu\varepsilon}{3}\nabla_{X}\Big{(}\frac{1}{h}\big{(}\nabla_{X}(h^{3}\nabla_{X}\cdot\overline{V})\big{)}\cdot\overline{V}\Big{)}+\frac{\mu}{3h}\nabla_{X}(h^{3}\nabla_{X}\cdot(\mathcal{T}_{2}[\beta b,h]\nabla_{X}\zeta))\] \[+\frac{\mu\varepsilon}{2}\nabla_{X}(h^{3}\Delta_{X}(|\overline{V}|^{2}))+(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})R,\]
which at the end gives:
\[\partial_{t}(\mathcal{I}^{\mu}[h]\overline{V})+\mathcal{I}[h]\mathcal{T}_{2}^{\mu}[\beta b,h]\nabla_{X}\zeta+\frac{\varepsilon}{2}\nabla_{X}\big{(}|\overline{V}|^{2}\big{)}+\mu\varepsilon\nabla_{X}\mathcal{R}_{1}^{\mu}[\beta b,h,\overline{V}]=(\mu^{2}\varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2})R.\]
#### 4.0.1. Hamiltonian structure
We end this section by briefly commenting on the Hamiltonian structure of Whitham-Green-Naghdi systems with bathymetry. Starting from the Hamiltonian of the water waves equations (3.4) and replacing the Dirichlet-Neumann operator by the approximation (2.37), using also (4.3), we get
\[H(\zeta,\psi) =\frac{1}{2}\int_{\mathbb{R}^{d}}\zeta^{2}\,\mathrm{d}X+\frac{1} {2}\int_{\mathbb{R}^{d}}h|\nabla_{X}\psi|^{2}\,\mathrm{d}X-\frac{\mu}{6}\int_ {\mathbb{R}^{d}}\psi\Delta_{X}\sqrt{\mathrm{F}_{2}}\big{(}\frac{h^{3}}{h_{b}^ {3}}\sqrt{\mathrm{F}_{2}}\Delta_{X}\psi\big{)}\,\mathrm{d}X\] \[-\frac{\mu\beta}{2}\int_{\mathbb{R}^{d}}(\Delta_{X}\psi)\mathcal{ L}_{2}^{\mu}[\beta b]\Delta_{X}\psi\,\mathrm{d}X+\frac{\mu\beta}{4}\int_{ \mathbb{R}^{d}}\nabla_{X}\psi\cdot\mathrm{F}_{4}\Delta_{X}\big{(}\mathcal{L}_ {1}^{\mu}[\beta b]\nabla_{X}\psi\big{)}\,\mathrm{d}X\] \[+\frac{\mu\beta^{2}}{4}\int_{\mathbb{R}^{d}}\psi\nabla_{X}\cdot \big{(}\mathcal{B}[\beta b]\nabla_{X}\psi\big{)}\,\mathrm{d}X+O(\mu^{2} \varepsilon+\mu\varepsilon\beta+\mu^{2}\beta^{2}).\]
Then, we make use of the two expansions given by (1.16) and (1.17)
\[\mathcal{L}_{1}^{\mu}[\beta b]=-b\mathrm{F}_{3}+O(\mu\beta^{2}),\quad\mathcal{ L}_{2}^{\mu}[\beta b]=-\frac{1}{2}b\mathrm{F}_{4}+\frac{\beta^{2}}{6}b^{3} \mathrm{F}_{3}+O(\mu\beta^{4}),\]
and write
\[\mathrm{RHS}_{1}:= -\frac{\mu\beta}{2}\int_{\mathbb{R}^{d}}(\Delta_{X}\psi)\mathcal{L}_{2}^{\mu}[\beta b]\Delta_{X}\psi\,\mathrm{d}X\] \[= \frac{\mu\beta}{4}\int_{\mathbb{R}^{d}}b(\Delta_{X}\psi)\mathrm{F}_{3}\Delta_{X}\psi\,\mathrm{d}X-\frac{\mu\beta^{3}}{12}\int_{\mathbb{R}^{d}}b^{3}\big{(}\sqrt{\mathrm{F}_{3}}\Delta_{X}\psi\big{)}^{2}\,\mathrm{d}X+O(\mu^{2}\beta^{5}),\]
where for the last equality, we used \(\mathrm{F}_{2}=\sqrt{\mathrm{F}_{2}}+O(\mu)\), and
\[\mathrm{RHS}_{2}:= \frac{\mu\beta}{4}\int_{\mathbb{R}^{d}}\nabla_{X}\psi\cdot \mathrm{F}_{4}\Delta_{X}\big{(}\mathcal{L}_{1}^{\mu}[\beta b]\nabla_{X}\psi \big{)}\,\mathrm{d}X\] \[= -\frac{\mu\beta}{4}\int_{\mathbb{R}^{d}}\nabla_{X}\psi\cdot \mathrm{F}_{4}\Delta_{X}\big{(}b\nabla_{X}\psi\big{)}\,\mathrm{d}X+O(\mu^{2} \beta^{3}).\]
Deriving the equations associated with this approximate Hamiltonian, we get the Whitham-Green-Naghdi system
\[\begin{cases}\partial_{t}\zeta+\nabla_{X}\cdot(h\nabla_{X}\psi)+\frac{\mu}{3} \Delta_{X}\sqrt{\mathrm{F}_{2}}\big{(}\frac{h^{3}}{h^{3}_{b}}\sqrt{\mathrm{F}_{ 2}}\Delta_{X}\psi\big{)}-\frac{\mu\beta}{4}\nabla_{X}\cdot\big{(}\mathcal{Q}^{ \mu}[b]\nabla_{X}\psi\big{)}\\ -\frac{\mu\beta^{2}}{4}\nabla_{X}\cdot\Big{(}\big{(}\mathcal{B}[\beta b]+ \mathcal{B}[\beta b]^{*}\big{)}\nabla_{X}\psi\Big{)}+\frac{\mu\beta^{3}}{6} \Delta_{X}\sqrt{\mathrm{F}_{3}}(b^{3}\sqrt{\mathrm{F}_{3}}\Delta_{X}\psi)=0\\ \partial_{t}\psi+\zeta+\frac{\varepsilon}{2}|\nabla_{X}\psi|^{2}-\frac{\mu \varepsilon}{2}h^{2}(\sqrt{\mathrm{F}_{2}}\Delta_{X}\psi)^{2}=0\end{cases}\]
where
\[\mathcal{Q}^{\mu}[b]\bullet=\Big{(}\mathrm{F}_{3}\nabla_{X}(b\nabla_{X}\cdot \bullet)+\nabla_{X}(b\mathrm{F}_{3}\nabla_{X}\cdot\bullet)\Big{)}+\Big{(} \mathrm{F}_{4}\nabla_{X}\nabla_{X}\cdot\big{(}b\bullet\big{)}+b\mathrm{F}_{4} \Delta_{X}\bullet\Big{)},\]
and
\[\mathcal{B}[\beta b]\bullet=b\nabla_{X}(\nabla_{X}\cdot(b\bullet))+h_{b} \nabla_{X}\big{(}b\nabla_{X}\cdot(b\bullet)\big{)}+2h_{b}(\nabla_{X}b)\nabla_ {X}\cdot(b\bullet)\]
## Appendix A
### On the properties of pseudo-differential operators
In this section, we will give a rigorous meaning to the pseudo-differential operators given in Proposition 1.10. Before turning to the proof, we recall the definition of a symbol.
**Definition A.1**.: _Let \(d=1,2\) and \(m\in\mathbb{R}\). We say \(L\in S^{m}\) is a symbol of order \(m\) if \(L(X,\xi)\) is \(C^{\infty}(\mathbb{R}^{d}\times\mathbb{R}^{d})\) and satisfies_
\[\forall\alpha\in\mathbb{N}^{d},\quad\forall\gamma\in\mathbb{N}^{d},\quad\sup_{(X,\xi)\in\mathbb{R}^{d}\times\mathbb{R}^{d}}\langle\xi\rangle^{-(m-|\gamma|)}|\partial_{X}^{\alpha}\partial_{\xi}^{\gamma}L(X,\xi)|<\infty.\]
_We also introduce the seminorm_
\[\mathcal{M}_{m}(L)=\sup_{|\alpha|\leq\lceil\frac{d}{2}\rceil+1}\sup_{|\gamma| \leq\lceil\frac{d}{2}\rceil+1}\sup_{(X,\xi)\in\mathbb{R}^{d}\times\mathbb{R} ^{d}}\Big{\{}\langle\xi\rangle^{-(m-|\gamma|)}|\partial_{X}^{\alpha}\partial_ {\xi}^{\gamma}L(X,\xi)|\Big{\}}.\] (A.1)
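A simple example to keep in mind: for \(a\in C^{\infty}(\mathbb{R}^{d})\) with bounded derivatives, the symbol \(L(X,\xi)=a(X)\langle\xi\rangle^{m}\) belongs to \(S^{m}\), since

\[|\partial_{X}^{\alpha}\partial_{\xi}^{\gamma}L(X,\xi)|=|\partial_{X}^{\alpha}a(X)|\,|\partial_{\xi}^{\gamma}\langle\xi\rangle^{m}|\lesssim\langle\xi\rangle^{m-|\gamma|}.\]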
Moreover, we recall the main tool we will use to justify the pseudo-differential operators in Sobolev spaces:
**Theorem A.2**.: _Let \(d=1,2\), \(s\geq 0\), and \(L\in S^{m}\). Then formula (1.7) defines a bounded pseudo-differential operator from \(H^{s+m}(\mathbb{R}^{d})\) to \(H^{s}(\mathbb{R}^{d})\) and satisfies_
\[|\mathcal{L}[X,D]u|_{H^{s}}\leq\mathcal{M}_{m}(L)|u|_{H^{s+m}}.\] (A.2)
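For instance, the multiplier \(\mathrm{sech}(\sqrt{\mu}|\mathrm{D}|)\), which appears throughout Section 2, fits into this framework uniformly in \(\mu\in(0,1]\): its symbol is a smooth function of \(\mu|\xi|^{2}\) since \(\mathrm{sech}\) is even, and one checks that, for some \(c>0\),

\[|\partial_{\xi}^{\gamma}\,\mathrm{sech}(\sqrt{\mu}|\xi|)|\lesssim\mu^{\frac{|\gamma|}{2}}e^{-c\sqrt{\mu}|\xi|}\lesssim\langle\xi\rangle^{-|\gamma|},\]

so that \(\mathcal{M}_{0}\) is bounded independently of \(\mu\), and (A.2) yields \(|\mathrm{sech}(\sqrt{\mu}|\mathrm{D}|)u|_{H^{s}}\lesssim|u|_{H^{s}}\).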
With this theorem at hand, we can now give the proof.
Proof of Proposition 1.10.: We will first prove that for \(s\geq 0\) the operators \(\mathcal{L}_{i}^{\mu}\) are uniformly bounded on \(H^{s}(\mathbb{R}^{d})\). To prove this, we need to verify that the symbols:
\[L_{1}^{\mu}(\beta b(X),\xi) =-\frac{1}{\beta}\sinh{(\beta b(X)\sqrt{\mu}|\xi|)}\mathrm{sech} (\sqrt{\mu}|\xi|)\frac{1}{\sqrt{\mu}|\xi|}\] \[L_{2}^{\mu}(\beta b(X),\xi) =\frac{1}{\beta}(\sinh{(\beta b(X)\sqrt{\mu}|\xi|)}\mathrm{sech} (\sqrt{\mu}|\xi|)\frac{1}{\sqrt{\mu}|\xi|}-\beta b)\frac{1}{\mu|\xi|^{2}}\] \[L_{3}^{\mu}(\beta b(X),\xi) =-\big{(}\cosh(\beta b(X)\sqrt{\mu}|\xi|)\mathrm{sech}(\sqrt{\mu} |\xi|)-1\big{)}\frac{1}{\mu|\xi|^{2}}\]
are elements of \(S^{0}\), where the constants \(\mathcal{M}_{0}(L_{i})\) are independent of \(\mu\) and \(\beta\). We treat each symbol separately.
We start by proving that the symbol \(L_{1}^{\mu}\) is in \(S^{0}\). To do so, we will split the frequency domain into three regions. First, let \(\beta\sqrt{\mu}|\xi|\leq 1\) and \(\sqrt{\mu}|\xi|\leq 1\). Then, since \(L_{1}^{\mu}(\beta b(X),\xi)\in C^{\infty}(\mathbb{R}^{d}\times\mathbb{R}^{d})\), a Taylor expansion around \((X,0)\) gives us
\[|\partial_{X}^{\alpha}\partial_{\xi}^{\gamma}L_{1}^{\mu}(\beta b(X),\xi)|\lesssim 1.\]
Next, consider the region \(\beta\sqrt{\mu}|\xi|>1\). Then we also have that \(|\xi|\geq\sqrt{\mu}|\xi|>1\). With this in mind, we can prove the necessary decay estimate. Indeed, since \(b\in C_{c}^{\infty}(\mathbb{R}^{d})\) satisfies (1.10), i.e. for \(h_{b,\min}\in(0,1)\):
\[0<h_{b,\min}\leq 1-\beta b(X),\]
combined with \(\mathrm{sech}(x)\sim e^{-x}\) and \(\sinh(x)\sim e^{x}\) for \(x\in\mathbb{R}\), we have that
\[\Big{|}\partial_{X}^{\alpha}\partial_{\xi}^{\gamma}\Big{(}\frac{ \sinh(\beta b(X)\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\Big{)}\Big{|} \lesssim\mu^{\frac{|\gamma|}{2}}(1+\sqrt{\mu}|\xi|)^{|\alpha|}e^{-h _{b,\min}\sqrt{\mu}|\xi|}\] \[\lesssim(\frac{2}{h_{b,\min}})^{|\alpha|}\mu^{\frac{|\gamma|}{2}} e^{-\frac{1}{2}h_{b,\min}\sqrt{\mu}|\xi|}.\]
Additionally, since \(\sqrt{\mu}(1+|\xi|)\leq 1+\sqrt{\mu}|\xi|\), there holds
\[\mu^{\frac{|\gamma|}{2}}(1+|\xi|)^{|\gamma|}e^{-\sqrt{\mu}|\xi|}\lesssim 1,\]
and so we obtain the estimate
\[\Big{|}\partial_{X}^{\alpha}\partial_{\xi}^{\gamma}\Big{(}\frac{\sinh(\beta b (X)\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\Big{)}\Big{|}\lesssim(1+|\xi|)^{ -|\gamma|}.\] (A.3)
Then combining this estimate with the Leibniz rule we obtain that
\[|\partial_{X}^{\alpha}\partial_{\xi}^{\gamma}L_{1}^{\mu}(\beta b(X),\xi)| \lesssim\frac{1}{\beta}\sum_{\gamma_{1}\in\mathbb{N}^{d}\,:\,\gamma_{1}\leq\gamma}\Big{|}\partial_{X}^{\alpha}\partial_{\xi}^{\gamma_{1}}\Big{(}\frac{\sinh(\beta b(X)\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\Big{)}\Big{|}\,\Big{|}\partial_{\xi}^{\gamma-\gamma_{1}}\Big{(}(\sqrt{\mu}|\xi|)^{-1}\Big{)}\Big{|}\] \[\lesssim(\beta\sqrt{\mu}|\xi|)^{-1}\sum_{\gamma_{1}\in\mathbb{N}^{d}\,:\,\gamma_{1}\leq\gamma}(1+|\xi|)^{-|\gamma_{1}|}|\xi|^{|\gamma_{1}|-|\gamma|}\] \[\lesssim|\xi|^{-|\gamma|}.\]
Since we have that \(|\xi|\geq\sqrt{\mu}|\xi|>1\), we obtain the desired result
\[|\partial_{X}^{\alpha}\partial_{\xi}^{\gamma}L_{1}^{\mu}(\beta b(X),\xi)| \lesssim\langle\xi\rangle^{-|\gamma|}.\] (A.4)
Lastly, let \(\beta\sqrt{\mu}|\xi|\leq 1\) and \(\sqrt{\mu}|\xi|\geq 1\). In this case, we expand \(x\mapsto\sinh(x)\) to obtain
\[L_{1}^{\mu}(\beta b(X),\xi) =-b(X)\mathrm{sech}(\sqrt{\mu}|\xi|)\] \[-\frac{1}{6\beta}\Big{(}\beta^{3}b(X)^{3}\int_{0}^{1}\cosh(t \beta b(X)\sqrt{\mu}|\xi|)(1-t)^{2}\,\mathrm{d}t\Big{)}(\sqrt{\mu}|\xi|)^{2} \mathrm{sech}(\sqrt{\mu}|\xi|).\]
To conclude, we observe that
\[|\partial_{X}^{\alpha}\partial_{\xi}^{\gamma}\big{(}b(X)\mathrm{sech}(\sqrt{ \mu}|\xi|)\big{)}|\lesssim\mu^{\frac{|\gamma|}{2}}e^{-\sqrt{\mu}|\xi|}\lesssim \langle\xi\rangle^{-|\gamma|}.\]
For the second term, we let \(t\in[0,1]\) and observe that we only need to consider
\[\frac{\cosh(t\beta b(X)\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}(\sqrt{\mu}| \xi|)^{2},\]
for which the decay estimate follows similarly to (A.3). Indeed, we first observe that
\[\Big{|}\partial_{X}^{\alpha}\partial_{\xi}^{\gamma}\Big{(}\frac{ \cosh(t\beta b(X)\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\Big{)}\Big{|} \lesssim\mu^{\frac{|\gamma|}{2}}t^{|\alpha|}(1+\sqrt{\mu}t|\xi|)^{| \alpha|}e^{-h_{b,\min}\sqrt{\mu}|\xi|}\] \[\lesssim\mu^{\frac{|\gamma|}{2}}e^{-\frac{h_{b,\min}}{2}\sqrt{\mu }|\xi|},\]
since \(t\in[0,1]\). Then by the Leibniz rule, we get
\[\Big{|}\partial_{X}^{\alpha}\partial_{\xi}^{\gamma}\Big{(}\frac{ \cosh(t\beta b(X)\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\mu|\xi|^{2}\Big{)} \Big{|} \lesssim\sum_{\gamma_{1}\in\mathbb{N}^{d}\,:\,\gamma_{1}\leq\gamma} \mu^{\frac{|\gamma_{1}|}{2}+1}e^{-\frac{h_{b,\min}}{2}\sqrt{\mu}|\xi|}|\xi|^{2 +|\gamma_{1}|-|\gamma|}\] \[\lesssim|\xi|^{-|\gamma|},\]
for \(|\xi|\geq\sqrt{\mu}|\xi|>1\). Consequently, we can conclude this case. By Theorem A.2 there holds,
\[|\mathcal{L}_{1}^{\mu}[\beta b]u|_{H^{s}}\leq M(s)|u|_{H^{s}}.\]
For the symbol \(L_{2}^{\mu}\), we observe, for \(\beta\sqrt{\mu}|\xi|\leq 1\) and \(\sqrt{\mu}|\xi|\leq 1\), using a Taylor expansion, that it is smooth and bounded. Moreover, for frequencies such that \(\beta\sqrt{\mu}|\xi|>1\), we can argue as we did for \(L_{1}^{\mu}\) to get sufficient decay in the frequency variable at infinity. Lastly, in the case \(\beta\sqrt{\mu}|\xi|\leq 1\) and \(\sqrt{\mu}|\xi|\geq 1\) we use the following expansion
\[L_{2}^{\mu}(\beta b(X),\xi) =b(X)\big{(}\text{sech}(\sqrt{\mu}|\xi|)-1\big{)}\frac{1}{\mu|\xi |^{2}}\] \[+\frac{1}{6\beta}\Big{(}\beta^{3}b(X)^{3}\int_{0}^{1}\cosh(t \beta b(X)\sqrt{\mu}|\xi|)(1-t)^{2}\mathrm{d}t\Big{)}\text{sech}(\sqrt{\mu}| \xi|),\]
and again argue as we did for \(L_{1}^{\mu}\).
The estimate on \(L_{3}^{\mu}\) is simpler since it does not depend on \(\frac{1}{\beta}\). Thus, using similar arguments we can prove the necessary decay at infinity, and use a Taylor series to prove the boundedness for small frequencies.
The estimate (1.15) follows directly from the boundedness on \(H^{s}(\mathbb{R}^{d})\) of \(\mathcal{L}_{2}^{\mu}[\beta b]\), since its symbol is in \(S^{0}\), and from the identity
\[L_{1}^{\mu}(\beta b(X),\xi)+b(X)=-\mu L_{2}^{\mu}(\beta b(X),\xi)|\xi|^{2}.\]
Indeed, the symbol \(r_{1}(X,\xi)=L_{2}^{\mu}(\beta b(X),\xi)|\xi|^{2}\) is an element of \(S^{2}\) and by Theorem A.2 we deduce that \(\mathcal{R}_{1}[X,\mathrm{D}]u(X)=\mathcal{F}^{-1}(r_{1}(X,\xi)\hat{u}(\xi)) (X)\) satisfies
\[|\mathcal{R}_{1}[X,\mathrm{D}]u|_{H^{s}}\lesssim|u|_{H^{s+2}},\]
so that
\[|\mathcal{L}_{1}^{\mu}[\beta b]u+bu|_{H^{s}}=\mu|\mathcal{R}_{1}[X,\mathrm{D}]u|_{H^{s}}\lesssim\mu|u|_{H^{s+2}}.\]
The next estimate, given by (1.16), is deduced from the Taylor expansion of the symbol \(L_{1}^{\mu}\) given by:
\[L_{1}^{\mu}(\beta b(X),\xi)=-\Big{(}b(X)+\frac{\mu\beta^{2}}{6}b(X)^{3}|\xi| ^{2}\Big{)}\text{sech}(\sqrt{\mu}|\xi|)-\frac{\mu^{2}\beta^{4}}{120}b(X)^{5}| \xi|^{4}r_{2}(X,\xi),\]
where the remainder \(r_{2}\) is given by
\[r_{2}(X,\xi)=\int_{0}^{1}\cosh(t\beta b(X)\sqrt{\mu}|\xi|)(1-t)^{4}\mathrm{d }t\,\text{sech}(\sqrt{\mu}|\xi|),\]
and is an element of \(S^{0}\) by arguing as above. By extension, the symbol \(b(X)^{5}|\xi|^{4}r_{2}(X,\xi)\in S^{4}\) and we conclude by Theorem A.2.
Lastly, we consider estimate (1.17). Again by a Taylor series expansion, we observe that
\[L_{2}^{\mu}(\beta b(X),\xi) =\Big{(}b(X)(\text{sech}(\sqrt{\mu}|\xi|)-1)\frac{1}{\mu|\xi|^{2}} +\frac{\beta^{2}}{6}b(X)^{3}\Big{)}\text{sech}(\sqrt{\mu}|\xi|)\] \[\quad+\frac{\mu\beta^{4}}{120}b(X)^{5}|\xi|^{2}r_{2}(X,\xi),\]
where \(b(X)^{5}|\xi|^{2}r_{2}(X,\xi)\in S^{2}\), allowing us to conclude by Theorem A.2.
**Remark A.3**.: _We note that we could improve the estimates in the proof above. For instance, we can get \(L_{1}\in S^{-1}\). However, the constant \(\mathcal{M}_{-1}(L_{1})\) would be singular with respect to \(\beta\) and \(\mu\)._
### Technical estimates
In this section we give a series of multiplier estimates. To start, we recall the Fourier multiplier depending on the transverse variable:
\[\mathrm{F}_{0}u(X)=\mathcal{F}^{-1}\Big{(}\frac{\cosh((z+1)\sqrt{\mu}|\xi|)}{ \cosh(\sqrt{\mu}|\xi|)}\hat{u}(\xi)\Big{)}(X).\]
Then the first result reads:
**Proposition A.4**.: _Let \(s\in\mathbb{R}\) and take \(u\in\mathscr{S}(\mathbb{R}^{d})\), then there holds_
\[|\mathrm{F}_{0}u|_{H^{s}} \lesssim|u|_{H^{s}}\] \[|\partial_{z}\mathrm{F}_{0}u|_{H^{s}} \lesssim\mu|\nabla_{X}u|_{H^{s+1}}\] \[|\partial_{z}^{2}\mathrm{F}_{0}u|_{H^{s}} \lesssim\mu|\nabla_{X}u|_{H^{s+1}}.\]
_Moreover, for \(k\in\mathbb{N}\) and under condition (1.10) we have similar estimates on the domain \(\mathcal{S}_{b}=\mathbb{R}^{d}\times[-1+\beta b,0]\):_
\[\|\mathrm{F}_{0}u-u\|_{H^{k,0}(\mathcal{S}_{b})} \lesssim\mu|\nabla_{X}u|_{H^{k+1}}\] \[\|\partial_{z}\mathrm{F}_{0}u\|_{H^{k,0}(\mathcal{S}_{b})} \lesssim\mu|\nabla_{X}u|_{H^{k+1}}\] \[\|\partial_{z}^{2}\mathrm{F}_{0}u\|_{H^{k,0}(\mathcal{S}_{b})} \lesssim\mu|\nabla_{X}u|_{H^{k+1}}.\]
Proof.: The estimates on \(H^{s}(\mathbb{R}^{d})\) are a direct consequence of Plancherel's identity and the Taylor expansion formula for \(x\in\mathbb{R}\):
\[\cosh(x)=1+\frac{x^{2}}{2}\int_{0}^{1}\cosh(tx)(1-t)\,\mathrm{d}t.\]
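In fact, the first estimate does not even require the expansion: since \(|z+1|\leq 1\) on the strip, and \(\cosh\) is even and increasing on \([0,\infty)\), one has \(\cosh((z+1)\sqrt{\mu}|\xi|)\leq\cosh(\sqrt{\mu}|\xi|)\), so that Plancherel's identity yields, uniformly in \(z\),
\[|\mathrm{F}_{0}u|_{H^{s}}=\Big{|}\frac{\cosh((z+1)\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\langle\xi\rangle^{s}\hat{u}\Big{|}_{L^{2}}\leq|u|_{H^{s}}.\]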
For the estimates on \(\mathcal{S}_{b}\), we use that \(-h_{b}(X)>-2\), by assumption (1.10), then extend the definition of \(\mathrm{F}_{0}\) to the domain \(\mathcal{S}:=\mathbb{R}^{d}\times[-2,0]\). The first estimate on \(\mathcal{S}_{b}\) is a consequence of
\[\|\mathrm{F}_{0}u-u\|_{H^{k,0}(\mathcal{S}_{b})}\leq\|\mathrm{F}_{0}u-u\|_{H^ {k,0}(\mathcal{S})}=\Big{\|}\frac{\cosh\big{(}(z+1)\sqrt{\mu}|\mathrm{D}| \big{)}}{\cosh\big{(}\sqrt{\mu}|\mathrm{D}|\big{)}}u-u\Big{\|}_{H^{k,0}( \mathcal{S})}\lesssim\mu|\nabla_{X}u|_{H^{k+1}}.\]
The remaining estimates are proved similarly.
The next result concerns the following operators:
\[T_{1}(z)[X,\mathrm{D}]u(X)=\mathcal{F}^{-1}\Big{(}\frac{\sinh(\frac{z}{h_{b}( X)}\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\hat{u}(\xi)\Big{)}(X),\]
and
\[T_{2}(z)[X,\mathrm{D}]u(X)=\mathcal{F}^{-1}\Big{(}\frac{\cosh(\frac{z}{h_{b}( X)}\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\hat{u}(\xi)\Big{)}(X).\]
We should note that we will apply these operators to functions depending only on \(X\), which makes the dependence on \(z\in[-h_{b},0]\) easier to deal with.
**Proposition A.5**.: _Let \(k\in\mathbb{N}\) and take \(u\in\mathscr{S}(\mathbb{R}^{d})\), then under condition (1.10) we have_
\[\|T_{1}u\|_{H^{k,0}(\mathcal{S}_{b})} \leq M(k)|u|_{H^{k}}\] \[\|T_{2}u\|_{H^{k,0}(\mathcal{S}_{b})} \leq M(k)|u|_{H^{k}}.\]
Proof.: We first observe that \(T_{1}\) is well-defined on \(\mathscr{S}(\mathbb{R}^{d})\). Indeed, for \(t_{0}>\frac{d}{2}\) there holds
\[|T_{1}u(X)| \leq\sup_{z\in[-h_{b}(X),0]}\Big{|}\mathcal{F}^{-1}\Big{(}\frac{ \sinh(\frac{z}{h_{b}(X)}\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\hat{u}(\xi) \Big{)}(X)\Big{|}\] \[\leq\sup_{\xi\in\mathbb{R}^{d}}\,\tanh(\sqrt{\mu}|\xi|)\langle \xi\rangle^{t_{0}}|\hat{u}(\xi)|\int_{\mathbb{R}^{d}}\langle\xi\rangle^{-t_{0} }\,\mathrm{d}\xi\] \[<\infty.\]
Moreover, using similar arguments one can prove \(T_{1}u\in\mathscr{S}(\mathbb{R}^{d})\). The same is true for \(T_{2}\).
Next, we prove the estimates. To do so, we first let \(k=0\) and use a change of variable, Hölder's inequality, the Sobolev embedding, and Plancherel's identity to make the observation:
\[\|T_{1}u\|_{L^{2}(\mathcal{S}_{b})}^{2} =\int_{\mathbb{R}^{d}}h_{b}(X)\int_{-1}^{0}|\mathcal{F}^{-1} \Big{(}\frac{\sinh(z\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)}\hat{u}(\xi) \Big{)}(X)|^{2}\,\mathrm{d}z\mathrm{d}X\] \[\leq|h_{b}|_{L^{\infty}}\int_{-1}^{0}\int_{\mathbb{R}^{d}}| \mathcal{F}^{-1}\Big{(}\frac{\sinh(z\sqrt{\mu}|\xi|)}{\cosh(\sqrt{\mu}|\xi|)} \hat{u}(\xi)\Big{)}(X)|^{2}\,\mathrm{d}X\mathrm{d}z\] \[\leq M_{0}|u|_{L^{2}}^{2}.\]
For higher derivatives, the proof is the same after an application of the chain rule. The same is true for \(T_{2}\).
The next result is on the Dirichlet-Neumann operator (Theorem 3.15 in [16]):
**Proposition A.6**.: _Let \(s\geq 0\). Let \(\zeta\in H^{s+3}(\mathbb{R}^{d})\) be such that (1.2) is satisfied, and take \(\psi\in\dot{H}^{s+3}(\mathbb{R}^{d})\). Then one has_
\[\frac{1}{\mu}|\mathcal{G}^{\mu}\psi|_{H^{s+1}}\leq M(s+3)|\nabla_{X}\psi|_{H^{ s+2}}.\] (A.5)
Lastly, we have the following estimates on the multipliers:
\[\mathrm{F}_{1}=\frac{\tanh{(\sqrt{\mu}|\mathrm{D}|)}}{\sqrt{\mu}|\mathrm{D}|},\quad\mathrm{F}_{2}=\frac{3}{\mu|\mathrm{D}|^{2}}(1-\mathrm{F}_{1}),\quad\mathrm{F}_{3}=\mathrm{sech}(\sqrt{\mu}|\mathrm{D}|),\quad\mathrm{F}_{4}=\frac{2}{\mu|\mathrm{D}|^{2}}(1-\mathrm{F}_{3}).\]
**Proposition A.7**.: _Let \(s\in\mathbb{R}\) and take \(u\in\mathscr{S}(\mathbb{R}^{d})\), then for \(i\in\{1,2,3,4\}\) there holds_
\[|(\mathrm{F}_{i}-1)u|_{H^{s}}\lesssim\mu|\nabla_{X}u|_{H^{s+1}}.\]
Proof.: The estimates are a direct consequence of Plancherel's identity and the Taylor expansion formulas:
\[\cosh(x) =1+\frac{x^{2}}{2}\int_{0}^{1}\cosh(tx)(1-t)\,\mathrm{d}t\] \[\sinh(x) =x+\frac{x^{3}}{6}\int_{0}^{1}\cosh(tx)(1-t)^{2}\,\mathrm{d}t,\] \[\frac{1}{\cosh(x)} =1-\frac{x^{2}}{2}+\frac{x^{4}}{24}\int_{0}^{1}\Big{(}\mathrm{ sech}(tx)-20\mathrm{sech}^{3}(tx)+24\mathrm{sech}^{5}(tx)\Big{)}(1-t)^{3}\, \mathrm{d}t\]
for \(0\leq x\leq 1\).
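To illustrate how these expansions are used, consider \(\mathrm{F}_{1}\) (the other multipliers are treated in the same way). They yield the elementary bound \(0\leq 1-\frac{\tanh(x)}{x}\leq\min\big{(}1,\frac{x^{2}}{3}\big{)}\) for \(x\geq 0\), so Plancherel's identity gives
\[|(\mathrm{F}_{1}-1)u|_{H^{s}}=\Big{|}\Big{(}\frac{\tanh(\sqrt{\mu}|\xi|)}{\sqrt{\mu}|\xi|}-1\Big{)}\langle\xi\rangle^{s}\hat{u}\Big{|}_{L^{2}}\lesssim\mu\big{|}|\xi|^{2}\langle\xi\rangle^{s}\hat{u}\big{|}_{L^{2}}\lesssim\mu|\nabla_{X}u|_{H^{s+1}}.\]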
### Classical estimates
In this section, we recall some classical estimates that will be used throughout the paper. We end the section with the proof of Proposition 2.9.
**Lemma A.8**.: _Let \(\beta\in[0,1]\), \(b\in C_{c}^{\infty}(\mathbb{R}^{d})\), \(h_{b}=1-\beta b\), \(\mathcal{S}_{b}=\mathbb{R}^{d}\times(-h_{b},0)\), and assume (1.10) holds true. Then for \(u\in H^{1}(\mathcal{S}_{b})\) satisfying \(u|_{z=0}=0\), there holds_
\[\|u\|_{L^{2}(\mathcal{S}_{b})}\lesssim\|\nabla^{\mu}_{X,z}u\|_{L^{2}(\mathcal{ S}_{b})},\] (A.6)
_and_
\[|u|_{z=-h_{b}}|_{L^{2}}\lesssim\|\nabla^{\mu}_{X,z}u\|_{L^{2}(\mathcal{S}_{b})}.\] (A.7)
_Moreover, if we further suppose \(u\in H^{k,k}(\mathcal{S}_{b})\) then_
\[\Big{|}\int_{-1+\beta b(\cdot)}^{0}u(\cdot,z)\;\mathrm{d}z\Big{|}_{H^{k}}^{2} \leq M(k)\big{(}\|\nabla^{\mu}_{X,z}u\|_{H^{k,0}(\mathcal{S}_{b})}^{2}+\sum_{ j=1}^{k}\|\partial_{z}^{j}u\|_{H^{k-j,0}(\mathcal{S}_{b})}^{2}\big{)}.\] (A.8)
Proof.: For the proof of (A.6), using the assumption \(u|_{z=0}=0\) and the Fundamental Theorem of Calculus combined with the Cauchy-Schwarz inequality, we get that
\[\int_{\mathbb{R}^{d}}\int_{-1+\beta b(X)}^{0}|u(X,z)|^{2}\; \mathrm{d}z\mathrm{d}X =\int_{\mathbb{R}^{d}}\int_{-1+\beta b(X)}^{0}|\int_{z}^{0}( \partial_{z}u)(X,z^{\prime})\;\mathrm{d}z^{\prime}|^{2}\;\mathrm{d}z\mathrm{d}X\] \[\leq(1+\beta|b|_{L^{\infty}})^{2}\int_{\mathbb{R}^{d}}\int_{-1+ \beta b(X)}^{0}|(\partial_{z}u)(X,z^{\prime})|^{2}\;\mathrm{d}z^{\prime}\mathrm{ d}X.\]
For the proof of (A.7), we first use the assumption \(u|_{z=0}=0\) with the Fundamental Theorem of Calculus and Young's inequality to get that
\[\int_{\mathbb{R}}u(X,-h_{b}(X))^{2}\;\mathrm{d}X =\int_{\mathbb{R}}\int_{-1+\beta b(X)}^{0}\partial_{z}\big{(}u(X, z)^{2}\big{)}\;\mathrm{d}z\mathrm{d}X\] \[\leq\int_{\mathbb{R}}\int_{-1+\beta b(X)}^{0}\partial_{z}u(X,z)^{ 2}\;\mathrm{d}z\mathrm{d}X+\int_{\mathbb{R}}\int_{-1+\beta b(X)}^{0}u(X,z)^{2} \;\mathrm{d}z\mathrm{d}X.\]
Then by (A.6) we conclude that
\[|u|_{z=-h_{b}}|_{L^{2}}\lesssim\|\nabla^{\mu}_{X,z}u\|_{L^{2}(\mathcal{S}_{b})}.\]
For the proof of (A.8), we first consider the estimate with one derivative to fix ideas. In particular, we perform a change of variable and then use the chain rule and Hölder's inequality to get
\[\Big{|}\nabla_{X}\int_{-1+\beta b(\cdot)}^{0}u(\cdot,z)\;\mathrm{ d}z\Big{|}_{L^{2}}^{2} =\int_{\mathbb{R}^{d}}\left|\nabla_{X}\int_{-1}^{0}u(X,zh_{b}(X))h _{b}(X)\;\mathrm{d}z\right|^{2}\;\mathrm{d}X\] \[\leq\int_{\mathbb{R}^{d}}\big{(}\int_{-1}^{0}|(\nabla_{X}u)(X,zh_ {b}(X))|h_{b}(X)\;\mathrm{d}z\big{)}^{2}\;\mathrm{d}X\] \[\quad+\int_{\mathbb{R}^{d}}\big{(}\int_{-1}^{0}|z|\beta|\nabla_{X} b|\;|(\partial_{z}u)(X,zh_{b}(X))|h_{b}(X)\;\mathrm{d}z\big{)}^{2}\;\mathrm{d}X\] \[\quad+\beta|\nabla_{X}b|_{L^{\infty}}\int_{\mathbb{R}^{d}}\big{(} \int_{-1}^{0}|u(X,zh_{b}(X))|\;\mathrm{d}z\big{)}^{2}\;\mathrm{d}X.\]
Next, we can transform the integral back to its original domain using (1.10), and then apply the Cauchy-Schwarz and Hölder inequalities to obtain
\[\Big{|}\nabla_{X}\int_{-1+\beta b(\cdot)}^{0}u(\cdot,z)\;\mathrm{d}z\Big{|}_{L^ {2}}^{2}\leq M(k+1)(\|u\|_{H^{1,0}(\mathcal{S}_{b})}^{2}+\|\partial_{z}u\|_{L^ {2}(\mathcal{S}_{b})}^{2})\]
Repeating this process for any \(k\in\mathbb{N}\), using the Leibniz rule, gives us
\[\Big{|}\int_{-1+\beta b(\cdot)}^{0}u(\cdot,z)\,\mathrm{d}z\Big{|}_{H^ {k}}^{2} =\sum_{\gamma\in\mathbb{N}^{d}\,:\,|\gamma|\leq k}\int_{\mathbb{R}^{d}} \big{|}\partial_{X}^{\gamma}\int_{-1}^{0}\big{(}u(X,zh_{b}(X))h_{b}(X)\big{)}\, \mathrm{d}z\big{|}^{2}\,\mathrm{d}X\] \[\leq M(k)(\|u\|_{H^{k,0}(\mathcal{S}_{b})}^{2}+\sum_{j=0}^{k}\| \partial_{z}^{j}u\|_{H^{k-j,0}(\mathcal{S}_{b})}^{2}).\]
To conclude, we use the assumption \(u|_{z=0}=0\) to apply the Poincaré inequality (A.6) to the first term.
Before proving the main result, we need some classical estimates (see Proposition B.2 and Proposition B.4 in [16]).
**Lemma A.9**.: _Let \(t_{0}>\frac{d}{2}\), \(s\geq-t_{0}\), \(f\in H^{\max\{t_{0},s\}}(\mathbb{R}^{d})\), and take \(g\in H^{s}(\mathbb{R}^{d})\), then_
\[|fg|_{H^{s}}\lesssim|f|_{H^{\max\{t_{0},s\}}}|g|_{H^{s}}.\] (A.9)
_Moreover, if there exists \(c_{0}>0\) such that \(1+g\geq c_{0}\), then_
\[\Big{|}\frac{f}{1+g}\Big{|}_{H^{s}}\lesssim C(c_{0},|g|_{L^{\infty}})(1+|f|_{H ^{s}})|g|_{H^{s}}.\] (A.10)
Lastly, we will prove the main result of this section:
Proof of Proposition 2.9.: We first establish the existence and uniqueness of variational solutions to (2.12). Here the variational formulation associated with (2.12) is given by
\[\int_{\mathcal{S}_{b}}P(\Sigma_{b})\nabla_{X,z}^{\mu}u\cdot\nabla_{X,z}^{\mu} \varphi\,\mathrm{d}z\mathrm{d}X=\int_{\mathcal{S}_{b}}f\varphi\,\mathrm{d}z \mathrm{d}X+\int_{\mathbb{R}^{d}}g\;\varphi|_{z=-h_{b}}\,\mathrm{d}X,\] (A.11)
for \(\varphi\in H^{1}(\mathcal{S}_{b})\). Then we use the coercivity estimate (2.5) and the Poincaré inequality (A.6) to get that
\[c\|\varphi\|_{H^{1}}^{2}\leq\int_{\mathcal{S}_{b}}P(\Sigma_{b})\nabla_{X,z}^{\mu}\varphi\cdot\nabla_{X,z}^{\mu}\varphi\,\mathrm{d}z\mathrm{d}X.\]
Moreover, the right-hand side of (A.11) is continuous by Cauchy-Schwarz and the trace inequality (A.7). As a result, by the Riesz representation theorem, there exists a unique variational solution \(u\in H^{1,0}(\mathcal{S}_{b})\).
Next, we will prove that \(u\in H^{k,0}(\mathcal{S}_{b})\) by considering the problem on the fixed strip \(\mathcal{S}=\mathbb{R}^{d}\times[-1,0]\), where we define
\[\Sigma(X,z)=(X,hz+\varepsilon\zeta).\]
Then we have that
\[(u\circ\Sigma_{b}^{-1})\circ\Sigma(X,z)=u(X,zh_{b}):=\tilde{u}(X,z),\]
and through a change of variable, we obtain the equation
\[\int_{\mathcal{S}}\tilde{P}(\Sigma)\nabla_{X,z}^{\mu}\tilde{u}\cdot\nabla_{X, z}^{\mu}\tilde{\varphi}\,\mathrm{d}z\mathrm{d}X=\int_{\mathcal{S}}\tilde{f} \tilde{\varphi}\,\mathrm{d}z\mathrm{d}X+\int_{\mathbb{R}^{d}}g\;\tilde{\varphi }|_{z=-1}\,\mathrm{d}X,\] (A.12)
where \(\tilde{f}(X,z)=f(X,zh_{b}(X))\), \(\tilde{\varphi}(X,z)=\varphi(X,zh_{b}(X))\), and \(\tilde{P}(\Sigma)\) is an elliptic matrix given by
\[\tilde{P}(\Sigma)=\begin{pmatrix}(1+\partial_{z}\theta)\mathrm{Id}&-\sqrt{ \mu}\nabla_{X}\theta\\ -\sqrt{\mu}(\nabla_{X}\theta)^{T}&\frac{1+\mu|\nabla_{X}\theta|^{2}}{1+ \partial_{z}\theta}\end{pmatrix}.\]
with \(\theta(X,z)=(\varepsilon\zeta-\beta b)z+\varepsilon\zeta\). At this point, the problem is classical, and we refer to Proposition 4.5 in [11] to deduce that \(\tilde{u}\in H^{k}(\mathcal{S})\) for \(k\in\mathbb{N}\) and satisfying
\[\|\nabla^{\mu}_{X,z}\tilde{u}\|_{H^{k,0}(\mathcal{S})} \leq M(k+1)(|g|_{H^{k}}+\|\tilde{f}\|_{H^{k,0}})\] \[\leq M(k+1)(|g|_{H^{k}}+\sum_{j=0}^{k}\|\partial_{z}^{j}f\|_{H^{k- j,0}}).\]
In the last inequality, we used the chain rule and the product estimate (A.9). Moreover, for \(k\geq 1+t_{0}\) we have \(\tilde{u}\in C^{2}(\mathcal{S})\), and it is a classical solution of
\[\begin{cases}\nabla^{\mu}_{X,z}\cdot\big{(}\tilde{P}(\Sigma)\nabla^{\mu}_{X,z}\tilde{u}\big{)}=\tilde{f}\\ \tilde{u}|_{z=0}=0,\quad\partial_{n}^{\tilde{P}}\tilde{u}|_{z=-1}=g.\end{cases}\]
Then using the equation, we can control the partial derivatives in \(z\) by the derivatives in \(X\) through
\[\frac{1+|\nabla_{X}\theta|^{2}}{h}\partial_{z}^{2}\tilde{u}=\tilde{f}-\mu \nabla_{X}\cdot(h\nabla_{X}\tilde{u})+\mu\nabla_{X}\cdot(\nabla_{X}\theta \partial_{z}\tilde{u})+\mu\partial_{z}(\nabla_{X}\theta\cdot\nabla_{X}\tilde{u })-\frac{(\partial_{z}|\nabla_{X}\theta|^{2})}{h}\partial_{z}\tilde{u},\]
and the regularity and positivity of \(\frac{1+|\nabla_{X}\theta|^{2}}{h}\). Indeed, there holds,
\[\|\partial_{z}^{k}\tilde{u}\|_{L^{2}(\mathcal{S})}\leq M(k+1)(\|\tilde{u}\|_{H^{k,0}(\mathcal{S})}+\|\partial_{z}\tilde{u}\|_{H^{k,0}(\mathcal{S})}+\|\partial_{z}^{k}\tilde{f}\|_{L^{2}(\mathcal{S})}).\]
Having the desired regularity, we may relate these observations to the original problem \(u\) on \(\mathcal{S}_{b}\). In particular, by (1.10) we have that
\[\nabla^{\mu}_{X,z}u(X,z)=\nabla^{\mu}_{X,z}\big{(}\tilde{u}(X,\frac{z}{h_{b}} )\big{)}\in H^{k,0}(\mathcal{S}_{b}),\]
using the chain rule, the regularity of \(h_{b}\), and a change of variable to get that
\[\|\nabla^{\mu}_{X,z}u\|_{H^{k,0}(\mathcal{S}_{b})} \leq M(k+1)(\|\nabla^{\mu}_{X,z}\tilde{u}\|_{H^{k,0}(\mathcal{S} )}+\sum_{j=0}^{k}\|\partial_{z}^{j}\tilde{u}\|_{H^{k,0}(\mathcal{S})})\] \[\leq M(k+1)(|g|_{H^{k}}+\sum_{j=0}^{k}\|\partial_{z}^{j}f\|_{H^{k -j,0}(\mathcal{S}_{b})}).\]
## Acknowledgements
This research was supported by a Trond Mohn Foundation grant. It was also supported by the Faculty Development Competitive Research Grants Program 2022-2024 of Nazarbayev University: Nonlinear Partial Differential Equations in Material Science, Ref. 11022021FD2929.
|
2301.09074 | Average Rényi Entropy of a Subsystem in Random Pure State | In this paper we examine the average R\'{e}nyi entropy $S_{\alpha}$ of a
subsystem $A$ when the whole composite system $AB$ is a random pure state. We
assume that the Hilbert space dimensions of $A$ and $AB$ are $m$ and $m n$
respectively. First, we compute the average R\'{e}nyi entropy analytically for
$m = \alpha = 2$. We compare this analytical result with the approximate
average R\'{e}nyi entropy, which is shown to be very close. For general case we
compute the average of the approximate R\'{e}nyi entropy
$\widetilde{S}_{\alpha} (m,n)$ analytically. When $1 \ll n$,
$\widetilde{S}_{\alpha} (m,n)$ reduces to $\ln m - \frac{\alpha}{2 n} (m -
m^{-1})$, which is in agreement with the asymptotic expression of the average
von Neumann entropy. Based on the analytic result of $\widetilde{S}_{\alpha}
(m,n)$ we plot the $\ln m$-dependence of the quantum information derived from
$\widetilde{S}_{\alpha} (m,n)$. It is remarkable to note that the nearly
vanishing region of the information becomes shorten with increasing $\alpha$,
and eventually disappears in the limit of $\alpha \rightarrow \infty$. The
physical implication of the result is briefly discussed. | MuSeong Kim, Mi-Ra Hwang, Eylee Jung, DaeKil Park | 2023-01-22T08:08:51Z | http://arxiv.org/abs/2301.09074v2 | # Average Renyi Entropy of a Subsystem in Random Pure State
###### Abstract
In this paper we examine the average Rényi entropy \(S_{\alpha}\) of a subsystem \(A\) when the whole composite system \(AB\) is a random pure state. We assume that the Hilbert space dimensions of \(A\) and \(AB\) are \(m\) and \(mn\) respectively. First, we compute the average Rényi entropy analytically for \(m=\alpha=2\). We compare this analytical result with the approximate average Rényi entropy, which is shown to be very close. For the general case we compute the average of the approximate Rényi entropy \(\widetilde{S}_{\alpha}(m,n)\) analytically. When \(1\ll n\), \(\widetilde{S}_{\alpha}(m,n)\) reduces to \(\ln m-\frac{\alpha}{2n}(m-m^{-1})\), which is in agreement with the asymptotic expression of the average von Neumann entropy. Based on the analytic result of \(\widetilde{S}_{\alpha}(m,n)\) we plot the \(\ln m\)-dependence of the quantum information derived from \(\widetilde{S}_{\alpha}(m,n)\). It is remarkable to note that the nearly vanishing region of the information becomes shorter with increasing \(\alpha\), and eventually disappears in the limit of \(\alpha\rightarrow\infty\). The physical implication of the result is briefly discussed.
Introduction
Although their motivations are different, the authors of Ref.[1; 2; 3] considered a similar problem: the average von Neumann entropy of a subsystem \(\rho_{A}\) whose Hilbert space dimension is \(m\) when the whole system is a \(mn\)-dimensional random bipartite pure state \(\rho=|\psi\rangle_{AB}\langle\psi|\) with a condition \(m\leq n\). Of course, \(\rho_{A}=\mathrm{Tr}_{B}\rho\) and \(\rho_{B}=\mathrm{Tr}_{A}\rho\). In particular, Ref.[3] introduced the probability distribution
\[P\left(p_{1},\cdots,p_{m}\right)dp_{1}\cdots dp_{m}\propto\delta\left(1-\sum_{i=1}^{m}p_{i}\right)\prod_{1\leq i<j\leq m}(p_{i}-p_{j})^{2}\prod_{k=1}^{m}\left(p_{k}^{n-m}dp_{k}\right) \tag{1}\]
where \(\{p_{1},\cdots,p_{m}\}\) are eigenvalues of \(\rho_{A}\). Thus, the problem can be summarized as a computation of the following quantity:
\[S_{von}(m,n)\equiv\langle S_{A}\rangle=-\int\left(\sum_{i=1}^{m}p_{i}\ln p_{ i}\right)P\left(p_{1},\cdots,p_{m}\right)dp_{1}\cdots dp_{m}. \tag{2}\]
Page in Ref.[3] computed \(S_{von}(2,n)\) and \(S_{von}(3,n)\) analytically, and \(S_{von}(4,n)\) and \(S_{von}(5,n)\) with the aid of MATHEMATICA 2.0. Finally, he conjectured that \(S_{von}(m,n)\) is
\[S_{von}(m,n)=\sum_{k=n+1}^{mn}\frac{1}{k}-\frac{m-1}{2n}\sim\ln m-\frac{m}{2n} \tag{3}\]
where the last equation is valid only for \(1\ll m\leq n\). The last term \(\frac{m}{2n}\) indicates that the entanglement entropy obeys a volume law[4]. Page's conjecture was rigorously proven in Ref.[5; 6; 7]. In particular, the authors in Ref.[6; 7] changed the multiple integral of Eq. (2) into a single integral by using a generalized Laguerre polynomial[8].
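For readers who wish to check Eq. (3) directly, a short Monte Carlo estimate is enough. The sketch below is ours (not from the cited references); it uses the standard fact that the squared singular values of a normalized Gaussian \(m\times n\) state matrix are the eigenvalues \(p_{i}\) of \(\rho_{A}\).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 2, 5, 20000

S = 0.0
for _ in range(trials):
    # Haar-random pure state of the mn-dimensional composite system
    psi = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
    psi /= np.linalg.norm(psi)
    p = np.linalg.svd(psi, compute_uv=False) ** 2  # eigenvalues of rho_A
    S += -np.sum(p * np.log(p)) / trials

page = sum(1.0 / k for k in range(n + 1, m * n + 1)) - (m - 1) / (2 * n)
print(S, page)  # the two numbers agree to Monte Carlo accuracy
```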
In Ref.[9] Page applied Eq. (3) to the information loss problem[10; 11] in Hawking radiation[12; 13]. He assumed that the whole random pure state \(|\psi\rangle_{AB}\) represents the Hawking radiation (\(\rho_{A}\)) and the remaining black hole (\(\rho_{B}\)) states. The reason a random state is chosen is that the composite state is assumed to be highly complicated and hence we do not know the state \(|\psi\rangle_{AB}\) exactly. Defining the quantum information \(I_{von}(m,n)=\ln m-S_{von}(m,n)\), he plotted the \(\ln m\)-dependence of \(I_{von}(m,n)\) (see Fig. 2(b)) and claimed that the information initially comes out very slowly. His calculation suggests that in order to obtain sufficient information from Hawking radiation it takes at least the time necessary to radiate half the entropy of the black hole[14; 15].
The Page curve (1.3) has been extended to the multipartite case[16; 17] and to random mixed states[18]. Besides black holes, it has also been applied to many different fields such as fermion systems[19; 20], random spin chains[21], bosonic[22] and fermionic[23; 24] Gaussian states, quantum thermalization[25; 26; 27; 28], and quantum chaos[29; 30; 31; 32; 33; 26]. It has also been applied in quantum information theory, e.g. to random quantum circuits[34; 35; 36] and random quantum channels[37; 38; 39].
In this paper we will extend Ref.[3] to the average Rényi entropy defined as
\[S_{\alpha}(m,n)\equiv\langle S_{A,\alpha}\rangle=\frac{1}{1-\alpha}\int\ln \left(\sum_{i=1}^{m}p_{i}^{\alpha}\right)P\left(p_{1},\cdots,p_{m}\right)dp_{1 }\cdots dp_{m}. \tag{1.4}\]
Even though we apply the method of Ref.[7], it is impossible to convert the multiple integral of Eq. (1.4) into a single integral. Thus, it seems to be impossible to compute \(S_{\alpha}(m,n)\) analytically. In the next section, however, we compute \(S_{\alpha=2}(2,n)\) analytically. It is shown there that the analytical result of \(S_{\alpha=2}(2,n)\) is very close to the approximate Rényi entropy defined by
\[\widetilde{S}_{\alpha}(m,n)=\frac{1}{1-\alpha}\ln\left(\sum_{i=1}^{m}\langle p _{i}^{\alpha}\rangle\right) \tag{1.5}\]
when \(m=\alpha=2\). In Eq. (1.5) \(\sum_{i=1}^{m}\langle p_{i}^{\alpha}\rangle\) is defined as
\[\sum_{i=1}^{m}\langle p_{i}^{\alpha}\rangle\equiv Z_{\alpha}=\int\left(\sum_{ i=1}^{m}p_{i}^{\alpha}\right)P\left(p_{1},\cdots,p_{m}\right)dp_{1}\cdots dp _{m}. \tag{1.6}\]
In section III we will compute \(Z_{\alpha}\) explicitly for any positive real \(\alpha\). It is represented as a double summation. In section IV we compute the approximate Rényi entropy \(\widetilde{S}_{\alpha}(m,n)\) analytically. Defining the quantum information \(I_{\alpha}(m,n)=\ln m-\widetilde{S}_{\alpha}(m,n)\) and using various asymptotic formulas, we show that for large \(n\), \(I_{\alpha}(m,n)\) reduces to \(\frac{\alpha}{2n}(m-m^{-1})\), which is in agreement with Eq. (1.3) when \(\alpha=1\) and \(m\gg 1\). We plot \(I_{\alpha}(m,n)\) with varying \(\alpha\) in this section and compare it to the case of the von Neumann entropy presented in Ref. [9]. With increasing \(\alpha\), the region of almost vanishing information in \(I_{\alpha}(m,n)\) becomes shorter and eventually disappears at \(\alpha=\infty\). This means that in the application to black hole radiation the quantum information of the Rényi entropy comes out earlier than that of the von Neumann entropy. In section V a brief conclusion is given.
## II Computation of \(S_{\alpha=2}(2,n)\)
Defining \(q_{i}=rp_{i}\), one can show from Eq. (4) that \(S_{\alpha}(m,n)\) can be expressed as
\[S_{\alpha}(m,n)=\frac{\alpha}{\alpha-1}\psi(mn)-\frac{1}{\alpha-1}\frac{\int \ln\left(\sum_{i=1}^{m}q_{i}^{\alpha}\right)Qdq_{1}\cdots dq_{m}}{\int Qdq_{1} \cdots dq_{m}} \tag{5}\]
where \(\psi(z)=\Gamma^{\prime}(z)/\Gamma(z)\) is a digamma function and1
Footnote 1: Eq. (5) is called a density of the eigenvalues of the Wishart matrix.
\[Q(q_{1},\cdots,q_{m})dq_{1}\cdots dq_{m}=\prod_{1\leq i<j\leq m}(q_{1}-q_{j})^ {2}\prod_{k=1}^{m}\left(e^{-q_{k}}q_{k}^{n-m}dq_{k}\right). \tag{6}\]
Now, we put \(m=\alpha=2\). In this case it is easy to show
\[\int Qdq_{1}dq_{2}=\frac{2}{n-1}\Gamma^{2}(n). \tag{7}\]
Figure 1: (Color online) The \(n\)-dependence of exact Rényi entropy (red triangle) given in Eq. (11) and approximate Rényi entropy (blue cross) given in Eq. (12) when \(m=\alpha=2\). The black dot represents the exact von Neumann entropy given in Eq. (3) with \(m=2\).
Also the numerator in Eq. (2.1) can be written as
\[\int\ln(q_{1}^{2}+q_{2}^{2})Qdq_{1}dq_{2}=\int_{0}^{\infty}dq_{1}\int_{0}^{\infty}dq_{2}e^{-(q_{1}+q_{2})}\left[q_{1}^{n}q_{2}^{n-2}+q_{1}^{n-2}q_{2}^{n}-2\left(q_{1}q_{2}\right)^{n-1}\right]\ln(q_{1}^{2}+q_{2}^{2}). \tag{2.4}\]
In order to compute Eq. (2.4) analytically, we use the following double integral formula[40]
\[\int_{0}^{\infty}dx\int_{0}^{\infty}dy\ln(x^{2}+y^{2})e^{-px-qy}=-\frac{2}{pq} \left[\gamma+\frac{2p^{2}\ln q+2q^{2}\ln p-\pi pq}{2(p^{2}+q^{2})}\right] \tag{2.5}\]
where \(\gamma\approx 0.5772\) is Euler's constant. Applying \(\left(-\frac{\partial}{\partial p}\right)^{m}\left(-\frac{\partial}{\partial q}\right)^{n}\) to both sides of Eq. (2.5) and putting \(p=q=1\) at the final stage of calculation, one can compute
\[F(m,n)=\int_{0}^{\infty}dx\int_{0}^{\infty}dyx^{m}y^{n}\ln(x^{2}+y^{2})e^{-x-y} \tag{2.6}\]
analytically. For example, \(F(2,3)=F(3,2)=-24\gamma+21\pi-14\). In principle, the general expression of \(F(m,n)\) for arbitrary integers \(m\) and \(n\) can be derived with the aid of MATHEMATICA 13.1. Since, however, it is very lengthy and complicated2
Footnote 2: Furthermore, \(F(m,n)\) depends on the \(j^{th}\) term of some recurrence relations, where \(j\) is a function of \(m\) and \(n\). This term is expressed with the aid of few special functions such as Lerch transcendent
\[\Phi(z,s,a)=\sum_{k=0}^{\infty}\frac{z^{k}}{(k+a)^{s}}.\]
, we will not present the explicit expression in this paper.
Using Eq. (2.6) it is easy to show
\[\int\ln(q_{1}^{2}+q_{2}^{2})Qdq_{1}dq_{2}=2F(n,n-2)-2F(n-1,n-1). \tag{2.7}\]
Inserting Eqs. (2.3) and (2.7) into Eq. (2.1) with assuming \(m=\alpha=2\), the average Renyi entropy becomes
\[S_{\alpha=2}(2,n)=2\psi(2n)-(n-1)\frac{F(n,n-2)-F(n-1,n-1)}{\Gamma^{2}(n)}. \tag{2.8}\]
As we will show in the next section, \(\langle p_{1}^{2}+p_{2}^{2}\rangle=(n+2)/(2n+1)\). Therefore, Eq. (1.5) reduces to
\[\widetilde{S}_{\alpha=2}(2,n)=-\ln\left(\frac{n+2}{2n+1}\right). \tag{2.9}\]
In Fig. 1 we plot the \(n\)-dependence of \(S_{\alpha=2}(2,n)\) and \(\widetilde{S}_{\alpha=2}(2,n)\) as red triangles and blue crosses, respectively. The black dots represent the average von Neumann entropy given in Eq. (1.3) with \(m=2\). As expected, the Rényi entropy is less than the von Neumann entropy. As the figure shows, the exact and approximate Rényi entropies are very close to each other.
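The comparison in Fig. 1 is straightforward to reproduce. The following Python sketch (ours; the tiny constant inside the logarithm only guards against evaluating \(\ln 0\) at the corner of the integration domain, and the default quadrature tolerances may need tuning for larger \(n\)) evaluates the exact \(S_{\alpha=2}(2,n)\) of Eq. (2.1) by numerical quadrature and compares it with Eq. (2.9):

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import digamma, gammaln

def exact_S2(n):
    # numerator of Eq. (2.1) for m = alpha = 2, with the weight
    # Q = (q1 - q2)^2 (q1 q2)^(n-2) e^(-q1-q2); argument order of dblquad
    # is (inner, outer), harmless here since the integrand is symmetric
    f = lambda q2, q1: (q1 - q2)**2 * (q1*q2)**(n - 2) \
        * np.exp(-q1 - q2) * np.log(q1**2 + q2**2 + 1e-300)
    num, _ = dblquad(f, 0, np.inf, 0, np.inf)
    den = 2 * np.exp(2 * gammaln(n)) / (n - 1)   # Eq. (2.3)
    return 2 * digamma(2 * n) - num / den

approx_S2 = lambda n: -np.log((n + 2) / (2*n + 1))  # Eq. (2.9)

for n in (2, 3, 5, 10):
    print(n, exact_S2(n), approx_S2(n))
```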
## III Computation of \(Z_{\alpha}=\sum_{j=1}^{m}\langle p_{j}^{\alpha}\rangle\)
In this section we will compute \(\sum_{j=1}^{m}\langle p_{j}^{\alpha}\rangle\) analytically. First, we assume that \(\alpha\) is an integer for simplicity. Later we will derive the expression of \(Z_{\alpha}\) for any positive real \(\alpha\). This will be used later to compute \(\widetilde{S}_{\alpha}(m,n)\) defined in Eq. (1.5).
Introducing \(q_{i}=rp_{i}\) again, one can show
\[\sum_{j=1}^{m}\langle p_{j}^{\alpha}\rangle=\frac{\Gamma(mn)}{\Gamma(mn+ \alpha)}\frac{\int\left(\sum_{i=1}^{m}q_{i}^{\alpha}\right)Qdq_{1}\cdots dq_{m }}{\int Qdq_{1}\cdots dq_{m}}. \tag{10}\]
As Refs.[6; 7] show, we first note
\[\prod_{1\leq i<j\leq m}(q_{1}-q_{j})^{2}=\left|\begin{array}{ccc}p_{0}^{ \beta}(q_{1})&\cdots&p_{0}^{\beta}(q_{m})\\ p_{1}^{\beta}(q_{1})&\cdots&p_{1}^{\beta}(q_{m})\\ \vdots&\ddots&\vdots\\ p_{m-1}^{\beta}(q_{1})&\cdots&p_{m-1}^{\beta}(q_{m})\end{array}\right|^{2} \tag{11}\]
where
\[p_{k}^{\beta}(q)=\sum_{r=0}^{k}\left(\begin{array}{c}k\\ r\end{array}\right)(-1)^{r}\frac{\Gamma(k+\beta+1)}{\Gamma(k+\beta-r+1)}q^{k-r }=(-1)^{k}k!L_{k}^{\beta}(q). \tag{12}\]
In Eq. (12) \(L_{k}^{\beta}(q)\) is a generalized Laguerre polynomial. It is worthwhile noting that Eq. (11) is valid for any real \(\beta\). Thus, we can choose \(\beta\) freely for convenience. Using the properties of the generalized Laguerre polynomial, one can show[40; 41]
\[\int_{0}^{\infty}dqe^{-q}q^{\beta}p_{k_{1}}^{\beta}(q)p_{k_{2}}^{\beta}(q)= \Gamma(k_{1}+1)\Gamma(k_{1}+\beta+1)\delta_{k_{1},k_{2}} \tag{13}\]
and
\[\int_{0}^{\infty}dqe^{-q}q^{a-1}p_{k}^{b}(q)=(1-a+b)_{k}\Gamma(a)(-1)^{k} \tag{14}\]
where \((a)_{k}=a(a+1)\cdots(a+k-1)\).
Now, let us define
\[J_{m}\equiv\frac{\int\left(\sum_{i=1}^{m}q_{i}^{\alpha}\right)Qdq_{1}\cdots dq _{m}}{\int Qdq_{1}\cdots dq_{m}}. \tag{15}\]
First, we consider the case of \(m=2\) for simplicity. In this case we choose \(\beta=n-2\). Using Eq. (11) and orthogonality condition (13) it is easy to show
\[\int Qdq_{1}dq_{2}=2!\left[\int dq_{1}e^{-q_{1}}q_{1}^{n-2}\left(p_{0}^{n-2}( q_{1})\right)^{2}\right]\left[\int dq_{2}e^{-q_{2}}q_{2}^{n-2}\left(p_{1}^{n-2}( q_{2})\right)^{2}\right]. \tag{16}\]
Similarly, it is straightforward to show
\[\int\left(\sum_{i=1}^{2}q_{i}^{\alpha}\right)Qdq_{1}dq_{2} \tag{3.8}\] \[=2!\Bigg{[}\left\{\int dq_{1}e^{-q_{1}}q_{1}^{n-2}\left(p_{0}^{n-2}(q_{1})\right)^{2}\right\}\left\{\int dq_{2}e^{-q_{2}}q_{2}^{n+\alpha-2}\left(p_{1}^{n-2}(q_{2})\right)^{2}\right\}\] \[\qquad+\left\{\int dq_{1}e^{-q_{1}}q_{1}^{n+\alpha-2}\left(p_{0}^{n-2}(q_{1})\right)^{2}\right\}\left\{\int dq_{2}e^{-q_{2}}q_{2}^{n-2}\left(p_{1}^{n-2}(q_{2})\right)^{2}\right\}\Bigg{]}.\]
Inserting Eqs. (3.7) and (3.8) into Eq. (3.6) with \(m=2\) and using Eqs. (3.4) and (3.5), one can show
\[J_{2}=\sum_{k=0}^{1}\frac{I_{k,\alpha}(n-2)}{\Gamma(k+1)\Gamma(k+n-1)} \tag{3.9}\]
where
\[I_{k,\alpha}(x)=\int dqe^{-q}q^{\alpha+x}\left(p_{k}^{x}(q)\right)^{2}. \tag{3.10}\]
Therefore, the multiple integral in Eq. (3.6) is changed into a single integral.
Now, we consider the general case. In this case we choose \(\beta=n-m\). A similar calculation leads to
\[J_{m}=\sum_{k=0}^{m-1}\frac{I_{k,\alpha}(n-m)}{\Gamma(k+1)\Gamma(k+n-m+1)}. \tag{3.11}\]
Finally, we should compute \(I_{k,\alpha}(x)\) analytically. First, we note the recursion relation \(p_{k}^{x}(q)=p_{k}^{x+1}(q)+kp_{k-1}^{x+1}(q)\). Applying this recursion relation iteratively, one can derive
\[p_{k}^{x}(q)=\sum_{i=0}^{\ell}\left(\begin{array}{c}\ell\\ i\end{array}\right)(k-i+1)_{i}p_{k-i}^{x+\ell}(q) \tag{3.12}\]
for all nonnegative integer \(\ell\). Choosing \(\ell=\alpha\) and using the orthogonality condition (3.4), one can compute \(I_{k,\alpha}(x)\), whose explicit expression is
\[I_{k,\alpha}(x)=\Gamma^{2}(k+1)\sum_{i=0}^{\alpha}\left(\begin{array}{c} \alpha\\ i\end{array}\right)^{2}\frac{\Gamma(k+\alpha+x-i+1)}{\Gamma(k-i+1)}. \tag{3.13}\]
Thus, inserting Eq. (3.13) into Eq. (3.11) one can derive \(J_{m}\) as double summations. Then, Eq. (3.1) is expressed as
\[Z_{\alpha}\equiv\sum_{i=1}^{m}\langle p_{i}^{\alpha}\rangle=\frac{\Gamma(mn)} {\Gamma(mn+\alpha)}\sum_{k=0}^{m-1}\frac{\Gamma(k+1)}{\Gamma(k+n-m+1)}\sum_{i= 0}^{\alpha}\left(\begin{array}{c}\alpha\\ i\end{array}\right)^{2}\frac{\Gamma(k+n-m+\alpha-i+1)}{\Gamma(k-i+1)}. \tag{3.14}\]
Eq. (3.14) can be used to prove Page's conjecture (1.3). From Eq. (3.14) one can differentiate \(Z_{\alpha}\) with respect to \(\alpha\). Using \(\Gamma^{\prime}(z)=\Gamma(z)\psi(z)\) and \(\psi(n)=-\gamma+\sum_{k=1}^{n-1}k^{-1}\), it is straightforward to show that \(-\frac{\partial}{\partial\alpha}Z_{\alpha}|_{\alpha=1}\) coincides with Eq. (1.3) exactly.
Finally, let us derive a different expression of \(Z_{\alpha}\), which is valid for any positive real \(\alpha\). In the second summation of Eq. (3.14) the actual upper bound of the parameter \(i\) is \(\min(\alpha,k)\) because if \(k<\alpha\), the \(\Gamma(k-i+1)\) located in the denominator diverges when \(k+1\leq i\leq\alpha\). In order to avoid this inconvenience, we introduce a new variable \(j=k-i\), which runs from \(-\alpha\) to \(m-1\). In this case \(\Gamma(k-i+1)\) is changed into \(\Gamma(j+1)\), which goes to infinity for \(j\leq-1\). For this reason negative \(j\) does not contribute to \(Z_{\alpha}\). As a result, \(Z_{\alpha}\) is expressed in the form:
\[Z_{\alpha}=\frac{\Gamma(mn)\Gamma^{2}(\alpha+1)}{\Gamma(mn+\alpha)}\sum_{k=0} ^{m-1}\frac{\Gamma(k+1)}{\Gamma(k+n-m+1)}\sum_{j=0}^{m-1}\frac{\Gamma(n-m+ \alpha+1+j)}{\Gamma^{2}(k-j+1)\Gamma^{2}(\alpha-k+j+1)\Gamma(j+1)}. \tag{3.15}\]
Although this expression has a similar problem when \(j\geq k+1\), \(\alpha\) is not involved in the summation upper bound. Thus, Eq. (3.15) is valid for any positive real \(\alpha\). Exactly the same expression was derived in Ref.[42]. However, Eq. (3.14) is more convenient if \(\alpha\) is an integer and \(\alpha\ll m\), because the number of summations is very small compared to that of Eq. (3.15).
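Both expressions are easy to evaluate numerically. The following Python sketch (ours) implements Eq. (3.15) with reciprocal-Gamma arithmetic, checks the value \(\langle p_{1}^{2}+p_{2}^{2}\rangle=(n+2)/(2n+1)\) used in section II, and verifies \(-\partial_{\alpha}Z_{\alpha}|_{\alpha=1}\) against Page's formula (1.3) by a central finite difference:

```python
import numpy as np
from scipy.special import gammaln, rgamma

def Z(alpha, m, n):
    # Eq. (3.15); rgamma(x) = 1/Gamma(x) vanishes at non-positive integers,
    # which automatically removes the terms outside the allowed range
    pref = np.exp(gammaln(m*n) + 2*gammaln(alpha + 1) - gammaln(m*n + alpha))
    tot = 0.0
    for k in range(m):
        ck = np.exp(gammaln(k + 1) - gammaln(k + n - m + 1))
        for j in range(m):
            t = np.exp(gammaln(n - m + alpha + 1 + j) - gammaln(j + 1))
            tot += ck * t * rgamma(k - j + 1)**2 * rgamma(alpha - k + j + 1)**2
    return pref * tot

m, n = 2, 5
print(Z(2, m, n), (n + 2) / (2*n + 1))           # matches (n+2)/(2n+1)

h = 1e-5                                         # -dZ/dalpha at alpha = 1
S_vn = (Z(1 - h, m, n) - Z(1 + h, m, n)) / (2*h)
page = sum(1.0 / k for k in range(n + 1, m*n + 1)) - (m - 1) / (2*n)
print(S_vn, page)                                # agree to O(h^2)
```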
## IV Quantum Information from \(\widetilde{S}_{\alpha}(m,n)\)
From the previous sections, the approximate Rényi entropy \(\widetilde{S}_{\alpha}(m,n)\) defined in Eq. (1.5) is given by
\[\widetilde{S}_{\alpha}(m,n)=\frac{1}{1-\alpha}\ln Z_{\alpha} \tag{4.1}\]
where \(Z_{\alpha}\) is presented in Eq. (3.14). Now, we assume \(n\gg 1\). Using
\[\lim_{z\rightarrow\infty}\Gamma(1+z)\sim e^{-z}z^{z}\sqrt{2\pi z }\left[1+\frac{1}{12z}+\mathcal{O}(z^{-2})\right] \tag{4.2}\] \[\lim_{x\rightarrow\infty}\left(1+\frac{a}{x}\right)^{x}\sim e^{a }\left[1-\frac{a^{2}}{2x}+\mathcal{O}(x^{-2})\right],\]
one can show
\[\frac{\Gamma(mn)}{\Gamma(mn+\alpha)}\sim(mn)^{-\alpha}\left[1-\frac{ \alpha(\alpha-1)}{2mn}+\mathcal{O}(n^{-2})\right] \tag{4.3}\] \[\frac{\Gamma(n+k-m+1+\alpha-i)}{\Gamma(n+k-m+1)}\] \[\qquad\sim n^{\alpha-i}\left[1+\frac{1}{2n}\left\{i^{2}+i(2m-2k- 2\alpha-1)-\alpha(2m-2k-\alpha-1)\right\}+\mathcal{O}(n^{-2})\right].\]
Then, for large \(n\), \(Z_{\alpha}\) reduces to
\[Z_{\alpha}\sim m^{1-\alpha}\left[1+\frac{\alpha(\alpha-1)}{2n}(m-m^{-1})+ \mathcal{O}(n^{-2})\right]. \tag{4.4}\]
As expected, \(Z_{\alpha}\) becomes \(m\) or \(1\) when \(\alpha=0\) or \(1\).
Combining Eq. (4.1) and Eq. (4.4), for large \(n\), \(\widetilde{S}_{\alpha}(m,n)\) behaves as follows:
\[\widetilde{S}_{\alpha}(m,n)\approx\ln m-\frac{1}{\alpha-1}\ln\left[1+\frac{ \alpha(\alpha-1)}{2n}(m-m^{-1})\right]\sim\ln m-\frac{\alpha}{2n}(m-m^{-1}). \tag{4.5}\]
If \(1\ll m\) and \(\alpha=1\), this equation reduces to \(\widetilde{S}_{\alpha}(m,n)\sim\ln m-m/(2n)\), which coincides with Eq. (1.3).
The quantum information can be defined as the deficit of the average Renyi entropy from the maximum:
\[I_{\alpha}(m,n)=\ln m-\widetilde{S}_{\alpha}(m,n)\sim\frac{1}{\alpha-1}\ln \left[1+\frac{\alpha(\alpha-1)}{2n}(m-m^{-1})\right] \tag{4.6}\]
where the last equality is valid for \(n\gg 1\).
Figure 2: (Color online) \(\ln m\)-dependence of (a) \(\widetilde{S}_{\alpha}(m,n)\) and (b) \(I_{\alpha}(m,n)\). Here, we take \(mn=2^{4}3^{6}5^{2}=291600\). In both figures the black, red, blue, green, and orange dots correspond to \(\alpha=1\), \(10\), \(100\), \(1000\), and \(\infty\) respectively.
Now, let us consider the \(m>n\) case. From the Schmidt decomposition we know that the eigenvalues of the density operators of systems A and B are the same. Thus, the approximate Rényi entropy becomes \(\widetilde{S}_{\alpha}(n,m)\). If \(m\gg 1\), the information reduces to
\[I_{\alpha}(m,n)\sim\ln m-\ln n+\frac{1}{\alpha-1}\ln\left[1+\frac{\alpha( \alpha-1)}{2m}(n-n^{-1})\right]. \tag{4.7}\]
Finally, let us consider the \(\alpha\rightarrow\infty\) limit. Eq. (3.14) implies that for \(\alpha\rightarrow\infty\) the leading term of \(Z_{\alpha}\) is
\[Z_{\alpha=\infty}\sim\frac{\Gamma(mn)}{\Gamma(m)\Gamma(n)}\alpha^{-(m-1)(n-1)}. \tag{4.8}\]
Therefore, \(\widetilde{S}_{\alpha\rightarrow\infty}=0\) and \(I_{\alpha=\infty}=\ln m\).
The \(\ln m\)-dependence of \(\widetilde{S}_{\alpha}(m,n)\) and \(I_{\alpha}(m,n)\) are plotted in Fig. 2 for \(\alpha=1\) (black), \(10\) (red), \(100\) (blue), \(1000\) (green) and \(\infty\) (orange). As Fig. 2(a) exhibits, the \(\ln m\)-dependence of the average Rényi entropy \(\widetilde{S}_{\alpha}(m,n)\) decreases with increasing \(\alpha\), and eventually approaches zero at \(\alpha=\infty\). As Fig. 2(b) exhibits, the nearly vanishing region of \(I_{\alpha}(m,n)\) becomes shorter with increasing \(\alpha\) and eventually disappears at \(\alpha=\infty\).
For example, let us define \(m_{*}\), the smallest \(m\) satisfying \(I_{\alpha}(m,n)>0.1\). The \(\alpha\)-dependence of \(m_{*}\) is summarized in Table I. As expected, \(m_{*}\) decreases with increasing \(\alpha\).
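Because \(mn=291600\) in Fig. 2 involves Gamma functions of very large arguments, a practical scan for \(m_{*}\) is best done in log space. The sketch below (ours) evaluates \(\ln Z_{\alpha}\) from Eq. (3.15) via a log-sum-exp and reproduces the trend that \(m_{*}\) decreases with \(\alpha\); the value \(\alpha=1\) is approached as a limit:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def lnZ(alpha, m, n):
    # log of Z_alpha, Eq. (3.15); terms with j >= k + 1 vanish (Gamma poles),
    # so only j <= k is summed.  gammaln returns log|Gamma|, which is exactly
    # what is needed here since the Gamma functions appear squared.
    logs = [gammaln(m*n) + 2*gammaln(alpha + 1) - gammaln(m*n + alpha)
            + gammaln(k + 1) - gammaln(k + n - m + 1)
            + gammaln(n - m + alpha + 1 + j) - gammaln(j + 1)
            - 2*gammaln(k - j + 1) - 2*gammaln(alpha - k + j + 1)
            for k in range(m) for j in range(k + 1)]
    return logsumexp(logs)

def info(alpha, m, n):
    # I_alpha(m,n) = ln m - S_tilde; for m > n interchange m and n
    # (Schmidt symmetry, as discussed above)
    mm, nn = (m, n) if m <= n else (n, m)
    return np.log(m) - lnZ(alpha, mm, nn) / (1 - alpha)

N = 2**4 * 3**6 * 5**2                 # mn = 291600, as in Fig. 2
divisors = [d for d in range(1, N + 1) if N % d == 0]
for alpha in (1.0001, 10, 100, 1000):  # alpha -> 1 recovers the von Neumann case
    m_star = next(m for m in divisors if info(alpha, m, N // m) > 0.1)
    print(alpha, m_star)
```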
## V Conclusions
In this paper we examine the average Rényi entropy \(S_{\alpha}\) of a subsystem \(A\) when the whole composite system \(AB\) is a random pure state. We assume that the Hilbert space dimensions of \(A\) and \(AB\) are \(m\) and \(mn\) respectively, with \(m\leq n\). If \(m\geq n\), the Schmidt decomposition guarantees that the average value is obtained by simply interchanging \(m\) and \(n\). First, we compute the average Rényi entropy analytically for \(m=\alpha=2\). We compare this analytical result with the approximate average Rényi entropy \(\widetilde{S}_{\alpha=2}(2,n)\). As Fig. 1 shows, these two results are very close to each other, especially when \(n\) is large. For the general case we
compute \(\widetilde{S}_{\alpha}(m,n)\) analytically. When \(1\ll n\), \(\widetilde{S}_{\alpha}(m,n)\) reduces to \(\ln m-\frac{\alpha}{2n}(m-m^{-1})\), which is in agreement with the asymptotic expression of the average von Neumann entropy given in Ref.[3]. Defining the information by Eq. (4.6), we plot the \(\ln m\)-dependence of the information \(I_{\alpha}(m,n)\) in Fig. 2(b). It is remarkable to note that the nearly vanishing region of \(I_{\alpha}(m,n)\) becomes shorter with increasing \(\alpha\), and eventually disappears in the limit of \(\alpha\rightarrow\infty\).
This result has important implications for the information loss problem. If we assume that \(A\) and \(B\) are the radiation and remaining black hole states, the information derived from the Rényi entropy can be obtained from Hawking radiation earlier and earlier, relative to that of the von Neumann entropy, with increasing \(\alpha\); in the limit of \(\alpha=\infty\) the information is radiated as soon as Hawking radiation starts. If this is right, we should reconsider the "Alice and Bob" thought-experiment described in Ref.[43; 44] on the no-cloning theorem more carefully. Besides black hole physics, we want to examine the effect of our result in quantum information theory, e.g. on random quantum circuits and random quantum channels.
A drawback of our result is the fact that our calculation is based on \(\widetilde{S}_{\alpha}(m,n)\). Although we expect \(S_{\alpha}(m,n)\) to exhibit a behavior similar to Fig. 2, we cannot prove it analytically. Numerical calculation is also very difficult when \(m\) is large, because the calculation requires an \(m\)-fold integration. Probably, we may need a new idea to explore this issue.
**Acknowledgement**: This work was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No. 2021R1A2C1094580).
|
2302.00266 | Electron as a Tiny Mirror: Radiation From a Worldline With Asymptotic
Inertia | We present a moving mirror analog of the electron, whose worldline possesses
asymptotic constant velocity with corresponding beta Bogolubov coefficients
that are consistent with finite total emitted energy. Furthermore, the quantum
analog model is in agreement with the total energy obtained by integrating the
classical Larmor power. | Michael R. R. Good, Yen Chin Ong | 2023-02-01T06:17:10Z | http://arxiv.org/abs/2302.00266v1 | # Electron as a Tiny Mirror: Radiation From a Worldline With Asymptotic Inertia
###### Abstract
We present a moving mirror analog of the electron, whose worldline possesses asymptotic constant velocity with corresponding beta Bogolubov coefficients that are consistent with finite total emitted energy. Furthermore, the quantum analog model is in agreement with the total energy obtained by integrating the classical Larmor power.
## I Introduction: Fixed Radiation
Uniform acceleration, while beautiful and simple, is not globally physical. Consider the problem of infinite radiation energy from an eternal uniformly accelerated charge. The physics of eternal unlimited motions is not only the cause of misunderstandings, but also the starting point of incorrect physical interpretations, especially when considering globally calculable quantities, like the total radiation energy emitted by a moving charge. Infinite radiation energy afflicts accurate scrutiny of the physical connections between acceleration, temperature, and particle creation.
More collective consideration should be prescribed to straighten out the issue. One path forward is the use of limited non-uniformly accelerated trajectories, capable of rendering a finite global total radiation energy. The trade-off with these trajectories is usually the lack of simplicity or tractability in determining the radiation spectrum in the first place.
In this short Letter, we present a solution for finite radiation energy and its corresponding spectrum. Limited solutions of this type are rare and can be employed to investigate the physics associated with contexts where a globally continuous equation of motion is desired. For instance, the solution is suited for applications like the harvesting of entropy from a globally defined trajectory of an Unruh-DeWitt detector, the non-equilibrium thermodynamics of the non-uniform Davies-Fulling-Unruh effect, or the dynamical Casimir effect [1], and particle production of the moving mirror model [2; 3; 4].
Providing a simple conceptual and quantitative analog application to understanding the radiation emitted by an electron, we demonstrate the existence of a correspondence (see similar correspondences in [5; 6; 7; 8; 9; 10; 11]) between it and the moving mirror. At the very least, this functional coincidence is general enough to be applied to any tractably integrable rectilinear classical trajectory that emits finite radiation energy. Here, we analytically compute the relevant integrable quantities for the specific solution and demonstrate full consistency. The analog approach treats the electron as a tiny moving mirror, somewhat similar to the Schwarzschild [12], Reissner-Nordström [13], and Kerr [14] black mirror analogies, but with asymptotic inertia of a limited acceleration trajectory. Interestingly, the analog reveals previously unknown electron acceleration radiation spectra, thus helping to develop general but precise links between acceleration, gravity, and thermodynamics.
## II Elements of Electrodynamics: Energy from Moving Electrons
In electrodynamics [15; 16; 17], the relativistically covariant Larmor formula (the speed of light \(c\), the electron charge \(q_{e}\), and vacuum permittivity \(\epsilon_{0}\) are set to unity),
\[P=\frac{\alpha^{2}}{6\pi}, \tag{1}\]
is used to calculate the total power radiated by a point charge as it accelerates [15]. Its usefulness is due in part to Lorentz invariance and the fact that proper acceleration, \(\alpha\), is intuitive, being what an accelerometer measures in its own instantaneous rest frame [18].
When any charged particle accelerates, energy is radiated in the form of electromagnetic waves, and the total energy of these waves is found by integrating over coordinate time. That is, the integral
\[E=\int_{-\infty}^{\infty}P\,\mathrm{d}t, \tag{2}\]
demonstrates that the Larmor power, \(P=\alpha^{2}/6\pi\), directly tells an observer the total energy emitted by a point charge along its time-like worldline. This includes trajectories that lack horizons, see e.g., [19]. This result
is finite only when the proper acceleration is asymptotically zero; i.e., the worldline must be asymptotically inertial.
The force of radiation resistance, whose magnitude is given relativistically as the proper time derivative of the proper acceleration,
\[F=\frac{\alpha^{\prime}(\tau)}{6\pi}, \tag{3}\]
is known as the magnitude of the Lorentz-Abraham-Dirac (LAD) force, e.g. [20]. The power, \(F\cdot v\), associated with this force can be called the Feynman power [21]. The total energy emitted is also consistent with the Feynman power, where one checks:
\[E=-\int_{-\infty}^{\infty}F\cdot v\,\mathrm{d}t. \tag{4}\]
The negative sign demonstrates that the total work against the LAD force represents the total energy loss. That is, the total energy loss from radiation resistance due to Feynman power must equal the total energy radiated by Larmor power. Larmor and Feynman powers are not the same, but the total energies obtained from both are identical in magnitude, at least for rectilinear trajectories that are asymptotically inertial.
Interestingly, the above results also hold in a quantum analog model of a moving mirror. A central novelty of this work is to explicitly connect the quantum moving mirror radiation spectra with classical moving point charge radiation spectra. Traditionally, e.g. [2; 22; 23; 24], and recently, e.g. [25; 26; 27], moving mirror models in \((1+1)\)-dimensions are employed to study properties of Hawking radiation for black holes. Here, we show that it is also useful to model the spectral finite energy of electron radiation. In particular, a suitably constructed mirror trajectory (which is quite natural) can produce the same total energy consistent with the above via
\[E=\int_{0}^{\infty}\int_{0}^{\infty}\omega|\beta_{\omega\omega^{\prime}}|^{2} \,\mathrm{d}\omega\,\mathrm{d}\omega^{\prime}. \tag{5}\]
The final drifting speed of the mirror or electron will be less than the speed of light, labeled \(s\), with \(0<s<1\). We also denote \(a:=\omega(1+s)+\omega^{\prime}(1-s)\), \(b:=\omega(1-s)+\omega^{\prime}(1+s)\), and \(c:=a+b\); \(d:=a-b\). Note that \(c=2(\omega+\omega^{\prime})\).
## III GO trajectory for finite energy emission
We consider a globally defined, continuous worldline, which is rectilinear, time-like, and possesses asymptotically zero velocity in the far past, while travelling to an asymptotically constant velocity in the far future (i.e., it is asymptotically inertial both in the past and in the future). It radiates a finite amount of positive energy and has beta Bogolubov coefficients that are analytically tractable. The 'GO' trajectory, if you will, goes like (Good-Ong 2015 [28]),
\[z(t)=\frac{s}{2\kappa}\ln(e^{2\kappa t}+1), \tag{6}\]
where \(\kappa\) is an acceleration parameter. Applying the Larmor formula \(P=\alpha^{2}/6\pi\), its power is
\[P=\frac{2\kappa^{2}}{3\pi}\frac{s^{2}e^{-4\kappa t}\left(1+e^{-2\kappa t} \right)^{2}}{\left[\left(1+e^{-2\kappa t}\right)^{2}-s^{2}\right]^{3}}. \tag{7}\]
Notice the power is always positive and asymptotically drops to zero. See Figure 1 for an illustration.
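Equation (7) can be verified symbolically from Eq. (6), using the rectilinear-motion expression \(\alpha=\gamma^{3}\ddot{z}\); the following sketch (ours) does so with SymPy:

```python
import sympy as sp

t, s, k = sp.symbols('t s kappa', positive=True)
z = s/(2*k)*sp.log(sp.exp(2*k*t) + 1)            # Eq. (6)
v = sp.diff(z, t)
alpha2 = sp.diff(v, t)**2/(1 - v**2)**3          # alpha^2 = gamma^6 * zddot^2
P = alpha2/(6*sp.pi)                             # Larmor power, Eq. (1)

E = sp.exp(-2*k*t)                               # Eq. (7), as printed
P7 = 2*k**2*s**2*E**2*(1 + E)**2/(3*sp.pi*((1 + E)**2 - s**2)**3)
print(sp.simplify(P - P7))                       # prints 0
```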
The Feynman power, \(F\cdot v\), associated with the self-force Eq. (3) is
\[F\cdot v=\frac{2\kappa^{2}s^{2}e^{4\kappa t}\left(j_{1}e^{6\kappa t}+j_{2}e^ {4\kappa t}+e^{2\kappa t}+1\right)}{3\pi\left(-j_{1}e^{4\kappa t}+2e^{2\kappa t }+1\right)^{3}} \tag{8}\]
where \(j_{1}=s^{2}-1\) and \(j_{2}=2s^{2}-1\). Just like the Larmor power, Eq. (7), the Feynman power, Eq. (8), asymptotically dies off, but unlike the Larmor power, the Feynman power has a period of negative radiation reaction. See Figure 2 for an illustration.
We now compute the total energy, using either the Larmor power or the Feynman power, integrated over time. In terms of the final rapidity \(\eta=\tanh^{-1}s\) and the corresponding Lorentz factor \(\gamma\), the total energy is given by
\[E=\frac{\kappa}{24\pi}\left[\left(\gamma^{2}-1\right)+\left(\frac{\eta}{s}-1 \right)\right]. \tag{9}\]
We remark that \(\left(\frac{\eta}{s}-1\right)=\frac{1}{2s}\ln\frac{1+s}{1-s}-1\) is proportional to the lowest order soft energy of inner bremsstrahlung in the case of beta decay (equation 3 in [11]), which is the deep IR contribution. One can see that Eq. (9) is finite for all \(0<s<1\) and consistent with both the
Figure 1: A plot of the Larmor power, Eq. (1), of the GO trajectory, Eq. (6), as a function of time and final constant speed, \(s=0.9\), i.e. Eq. (7). Here \(\kappa=1\). This plot helps illustrate that the Larmor power never emits negative energy flux (NEF) and asymptotically dies off, consistent with a physically finite amount of total radiation energy, Eq. (2).
Larmor power and Feynman power. After we compute the beta Bogoliubov spectrum and plot it in Figure 3, we will compute its total energy and plot it in Figure 4. We call Eq. (9) the Larmor energy, to differentiate it from the Bogoliubov energy, Eq. (5), found after substituting Eq. (13). The energy is a function of the final constant speed, \(s\).
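Both consistency statements are easy to check numerically. The sketch below (ours; the finite integration window is a numerical convenience, since both integrands decay exponentially outside it) integrates Eq. (7) and Eq. (8) and compares them with Eq. (9):

```python
import numpy as np
from scipy.integrate import quad

s, kappa = 0.9, 1.0
j1, j2 = s**2 - 1, 2*s**2 - 1

def larmor(t):   # Eq. (7)
    E = np.exp(-2*kappa*t)
    return 2*kappa**2*s**2*E**2*(1 + E)**2/(3*np.pi*((1 + E)**2 - s**2)**3)

def feynman(t):  # Eq. (8), the Feynman power F.v
    e2 = np.exp(2*kappa*t)
    return (2*kappa**2*s**2*e2**2*(j1*e2**3 + j2*e2**2 + e2 + 1)
            /(3*np.pi*(-j1*e2**2 + 2*e2 + 1)**3))

E_larmor = quad(larmor, -40, 40, limit=200)[0]
E_feynman = -quad(feynman, -40, 40, limit=200)[0]
gamma2, eta = 1/(1 - s**2), np.arctanh(s)
E_exact = kappa/(24*np.pi)*((gamma2 - 1) + (eta/s - 1))   # Eq. (9)
print(E_larmor, E_feynman, E_exact)                       # all three agree
```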
Finally, the spectrum given by the Bogolubov coefficients is best found by first considering the presence of a mirror in vacuum, e.g. [29; 30]. The mode functions that correspond to the in-vacuum state,
\[\phi^{\rm in}_{\omega^{\prime}}=\frac{1}{\sqrt{4\pi\omega^{\prime}}}\left[e^{- i\omega^{\prime}v}-e^{-i\omega^{\prime}p(u)}\right], \tag{10}\]
and mode functions that correspond to the out-vacuum state,
\[\phi^{\rm out}_{\omega}=\frac{1}{\sqrt{4\pi\omega}}\left[e^{-i\omega f(v)}-e^ {-i\omega u}\right], \tag{11}\]
comprise the two sets of incoming and outgoing modes needed for the Bogolubov coefficients. The \(f(v)\) and \(p(u)\) functions express the trajectory of the mirror, Eq. (6), but in null coordinates, \(u=t-z\) and \(v=t+z\). In spacetime coordinates congruent with Eq. (6), one form of the beta integral [19] for one side of the mirror is,
\[\beta_{\omega\omega^{\prime}}=\int_{-\infty}^{\infty}\mathrm{d}z\frac{e^{i \omega_{n}z-i\omega_{p}t(z)}}{4\pi\sqrt{\omega\omega^{\prime}}}\left[\omega_ {p}-\omega_{n}t^{\prime}(z)\right], \tag{12}\]
where \(\omega_{p}=\omega+\omega^{\prime}\) and \(\omega_{n}=\omega-\omega^{\prime}\). Combining the results for each side of the mirror [31] by adding the squares of the beta Bogolubov coefficients ensures that we account for all the radiation emitted by the mirror [32]. The overall count per mode per mode is
\[|\beta_{\omega\omega^{\prime}}|^{2}=\frac{s^{2}\omega\omega^{\prime}Z}{2\pi abcd\kappa}\left(\frac{e^{\frac{\pi d}{4\kappa}}-1}{\big{(}e^{\frac{\pi a}{4\kappa}}-1\big{)}\big{(}e^{\frac{\pi b}{4\kappa}}-1\big{)}}\right)e^{\frac{\pi b}{4\kappa}}, \tag{13}\]
where \(Z=b\operatorname{csch}\left(\frac{\pi a}{4\kappa}\right)+a\operatorname{csch}\left(\frac{\pi b}{4\kappa}\right)\). Eq. (13) combines the squares, \(|\beta_{R}|^{2}+|\beta_{L}|^{2}\), of the coefficients for each side of the mirror [28]. See a plot of the symmetry between the modes \(\omega\) and \(\omega^{\prime}\) in Figure 3.
It is then straightforward to verify that the total energy obtained by integrating the power is the same as by using the beta Bogoliubov integral Eq. (5). We cannot prove this analytically, but a numerical integral is sufficiently convincing. See Figure 4 for a plot of the Larmor and Bogoliubov energies as a function of final constant speed.
Figure 4: A plot of the Larmor energy, Eq. (9) and Bogoliubov energy Eq. (5) using coefficients Eq. (13), both as a function of final constant speed, \(s\). Here \(0<s<0.99\) and \(\kappa=1\). This plot helps confirm that Larmor energy and Bogoliubov energy are equivalent, substantiating the double-sided moving mirror as an analog model of the electron.
Figure 3: A plot of the coefficients Eq. (13), as a function of in and out modes, \(\omega\) and \(\omega^{\prime}\), where final constant speed, \(s=0.444\) for illustration. Here \(\kappa=1\). This plot underscores the symmetry of the modes in the particle per mode squared distribution spectrum of the beta Bogoliubov coefficients, Eq. (13).
Figure 2: A plot of the Feynman power, \(F\cdot v\) associated with the self-force Eq. (3), of the GO trajectory, Eq. (6), as a function of time and final constant speed, \(s=0.9\), i.e. Eq. (8). Here \(\kappa=1\). This plot helps illustrate the Feynman power dies off asymptotically, has a period of negative radiation reaction, and is also consistent with a physically finite amount of total radiation energy, Eq. (4).
## IV Discussions: mirrors, electrons, and black holes
Prior studies of accelerated electrons and their relationship to mirrors are few; however, several works, e.g. [33; 34], connect electrons to the general Davies-Fulling-Unruh effect (perhaps the most well-known is that of Bell and Leinaas [35], which considered the possibility of using accelerated electrons as thermometers to demonstrate the QFT relationship between acceleration and temperature). Nevertheless, perhaps an early clue that a functional identity existed was found in 1982 by Ford and Vilenkin [24], who showed that the LAD self-force takes the same form for both mirrors and electrons. In 1995, Nikishov and Ritus [5] asserted the spectral symmetry and found that the LAD radiation reaction has a term that corresponds to the negative energy flux (NEF) from moving mirrors. Ritus examined [6; 7; 8] the correspondence connecting the radiation from both the electron and mirror systems, claiming not only a deep symmetry between the two, but a fundamental identity related to bare charge quantization [10]. Recently, the duality was extended to Larmor power [32] and the deep infrared [11]. The approach has pedagogical application; for instance, it was used to demonstrate the physical difference between radiation power loss and kinetic power loss [9].
The GO moving mirror was initially constructed to model the evaporation of black holes that exhibit a "death gasp" [36; 37; 38] - an emission of NEF due to unitarity being preserved. It is therefore a noteworthy result that the finite total energy emitted from the double-sided mirror matches the result from the Larmor formula for an electron. In this sense the GO mirror trajectory is a crude but functional analog of a drifting electron that starts at zero velocity and speeds away to some constant velocity. A single-sided moving mirror does not account for all the radiation emitted, and differs from the electron spectra. Notably, there is no known NEF radiated from an electron.
In the literature, one finds some properties that are shared by black holes and electrons. For example, the ratio of the magnetic moment of an electron to its spin angular momentum is \(ge/2m\) with \(g=2\), which is twice the value of the gyromagnetic ratio for a classical rotating charged body (\(g=1\)). Curiously, as Carter has shown [39], a Kerr-Newman black hole also has \(g=2\). This has led to some speculation on whether the electron is a Kerr-Newman singularity (the angular momentum and charge of the electron are too large for a black hole of the electron's mass, so there is no horizon) [40] (see also [41; 42]). The no-hair property of black holes is also similar to that of elementary particles: all electrons look the same. Of course our mirror model is too simple to seek certain further connections between particle physics and black holes; in particular, it does not involve any charge or angular momentum. Nevertheless, _precisely_ because of this, it is surprising that its total emitted energy should be given by the integral of the Larmor formula.
Near-term possible theoretical applications of the electron-mirror correspondence include extension to non-rectilinear trajectories, notably the uniformly accelerated worldlines of Letaw [43], which have Unruh-like temperatures [44] and power distributions [45]. Applying the general study of Kothawala and Padmanabhan [46] to electrons moving along time-dependent accelerations and comparing the effect to an Unruh-DeWitt detector could prove fruitful for understanding thermal response. Moreover, moving mirror models can be useful in cosmology [47], in particular in modeling particle production due to the expansion of space [48]. This expansion is accelerated due to an unknown dark energy, which may not be a cosmological constant and thus can decay [49; 50]. If dark energy is some kind of vacuum energy, it might be subject to further study from mirror analogs, just like Casimir energy. (In fact, dark energy could be Casimir-like [51].)
Near-term possible experimental applications of electron-mirror holography include leveraging the correspondence to disentangle effects in experiments like the Analog Black Hole Evaporation via Lasers (AnaBHEL) collaboration [52] and the RDK II collaboration [53; 54]; see also [55]. The former exploits the accelerating relativistic moving mirror as a probe of the spectrum of quantum vacuum radiation [56; 57], and the latter measures the photon spectrum with high precision as the electron is subjected to extreme accelerations during the process of radiative neutron beta decay.
###### Acknowledgements.
MG thanks the FY2021-SGP-1-STMM Faculty Development Competitive Research Grant No. 021220FD3951 at Nazarbayev University. YCO thanks the National Natural Science Foundation of China (No. 11922508) for funding support.
|
2306.12395 | On Some Problems of Operator Theory and Complex Analysis | In 1955 Kadison \cite{14} asked whether the analogue of the classical
Burnside's theorem of the Linear Algebra holds in the infinite dimensional
case. We use reproducing kernels method to solve the Kadison question. Namely,
we prove that any proper weakly closed subalgebra $\mathcal{A}$ of the algebra
$\mathcal{B}\left( H\right) $ of bounded linear operators on infinite
dimensional complex Hilbert spaces $H$ has a nontrivial invariant subspace,
i.e., $\mathcal{A}$ is a nontransitive algebra. This solves The Transitive
Algebra Problem positively, and hence Hyperinvariant Subspace Problem and
Invariant Subspace Problem are also solved positively. In this context, we also
consider the celebrated Riemann Hypothesis of the theory of meromorphic
functions and solve it in the negative. | Mubariz T. Garayev | 2023-06-19T21:08:03Z | http://arxiv.org/abs/2306.12395v2 | # On Some Problems of Operator Theory and Complex Analysis
###### Abstract.
In 1955 Kadison [14] asked whether the analogue of the classical Burnside theorem of linear algebra holds in the infinite dimensional case. We use the reproducing kernels method to solve the Kadison question. Namely, we prove that any proper weakly closed subalgebra \(\mathcal{A}\) of the algebra \(\mathcal{B}\left(H\right)\) of bounded linear operators on an infinite dimensional complex Hilbert space \(H\) has a nontrivial invariant subspace, i.e., \(\mathcal{A}\) is a nontransitive algebra. This solves the Transitive Algebra Problem positively, and hence the Hyperinvariant Subspace Problem and the Invariant Subspace Problem are also solved positively. In this context, we also consider the celebrated Riemann Hypothesis of the theory of meromorphic functions and solve it in the negative.
Key words and phrases: Hardy space, reproducing kernel, hyperinvariant subspace, invariant subspace, transitive algebra problem, Riemann hypothesis. 2020 Mathematics Subject Classification: 47A15, 11M06.
## 1. Introduction
Let \(H\) be an infinite dimensional complex Hilbert space, and let \(\mathcal{B}\left(H\right)\) be the Banach algebra of all bounded linear operators on \(H\). A closed subspace \(E\) of \(H\) is said to be a nontrivial invariant subspace of an operator \(T\) in \(\mathcal{B}\left(H\right)\) if \(\left\{0\right\}\neq E\neq H\) and \(TE\subset E\), i.e., for each \(x\in E\), \(Tx\in E\). It is called hyperinvariant if \(AE\subset E\) for every operator \(A\) in the commutant \(\left\{T\right\}^{\prime}=\left\{S\in\mathcal{B}\left(H\right):ST=TS\right\}\) of the operator \(T\).
The following two famous questions of operator theory and functional analysis are open [7, 36, 28]:
**Problem 1**.: _(Invariant Subspace Problem). Does every bounded linear operator on a Hilbert space \(H\) have a nontrivial invariant subspace?_
**Problem 2**.: _(Hyperinvariant Subspace Problem). Does every nonscalar bounded linear operator on a Hilbert space \(H\) have a proper hyperinvariant subspace?_
Despite a number of partial results in the direction of solving the invariant and hyperinvariant subspace problems, these questions have remained open.
Obviously, a scalar operator \(T\), i.e., an operator of the form \(T=cI\), \(c\in\mathbb{C}\), where \(I\) is the identity operator on \(H\), does not have a nontrivial hyperinvariant subspace, since \(\left\{T\right\}^{\prime}=\mathcal{B}\left(H\right)\) and \(\mathcal{B}\left(H\right)\) is a transitive algebra.
Throughout the paper, the term algebra will be used to mean a weakly closed subalgebra containing the identity of the Banach algebra \(\mathcal{B}\left(H\right)\). A subalgebra \(\mathcal{A}\) of \(\mathcal{B}\left(H\right)\) is said to be transitive if it has no nontrivial invariant subspace. The terminology comes from the fact, well known from the famous Lomonosov Lemma [19] (see also Arveson [2]), that \(\mathcal{A}\) is transitive if and only if \(\mathcal{A}x\) is dense in \(H\) for every nonzero \(x\) in \(H\). It is well known (and easy to prove) that \(\mathcal{B}\left(H\right)\) is a transitive algebra. It is not known whether \(\mathcal{B}\left(H\right)\) is the only transitive algebra.
So, the transitive algebra problem raised by Kadison in his paper [14] is the following problem.
**Problem 3**: _(The Transitive Algebra Problem). If \(\mathcal{A}\) is any transitive algebra, is \(\mathcal{A}=\mathcal{B}\left(H\right)\)?_
In the case of Banach spaces the invariant subspace problem was answered in the negative by Per Enflo in his seminal paper [10]. We note that in his recent arXiv preprint [11] Enflo asserts that he solves the invariant subspace problem (see Problem 1 above) positively in Hilbert spaces, although an official expert verdict on the validity of that result is apparently still absent. In any case, in the present article, in Section 2, we positively solve a more general question; namely, we give a positive answer to the above mentioned Problem 3. Our result obviously implies positive solutions of Problems 1 and 2.
For the history, known results and some recent developments on the invariant subspace problem and hyperinvariant subspace problem, see, for instance, the works [7, 3, 29, 9, 10, 25, 36, 28, 32] and their references.
In this paper, using the positive solution of the invariant subspace problem in \(H^{2}\), we also disprove the Riemann Hypothesis.
## 2 The Solution of the Transitive Algebra Problem in Hilbert Spaces
The classical Burnside theorem [13, 21, 6] states that any proper subalgebra of the algebra of operators on a finite dimensional complex vector space has a nontrivial invariant subspace. This is a much stronger assertion than the existence of an invariant subspace for a single operator: a singly generated algebra is certainly proper, since it is commutative. In 1955, Kadison [14] asked whether the analogue of Burnside's theorem holds in the infinite dimensional case (see Problem 3). Beginning with the fundamental paper [2] of Arveson, many partial results on Problem 3 in Hilbert spaces are known; see, for example, Radjavi and Rosenthal [28], Lomonosov [20, 21], Shulman [31], Kissin, Shulman and Turovskii [18], Mustafaev [24], Turovskii [34, 35] and Kissin [17].
In the present section, we prove that the answer to Problem 3 is affirmative (see Theorem 1 below). Our proof is based on a simple reproducing-kernel argument for operators on the classical Hardy space \(H^{2}\left(\mathbb{D}\right)\); it is a classical, well-known fact that every infinite dimensional separable complex Hilbert space \(H\) is isometrically isomorphic to \(\ell^{2}\), and hence to \(H^{2}\left(\mathbb{D}\right)\).
Recall that the Hardy space \(H^{2}=H^{2}\left(\mathbb{D}\right)\) consists of all analytic functions \(f\left(z\right)=\sum_{n=0}^{\infty}\widehat{f}\left(n\right)z^{n}\) on \(\mathbb{D}\) with the sequence of Taylor coefficients \(\left\{\widehat{f}\left(n\right)\right\}_{n\geq 0}\) in \(\ell^{2}\). Equivalently, \(H^{2}\) is the space of analytic functions \(f\) with finite integral means

\[\left\|f\right\|_{2}=\sup_{0\leq r<1}\left(\frac{1}{2\pi}\int_{0}^{2\pi}\left|f\left(re^{it}\right)\right|^{2}dt\right)^{1/2}<+\infty.\]

It is easy to show that \(\left\|f\right\|_{2}=\left(\sum_{n=0}^{\infty}\left|\widehat{f}\left(n\right)\right|^{2}\right)^{1/2}\). \(H^{\infty}=H^{\infty}\left(\mathbb{D}\right)\) is the Banach space of all bounded analytic functions \(f\) on \(\mathbb{D}\) such that \(\left\|f\right\|_{\infty}:=\sup_{z\in\mathbb{D}}\left|f\left(z\right)\right|<+\infty.\)
It follows from the inequality
\[\left|f\left(z\right)\right|\leq\left\|f\right\|_{2}\left(1-\left|z\right|^{2} \right)^{-1/2},\]
valid for \(f\in H^{2}\) and \(z\in\mathbb{D},\) that \(H^{2}\) is a reproducing kernel Hilbert space on \(\mathbb{D}.\) The set \(\left\{z^{n}:n\geq 0\right\}\) is an orthonormal basis for \(H^{2},\) and hence its reproducing kernel has the form
\[k_{\lambda}\left(z\right)=\overset{\infty}{\underset{n=0}{\sum}}\overline{ \lambda^{n}}z^{n}=\frac{1}{1-\overline{\lambda}z}(z,\lambda\in\mathbb{D}).\]
The function \(\widehat{k}_{\lambda}\left(z\right):=\frac{k_{\lambda}\left(z\right)}{\left\| k_{\lambda}\right\|_{2}}\) is called the normalized reproducing kernel at \(\lambda.\) It is easy to see that \(\widehat{k}_{\lambda}\left(z\right)\longrightarrow 0\) weakly as \(\lambda\longrightarrow\partial\mathbb{D},\) that is the Hardy space \(H^{2}\) is a standard reproducing kernel Hilbert space in sense of Nordgren and Rosenthal [27]. Indeed, note that \(\left\langle f,\widehat{k}_{\lambda}\right\rangle=\sqrt{1-\left|\lambda \right|^{2}}f(\lambda)\) for \(f\in H^{2},\) and this obviously approaches \(0\) for \(f\in H^{\infty},\) and hence for all \(f\in H^{2}\) whenever \(\left|\lambda\right|\longrightarrow 1\text{\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod \textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod \textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod \textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod \textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod \textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod \textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod \textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod \textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod \textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod \textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod \textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod \textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\tex
tperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textp
eriod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\period\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\period\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\period\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\textperiod\period\
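These facts are easy to check numerically in the coefficient model of \(H^{2}\). The short Python sketch below (our own illustration, truncating the Taylor series at a finite order) verifies the coefficient formula for \(\left\|f\right\|_{2}\) against the integral means, and the reproducing property \(\left\langle f,k_{\lambda}\right\rangle=f(\lambda)\), for the test function \(f(z)=1/(1-az)\).

```python
# Numerical check of the two descriptions of the H^2 norm and of the
# reproducing property, using truncated Taylor coefficients.
import numpy as np

N = 4000                                  # truncation order
a = 0.5 + 0.3j                            # f(z) = 1/(1 - a z), in H^2 since |a| < 1
coeffs = a ** np.arange(N)                # Taylor coefficients f^(n) = a^n

norm_from_coeffs = np.sqrt(np.sum(np.abs(coeffs) ** 2))

t = np.linspace(0, 2 * np.pi, 20000, endpoint=False)
r = 0.999999                              # radius close to the boundary
f_on_circle = 1.0 / (1.0 - a * r * np.exp(1j * t))
norm_from_means = np.sqrt(np.mean(np.abs(f_on_circle) ** 2))

lam = 0.2 - 0.6j
k_lam = np.conj(lam) ** np.arange(N)      # Taylor coefficients of k_lambda
inner = np.sum(coeffs * np.conj(k_lam))   # <f, k_lambda> in coefficient form
print(norm_from_coeffs, norm_from_means)  # both ~ 1/sqrt(1 - |a|^2)
print(inner, 1.0 / (1.0 - a * lam))       # reproducing property: f(lambda)
```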
**Lemma 1**.: _Let \(\mu\in\mathbb{D}\) be fixed. For any operator \(T\) in \(\mathcal{B}\left(H^{2}\right)\) we have_
\[\lim_{\lambda\longrightarrow\partial\mathbb{D}}\left(T\widehat{k}_{\lambda} \right)(\mu)=0.\]
Proof.: The proof is immediate from the standardness property of \(H^{2}.\) In fact, since \(\widehat{k}_{\lambda}\longrightarrow 0\) weakly as \(\lambda\longrightarrow\partial\mathbb{D},\) we have:
\[\left(T\widehat{k}_{\lambda}\right)(\mu)=\left\langle T\widehat{k}_{\lambda},k_{\mu}\right\rangle=\left\langle\widehat{k}_{\lambda},T^{*}k_{\mu}\right\rangle \longrightarrow 0\]
as \(\lambda\longrightarrow\partial\mathbb{D}\). This proves the lemma.
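A finite-dimensional illustration of Lemma 1 (a sketch of ours in the truncated coefficient model, with a generic random matrix standing in for \(T\)): the printed values decay as \(\lambda\) approaches \(\partial\mathbb{D}\), driven by the factor \(\sqrt{1-\left|\lambda\right|^{2}}\).

```python
# Finite-dimensional illustration of Lemma 1: for a truncated operator T,
# (T k_hat_lambda)(mu) = <T k_hat_lambda, k_mu> shrinks as |lambda| -> 1.
import numpy as np

rng = np.random.default_rng(0)
N = 400
T = rng.standard_normal((N, N)) / np.sqrt(N)   # generic truncated operator

def k_hat(lam):
    """Taylor coefficients of the normalized reproducing kernel at lam."""
    return np.sqrt(1 - abs(lam) ** 2) * np.conj(lam) ** np.arange(N)

mu = 0.3 + 0.1j
k_mu = np.conj(mu) ** np.arange(N)             # unnormalized kernel at mu
for r in [0.5, 0.9, 0.99, 0.999, 0.9999]:
    lam = r * np.exp(1j * np.pi / 4)
    val = np.vdot(k_mu, T @ k_hat(lam))        # equals (T k_hat_lam)(mu)
    print(f"|lambda| = {r}:  |(T k_hat_lam)(mu)| = {abs(val):.3e}")
```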
The statement of Lemma 1 can be given the following interpretation. For any operator \(T\in\mathcal{B}\left(H^{2}\right)\) we define its "two-variable" Berezin symbol \(\widetilde{T}_{tv}\) by the formula
For \(\mu=\lambda,\) we have the usual Berezin symbol \(\widetilde{T}\) of operator \(T:\)
\[\widetilde{T}\left(\lambda\right):=\left\langle T\widehat{k}_{\lambda}, \widehat{k}_{\lambda}\right\rangle\left(\lambda\in\mathbb{D}\right),\]
which was originally introduced by Berezin [4, 5]. It is trivial from the Cauchy-Schwarz inequality that
\[\sup_{\lambda\in\mathbb{D}}\left|\widetilde{T}\left(\lambda\right) \right|\text{ (Berezin number)} \leq\sup_{\lambda,\mu\in\mathbb{D}}\left|\widetilde{T}_{tv}\left( \lambda,\mu\right)\right|\text{ ("small" Berezin norm)}\] \[\leq\sup_{\lambda\in\mathbb{D}}\left\|T\widehat{k}_{\lambda} \right\|\text{ ("big" Berezin norm)}\leq\left\|T\right\|\]
for every \(T\in\mathcal{B}\left(H^{2}\right).\)
Since \(\left\{\widehat{k}_{\lambda}\right\}\) converges weakly to zero whenever \(\lambda\) approaches a boundary point \(\zeta\in\partial\mathbb{D}\), it is elementary that for any \(\mu\in\mathbb{D}\), \(\lim_{\lambda\longrightarrow\zeta}\left\langle\widehat{k}_{\lambda},T^{*}k_{\mu} \right\rangle=0\) if and only if \(\lim_{\lambda\longrightarrow\zeta}\left\langle\widehat{k}_{\lambda},T^{*} \widehat{k}_{\mu}\right\rangle=0\) (since \(\sqrt{1-\left|\mu\right|^{2}}\neq 0\) for all \(\mu\in\mathbb{D}\)). This shows that the statement of Lemma 1 means that \(\lim_{\lambda\longrightarrow\partial\mathbb{D}}\left\langle T\widehat{k}_{ \lambda},\widehat{k}_{\mu}\right\rangle=0\) for every fixed \(\mu\in\mathbb{D}\); that is, for every operator \(T\) in \(\mathcal{B}\left(H^{2}\right)\) its two-variable Berezin symbol \(\widetilde{T}_{tv}\left(\lambda,\mu\right)\) "semi-vanishes" on the boundary \(\partial\mathbb{D}\).
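As a concrete illustration of these notions (a sketch of ours, not taken from the references), one can compute the Berezin symbol of the unilateral shift \(S:f(z)\mapsto zf(z)\) on \(H^{2}\); a direct calculation gives \(\widetilde{S}(\lambda)=\lambda\) and \(\left\|S\widehat{k}_{\lambda}\right\|=1=\left\|S\right\|\), so every link of the displayed chain of inequalities can be observed numerically.

```python
# Berezin symbol of the unilateral shift S: f(z) -> z f(z) on H^2, in the
# truncated coefficient model. Analytically S~(lambda) = lambda and
# ||S k_hat_lambda|| = 1 = ||S||.
import numpy as np

N = 5000

def k_hat(lam):
    return np.sqrt(1 - abs(lam) ** 2) * np.conj(lam) ** np.arange(N)

def shift(c):
    """Taylor coefficients of S f = z f(z)."""
    return np.concatenate(([0.0 + 0.0j], c[:-1]))

for lam in [0.3, 0.6 + 0.2j, 0.9j]:
    kh = k_hat(lam)
    sym = np.vdot(kh, shift(kh))                # <S k_hat_lam, k_hat_lam>
    print(lam, sym, np.linalg.norm(shift(kh)))  # sym ~ lam, norm ~ 1
```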
The main result of this section is the following theorem, which gives a positive answer to the transitive algebra problem (Problem 3) and hence solves the Kadison question affirmatively.
**Theorem 1**.: _Every proper weakly closed unital subalgebra \(\mathcal{A}\) of the algebra \(\mathcal{B}\left(H^{2}\right)\) has a closed nontrivial invariant subspace, i.e., \(\mathcal{A}\) is nontransitive; equivalently, the only transitive subalgebra of \(\mathcal{B}\left(H^{2}\right)\) is \(\mathcal{B}\left(H^{2}\right)\) itself._
Proof.: Let \(\mathcal{A}\subset\mathcal{B}\left(H^{2}\right)\) be a proper weakly closed unital subalgebra. For every nonzero \(g\in H^{2},\) we set
\[E_{g}:=clos\mathcal{A}g:=clos\left\{Ag:A\in\mathcal{A}\right\}.\]
Clearly, \(E_{g}\neq\left\{0\right\}\) and \(AE_{g}\subset E_{g}\) for all \(A\in\mathcal{A}\). We will prove that there exists \(g_{0}\in H^{2}\backslash\left\{0\right\}\) such that \(E_{g_{0}}\) is not dense in \(H^{2}\), i.e., \(E_{g_{0}}\neq H^{2}\). To this end, suppose on the contrary that \(E_{g}=H^{2}\) for all nonzero \(g\in H^{2}\). Then, in particular, \(E_{\widehat{k}_{\lambda}}=H^{2}\) for all \(\lambda\in\mathbb{D}\). Let \(f\in H^{2}\) be an arbitrary nonzero function. Then for any \(\varepsilon>0\) there exists \(A_{\varepsilon,\lambda,f}\in\mathcal{A}\) such that \(\left\|f-A_{\varepsilon,\lambda,f}\widehat{k}_{\lambda}\right\|_{2}<\varepsilon.\) Hence
\[\left|f\left(0\right)-\left(A_{\varepsilon,\lambda,f}\widehat{k}_{\lambda} \right)\left(0\right)\right|\leq\left\|f-A_{\varepsilon,\lambda,f}\widehat{k}_ {\lambda}\right\|_{2}<\varepsilon\]
for every \(\lambda\in\mathbb{D}\), that is
\[\left|f\left(0\right)-\left(A_{\varepsilon,\lambda,f}\widehat{k}_{\lambda} \right)\left(0\right)\right|<\varepsilon\;\left(\lambda\in\mathbb{D}\right).\]
From this, for any \(B\in\mathcal{A}\) we have
\[\left|f\left(0\right)\right| <\left|\left(A_{\varepsilon,\lambda,f}\widehat{k}_{\lambda} \right)\left(0\right)\right|+\varepsilon\] \[\leq\left|\left(A_{\varepsilon,\lambda,f}\widehat{k}_{\lambda} \right)\left(0\right)-\left(B\widehat{k}_{\lambda}\right)\left(0\right)\right| +\left|\left(B\widehat{k}_{\lambda}\right)\left(0\right)\right|+\varepsilon\] \[\leq\left\|A_{\varepsilon,\lambda,f}-B\right\|+\left|\left(B \widehat{k}_{\lambda}\right)\left(0\right)\right|+\varepsilon,\]
hence, by virtue of Lemma 1, we deduce that for any \(\varepsilon>0\) and any \(\zeta\in\partial\mathbb{D}\) there exists \(\delta_{\varepsilon,\zeta}>0\) such that \(\left|\left(B\widehat{k}_{\lambda}\right)\left(0\right)\right|<\varepsilon\) for all \(\lambda\in\mathbb{D}\) with \(\left|\lambda-\zeta\right|<\delta_{\varepsilon,\zeta}\). Therefore we have that
\[\left|f\left(0\right)\right|\leq\left\|A_{\varepsilon,\lambda,f}-B\right\|+2\varepsilon\]
for all \(\lambda\in\mathbb{D}\) such that \(\left|\lambda-\zeta\right|<\delta_{\varepsilon,\zeta}.\) Consequently,
\[\left|f\left(0\right)\right|-2\varepsilon\leq\inf_{B\in\mathcal{A}}\;\;\left\| A_{\varepsilon,\lambda,f}-B\right\|=\operatorname{dist}(A_{\varepsilon, \lambda,f},\mathcal{A})=0,\]
thus \(\left|f\left(0\right)\right|\leq 2\varepsilon\) for any \(\varepsilon>0\), which is impossible since \(f\) is arbitrary. The theorem is proven.
Since \(\left\{T\right\}^{\prime}\) is a proper weakly closed unital subalgebra of \(\mathcal{B}\left(H^{2}\right)\) for any non-scalar operator \(T\in\mathcal{B}\left(H^{2}\right)\), the following corollary is immediate from Theorem 1. It solves the hyperinvariant subspace problem in Hilbert spaces affirmatively (see Problem 2).
**Corollary 1**.: _Every nonscalar bounded linear operator on the Hardy space \(H^{2}\) has a closed nontrivial hyperinvariant subspace._
Since every hyperinvariant subspace is an invariant subspace of an operator \(T\in\mathcal{B}\left(H^{2}\right)\) (or by considering that the weakly closed algebra generated by \(T\) and \(I\) is a proper subalgebra of \(\mathcal{B}\left(H^{2}\right)\)), the following corollary gives a positive answer to the invariant subspace problem in \(H^{2}\), and hence, in any infinite dimensional separable complex Hilbert space \(H\) (see Problem 1).
**Corollary 2**: _Every bounded linear operator on the Hardy space \(H^{2}\) has a closed nontrivial invariant subspace._
## 3 The Solution of the Riemann Hypothesis
This section is mainly motivated by the recent works of S. W. Noor [26], J. Manzur, W. Noor, C. F. Santos [22, 23], and A. Ghosh, Y. Kremnizer, S. W. Noor [12]. The goal of this section is to prove that the positive solution of the invariant subspace problem in the Hardy space \(H^{2}\) (see Corollary 2 in Section 2) implies a negative answer to the celebrated Riemann Hypothesis.
The Riemann Hypothesis says that all the nontrivial zeros of the Riemann \(\zeta\)-function \(\zeta\left(z\right):=\sum\limits_{n=1}^{\infty}\frac{1}{n^{z}}\) (defined by this series for \(\mathrm{Re}(z)>1\) and elsewhere by analytic continuation) lie on the critical line \(\mathrm{Re}(z)=\frac{1}{2}\) of the complex plane \(\mathbb{C}\). Notice that the trivial zeros of the \(\zeta\)-function are the negative even integers \(-2k\), \(k\geq 1\).
Our disproof of the Riemann Hypothesis is based on Corollary 2, which was already proved in Section 2, on some important results of the paper [23] related to a semigroup of weighted composition operators on \(H^{2}\), and on Nordgren and Rosenthal's standardness property of certain subspaces of \(H^{2}\).
In [26], the author defined the following subspace: let \(\mathcal{N}\) denote the linear span (i.e., linear hull) of the functions
\[h_{k}\left(z\right):=\frac{1}{1-z}\log\left(\frac{1+z+\cdots+z^{k-1}}{k}\right),\ k\geq 2,\]
which all belong to the Hardy space \(H^{2}\left(\mathbb{D}\right)\) and all are outer functions; see [26, Lemma 7] and [22, Corollary 14]. It is also shown in [26] that \(\mathcal{N}\in\mathrm{Lat}(W)\), i.e., \(W_{n}\mathcal{N}\subset\mathcal{N}\) for all \(n\geq 2\), and that the Riemann Hypothesis holds if and only if \(\mathcal{N}\) is dense in \(H^{2}\), i.e., \(\overline{\mathcal{N}}=H^{2}\). Also in [26], a multiplicative semigroup of weighted composition operators \(W:=\left(W_{n}\right)_{n\in\mathbf{N}}\) on \(H^{2}\) is introduced:
\[W_{n}f:=\left(1+z+\cdots+z^{n-1}\right)f\left(z^{n}\right)=\frac{1-z^{n}}{1-z }f\left(z^{n}\right).\]
Each \(W_{n}\) is bounded on \(H^{2}\), \(W_{1}=I\) (identity operator on \(H^{2}\)) and \(W_{m}W_{n}=W_{mn}\) for each \(m,n\geq 1\).
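At the level of Taylor coefficients, \(W_{n}\) simply copies the \(k\)-th coefficient of \(f\) into the \(n\) slots \(nk,nk+1,\ldots,nk+n-1\). The following sketch (our own direct implementation of the defining formula) verifies the semigroup law and the relation \(\left\|W_{n}f\right\|_{2}=\sqrt{n}\left\|f\right\|_{2}\), consistent with Proposition 1 below.

```python
# Coefficient-level implementation of W_n: spread the k-th Taylor coefficient
# of f over the indices nk, nk+1, ..., nk+n-1, then verify the semigroup law
# W_m W_n = W_{mn} and that W_n / sqrt(n) is an isometry.
import numpy as np

def W(n, c):
    out = np.zeros(n * len(c), dtype=complex)
    for k, ck in enumerate(c):
        out[n * k : n * k + n] = ck
    return out

rng = np.random.default_rng(1)
f = rng.standard_normal(50) + 1j * rng.standard_normal(50)

assert np.allclose(W(2, W(3, f)), W(6, f))                  # W_2 W_3 = W_6
assert np.isclose(np.linalg.norm(W(5, f)), np.sqrt(5) * np.linalg.norm(f))
print("semigroup law and sqrt(n)-isometry verified on truncated coefficients")
```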
The following is proved in [23, Proposition 4]:
**Proposition 1**: _Each \(\frac{W_{n}}{\sqrt{n}}\) for \(n\geq 2\) is a shift with infinite multiplicity and hence \(W_{n}^{*}\) is universal in the sense of Rota [30]._
A proper invariant subspace \(E\) for an operator \(T\) is called maximal if it is not contained in any other proper invariant subspace for \(T.\) In this case \(E^{\perp}\) is a minimal invariant subspace for \(T^{*}.\) Hence the invariant subspace problem may be reformulated in terms of \(W_{n}\) (see [23, Corollary 6]):
**Proposition 2**.: _For any \(n\geq 2,\) every maximal invariant subspace for \(W_{n}\) has co-dimension one if and only if the invariant subspace problem has a positive solution._
In [23], the authors notice that studying the invariant subspaces for \(W\) may shed light on both the Riemann Hypothesis and the Invariant Subspace Problem. Here, in this section, we will partially confirm this point of view.
The main objective of this section is to consider a special sublattice of the lattice \(\operatorname{Lat}\left(W\right)\) of common invariant subspaces of the \(W_{n}\) for \(n\geq 2.\) Define the manifolds (see [23, p.5]):
\[\mathcal{M}:=\operatorname{span}\left\{h_{k}-h_{\ell}:k,\ell\geq 2\right\}\]
and
\[\mathcal{M}_{d}:=\operatorname{span}\left\{h_{k}-h_{\ell}:k,\ell\in d\mathbf{ N}\right\}\]
for \(d\in\mathbf{N}\) whose closures belong to \(\operatorname{Lat}(W)\) due to the identity \(W_{n}h_{k}=h_{nk}-h_{n}\) (see [23]). It is clear that \(\mathcal{M}_{d}\subset\mathcal{M}\subset\mathcal{N},\) and \(d_{1}\) divides \(d_{2}\) if and only if \(\mathcal{M}_{d_{2}}\subset\mathcal{M}_{d_{1}}.\) It follows that \(\left(\mathcal{M}_{d}\right)_{d\in\mathbf{N}}\) is a sublattice of \(\operatorname{Lat}(W)\) which is isomorphic to \(\mathbf{N}\) with respect to division, and \(\mathcal{M}_{p}\) for any prime \(p\) is a maximal element in \(\left(\mathcal{M}_{d}\right)_{d\in\mathbf{N}}.\) Notice that for any \(k,\ell\in d\mathbf{N}\) we have that \(h_{k}-h_{\ell}=\left(h_{k}-h_{d}\right)-\left(h_{\ell}-h_{d}\right)=W_{d}(h_{ k/d}-h_{\ell/d})\in W_{d}\left(\mathcal{N}\right)\) (where \(h_{1}\equiv 0\)) and hence \(\mathcal{M}_{d}=W_{d}\left(\mathcal{N}\right).\) Since each \(W_{d}\) is a shift of infinite multiplicity (see Proposition 1), it follows that \(\mathcal{M}_{d}\) has infinite co-dimension.
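The identity \(W_{n}h_{k}=h_{nk}-h_{n}\) used here is easy to verify symbolically; the sympy spot-check below (our own, comparing truncated Taylor expansions) confirms it for sample values of \(n\) and \(k\).

```python
# Symbolic spot-check of W_n h_k = h_{nk} - h_n by comparing Taylor
# expansions about z = 0 up to a finite order.
import sympy as sp

z = sp.symbols("z")

def h(k):
    # h_1 = 0 by convention; for k >= 2 this is the definition from the text.
    return 0 if k == 1 else sp.log(sum(z**i for i in range(k)) / k) / (1 - z)

def W_apply(n, f):
    return (1 - z**n) / (1 - z) * f.subs(z, z**n)

n, k, order = 2, 3, 10
diff = sp.expand(sp.series(W_apply(n, h(k)) - (h(n * k) - h(n)),
                           z, 0, order).removeO())
coeffs = sp.Poly(diff, z).all_coeffs()
assert all(abs(complex(sp.N(c))) < 1e-12 for c in coeffs)
print(f"W_{n} h_{k} = h_{n*k} - h_{n} verified up to O(z^{order})")
```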
The following important connection between the maximality property of the subspace \(\overline{\mathcal{M}}\) and the Riemann Hypothesis is proved in [23, Theorem 9]. Let \(\vee_{n}E_{n}\) denote the smallest closed subspace containing the sets \(E_{n}.\)
**Theorem 2**.: _([23]). The closure of \(\mathcal{M}\) is a proper element of \(\operatorname{Lat}(W)\) and equals \(\bigvee_{d=2}^{\infty}\mathcal{M}_{d}\). The closure of \(\mathcal{M}\) is maximal in \(\operatorname{Lat}(W)\) if and only if the Riemann Hypothesis is true. In this case we have co-\(dim(\overline{\mathcal{M}})=1\)._
Now we are ready to show that the subspace \(\overline{\mathcal{M}}\) actually has infinite co-dimension. Before stating our result we need the following known results from the author's papers [16, Lemma 2.4] and [15, Lemma 3.2 and Lemma 3.3]. Notice that if we are interested in (closed) subspaces of \(H^{2}\), there is no reason for these subspaces to still be standard in the sense of Nordgren and Rosenthal [27]; see the paper [15], where this question is discussed in its full generality (see also [8, Section 4]). Below, as before, \(\widetilde{A}\) denotes the Berezin symbol of the operator \(A\).
**Proposition 3**.: _([16]). Let \(\mathcal{H}=\mathcal{H}\left(\Omega\right)\) be a standard functional Hilbert space over some set \(\Omega\) with \(\partial\Omega\neq\emptyset,\) and let \(E\subset\mathcal{H}\) be a closed subspace. Then \(E\) is a finite codimensional subspace if and only if_
\[\lim_{\lambda\longrightarrow\partial\Omega}\widetilde{P}_{UE}(\lambda)=1\]
_for all unitary operators \(U\) on \(\mathcal{H}.\)_
Using this result, the following is proved in [15].
**Proposition 4**.: _([15]). If \(\mathcal{H}\left(\Omega\right)\) is a standard infinite dimensional functional Hilbert space, then any subspace \(E\subset\mathcal{H}\left(\Omega\right)\) with finite co-dimension is also standard._
**Lemma 2**.: _We have co-\(dim(\overline{\mathcal{M}})=+\infty.\)_
Proof.: Suppose on the contrary that co-\(dim(\overline{\mathcal{M}})<+\infty\). Then according to Proposition 4, the subspace \(\overline{\mathcal{M}}\) is standard, and hence \(\left\langle f,\widehat{k}_{\overline{\mathcal{M}},\lambda}\right\rangle \longrightarrow 0\) as \(\lambda\longrightarrow\partial\mathbb{D}\) for all \(f\in\overline{\mathcal{M}}\), where \(\widehat{k}_{\overline{\mathcal{M}},\lambda}\) is the normalized reproducing kernel of the closed subspace \(\overline{\mathcal{M}}\). Since, by Theorem 2, we know that \(\overline{\mathcal{M}}\neq H^{2}\), there exists a nonzero function \(G\in H^{2}\) such that \(\left\langle f,G\right\rangle=0\) for every \(f\in\overline{\mathcal{M}}\). Then we have:
\[0=\left\langle f,G\right\rangle=\left\langle f,G-\widehat{k}_{\overline{ \mathcal{M}},\lambda}\right\rangle+\left\langle f,\widehat{k}_{\overline{ \mathcal{M}},\lambda}\right\rangle,\]
or equivalently
\[\left\langle f,\widehat{k}_{\overline{\mathcal{M}},\lambda}-G\right\rangle= \left\langle f,\widehat{k}_{\overline{\mathcal{M}},\lambda}\right\rangle \longrightarrow 0\text{ as }\lambda\longrightarrow\partial\mathbb{D},\]
hence \(\left\langle f,\widehat{k}_{\overline{\mathcal{M}},\lambda}-G\right\rangle \longrightarrow 0\) as \(\lambda\longrightarrow\partial\mathbb{D}\) for all \(f\in\overline{\mathcal{M}}\). This shows that the weak limit of the normalized reproducing kernel \(\widehat{k}_{\overline{\mathcal{M}},\lambda}\) of the subspace \(\overline{\mathcal{M}}\) equals \(G\), which contradicts \(G\neq 0\). The lemma is proved.
Now we are ready to state the main result of the present section, which disproves the Riemann Hypothesis.
**Theorem 3**.: _The Riemann Hypothesis is not true._
Proof.: It follows from Corollary 2 in Section 2 and Proposition 2 that every maximal invariant subspace of the operators \(W_{n}\), \(n\geq 2\), has co-dimension one. However, according to Lemma 2, co-\(dim(\overline{\mathcal{M}})=+\infty\), and hence the subspace \(\overline{\mathcal{M}}\) cannot be a maximal invariant subspace for \(W_{n}\), \(n\geq 2\). Thus, by virtue of the Manzur-Noor-Santos theorem (see Theorem 2), we deduce that the Riemann Hypothesis is not true. The theorem is proven. |
2303.17433 | On Consistent Kinetic Mixing and the Higgs Low-Energy Theorems | A popular class of extensions of the Standard Model (SM) are models of a new
Abelian gauge boson $X$, called $dark$ or $hidden\ photon$, that kinetically
mixes with the SM photon. We revisit the matching procedure of kinetic mixing
terms in the electroweak symmetric phase to the ones in the broken phase. Our
central finding is that in order to obtain the correct matching prescription
one has to take into account mixing of the hidden photon with the neutral
component of the weak $SU(2)_L$ bosons. This mixing is generated by a
dimension-six operator and, in theories where $SU(2)_L$ multiplets are charged
under the novel Abelian gauge group, is necessarily induced at the one-loop
level. We illustrate this matching procedure for the loop-generated kinetic
mixing in $U(1)_{L_\mu-L_\tau}$. Furthermore, we show how to obtain general
expressions for the Higgs decay amplitudes to two neutral vector bosons from
the vacuum polarisation amplitudes via the low-energy theorems. As an
application, we derive general expressions for the branching ratios of the
decays $h\to\gamma X$ and $h\to XX$ in $U(1)_{B-L}$. | Patrick Foldenauer | 2023-03-30T14:58:26Z | http://arxiv.org/abs/2303.17433v1 | # On Consistent Kinetic Mixing and the Higgs Low-Energy Theorems
###### Abstract:
A popular class of extensions of the Standard Model (SM) are models of a new Abelian gauge boson \(X\), called _dark_ or _hidden photon_, that kinetically mixes with the SM photon. We revisit the matching procedure of kinetic mixing terms in the electroweak symmetric phase to the ones in the broken phase. Our central finding is that in order to obtain the correct matching prescription one has to take into account mixing of the hidden photon with the neutral component of the weak \(SU(2)_{L}\) bosons. This mixing is generated by a dimension-six operator and, in theories where \(SU(2)_{L}\) multiplets are charged under the novel Abelian gauge group, is necessarily induced at the one-loop level. We illustrate this matching procedure for the loop-generated kinetic mixing in \(U(1)_{L_{\mu}-L_{\tau}}\). Furthermore, we show how to obtain general expressions for the Higgs decay amplitudes to two neutral vector bosons from the vacuum polarisation amplitudes via the low-energy theorems. As an application, we derive general expressions for the branching ratios of the decays \(h\to\gamma X\) and \(h\to XX\) in \(U(1)_{B-L}\).
## 1 Introduction
Experimental evidence like the gravitational observation of dark matter (DM) and the detection of neutrino oscillations have firmly established the existence of new physics beyond the Standard Model (SM). In the past, these hints have led many physicists to construct theories of new physics completing the SM at high energy scales, like, for example, supersymmetric theories, models of grand unification, or string theory. A typical shared characteristic of such ultra-violet (UV) completions is the presence of novel heavy states that can, in principle, couple sizeably to the SM sector. Such new heavy states can be tested, for example, at high-energy experiments like particle colliders. However, as illustrated in Fig. 1, the landscape of particle physics experiments is much more diverse, with a plethora of observational strategies testing physics at low energies with ever increasing intensities. Among these are meson factories and beam dump experiments, or astrophysical and cosmological probes. In general, new gauge bosons of an extra \(U(1)_{X}\) symmetry are well-motivated candidates for novel particles that can naturally have ever smaller masses as their coupling to the SM decreases, i.e. that live at the _sensitivity frontier_ of the experimental landscape.
In the minimal hidden photon scenario the gauge boson associated to an additional \(U(1)_{X}\) symmetry is kinetically mixed with the SM photon via the operator [1, 2]
\[\mathcal{L}\supset-\frac{\epsilon_{A}}{2}F_{\mu\nu}X^{\mu\nu}\,, \tag{1}\]
where \(F_{\mu\nu}\) and \(X_{\mu\nu}\) denote the \(U(1)_{\rm em}\) and \(U(1)_{X}\) field strength tensors, respectively. Since this term is a gauge-invariant, renormalisable operator, the kinetic mixing parameter \(\epsilon_{A}\), in principle, is a free parameter of the theory. However, in many non-minimal hidden photon models \(\epsilon_{A}\) is generated at the loop level via vacuum polarisation diagrams as the one shown in the right panel of Fig. 2, due to fermions charged under both \(U(1)\) symmetries running in the loop. In these models
Figure 1: The current sensitivity frontier in the landscape of experimental searches for new physics beyond the Standard Model.
the loop-induced kinetic mixing typically scales as \(\epsilon_{A}\propto g_{x}/16\pi^{2}\), with \(g_{x}\) denoting the coupling constant of \(U(1)_{X}\).
The kinetic mixing term in Eq. (1) can be diagonalised by a non-unitary field transformation of the kind
\[A^{\mu}\to A^{\mu}-\epsilon_{A}\,X^{\mu}\qquad\Rightarrow\qquad e \,A_{\mu}\,j^{\mu}_{\rm em}\to e\,A_{\mu}\,j^{\mu}_{\rm em}-\epsilon_{A}\,e \,X_{\mu}\,j^{\mu}_{\rm em}\,. \tag{2}\]
This field redefinition induces a coupling of the new \(X\) boson to the SM electromagnetic current \(j^{\mu}_{\rm em}\). This interaction motivates the name _hidden photon_ for the \(X\) boson, since it couples to the QED current analogously to the SM photon, but suppressed by \(\epsilon_{A}\).
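The effect of this field redefinition on the quadratic terms can be checked in one line of computer algebra. Writing the kinetic terms as \(-\frac{1}{4}(F,X)\,K\,(F,X)^{T}\), the shift acts as \(K\to M^{T}KM\); the sympy sketch below (our own) shows that the off-diagonal mixing is removed, leaving a residual normalisation \(1-\epsilon_{A}^{2}\) of the \(X\) kinetic term. This residue is the origin of the square-root normalisation factors in the diagonalisation matrix of Section 4.

```python
# Symbolic check that the shift A -> A - eps_A X of Eq. (2) removes the
# kinetic mixing of Eq. (1). K is the kinetic matrix in the (F, X) basis.
import sympy as sp

eps = sp.symbols("epsilon_A")
K = sp.Matrix([[1, eps], [eps, 1]])   # kinetic matrix with mixing term
M = sp.Matrix([[1, -eps], [0, 1]])    # field redefinition A -> A - eps X
print(sp.simplify(M.T * K * M))       # -> diag(1, 1 - eps**2): mixing removed
```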
This new hidden photon can generically acquire mass. In the most simple case, the novel \(U(1)_{X}\) symmetry is Higgsed, _i.e._ it is broken by the vacuum expectation value (VEV) \(f\) of a new scalar singlet \(S\),
\[\mathcal{L}=(D_{\mu}S)^{\dagger}D^{\mu}S\supset\frac{g_{x}^{2}f^{2 }}{2}\,X_{\mu}X^{\mu}\,. \tag{3}\]
Hence, the mass of the hidden photon, \(m_{X}\propto g_{x}\,f\), is proportional to the \(U(1)_{X}\) coupling \(g_{x}\). Thus, the smaller the gauge coupling \(g_{x}\) (or the feebler the interactions of the hidden photon), the smaller the mass of the hidden photon. This mechanism makes hidden photons a prime candidate for new physics hiding along the sensitivity frontier illustrated in Fig. 1 and warrants a careful study of the matching of a potential UV hidden photon model onto the low-energy QED regime.
## 2 A closer look at the origin of kinetic mixing
The kinetic mixing in Eq. (1) of the \(U(1)_{X}\) boson with the photon of QED cannot be fundamental as the \(U(1)_{\rm em}\) only arises after electroweak symmetry breaking (EWSB). Hence, we want to study how this operator arises from mixing in the underlying UV theory in the unbroken phase.
Naive picture. In the literature it is often assumed that the fundamental mixing of the hidden photon is not with the SM photon, but with the hypercharge boson \(B\) of \(U(1)_{Y}\),
\[\mathcal{L}\supset-\frac{\epsilon_{B}}{2}B_{\mu\nu}X^{\mu\nu}\,, \tag{4}\]
where \(B_{\mu\nu}\) denotes the field strength tensor of the hypercharge boson. This mixing term can be either elementary or generated at the loop level through fermions carrying charge under both \(U(1)_{Y}\) and \(U(1)_{X}\). These two cases are illustrated by the diagrams in Fig. 2. After decomposing the hypercharge boson into its mass eigenstate components, \(B_{\mu}=c_{w}A_{\mu}-s_{w}Z_{\mu}\), where
Figure 2: Diagrams of kinetic mixing between the hypercharge boson \(B_{\mu}\) and the \(U(1)_{X}\) boson \(X_{\mu}\) at tree level (left) and one-loop level (right).
\(c_{w}\equiv\cos\theta_{W}\) and \(s_{w}\equiv\sin\theta_{W}\) denote the cosine and sine of the Weinberg angle \(\theta_{W}\), the mixing term in Eq. (4) reads
\[\mathcal{L}\supset-c_{w}\,\frac{\epsilon_{B}}{2}F_{\mu\nu}X^{\mu\nu}+s_{w}\, \frac{\epsilon_{B}}{2}Z_{\mu\nu}X^{\mu\nu}\,. \tag{5}\]
Matching the terms in Eq. (1) and Eq. (5), we find the simple expression
\[\epsilon_{A}=c_{w}\;\epsilon_{B}\,, \tag{6}\]
relating the fundamental mixing of the hidden photon with the hypercharge boson, \(\epsilon_{B}\), and the mixing with the SM photon in the broken phase, \(\epsilon_{A}\).
The full picture. A more careful treatment of the matching procedure reveals that the above matching relation in Eq. (6) cannot be the full picture. In fact, there exists a dimension-six operator inducing mixing between the \(U(1)_{X}\) and the \(SU(2)_{L}\) bosons [3],
\[\mathcal{O}_{WX}=\frac{c_{WX}}{\Lambda^{2}}\,H^{\dagger}\sigma^{i}H\,W^{i}_{ \mu\nu}X^{\mu\nu}\,. \tag{7}\]
Here \(H\) denotes the SM Higgs doublet, \(W^{i}_{\mu\nu}\) is the \(SU(2)_{L}\) field strength tensor, and \(\Lambda\) represents the scale of new physics at which this operator is generated, e.g. by integrating out some heavy new fields. In the broken phase this operator leads to an effective kinetic mixing term between the neutral component of the weak bosons, \(W^{3}\), and the hidden photon of the form
\[\mathcal{O}_{WX}\supset-\frac{\epsilon_{W}}{2}\,W^{3}_{\mu\nu}X^{\mu\nu}\,, \tag{8}\]
where we have identified \(\epsilon_{W}\equiv c_{WX}\,v^{2}/\Lambda^{2}\) with the Higgs VEV \(v\). In analogy to what we did above, we also decompose the neutral weak boson \(W^{3}\) into its mass eigenstate components, \(W^{3}_{\mu}=s_{w}\,A_{\mu}+c_{w}\,Z_{\mu}\), which leads to a kinetic mixing term of,
\[\mathcal{O}_{WX}\supset-s_{w}\,\frac{\epsilon_{W}}{2}\,F_{\mu\nu}X^{\mu\nu}-c _{w}\,\frac{\epsilon_{W}}{2}\,Z_{\mu\nu}X^{\mu\nu}\,. \tag{9}\]
Combining this with Eq. (5), our matching relation Eq. (6) is modified to also account for the mixing contribution with the weak boson \(W^{3}\),
\[\epsilon_{A}=c_{w}\;\epsilon_{B}+s_{w}\;\epsilon_{W}\;. \tag{10}\]
This is a very important result, since in generic hidden photon models with \(SU(2)_{L}\) multiplets charged under the novel \(U(1)_{X}\), the operator Eq. (7) will necessarily be generated at the one-loop level. In these models, the loop contribution to the \(W^{3}-X\) mixing in Eq. (8) can be computed in analogy to the standard Abelian mixing case [3]. We identify the kinetic mixing contribution as the transverse component \(\Pi_{WX}\) of the full vacuum polarisation amplitude,
\[\Pi^{\mu\nu}_{WX}=\Pi_{WX}\;[g^{\mu\nu}p_{1}\cdot p_{2}-p_{1}^{\mu}p_{2}^{\nu }]+\Delta_{WX}\;g^{\mu\nu}\,. \tag{11}\]
The loop contribution to the kinetic mixing is then computed as
\[\Pi_{WX}\!=\!-\frac{g\,g_{x}}{8\pi^{2}}\sum_{f}\int_{0}^{1}\!dx\,x(1-x)\,T_{3} ^{f}\;\left(v_{X}^{f}+a_{X}^{f}\right)\log\left(\frac{\mu^{2}}{m_{f}^{2}-x(1-x )q^{2}}\right)\,, \tag{12}\]
where the sum includes all \(SU(2)_{L}\) degrees of freedom \(f\) with weak charge \(T_{3}^{f}\) also charged under \(U(1)_{X}\) with vector and axial-vector couplings \(v_{X}^{f}\) and \(a_{X}^{f}\), respectively.
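For illustration, the Feynman-parameter integral in Eq. (12) is straightforward to evaluate numerically; the sketch below (our own, with purely illustrative doublet content, charges, masses, and scales \(\mu\) and \(q^{2}\)) shows one way to do so.

```python
# Numerical sketch of the one-loop W3-X mixing integral, Eq. (12).
# All numerical inputs below are illustrative placeholders, not values
# quoted in the text.
import numpy as np
from scipy.integrate import quad

def pi_wx(fermions, q2, mu, g=0.65, gx=1e-3):
    """Transverse mixing from Eq. (12); fermions = [(T3, vX + aX, mass)]."""
    total = 0.0
    for T3, vpa, m in fermions:
        integrand = lambda x: x * (1 - x) * np.log(
            mu**2 / (m**2 - x * (1 - x) * q2))
        total += T3 * vpa * quad(integrand, 0.0, 1.0)[0]
    return -g * gx / (8 * np.pi**2) * total

# e.g. a muon-like doublet: neutrino (T3 = +1/2) and charged lepton (T3 = -1/2)
doublet = [(+0.5, 1.0, 0.1), (-0.5, 1.0, 0.105)]   # masses in GeV (illustrative)
print(pi_wx(doublet, q2=-1.0, mu=91.0))            # spacelike q^2 keeps the log real
```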
## 3 A concrete example: kinetic mixing in \(U(1)_{L_{\mu}-L_{\tau}}\)
In generic \(U(1)_{X}\) models, SM fermions can also be charged under the new symmetry, leading to a gauge interaction of the hidden photon of the type,
\[{\cal L}_{\rm int}=-g_{x}\,j_{X}^{\mu}X_{\mu}\,, \tag{13}\]
where _a priori_ the current \(j_{X}^{\mu}=\sum_{\psi}q_{\psi}\,\bar{\psi}\gamma^{\mu}\psi\) can include all SM matter fields, especially also the \(SU(2)_{L}\) quark and lepton doublets \(\psi=Q,L\). Restricting the gauge current \(j_{X}^{\mu}\) to only contain SM fields (_i.e._ disallowing any new fermions), the minimal anomaly-free models are \(U(1)_{B-L}\), \(U(1)_{L_{e}-L_{\mu}}\), \(U(1)_{L_{e}-L_{\tau}}\), \(U(1)_{L_{\mu}-L_{\tau}}\), and linear combinations of these. In these models, at the very least, (some of) the lepton doublets \(L_{i}\) are charged under the new \(U(1)\) symmetry, such that \({\cal O}_{WX}\) is induced via loops at the renormalisable level (since the scale at which this operator is generated is the electroweak scale, \(\Lambda=v\)). We will now study how such a loop-generated term affects kinetic mixing in the electroweak-broken and -symmetric phases in the example of \(U(1)_{L_{\mu}-L_{\tau}}\).
In the broken phase we can perform the usual, well-known QED mixing computation with two Dirac fermions \(f=\mu,\tau\) running in the loop. In the infrared (IR) limit of zero momentum transfer, \(q=0\), the resulting mixing parameter reads
\[\epsilon_{A}=\frac{e\,g_{\mu\tau}}{6\pi^{2}}\,\log\left(\frac{m_{\mu}}{m_{\tau }}\right)\,. \tag{14}\]
Simultaneously, the _naive_ UV computation, in which we only account for mixing with the hypercharge boson \(B\), results in a mixing coefficient of
\[\epsilon_{B}=\frac{g^{\prime}\,g_{\mu\tau}}{24\pi^{2}}\,\left[3\log\left(\frac{m_{\mu}}{m_{\tau}}\right)+\log\left(\frac{m_{\nu_{\mu}}}{m_{\nu_{\tau}}}\right)\right]\,. \tag{15}\]
Having obtained these two results, Eq. (14) and Eq. (15), we have explicitly confirmed that the naive matching relation in Eq. (6) does not hold and therefore cannot be the correct prescription. From our considerations in Section 2 we already know how to amend the naive mixing prescription such that the computations of the mixing in the broken and unbroken phases match. The solution is to also take into account the loop-induced mixing between the hidden photon and the neutral \(SU(2)_{L}\) boson. In \(U(1)_{L_{\mu}-L_{\tau}}\) the second and third generation leptons carry charge under the new symmetry. Hence, a \(W^{3}-X\) mixing term is generated from the diagram with \(L_{\mu}\) and \(L_{\tau}\) running in the loop,
\[\epsilon_{W}=\frac{g\,g_{\mu\tau}}{24\pi^{2}}\,\left[\log\left(\frac{m_{\mu}}{m_{\tau}}\right)-\log\left(\frac{m_{\nu_{\mu}}}{m_{\nu_{\tau}}}\right)\right]\,. \tag{16}\]
The resulting mixing contribution exactly yields the missing piece to obtain the mixing coefficient \(\epsilon_{A}\) of Eq. (14) in the broken phase according to the full matching prescription in Eq. (10). It is particularly noteworthy that the contributions from the neutrinos to \(\epsilon_{B}\) and \(\epsilon_{W}\) exactly cancel. This is expected, since only electrically charged particles can run in the \(\gamma-X\) loop, such that neutrinos can never contribute to the hidden photon mixing with the SM QED photon in the broken phase.
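This consistency can be verified symbolically. The sympy sketch below (our own) inserts Eqs. (14)-(16) into the matching relation Eq. (10), using the SM relations \(e=g^{\prime}c_{w}=g\,s_{w}\); the neutrino logarithms cancel and the relation closes exactly.

```python
# Symbolic verification that Eqs. (14)-(16) satisfy Eq. (10):
# eps_A = c_w eps_B + s_w eps_W. L and L_nu denote log(m_mu/m_tau) and
# log(m_nu_mu/m_nu_tau), respectively.
import sympy as sp

e, gx, cw, sw, L, Lnu = sp.symbols("e g_x c_w s_w L L_nu")
g_prime = e / cw                      # hypercharge coupling, from e = g' c_w
g = e / sw                            # weak coupling, from e = g s_w

eps_A = e * gx / (6 * sp.pi**2) * L                      # Eq. (14)
eps_B = g_prime * gx / (24 * sp.pi**2) * (3 * L + Lnu)   # Eq. (15)
eps_W = g * gx / (24 * sp.pi**2) * (L - Lnu)             # Eq. (16)

assert sp.simplify(cw * eps_B + sw * eps_W - eps_A) == 0
print("matching relation eps_A = c_w eps_B + s_w eps_W verified;"
      " the L_nu (neutrino) pieces cancel")
```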
## 4 The Higgs low-energy theorems
As a byproduct of computing all the neutral boson vacuum polarisation amplitudes, \(\Pi^{\mu\nu}_{V_{i}V_{j}}\), we can derive universal expressions for the decay amplitudes of the Higgs to a pair of neutral bosons from the low-energy theorem [4, 5],
\[\lim_{p_{h}\to 0}\mathcal{M}(h\to V_{i}V_{j})\to\frac{\partial}{\partial v} \mathcal{M}(V_{i}\to V_{j}). \tag{17}\]
Starting from the general one-loop corrected effective low-energy Lagrangian in the electroweak broken phase, the different mixing contributions can be written as
\[\mathcal{L}=-\frac{1}{4}\left(F_{\mu\nu},Z_{\mu\nu},X_{\mu\nu}\right)\left[ \begin{pmatrix}1&0&\epsilon_{A}\\ 0&1&\epsilon_{Z}\\ \epsilon_{A}&\epsilon_{Z}&1\end{pmatrix}+\mathbf{\Pi}\right]\begin{pmatrix}F^ {\mu\nu}\\ Z^{\mu\nu}\\ X^{\mu\nu}\end{pmatrix}+\frac{1}{2}\left(A_{\mu},Z_{\mu},X_{\mu}\right)\ \left[ \mathbf{M}+\mathbf{\Delta}\right]\begin{pmatrix}A^{\mu}\\ Z^{\mu}\\ X^{\mu}\end{pmatrix}, \tag{18}\]
where \(\mathbf{M}=\mathrm{diag}(0,m_{Z}^{2},m_{X}^{2})\) denotes the tree-level mass matrix of the neutral bosons and \(\epsilon_{A}\) and \(\epsilon_{Z}\) are the tree-level kinetic mixing coefficients. Furthermore, the loop-generated contributions to kinetic and mass mixing are encoded in the matrices
\[\mathbf{\Pi}=\begin{pmatrix}\Pi_{\gamma\gamma}&\Pi_{\gamma Z}&\Pi_{\gamma X}\\ \Pi_{\gamma Z}&\Pi_{ZZ}&\Pi_{ZX}\\ \Pi_{\gamma X}&\Pi_{ZX}&\Pi_{XX}\end{pmatrix}, \mathbf{\Delta}=\begin{pmatrix}0&0&0\\ 0&\Delta_{ZZ}&\Delta_{ZX}\\ 0&\Delta_{ZX}&\Delta_{XX}\end{pmatrix}. \tag{19}\]
We can diagonalise the tree-level kinetic mixing terms in Eq. (18) via a non-unitary field redefinition given by
\[G=\begin{pmatrix}1&0&-\frac{\epsilon_{A}}{\sqrt{1-\epsilon_{A}^{2}-\epsilon_{ Z}^{2}}}\\ 0&1&-\frac{\epsilon_{Z}}{\sqrt{1-\epsilon_{A}^{2}-\epsilon_{Z}^{2}}}\\ 0&0&\frac{1}{\sqrt{1-\epsilon_{A}^{2}-\epsilon_{Z}^{2}}}\end{pmatrix}. \tag{20}\]
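One can check symbolically that this \(G\) does its job exactly, i.e. \(G^{T}KG=\mathbb{1}\) for the tree-level kinetic matrix \(K\) of Eq. (18); a minimal sympy sketch (our own) is given below.

```python
# Symbolic check that the non-unitary redefinition G of Eq. (20) brings the
# tree-level kinetic matrix of Eq. (18) to canonical form, at all orders in
# eps_A and eps_Z.
import sympy as sp

eA, eZ = sp.symbols("epsilon_A epsilon_Z")
s = sp.sqrt(1 - eA**2 - eZ**2)
K = sp.Matrix([[1, 0, eA], [0, 1, eZ], [eA, eZ, 1]])
G = sp.Matrix([[1, 0, -eA / s], [0, 1, -eZ / s], [0, 0, 1 / s]])
assert sp.simplify(G.T * K * G - sp.eye(3)) == sp.zeros(3, 3)
print("G^T K G = identity: tree-level kinetic terms canonically normalised")
```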
After diagonalisation we find the general Higgs decay amplitude to read [3],
\[\mathcal{M}^{\mu\nu}_{h\to V_{i}V_{j}} =\,\partial_{v}\left[G^{T}\ \mathbf{\Pi}\ G\right]_{ij}\,[p_{2}^{\mu}\,p_{1}^{\nu}-p_{1}\cdot p _{2}\,g^{\mu\nu}]\ +\ \partial_{v}\left[G^{T}\ \left[\mathbf{M}+\mathbf{\Delta}\right]G\right]_{ij}\,g^{\mu\nu}, \tag{21}\]
where we have factored out the gauge boson polarisation vectors of the amplitude \(\mathcal{M}_{h\to V_{i}V_{j}}=\mathcal{M}^{\mu\nu}_{h\to V_{i}V_{j}}\ \epsilon_{\mu, \lambda}^{*}(p_{1})\,\epsilon_{\nu,\lambda^{\prime}}^{*}(p_{2})\). To leading order in the small mixing coefficients \(\epsilon_{A}\) and \(\epsilon_{Z}\), for the rotated matrices in Eq. (19) we find the symmetric matrices,
\[G^{T}\ \mathbf{\Pi}\ G =\mathbf{\Pi}-\begin{pmatrix}0&0&\epsilon_{A}\,\Pi_{\gamma\gamma }+\epsilon_{Z}\,\Pi_{\gamma Z}\\.&0&\epsilon_{A}\,\Pi_{\gamma Z}+\epsilon_{Z}\,\Pi_{ZZ}\\.&.&2\epsilon_{A}\Pi_{\gamma X}+2\epsilon_{Z}\Pi_{ZX}\end{pmatrix}, \tag{22}\] \[G^{T}\ \left[\mathbf{M}+\mathbf{\Delta}\right]\ G =\left[\mathbf{M}+\mathbf{\Delta}\right]-\begin{pmatrix}0&0&0\\.&0&\epsilon_{Z}\,(m_{Z}^{2}+\Delta_{ZZ})\\.&.&2\epsilon_{Z}\,\Delta_{ZX}\end{pmatrix}, \tag{23}\]
Note that the mass mixing terms, \(\Delta_{V_{i}V_{j}}\), are only generated in theories where the loop fermions have axial-vector charges under both gauge groups.
In the case that only SM fermions are charged under the novel \(U(1)_{X}\) symmetry, the expressions relevant for computing the Higgs decay amplitudes to photons, Z and X bosons according to Eq. (21) are given by
\[\partial_{v}\;\Pi_{\gamma X}(0) = \sum_{f}N_{c}^{f}\;\frac{e\,g_{x}}{12\,\pi^{2}\,v}\,Q_{f}\;v_{X}^{ f}\,, \tag{24}\] \[\partial_{v}\;\Pi_{ZX}(0) = \sum_{f}N_{c}^{f}\;\frac{e\,g_{x}}{24\,\pi^{2}\,v}\;\frac{T_{3}^{ f}-2\,s_{w}^{2}\,Q_{f}}{s_{w}\,c_{w}}\;v_{X}^{f}\,,\] (25) \[\partial_{v}\;\Pi_{XX}(0) = \sum_{f}N_{c}^{f}\;\frac{g_{x}^{2}}{24\,\pi^{2}\,v}\;v_{X}^{f\,2}\,, \tag{26}\]
where the sum runs over all heavy fermions with \(m_{f}\gg m_{h}\), _i.e._ only the top quark in the SM.
In a practical example of the Higgs low-energy theorems, we can compute the branching ratios of the SM Higgs decaying to \(\gamma X\) and \(XX\) in a model of gauged \(U(1)_{B-L}\). For example, the relevant contributions to the decay \(h\to\gamma X\) are shown in a diagrammatic representation in Fig. 3 to leading order in the kinetic mixing parameters \(\epsilon_{A}\) and \(\epsilon_{Z}\). Assuming the new gauge boson to be light, \(m_{X}\ll m_{h}\), we can universally express the branching ratios to leading order in the gauge coupling \(g_{x}\) and the kinetic mixing parameter as
\[\mathcal{BR}_{h\to\gamma X} \simeq (0.92\,g_{x}^{2}+6.36g_{x}\epsilon_{A}+11.01\epsilon_{A}^{2})\; \cdot 10^{-3}, \tag{27}\] \[\mathcal{BR}_{h\to XX} \simeq g_{x}^{2}(2.5\,g_{x}^{2}-5.7\,g_{x}\epsilon_{A}+3.2\epsilon_{A}^{2 })\;\cdot 10^{-3}. \tag{28}\]
Evaluating these expressions for still-allowed values of the gauge coupling and mixing, \(g_{x}\sim 10^{-4}\) and \(\epsilon_{A}\sim 10^{-3}\), yields model-independent branching ratios of \(\mathcal{BR}_{h\to\gamma X}\sim 10^{-8}\) and \(\mathcal{BR}_{h\to XX}\sim 10^{-17}\). While the process \(h\to XX\) seems hopeless to test at any conceivable future detector, the process \(h\to\gamma X\) could be tested at an upcoming collider like the FCC-hh, which aims at collecting up to \(\mathcal{O}(10^{10})\) Higgs bosons.
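As a quick numerical cross-check of the quoted orders of magnitude, one can evaluate Eqs. (27)-(28) at this benchmark point directly:

```python
# Plugging the benchmark values into Eqs. (27)-(28) reproduces the orders of
# magnitude stated in the text.
gx, epsA = 1e-4, 1e-3
br_gamma_x = (0.92 * gx**2 + 6.36 * gx * epsA + 11.01 * epsA**2) * 1e-3
br_xx = gx**2 * (2.5 * gx**2 - 5.7 * gx * epsA + 3.2 * epsA**2) * 1e-3
print(f"BR(h -> gamma X) ~ {br_gamma_x:.1e}")   # ~ 1e-8
print(f"BR(h -> X X)     ~ {br_xx:.1e}")        # ~ 1e-17
```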
## 5 Conclusion
In summary, hidden photons are well-motivated candidates for new physics hiding along the experimental _sensitivity frontier_. In the minimal setup, the interactions of these particles with the
Figure 3: Diagrammatic representation of the amplitude \(\mathcal{M}(h\to\gamma X)\) due to the various contributions from fermion loops at linear order in the kinetic mixing parameters \(\epsilon_{A}\) and \(\epsilon_{Z}\).
SM sector arise purely through kinetic mixing. Due to gauge invariance, the kinetic mixing of the novel \(X\) boson has to proceed with the hypercharge boson \(B\) in the electroweak symmetric phase. At dimension six, however, there exists an operator coupling the \(X\) boson to the \(SU(2)_{L}\) bosons, generating a mixing term between the hidden photon and \(W^{3}\), which can effectively arise at the renormalisable level. In theories in which \(SU(2)_{L}\) multiplets carry charge under the new \(U(1)_{X}\) symmetry, this novel type of mixing is always generated at the one-loop level. It is vital to take this \(W^{3}-X\) mixing into account in order to obtain the correct matching onto the effective mixing with the photon in the electroweak broken phase. In essence, the correct matching of the mixing of the hidden photon with the hypercharge and neutral weak boson, \(\epsilon_{B}\) and \(\epsilon_{W}\), onto the mixing with the photon, \(\epsilon_{A}\), is given by the relation
\[\epsilon_{A}=c_{w}\;\epsilon_{B}+s_{w}\;\epsilon_{W}\;.\]
Importantly, the weak mixing contribution \(\epsilon_{W}\) is unavoidably generated at the one-loop level in the phenomenologically interesting anomaly-free hidden photon models like \(U(1)_{B-L}\), \(U(1)_{L_{e}-L_{\mu}}\), \(U(1)_{L_{e}-L_{\tau}}\), \(U(1)_{L_{\mu}-L_{\tau}}\), and combinations of these.
Finally, we have demonstrated how to obtain the decay amplitudes of the Higgs to a pair of neutral bosons from the vacuum polarisation amplitudes via the Higgs low-energy theorems. This method automatically generates all relevant contributions to the decay amplitude at a fixed order in the kinetic mixing.
## Acknowledgements
I would like to thank my collaborator Martin Bauer for the fruitful collaboration that has led to this work. PF is supported by the Spanish Agencia Estatal de Investigación through the grants PID2021-125331NB-I00 and CEX2020-001007-S, funded by MCIN/AEI/10.13039/501100011033.
|
2309.02703 | Spatial extent of molecular gas, dust, and stars in massive galaxies at
z=2-2.5 determined with ALMA and JWST | We present the results of 0.6"-resolution observations of CO J=3-2 line
emission in 10 massive star-forming galaxies at z=2.2-2.5 with the Atacama
Large Millimeter/submillimeter Array (ALMA). We compare the spatial extent of
molecular gas with those of dust and stars, traced by the 870 $\mu$m and 4.4
$\mu$m continuum emissions, respectively. The average effective radius of the
CO emission is 1.75$\pm$0.34 kpc, which is about 60 percent larger than that of
the 870 $\mu$m emission and is comparable with that of the 4.4 $\mu$m emission.
Utilizing the best-fit parametric models, we derive the radial gradients of the
specific star-formation rate (sSFR), gas depletion timescale, and gas-mass
fraction within the observed galaxies. We find a more intense star-formation
activity with a higher sSFR and a shorter depletion timescale in the inner
region than in the outer region. The central starburst may be the primary
process for massive galaxies to build up a core. Furthermore, the gas-mass
fraction is high, independent of the galactocentric radius in the observed
galaxies, suggesting that the galaxies have not begun to quench star formation.
Given the shorter gas depletion timescale in the center compared to the outer
region, quenching is expected to occur in the center first and then propagate
outward. We may be witnessing the observed galaxies in the formation phase of a
core prior to the forthcoming phase of star formation propagating outward. | Ken-ichi Tadaki, Tadayuki Kodama, Yusei Koyama, Tomoko L. Suzuki, Ikki Mitsuhashi, Ryota Ikeda | 2023-09-06T04:30:46Z | http://arxiv.org/abs/2309.02703v2 | # Spatial extent of molecular gas, dust, and stars in massive galaxies at \(z\sim 2.2-2.5\) determined with ALMA and JWST
###### Abstract
We present the results of 0\(\farcs\)6-resolution observations of CO \(J=3-2\) line emission in 10 massive star-forming galaxies at \(z\sim 2.2-2.5\) with the Atacama Large Millimeter/submillimeter Array (ALMA). We compare the spatial extent of molecular gas with those of dust and stars, traced by the 870 \(\mu\)m and 4.4 \(\mu\)m continuum emissions, respectively. The average effective radius of the CO emission is 1.75\(\pm\)0.34 kpc, which is about 60 percent larger than that of the 870 \(\mu\)m emission and is comparable with that of the 4.4 \(\mu\)m emission. Utilizing the best-fit parametric models, we derive the radial gradients of the specific star-formation rate (sSFR), gas depletion timescale, and gas-mass fraction within the observed galaxies. We find a more intense star-formation activity with a higher sSFR and a shorter depletion timescale in the inner region than in the outer region. The central starburst may be the primary process for massive galaxies to build up a core. Furthermore, the gas-mass fraction is high, independent of the galactocentric radius in the observed galaxies, suggesting that the galaxies have not begun to quench star formation. Given the shorter gas depletion timescale in the center compared to the outer region, quenching is expected to occur in the center first and then propagate outward. We may be witnessing the observed galaxies in the formation phase of a core prior to the forthcoming phase of star formation propagating outward.
galaxies: starburst -- galaxies: high-redshift -- galaxies: ISM
## 1 Introduction
Massive galaxies populate two distinct regions on the stellar mass vs. star-formation rate (SFR) plane: the so-called main-sequence of star-forming galaxies and quiescent regime below the main-sequence (e.g. Renzini & Peng, 2015). At \(z=1-3\), quiescent galaxies have a compact core in the center (e.g. Toft et al., 2007) and their radial profile of the stellar emission is well characterized by Sersic models with a Sersic index of \(n=4\)(e.g. Daddi et al., 2005). In contrast, star-forming galaxies (SFGs) at similar redshifts have an exponential disk with \(n=1\)(e.g. Wuyts et al., 2011). The bimodality on the stellar mass-SFR plane and the correlation between star-formation activity and morphology suggest that quenching of the SFR and transformation of galaxy morphology occur at almost the same epoch and on a short timescale.
The scenario of the star formation spreading outward from the galaxy center (so-called "inside-out growth") is widely accepted to explain the morphological evolution of massive galaxies (van Dokkum et al., 2010), i.e., galaxies form a compact core and later build the outer envelope through star formation and minor mergers (Bezanson et al., 2009). ALMA observations have revealed that massive SFGs commonly have a dusty star-forming core with an effective radius of \(R_{\rm e}\)=1-2 kpc (e.g. Barro et al.
2016; Elbaz et al., 2018). However, the spatial distributions of the HST/1.6 \(\mu\)m emission suggest that the massive SFGs already have an extended stellar disk with \(R_{\rm e}\)=2-6 kpc (Tadaki et al., 2020, hereafter T20). These results do not necessarily support the picture that massive SFGs are in the inside-out growth phase.
A difficulty in verifying the inside-out scenario is that the 1.6 \(\mu\)m emission, 0.5 \(\mu\)m in the rest frame at \(z\sim 2\), can be attenuated by dust, particularly in the central region of the galaxy, where compact 870 \(\mu\)m emission is detected. This dust-extinction effect leads to a situation where the spatial distribution of the 1.6 \(\mu\)m emission is more extended than that of the stellar mass, which is predicted by numerical simulations with radiative transfer calculations (Popping et al., 2022) and has been observationally confirmed with spectral energy distribution (SED) fitting approaches (Suess et al., 2019). High-resolution 4.4 \(\mu\)m images obtained by JWST/NIRCam (Rieke et al., 2023) provide a more accurate view of galaxy morphology with less effect of dust extinction (Suess et al., 2022; van der Wel et al., 2023).
Another problem is that the information about the spatial distribution of molecular gas is lacking, which makes it difficult for us to understand how massive SFGs form stars and quench star formation. As spatially-resolved observations of distant galaxies with CO emission lines tracing molecular gas are still limited for normal SFGs in the main sequence due to their faint emission (Chen et al., 2017; Kaasinen et al., 2020; Ikeda et al., 2022; Liu et al., 2023), no systematic surveys of the CO emission lines of galaxies coordinated with JWST 4.4 \(\mu\)m observations have ever been conducted. In this letter, we present the results of ALMA observations of CO emission lines from 10 massive SFGs with a stellar mass of \(\log(M_{\star}/M_{\odot})>11\) at \(z\sim 2.2-2.5\) in the PRIMER-UDS field (JWST Cycle 1 GO program; ID: 1837; PI: J. Dunlop), where JWST 4.4 \(\mu\)m images are obtained. To understand how massive SFGs form a core, we compare the spatial extent of three components: gas, dust, and stars.
## 2 Observations
T20 reported ALMA 0\(\farcs\)2-resolution observations of 870 \(\mu\)m continuum emission in 62 massive SFGs at \(1.9<z<2.6\) to characterize the spatial extent of dust emission. In T20, the stellar masses were estimated with SED fitting of multi-wavelength photometry data from the ultraviolet to near-infrared of sources in the 3D-HST catalog (Skelton et al., 2014). The total infrared luminosities were estimated from the single-band photometry at Herschel/PACS 160 \(\mu\)m, 100 \(\mu\)m, or Spitzer/MIPS 24 \(\mu\)m (Wuyts et al., 2011), and were converted to dust-obscured SFRs (Kennicutt, 1998). Unobscured SFRs were estimated from the rest-frame 2800 Å luminosity, but they contribute little to the total SFRs (T20). Figure 1 shows a scatter plot of the stellar mass vs. SFR for galaxies at \(1.9<z<2.6\). The T20 sample lies on the main sequence of star formation at \(z\sim 2\), suggesting that they are representative of massive SFGs at \(z\sim 2\) (Whitaker et al., 2014).
For two (U4-16504 and U4-16795) of the 62 massive SFGs, the CO \(J=3-2\) line emission has been observed with ALMA at a 0\(\farcs\)6 resolution (Tadaki et al., 2017, hereafter T17). From the T20 sample, we select an additional six massive SFGs in the redshift ranges of \(z=2.17-2.21\) and \(z=2.51-2.55\). Their CO \(J=3-2\) emission lines can be covered with a single frequency setup of ALMA Band-3 receivers since the redshifts are accurately determined through H\(\alpha\) spectroscopy (\(\Delta z<0.01\); Wisnioski et al., 2019) or H\(\alpha\) narrow-band imaging (\(\Delta z<0.02\); Tadaki et al., 2013). ALMA observations targeting the six massive SFGs were conducted in 2021 December and 2022 July. The on-source time was 85 minutes per pointing. The data were calibrated in the standard manner using CASA (CASA Team et al., 2022). Then, we cleaned the emission down to the 1.5\(\sigma\) level to create 50 km s\({}^{-1}\) channel maps with a robust parameter of +2.0, leading to a spatial resolution of 0\(\farcs\)6 and a noise level of 0.13-0.14 mJy beam\({}^{-1}\). The data quality is similar to that in T17. When the line emission is integrated over a velocity range of 200-750 km s\({}^{-1}\), the CO emission is detected at more than 8\(\sigma\) for all the targets (Figure 2). In addition, two massive SFGs (U4-17519 and U4-28156) of the T20 sample are serendipitously detected in CO in the same ALMA observations. Adding the two from T17, we have obtained a sample of 10 massive SFGs in total (Table 1). We extract the CO spectra in an aperture of 1\(\arcsec\) diameter and fit them with Gaussian profiles to determine the spectroscopic redshifts (Table 1). Our sample is the largest sample of massive SFGs at \(z\sim 2.2-2.5\) with spatially-resolved data of both 870 \(\mu\)m continuum and CO \(J=3-2\) line emission.

Figure 1: Stellar mass vs. SFR for our ALMA sample of 10 massive SFGs at \(z\sim 2.2-2.5\) (magenta pentagons), the T20 sample with ALMA 870 \(\mu\)m data (blue circles), and all galaxies at \(1.9<z<2.6\) (gray dots) taken from the 3D-HST catalog (Skelton et al., 2014; Momcheva et al., 2016). The shaded region indicates the range of \(\pm 0.4\) dex from the star-formation main sequence at \(z\sim 2\) (Whitaker et al., 2014).
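For readers who wish to reproduce this kind of imaging step, a schematic sketch using the CASA 6 casatasks interface is given below; the measurement-set and image names are placeholders and the niter value is illustrative, with only the channel width, weighting, and the 1.5\(\sigma\) cleaning depth taken from the text.

```python
# Schematic CASA imaging of a CO J=3-2 cube (file names and niter are
# placeholders; channel width, robust value, and cleaning depth follow
# the text).
from casatasks import tclean

sigma_mjy = 0.14  # mJy/beam per 50 km/s channel (upper end of quoted range)

tclean(vis='target_calibrated.ms',        # hypothetical measurement set
       imagename='U4-XXXXX_CO32',
       specmode='cube',                   # spectral-line imaging
       width='50km/s',                    # channel width used in the text
       restfreq='345.796GHz',             # CO J=3-2 rest frequency
       deconvolver='hogbom',
       weighting='briggs',
       robust=2.0,                        # robust parameter of +2.0
       niter=10000,
       threshold='%.3fmJy' % (1.5 * sigma_mjy))  # clean down to 1.5 sigma
```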
## 3 Size measurements
### ALMA data of 870 \(\mu\)m and CO emissions
The CO and 870 \(\mu\)m continuum emissions were observed with an interferometer; we therefore model the visibility data, not the images (Figure 2). This approach has the advantage of not being affected by uncertainties in the Fourier transform and the deconvolution of the dirty beam. For the 870 \(\mu\)m continuum emission, T20 measured the flux density, effective radius, major-to-minor axis ratio, and position angle of each galaxy, assuming an elliptical exponential disk model, and characterized its spatial extent (Table 1). For U4-34138 and U4-34617, the best-fit models account for only 50%-70% of the total flux densities directly measured from the short-baseline data, suggesting that a single-component model is not sufficient for characterizing the spatial distribution of the 870 \(\mu\)m continuum emission. We do not use these two galaxies in section 3.3 and section 4 as their effective radii can be underestimated.

Figure 2: The JWST 4.4 \(\mu\)m images (4\(\arcsec\times\)4\(\arcsec\)) for our sample of 10 massive SFGs (left and center columns in each panel). Magenta contours display the 870 \(\mu\)m flux densities in the ALMA 0\(\farcs\)2-resolution images, plotted for every 4\(\sigma\). Cyan contours show the velocity-integrated flux of the CO \(J=3-2\) emission in the same images, plotted for every 4\(\sigma\). The right columns in each panel show the residual 4.4 \(\mu\)m images after the best-fit model is subtracted. Magenta, cyan, and black filled ellipses at the bottom-left corner correspond to the FWHM of the PSF in the 870 \(\mu\)m, CO, and 4.4 \(\mu\)m images, respectively. Green lines show the regions masked in measuring the effective radii of the 4.4 \(\mu\)m emission. White crosses show the central position of the best-fit model in the JWST 4.4 \(\mu\)m image.
In the same way as T17, we measure the effective radius of the CO emission for the 10 massive SFGs with model fitting. In the fitting, we fix the axis ratio and the position angle of the CO emission to those of the 870 \(\mu\)m emission because the spatial resolution of the CO observations is not as high as that of the 870 \(\mu\)m observations. We fit elliptical exponential models to the velocity-integrated CO data, using UVMULTIFIT (Marti-Vidal et al., 2014). Consequently, the "circularized" effective radius of the CO emission is found to range from 1.3 kpc to 2.3 kpc (Table 1). Here, the circularized effective radius, \(R_{\rm e}\), is obtained by multiplying the effective radius along the major axis by the square root of the axis ratio and is used in the following sections.
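Since the circularized radius is used throughout, it may be convenient to record it as a one-line helper (our illustration; not part of the fitting code itself):

```python
import numpy as np

def circularized_radius(re_major, axis_ratio):
    """Circularized effective radius: R_e = R_e,major * sqrt(q)."""
    return re_major * np.sqrt(axis_ratio)

# e.g. a disk with R_e,major = 2.0 kpc and q = 0.5 gives R_e ~ 1.41 kpc
```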
### JWST data of 4.4 \(\mu\)m emission
The 10 massive SFGs in our sample are all located in the PRIMER-UDS field. We use the v7 data release of the mosaic images in the F444W filter (4.4 \(\mu\)m) with a pixel scale of 0\(\farcs\)04 from the Dawn JWST Archive, in which the data were processed with the Grizli software (Brammer, 2023; Valentino et al., 2023). We measure the effective radii of the 4.4 \(\mu\)m continuum emission for the massive SFGs using the following procedure.
First, we select 18 unsaturated stars with AB magnitudes of 20-21 in the Spitzer/IRAC 4.5 \(\mu\)m band (Ashby et al., 2013) from the 3D-HST catalog (Skelton et al., 2014) and stack their normalized cut-out images on a sub-pixel scale of 0\(\farcs\)02. We use the stacked image for deconvolution of the point-spread function (PSF) in the following size measurements. We generate segmentation maps of the stars and massive SFGs by using SExtractor (Bertin & Arnouts, 1996) to mask neighboring sources. The publicly released weight maps incorporate pixel-to-pixel variations and the noise from the sky background, but do not include the Poisson noise for individual sources. We therefore make full variance images based on the original count-rate data and convert them to sigma images by following the procedure described in the Dawn JWST Archive1. Using the GALFIT code (Peng et al., 2010), we fit the JWST 4.4 \(\mu\)m cut-out images (4\(\arcsec\times\)4\(\arcsec\)) of the 18 stars as point sources. Four of the 18 stars show a reduced chi-square value of \(\chi^{2}_{\nu}>2\) or have a companion in the unmasked region. Removing these four stars, we remake the stacked PSF image from the remaining 14 stars. The full width at half maximum (FWHM) of a star in the updated stacked PSF image is 0\(\farcs\)16.
Footnote 1: [https://dawn-cph.github.io/dja/blog/2023/07/18/image-data-products/](https://dawn-cph.github.io/dja/blog/2023/07/18/image-data-products/)
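As a concrete illustration of the stacking step described above, a simplified sketch is given below; the actual procedure additionally resamples the cutouts onto a 0\(\farcs\)02 sub-pixel grid and vets the stars with GALFIT, and the file names here are placeholders.

```python
import numpy as np
from astropy.io import fits

def stack_psf(cutout_files):
    """Median-stack flux-normalized star cutouts into an empirical PSF."""
    images = []
    for fname in cutout_files:
        img = fits.getdata(fname).astype(float)
        img /= img.sum()                 # normalize each star to unit flux
        images.append(img)
    return np.median(images, axis=0)

# e.g. psf = stack_psf(['star_%02d.fits' % i for i in range(14)])
```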
Next, we fit the cut-out images of the massive SFGs with Sersic models, fixing the sky value at zero and allowing seven parameters to vary: the centroid position, flux density, effective radius, Sersic index, axis ratio, and position angle. The derived centroid position of the 4.4 \(\mu\)m emission coincides well with the peak of the 870 \(\mu\)m emission (Figure 2).
\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline
ID & \(z_{\rm CO}\) & \(R_{\rm e,870}\) & \(q_{\rm 870}\) & \(Sdv_{\rm CO}\) & \(R_{\rm e,CO}\) & \(R_{\rm e,4.4}\) & \(q_{\rm 4.4}\) & \(n_{\rm 4.4}\) \\
 & & (kpc) & & (Jy km s\({}^{-1}\)) & (kpc) & (kpc) & & \\
\hline
U4-16442 & 2.217\(\pm\)0.003 & 0.97\(\pm\)0.14 & 0.29\(\pm\)0.12 & 0.70\(\pm\)0.08 & 1.33\(\pm\)0.30 & 0.85 & 0.50 & 3.2 \\
U4-16504 & 2.527\(\pm\)0.003 & 1.25\(\pm\)0.11 & 0.85\(\pm\)0.14 & 0.82\(\pm\)0.11 & 2.30\(\pm\)0.42 & 2.12 & 0.78 & 1.7 \\
U4-16795 & 2.523\(\pm\)0.004 & 0.91\(\pm\)0.05 & 0.53\(\pm\)0.06 & 0.53\(\pm\)0.08 & 1.34\(\pm\)0.35 & 1.40 & 0.48 & 1.2 \\
U4-17519 & 2.222\(\pm\)0.001 & 1.56\(\pm\)0.06 & 0.92\(\pm\)0.06 & 1.61\(\pm\)0.10 & 2.00\(\pm\)0.20 & 2.06 & 0.86 & 1.3 \\
U4-28156 & 2.209\(\pm\)0.004 & 1.73\(\pm\)0.08 & 0.72\(\pm\)0.06 & 0.89\(\pm\)0.15 & 1.48\(\pm\)0.49 & 1.74 & 0.68 & 1.4 \\
U4-28473 & 2.523\(\pm\)0.004 & 0.86\(\pm\)0.06 & 0.44\(\pm\)0.06 & 0.79\(\pm\)0.08 & 1.56\(\pm\)0.30 & 1.37 & 0.55 & 1.7 \\
U4-28702 & 2.176\(\pm\)0.001 & 1.02\(\pm\)0.16 & 0.67\(\pm\)0.23 & 0.51\(\pm\)0.05 & 2.01\(\pm\)0.32 & 1.65 & 0.95 & 1.3 \\
U4-34138 & 2.518\(\pm\)0.003 & 0.48\(\pm\)0.10 & 0.47\(\pm\)0.25 & 0.34\(\pm\)0.06 & 1.52\(\pm\)0.51 & 3.30 & 0.89 & 2.9 \\
U4-34617 & 2.533\(\pm\)0.004 & 0.30\(\pm\)0.07 & 0.27\(\pm\)0.33 & 0.30\(\pm\)0.06 & 1.42\(\pm\)0.55 & 2.69 & 0.44 & 1.6 \\
U4-36247 & 2.179\(\pm\)0.001 & 0.55\(\pm\)0.08 & 0.55\(\pm\)0.20 & 0.57\(\pm\)0.06 & 1.97\(\pm\)0.34 & 2.24 & 0.89 & 1.3 \\
\hline
\end{tabular}
\end{table}
Table 1: Source list of the 10 massive SFGs. \(R_{\rm e,870}\), \(R_{\rm e,CO}\), and \(R_{\rm e,4.4}\) are the circularized effective radii of the 870 \(\mu\)m, CO, and 4.4 \(\mu\)m emissions; \(q\) denotes an axis ratio and \(n_{\rm 4.4}\) the Sersic index of the 4.4 \(\mu\)m emission.
The circularized effective radius is found to range from 0.8 kpc to 3.3 kpc (Table 1). The median value of the Sersic index is \(n=1.76\), which is intermediate between those of SFGs and quiescent galaxies (Wuyts et al., 2011).
The residual images after subtracting the best-fit model from the original images show sub-structures, such as spiral arms and off-center clumps (Figure 2), which cannot be characterized by simple elliptical models. Despite the presence of residual emission, the fitting errors are very small, \(\sim\)1% or less in each parameter. This may be attributed to the fact that our sample sources are very bright at 4.4 \(\mu\)m, with AB magnitudes of 20-21.5, and that the signal-to-noise ratio is high. The actual uncertainty is dominated by deviations from a simple parametric model, rather than by the statistical errors of the fitting.
To verify the validity of the Sersic models, we measure the total flux densities in the JWST 4.4 \(\mu\)m images by using SExtractor/MAG_ISOCOR and compare them with the flux densities of the best-fit models. We find that the models account for 98%-107% of the total flux densities, suggesting that the spatial distribution of the primary component is mostly characterized by the Sersic models. An exception is U4-34138; the best-fit model accounts for 114% of its total flux density, and both the ALMA 870 \(\mu\)m and JWST 4.4 \(\mu\)m data indicate that a single component is not sufficient for explaining its spatial distribution.
### Size comparisons
For the eight massive SFGs excluding U4-34138 and U4-34617, we have measured the effective radii of the CO, 870 \(\mu\)m, and 4.4 \(\mu\)m emissions. The average and standard deviation of the effective radii of the CO, 870 \(\mu\)m, and 4.4 \(\mu\)m emissions are 1.75\(\pm\)0.34 kpc, 1.11\(\pm\)0.36 kpc, and 1.68\(\pm\)0.43 kpc, respectively. Figure 3 shows the size comparisons between the three tracers. The effective radius of the 870 \(\mu\)m emission is 34% smaller than that of the 4.4 \(\mu\)m emission. This trend is also reported in submillimeter-bright galaxies (Chen et al., 2022). The size difference between 870 \(\mu\)m and 4.4 \(\mu\)m is much smaller than the difference between 870 \(\mu\)m and 1.6 \(\mu\)m (T20), which is consistent with the fact that 1.6 \(\mu\)m should be much more severely affected by dust extinction. We also measure the effective radius of the 4.4 \(\mu\)m emission for an additional 31 massive SFGs in the PRIMER-UDS field, selected from the T20 sample with 870 \(\mu\)m size measurements, in the same way as in section 3.2. We confirm that the 870 \(\mu\)m emission is more compact than the 4.4 \(\mu\)m emission in the range of \(R_{\rm e,870}<2\) kpc (Figure 3). On the other hand, there appears to be little difference in size for large galaxies with \(R_{\rm e,870}>2\) kpc. We note that our sample for the CO observations is somewhat biased toward galaxies with compact dust emission.
We compare the effective radii of the CO emission with those of the 870 \(\mu\)m and 4.4 \(\mu\)m emissions. The CO emission is more extended than the 870 \(\mu\)m emission, which is consistent with previous studies of galaxies at \(z=1-3\) (Chen et al., 2017; Calistro Rivera et al., 2018; Ikeda et al., 2022). Whereas the low-\(J\) CO line emission traces the mass of the interstellar medium, the 870 \(\mu\)m emission is sensitive to dust heating by massive stars associated with ongoing star-forming activity. We discuss the size difference between molecular gas and dust in section 4. In contrast, the spatial extent of the CO emission is comparable with that of the 4.4 \(\mu\)m emission. Although the current resolution and sensitivity of the CO data do not allow us to capture the sub-structures seen in the JWST 4.4 \(\mu\)m images, the spatial distributions of the CO and 4.4 \(\mu\)m emissions may be the same, at least on a kiloparsec scale.
## 4 Radial profiles in massive galaxies
The CO, 870 \(\mu\)m, and 4.4 \(\mu\)m emissions trace the molecular gas mass, dust-obscured SFR, and stellar mass, respectively. We have characterized the spatial extent of each component and derived the best-fit models in section 3. Here we interpret these parametric models to investigate the radial dependence of physical properties in massive SFGs with a compact dusty core. For the CO data, we estimate the radial profile of molecular gas masses from the CO \(J=3-2\) luminosities, following the recipe of Tacconi et al. (2020) for SFGs in the main sequence, where two conversion factors are used: \(R_{31}\)=1.8 for CO \(J=3-2\) to CO \(J=1-0\) and \(\alpha_{\rm CO}=4.36\) for CO to H\({}_{2}\). For the 870 \(\mu\)m and 4.4 \(\mu\)m data, we do not directly convert the observed luminosities to the SFR and stellar mass, respectively. Galaxy-integrated values of the dust-obscured SFRs and stellar masses of the massive SFGs have been estimated from multi-wavelength data (T20). By scaling the best-fit models of the 870 \(\mu\)m and 4.4 \(\mu\)m emissions to the galaxy-integrated values, we obtain the radial profiles of the SFR and stellar mass, respectively. In this work, we do not take into account dust-unobscured star formation in the radial profile of the SFR. The spatial distribution of H\(\alpha\) emission, another tracer of star formation, is more extended than that of the dust continuum emission in massive SFGs (Wilman et al., 2020; Tadaki et al., 2020), suggesting that the unobscured component dominates star formation in the outer region. However, since the H\(\alpha\)-based SFR corresponds to only 10% of the total SFR in massive SFGs (van Dokkum et al., 2015), including the
H\(\alpha\) component would not significantly affect the estimate of the half-SFR radii.
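To make the conversion chain explicit, the sketch below turns a velocity-integrated CO \(J=3-2\) flux from Table 1 into a molecular gas mass with the standard line-luminosity formula of Solomon & Vanden Bout (2005) and the two conversion factors quoted above; this is our illustration rather than the analysis code of this work, and the choice of the Planck15 cosmology is an assumption.

```python
import numpy as np
from astropy.cosmology import Planck15

NU_CO32_REST = 345.796            # GHz, CO J=3-2 rest frequency
R31, ALPHA_CO = 1.8, 4.36         # conversion factors quoted in the text

def gas_mass(sdv_jykms, z):
    """M_gas [Msun] from a velocity-integrated CO(3-2) flux [Jy km/s]."""
    dl = Planck15.luminosity_distance(z).to('Mpc').value
    nu_obs = NU_CO32_REST / (1.0 + z)
    # L'_CO [K km/s pc^2] (Solomon & Vanden Bout 2005)
    lprime_32 = 3.25e7 * sdv_jykms * nu_obs**-2 * dl**2 * (1.0 + z)**-3
    lprime_10 = R31 * lprime_32   # J=3-2 to J=1-0, following the text
    return ALPHA_CO * lprime_10

# Example: U4-17519 (Sdv = 1.61 Jy km/s at z = 2.222)
print('log M_gas ~ %.1f' % np.log10(gas_mass(1.61, 2.222)))
```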
Then, we calculate the radial profiles of the specific SFR (sSFR), defined as the SFR per unit stellar mass, for the eight massive SFGs (Figure 4). In the "inside-out growth" scenario (see Section 1), a core forms first in a massive galaxy and then star formation propagates outward. In this phase, the sSFR is larger in the outer region than in the inner region because ongoing star-formation activity contributes to forming an extended disk. Our results, however, show that the sSFR increases towards the center, which is the opposite of the trend reported in previous studies of massive galaxies (Nelson et al., 2016; Tacchella et al., 2018; Spilker et al., 2019). If the current star formation is maintained, the radial profile of the stellar mass will become more centrally concentrated. This is supported by the fact that the Sersic index of the massive SFGs is higher than the \(n=1\) observed in less massive SFGs. The massive SFGs may be building up a central core and transforming their morphology from disk-dominated to bulge-dominated, eventually leading to an \(n=4\) profile as seen in massive quiescent galaxies at similar redshifts (Wuyts et al., 2011; Lang et al., 2014).
Moreover, we find that the central region has a shorter gas-depletion timescale \(\tau_{\rm depl}\), defined as the gas mass divided by the SFR, than the outer region. The gas-depletion timescales in starburst galaxies are 0.5 dex shorter than those in normal SFGs, even when the same CO-to-H\({}_{2}\) conversion factor is used (Kennicutt & De Los Reyes, 2021). In normal SFGs, a radial dependence of \(\tau_{\rm depl}\) is not seen in the regime where molecular gas dominates the interstellar medium (Leroy et al., 2008). These results suggest the existence of a bimodality in star-formation activity, the normal mode and the starburst mode, as also pointed out by studies of high-redshift galaxies (Daddi et al., 2010; Genzel et al., 2010). The physical condition of the gas in the central region of massive SFGs is likely similar to that in starburst galaxies, in which galaxy-galaxy mergers could trigger intense star formation. Our sample of massive SFGs shows no indication of major mergers, but some of them have potential small companions within \(\sim 10\) kpc in their JWST 4.4 \(\mu\)m images (Figure 2). The tidal interaction with small satellites can develop non-axisymmetric structures in a galaxy, such as spiral arms, which drive gas flows to the center (Mihos & Hernquist, 1994). The spiral-arm-driven starburst scenario is supported by the fact that non-axisymmetric substructures are identified in the JWST 4.4 \(\mu\)m images after the primary component is subtracted (Figure 2).
The radial profile of the gas-mass fraction \(f_{\rm gas}\), defined as the gas mass divided by the stellar mass, tells us where galaxies begin to quench. The gas-mass fraction is constant over the galaxy at a large value of about 1.0, indicating that the galaxies are gas-rich over the entire region. Even if a small CO-to-H\({}_{2}\) conversion factor of \(\alpha_{\rm CO}=1\), appropriate for the starburst mode, is applied, the gas-mass fraction is still larger than that in nearby SFGs (Saintonge et al., 2017). The presence of such large gas reservoirs implies that quenching has not begun in the massive SFGs. If cold gas is not accreted onto the galaxies from their surroundings, the existing gas begins to be consumed from the center by star formation within several hundred million years. Our sample of massive SFGs is likely to be in the phase immediately before "inside-out" quenching (Tacchella et al., 2015).
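To illustrate how the curves in Figure 4 follow from the best-fit parametric models, the sketch below constructs Sersic surface-density profiles for the three tracers and forms the three ratios; all sizes and normalizations here are illustrative placeholders, not the measured values.

```python
import numpy as np

def sersic_profile(r, re, n, total):
    """Sersic surface-density profile, normalized so its disk integral
    equals `total` (uses the approximation b_n ~ 2n - 1/3)."""
    bn = 2.0 * n - 1.0 / 3.0
    shape = np.exp(-bn * ((r / re) ** (1.0 / n) - 1.0))
    return total * shape / np.trapz(2.0 * np.pi * r * shape, r)

r = np.linspace(0.05, 8.0, 400)           # galactocentric radius [kpc]
# Illustrative totals and sizes only (not the measured values):
sigma_sfr  = sersic_profile(r, re=1.1,  n=1.0,  total=3e2)   # Msun/yr
sigma_star = sersic_profile(r, re=1.7,  n=1.76, total=2e11)  # Msun
sigma_gas  = sersic_profile(r, re=1.75, n=1.0,  total=2e11)  # Msun

ssfr   = sigma_sfr / sigma_star           # specific SFR [1/yr]
t_depl = sigma_gas / sigma_sfr            # gas-depletion timescale [yr]
f_gas  = sigma_gas / sigma_star           # gas-mass fraction
```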
Finally, we note a few caveats in our analysis. Our approach implicitly assumes that the luminosity-to-mass ratios are constant within a galaxy. We use the 4.4 \(\mu\)m emission, which corresponds to the rest-frame 1.5 \(\mu\)m, as the tracer of the spatial distribution of the stellar mass. The rest-frame 1.5 \(\mu\)m emission is less affected by dust extinction.
Figure 3: Size comparisons between the CO, 870 \(\mu\)m, and 4.4 \(\mu\)m emissions for eight massive SFGs (magenta pentagons). The solid lines show the 1:1 correspondence between the circularized effective radii compared. Blue circles indicate the T20 sample with ALMA 870 \(\mu\)m and JWST 4.4 \(\mu\)m data, but without CO data.
Nevertheless, Zhang et al. (2023) show, using radiative transfer models, that even the 4.4 \(\mu\)m sizes are overestimated by about 75 percent relative to the half-mass radii in massive SFGs. If this effect is significant in our sample, the true spatial extent of the stars is more compact than that of the SFR, indicating a suppressed sSFR in the center. Also, a radial gradient of the dust temperature affects the conversion of the 870 \(\mu\)m flux density to the total infrared luminosity or SFR. When the dust temperature is higher at the center, the SFR profiles could be more centrally concentrated than the 870 \(\mu\)m ones. A deviation from the parametric models is another obstacle to accurately characterizing the spatial distribution of the emission. Unlike in the JWST 4.4 \(\mu\)m images, sub-structures are not visible in the ALMA 870 \(\mu\)m images. This may simply be due to the low signal-to-noise ratio (\(\sim\)10-30) of the ALMA observations. The peak flux density in the residual images of the JWST 4.4 \(\mu\)m emission is less than 15% of the peak in the original image (Figure 2). To capture the sub-structures of the dust emission, it is necessary to have deeper 870 \(\mu\)m images than those used in this work, as demonstrated in previous studies of bright galaxies (e.g., Iono et al., 2016; Hodge et al., 2019). Although these uncertainties remain to be addressed in the future, we are beginning to see, with excellent data from JWST and ALMA, how massive galaxies form a core.
We thank the referee for constructive comments that improved the paper. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2021.1.01291.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This work is based in part on observations made with the NASA/ESA/CSA James Webb Space Telescope. These observations are associated with program #1837. We acknowledge the PRIMER team for developing their observing program with a zero-exclusive-access period. The data products presented herein were retrieved from the Dawn JWST Archive (DJA). DJA is an initiative of the Cosmic Dawn Center, which is funded by the Danish National Research Foundation under grant No. 140. This work was supported by JSPS KAKENHI Grant Number 20K14526. Data analysis was in part carried out on the Multi-wavelength Data Analysis System operated by the Astronomy Data Center (ADC), National Astronomical Observatory of Japan. Software: CASA (CASA Team et al., 2022), UV-MULTIFIT (Marti-Vidal et al., 2014), GALFIT (Peng et al., 2010), Source Extractor (Bertin & Arnouts, 1996).
## Appendix A Size measurements through visibility fitting
We extract the spatial frequencies (\(u\),\(v\)) and the real/imaginary parts of the individual visibilities by using the CASA toolkit/table.getcol to derive the \(uv\) distances and the amplitudes. We show in Figure 5 the amplitudes of the visibilities as a function of the \(uv\) distance along the minor axis for the 870 \(\mu\)m and CO data. For nine of the 10 massive SFGs, the amplitudes of the CO data decline more rapidly with \(uv\) distance than those of the 870 \(\mu\)m data, indicating that the CO emission is more extended. We note that UVMULTIFIT fits the individual visibility data, not the averaged data shown in Figure 5.
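The declining curves in Figure 5 can also be modeled analytically: for a face-on exponential disk \(I(r)\propto e^{-r/r_{0}}\), the visibility amplitude is \(V(q)\propto[1+(2\pi r_{0}q)^{2}]^{-3/2}\), where \(q\) is the \(uv\) distance. A minimal amplitude-only fit of this form is sketched below; this is our illustration under that assumption, not the actual UVMULTIFIT run, which fits the un-averaged complex visibilities.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_disk_amp(q, r0):
    """Normalized visibility amplitude of a face-on exponential disk.
    q in wavelengths, r0 (scale length) in radians."""
    return (1.0 + (2.0 * np.pi * r0 * q) ** 2) ** -1.5

def fit_effective_radius(uv_dist, amp):
    """Fit binned amplitudes; return R_e in arcsec (R_e = 1.678 r0)."""
    (r0,), _ = curve_fit(exp_disk_amp, uv_dist, amp, p0=[1e-6])
    return 1.678 * r0 * 206265.0
```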
Figure 4: Best-fit model radial profiles of the sSFR, gas-depletion timescale, and gas-mass fraction. Solid red lines and shaded regions show the average and standard deviation of the eight massive SFGs, respectively. Black dashed lines show the individual models.
Figure 5: Normalized visibility amplitudes versus \(uv\) distances for our sample of 10 massive SFGs. Magenta and cyan symbols show the 870 \(\mu\)m and CO data, respectively. Dashed lines indicate the best-fitting models with exponential profiles. |
2308.03720 | Witt Differential Operators | For a smooth scheme $X$ over a perfect field $k$ of positive characteristic,
we define (for each $m\in\mathbb{Z}$) a sheaf of rings
$\mathcal{\widehat{D}}_{W(X)}^{(m)}$ of differential operators (of level $m$)
over the Witt vectors of $X$. If $\mathfrak{X}$ is a lift of $X$ to a smooth
formal scheme over $W(k)$, then for $m\geq0$ modules over
$\mathcal{\widehat{D}}_{W(X)}^{(m)}$ are closely related to modules over
Berthelot's ring $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}$ of differential
operators of level $m$ on $\mathfrak{X}$. Our construction therefore gives a
description of suitable categories of modules over these algebras, which
depends only on the special fibre $X$. There is an embedding of the category of
crystals on $X$ (over $W_{r}(k)$) into modules over
$\mathcal{\widehat{D}}_{W(X)}^{(0)}/p^{r}$; and so we obtain an alternate
description of this category as well. For a map $\varphi:X\to Y$ we develop the
formalism of pullback and pushforward of
$\mathcal{\widehat{D}}_{W(X)}^{(m)}$-modules and show all of the expected
properties. When working mod $p^{r}$, this includes compatibility with the
corresponding formalism for crystals, assuming $\varphi$ is smooth. In this
case we also show that there is a ``relative de Rham Witt resolution''
(analogous to the usual relative de Rham resolution in $\mathcal{D}$-module
theory) and therefore that the pushforward of (a quite general subcategory of)
modules over $\mathcal{\widehat{D}}_{W(X)}^{(0)}/p^{r}$ can be computed via the
reduction mod $p^{r}$ of Langer-Zink's relative de Rham Witt complex. Finally
we explain a generalization of Bloch's theorem relating integrable de Rham-Witt
connections to crystals. | Christopher Dodd | 2023-08-07T16:46:18Z | http://arxiv.org/abs/2308.03720v2 | # Witt differential operators
###### Abstract.
For a smooth scheme \(X\) over a perfect field \(k\) of positive characteristic, we define (for each \(m\in\mathbb{Z}\)) a sheaf of rings \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\) of differential operators (of level \(m\)) over the Witt vectors of \(X\). If \(\mathfrak{X}\) is a lift of \(X\) to a smooth formal scheme over \(W(k)\), then for \(m\geq 0\) modules over \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\) are closely related to modules over Berthelot's ring \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\) of differential operators of level \(m\) on \(\mathfrak{X}\). Our construction therefore gives a description of suitable categories of modules over these algebras, which depends only on the special fibre \(X\). There is an embedding of the category of crystals on \(X\) (over \(W_{r}(k)\)) into modules over \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\); and so we obtain an alternate description of this category as well. For a map \(\varphi:X\to Y\) we develop the formalism of pullback and pushforward of \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-modules and show all of the expected properties. When working mod \(p^{r}\), this includes compatibility with the corresponding formalism for crystals, assuming \(\varphi\) is smooth. In this case we also show that there is a "relative de Rham Witt resolution" (analogous to the usual relative de Rham resolution in \(\mathcal{D}\)-module theory) and therefore that the pushforward of (a quite general subcategory of) modules over \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\) can be computed via the reduction mod \(p^{r}\) of Langer-Zink's relative de Rham Witt complex. Finally we explain a generalization of Bloch's theorem relating integrable de Rham-Witt connections to crystals.
###### Contents
* 1 Introduction
* 1.1 Notations and conventions
* 1.2 Acknowledgements
* 2 Higher Derivations on Witt Vectors
* 2.1 Local Coordinates
* 2.2 Complement: The algebra \(\widehat{\mathcal{D}}^{(0)}_{W(X),\mathrm{crys}}\)
* 3 Accessibility
* 3.1 Accessible modules
* 3.2 The crystalline version
* 3.3 Frobenius descent
* 4 Operations on Accessible Modules
* 4.1 Operations on modules: Right Modules and the left-right interchange
* 4.2 Operations on modules: Pull-Back
* 4.3 Operations on modules: Pushforward
* 4.4 The de Rham-Witt resolution
* 5 The algebra \(\widehat{\mathcal{U}}\), and applications
* 5.1 The algebra \(\widehat{\mathcal{U}}\).
* 5.2 De Rham-Witt connections mod \(p^{r}\)
## 1. Introduction
Let \(X\) be a smooth algebraic variety over \(\mathbb{C}\). Then one may attach to \(X\) its (algebraic) de Rham cohomology, \(\mathbb{H}_{\mathrm{dR}}^{\cdot}(X)\), which is defined as the hypercohomology of the algebraic de Rham complex of \(X\). It is a fundamental theorem of Grothendieck that this cohomology is isomorphic to the singular cohomology of the analytic space attached to \(X\), \(X^{\mathrm{an}}\). Guided by the philosophy that any cohomology theory should come with a good theory of local coefficients, Grothendieck defined a site, the infinitesimal site of \(X\), which has the property that the de Rham cohomology of \(X\) is isomorphic to the cohomology of the structure sheaf of \(X\) in the infinitesimal site. He further showed that the data of a sheaf on the infinitesimal site is equivalent to the data of an \(\mathcal{O}_{X}\)-module \(\mathcal{M}\) (in the usual Zariski topology of \(X\)), equipped with a flat connection. Let us recall that this is a morphism
\[\nabla:\mathcal{M}\to\mathcal{M}\otimes\Omega_{X}^{1}\]
which satisfies the Leibniz rule \(\nabla(f\cdot m)=f\cdot\nabla(m)+m\otimes df\), as well as the following: we can extend \(\nabla\) to a morphism
\[\nabla:\mathcal{M}\otimes\Omega_{X}^{i}\to\mathcal{M}\otimes\Omega_{X}^{i+1}\]
via the rule \(\nabla(m\otimes\phi)=\nabla(m)\cdot\phi+m\otimes d\phi\), where the multiplication in the first term denotes the map \((\mathcal{M}\otimes\Omega_{X}^{1})\otimes\Omega_{X}^{i}\to\mathcal{M}\otimes \Omega_{X}^{i+1}\) coming from multiplication in the de Rham complex. Then we demand that \(\nabla\circ\nabla=0\) (this is what it means for a connection to be flat).
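For orientation, it may help to record the standard local form of this condition (a routine computation, included here for convenience): if \(\mathcal{M}\) is free with a chosen trivialization, we may write \(\nabla=d+A\) for a matrix of one-forms \(A\), and then for any section \(m\) we have

\[\nabla(\nabla(m))=d(Am)+A\wedge dm+A\wedge Am=(dA+A\wedge A)\cdot m\]

so that the flatness condition \(\nabla\circ\nabla=0\) is precisely the Maurer-Cartan equation \(dA+A\wedge A=0\).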
Therefore, to any such \((\mathcal{M},\nabla)\) we can associate a complex of sheaves
\((\mathcal{M}\otimes\Omega_{X}^{\cdot},\nabla)\), whose hypercohomology \(\mathbb{H}_{\mathrm{dR}}^{\cdot}(\mathcal{M})\) is the de Rham cohomology of \(\mathcal{M}\). If \(\tilde{\mathcal{M}}\) is the sheaf in the infinitesimal site associated to \((\mathcal{M},\nabla)\), then there is an isomorphism
\[\mathbb{H}_{\mathrm{dR}}^{\cdot}(\mathcal{M})\tilde{\to}\mathbb{H}_{\mathrm{inf}}^{\cdot}(\tilde{\mathcal{M}})\]
where the cohomology group on the right is the cohomology of \(\tilde{\mathcal{M}}\) in the infinitesimal site (c.f. [22]).
At around the same time, the theory of \(\mathcal{O}_{X}\)-modules with flat connection, in a rather different guise, was beginning to be developed, by both M. Sato and his collaborators in Kyoto and by J. Bernstein and his collaborators in Moscow. This different guise goes under the name of \(\mathcal{D}\)-modules1, where \(\mathcal{D}\) stands for the sheaf of (finite order) differential operators on \(X\). In fact, a sheaf of modules over \(\mathcal{D}\) is equivalent to an \(\mathcal{O}_{X}\)-module with flat connection. Thus \(\mathcal{D}\)-modules (i.e. sheaves of modules over \(\mathcal{D}\)) provide another (equivalent) answer to the question of how to find a good theory of local coefficients for the de Rham cohomology. Furthermore, the fact that \(\mathcal{D}\) is itself a locally noetherian sheaf of algebras with a nice presentation allows one to develop the theory in new directions. For instance, there is a well behaved notion of coherence for \(\mathcal{D}\)-modules, and to a coherent \(\mathcal{D}\)-module one may attach an invariant called the singular support, which is a closed subvariety of \(T^{*}X\), the cotangent bundle on \(X\).
Footnote 1: To be completely precise, Sato and his school worked with complex analytic spaces, while Bernstein worked with algebraic varieties. The two theories are similar enough, and had enough influence on each other, that we will mention them both in this introduction
Furthermore, if \(\varphi:X\to Y\) is a morphism of smooth algebraic varieties over \(\mathbb{C}\), then there is a formalism of push-forward and pull-back. As there is, in general, no morphism of sheaves of rings between \(\varphi^{-1}(\mathcal{D}_{Y})\) and \(\mathcal{D}_{X}\), this formalism is a bit
non-trivial. Let us take a moment to review it. First, the pullback: a \(\mathcal{D}\)-module is equivalent to a module with flat connection, so if one has a flat connection \(\nabla\) on some \(\mathcal{O}_{Y}\)-module \(\mathcal{M}\), then the composed map
\[\varphi^{*}\mathcal{M}\xrightarrow{\varphi^{*}\nabla}\varphi^{*}\mathcal{M}\otimes\varphi^{*}\Omega^{1}_{Y}\to\varphi^{*}\mathcal{M}\otimes\Omega^{1}_{X}\]
is a flat connection on \(\varphi^{*}\mathcal{M}\). Applying this to \(\mathcal{D}_{Y}\) itself, one endows \(\varphi^{*}\mathcal{D}_{Y}\) with the structure of a left \(\mathcal{D}_{X}\)-module. As it is also a right \(\varphi^{-1}(\mathcal{D}_{Y})\)-module via the right action of \(\mathcal{D}_{Y}\) on itself, we see that \(\varphi^{*}\mathcal{D}_{Y}\) acquires the structure of a \((\mathcal{D}_{X},\varphi^{-1}(\mathcal{D}_{Y}))\) bimodule; this object is denoted \(\mathcal{D}_{X\to Y}\). Thus we may define a functor on derived categories
\[L\varphi^{*}:D(\mathcal{D}_{Y}-\mathrm{mod})\to D(\mathcal{D}_{X}-\mathrm{mod})\]
via
\[L\varphi^{*}\mathcal{M}\mathrel{\mathop{:}}=\mathcal{D}_{X\to Y}\otimes_{ \varphi^{-1}(\mathcal{D}_{Y})}^{L}\varphi^{-1}(\mathcal{M}\,)\]
If \(\mathcal{M}\) is an object in \(\mathcal{D}_{Y}-\mathrm{mod}\), then \(\mathcal{H}^{0}(L\varphi^{*}\mathcal{M})=\varphi^{*}\mathcal{M}\), equipped with the flat connection defined above.
Now let us consider the push-forward. The existence of the bimodule \(\mathcal{D}_{X\to Y}\) allows us to construct a push-forward on _right_ \(\mathcal{D}\)-modules as follows: if \(\mathcal{N}^{\cdot}\) is a complex of right \(\mathcal{D}_{X}\)-modules on \(X\), then
\[\mathcal{N}^{\cdot}\otimes_{\mathcal{D}_{X}}^{L}\mathcal{D}_{X\to Y}\]
is a complex of _right_\(\varphi^{-1}(\mathcal{D}_{Y})\)-modules. Therefore we obtain a functor
\[\int_{\varphi}:D(\mathrm{mod}-\mathcal{D}_{X})\to D(\mathrm{mod}-\mathcal{D}_ {Y})\]
(where \(\mathrm{mod}-\mathcal{D}\) stands for the category of right \(\mathcal{D}\)-modules) defined as
\[\int_{\varphi}\mathcal{N}^{\cdot}:=R\varphi_{*}(\mathcal{N}^{\cdot}\otimes_{\mathcal{D}_{X}}^{L}\mathcal{D}_{X\to Y})\]
Of course, as our aim is to study left \(\mathcal{D}\)-modules, and as \(\mathcal{D}\) is a non-commutative sheaf of rings, this does not satisfy our original goal. However, the situation is salvaged by the following:
**Proposition 1.1**.: _(c.f. [24], proposition 1.2.12) There is an equivalence of categories \(\mathcal{D}_{X}-\mathrm{mod}\to\mathrm{mod}-\mathcal{D}_{X}\). On the underlying \(\mathcal{O}_{X}\)-modules, this functor is given by \(\mathcal{M}\to\mathcal{M}\otimes_{\mathcal{O}_{X}}\omega_{X}\). This is referred to as the left-right interchange._
Therefore, one may obtain a push-forward functor on left \(\mathcal{D}\)-modules by applying the equivalence to right modules, applying the above functor \(\int_{\varphi}\), and then applying the equivalence from right to left \(\mathcal{D}_{Y}\)-modules. More efficiently, one may use the left-right interchange to directly define from \(\mathcal{D}_{X\to Y}\) a \((\varphi^{-1}(\mathcal{D}_{Y}),\mathcal{D}_{X})\)-bimodule, denoted \(\mathcal{D}_{Y\gets X}\) (c.f. the beginning of section 4.3 below for details) and then define
\[\int_{\varphi}:D(\mathcal{D}_{X}-\mathrm{mod})\to D(\mathcal{D}_{Y}-\mathrm{ mod})\]
via
\[\int_{\varphi}\mathcal{M}\mathrel{\mathop{:}}=R\varphi_{*}(\mathcal{D}_{Y \gets X}\otimes_{\mathcal{D}_{X}}^{L}\mathcal{M}\,)\]
Though one may easily prove many basic properties of the pushforward from this formula, it does not seem to lend itself to any obvious evaluation. For instance, if
\(Y=\operatorname{Spec}(\mathbb{C})\) is a point, and \(\mathcal{M}^{\cdot}=\mathcal{M}\) is a left \(\mathcal{D}_{X}\)-module, we would hope that the pushforward of \(\mathcal{M}\) agrees with the de Rham cohomology. That this is indeed the case (up to a harmless homological shift) is contained in the following fundamental
**Theorem 1.2**.: _(c.f. [24], section 1.5) Suppose \(Y\) is a point. Then \(\mathcal{D}_{Y\gets X}=\omega_{X}\) (with its natural right \(\mathcal{D}_{X}\)-module structure). Consider the de Rham complex of \(\mathcal{D}_{X}\), \((\mathcal{D}_{X}\otimes_{\mathcal{O}_{X}}\Omega_{X}^{\cdot},\nabla)\). Giving each term \(\mathcal{D}_{X}\otimes\Omega_{X}^{i}\) the structure of a right \(\mathcal{D}_{X}\)-module via the right multiplication of \(\mathcal{D}_{X}\) on itself makes this complex into a complex of right \(\mathcal{D}_{X}\)-modules. This complex is exact except at the rightmost term (where \(i=\text{dim}(X)\)), and we have that the action map_
\[\mathcal{D}_{X}\otimes_{\mathcal{O}_{X}}\omega_{X}\to\omega_{X}\]
_induces an isomorphism \(\mathcal{H}^{\text{dim}(X)}(\mathcal{D}_{X}\otimes_{\mathcal{O}_{X}}\Omega_ {X})\tilde{\to}\omega_{X}\)._
_In other words, \((\mathcal{D}_{X}\otimes_{\mathcal{O}_{X}}\Omega_{X}^{\cdot},\nabla)\) is a locally free resolution of \(\omega_{X}\) in the category of right \(\mathcal{D}_{X}\)-modules._
_Plugging this resolution into the definition of the pushforward yields_
\[\int_{\varphi}\mathcal{M}\tilde{\to}\mathbb{H}_{dR}(\mathcal{M})[\text{dim}(X)]\]
Along with the resolution of \(\omega_{X}\) as a right \(\mathcal{D}_{X}\)-module, there is (via the left-right-interchange) a corresponding resolution of \(\mathcal{O}_{X}\) as a left \(\mathcal{D}_{X}\)-module, called the Spencer resolution. It allows one to prove the other basic interpretation of the de Rham cohomology in terms of the category of modules with flat connection:
**Corollary 1.3**.: _There is a locally free resolution of \(\mathcal{O}_{X}\), in the category of left \(\mathcal{D}_{X}\)-modules, whose terms are \(\mathcal{D}_{X}\otimes_{\mathcal{O}_{X}}\mathcal{T}_{X}^{i}\) (here, \(\mathcal{T}_{X}^{i}\) is the \(i\)th exterior power of the tangent sheaf); and the differential is obtained from the differential in the de Rham complex of \(\mathcal{D}_{X}\) via the left-right interchange. Therefore, we have for any \(\mathcal{M}\in\mathcal{D}_{X}-\text{mod}\) an isomorphism_
\[\text{Ext}^{i}_{\mathcal{D}_{X}}(\mathcal{O}_{X},\mathcal{M})\tilde{\to} \mathbb{H}^{i}_{dR}(\mathcal{M})\]
After identifying \(\mathcal{D}\)-modules with sheaves on the infinitesimal site, this yields another proof of the fact that the de Rham cohomology of a connection agrees with its infinitesimal cohomology.
In fact there is a generalization of this to any smooth morphism \(\varphi:X\to Y\). In that case, one may consider the relative connection \(\nabla:\mathcal{D}_{X}\to\mathcal{D}_{X}\otimes_{\mathcal{O}_{X}}\Omega_{X/Y}^ {1}\) and the associated relative de Rham complex
\[(\mathcal{D}_{X}\otimes_{\mathcal{O}_{X}}\Omega_{X/Y}^{\cdot},\nabla)\]
This is again a locally free complex of right \(\mathcal{D}_{X}\)-modules, which is a resolution of \(\mathcal{D}_{Y\gets X}\). Using this, one may go on to show
**Theorem 1.4**.: _Let \(\varphi:X\to Y\) be smooth of relative dimension \(d\). Let \(\mathcal{M}\in\mathcal{D}_{X}-\text{mod}\), and \((\mathcal{M}\otimes_{\mathcal{O}_{X}}\Omega_{X/Y},\nabla)\) the corresponding relative de Rham complex. Then_
_1) There is an isomorphism_
\[\int_{\varphi}\mathcal{M}[-d]\tilde{=}R\varphi_{*}(\mathcal{M}\otimes_{ \mathcal{O}_{X}}\Omega_{X/Y}^{\cdot},\nabla)\]
_in the category \(D(\mathcal{O}_{Y}-\text{mod})\). The resulting \(\mathcal{D}_{Y}\)-module structure on the sheaf \(R^{i}\varphi_{*}(\mathcal{M}\otimes_{\mathcal{O}_{X}}\Omega_{X/Y}^{\cdot},\nabla)\) corresponds to the Gauss-Manin connection._
_2) Let \(\varphi^{!}=L\varphi^{*}[d]\). Then there is an isomorphism_
\[R\varphi_{*}R\mathcal{H}om_{\mathcal{D}_{X}}(\varphi^{!}\mathcal{N}^{\cdot}, \mathcal{M}^{\cdot})\tilde{\to}R\mathcal{H}om_{\mathcal{D}_{Y}}(\mathcal{N}^{ \cdot},\int_{\varphi}\mathcal{M}^{\cdot})\]
_for any \(\mathcal{N}^{\cdot}\in D(\mathcal{D}_{Y}-\text{mod})\) and any \(\mathcal{M}^{\cdot}\in D(\mathcal{D}_{X}-\text{mod})\). In particular, \(\int_{\varphi}\) is the right adjoint to \(\varphi^{!}\)._
For a nice exposition of this, we refer the reader to [27], chapter 4. Without going into any more detail, let us mention that the theory of \(\mathcal{D}\)-modules allows one to study many other interesting phenomena related to the topology of algebraic varieties. For instance, there is a very rich theory of nearby and vanishing cycles for \(\mathcal{D}\)-modules, there is a \(\mathcal{D}\)-module duality which is a generalization of Poincaré duality, and there are deep connections with Hodge theory.
Now let us turn to the main interest of this paper, the theory of varieties in characteristic \(p>0\). Fix now a perfect field \(k\) of characteristic \(p>0\); letters \(X\) and \(Y\) will now denote smooth varieties over \(k\). Let \(W(k)\) denote the \(p\)-typical Witt vectors of \(k\), and set \(W_{r}(k)=W(k)/p^{r}\). In this case it was predicted by Grothendieck that there should be a cohomology theory, the crystalline cohomology, denoted \(\mathbb{H}_{\text{crys}}(X;W_{r}(k))\), with coefficients in modules over \(W_{r}(k)\) (for each \(r\)). The crystalline cohomology is supposed to have the following property: if \(\mathfrak{X}_{r}\) is a flat lift of \(X\) to a scheme over \(W_{r}(k)\), then there is an isomorphism
\[\mathbb{H}_{\text{dR}}(\mathfrak{X}_{r})\tilde{\to}\mathbb{H}_{\text{crys}}(X; W_{r}(k)) \tag{1.1}\]
Furthermore, the crystalline cohomology should be the cohomology of the structure sheaf in a suitable site, the crystalline site of \(X\). One should then be able to take the limit over \(r\) and obtain a cohomology \(\mathbb{H}_{\text{crys}}(X)\) with values in \(W(k)\).
This prediction was borne out in Berthelot's monumental work [3]. In that work, he constructs the crystalline site of \(X\) (it is modeled after the infinitesimal site) and defines crystalline cohomology as the cohomology of the structure sheaf in the crystalline site; and proves, among many other things, the isomorphism 1.1. As the crystalline cohomology is defined for any \(X\), without using a lift, the theory shows that the de Rham cohomology of such a lift depends only on special fibre \(X\).
There is, in some cases, an interpretation of sheaves of \(\mathcal{O}\)-modules on the crystalline site over \(W_{r}(k)\) (we shall refer to them as \(W_{r}(k)\)_-crystals_ from now on) in terms of flat connections. Namely, if \(\mathfrak{X}_{r}\) is a lift of \(X\) as above, then there is an equivalence from \(W_{r}(k)\)-crystals on \(X\) to \(\mathcal{O}_{\mathfrak{X}_{r}}\)-modules with flat connection (satisfying an additional condition called nilpotence). This means that crystals always have such a description locally on \(X\). In general, however, \(X\) admits no such lift, and so there is no "concrete" description of the category of crystals analogous to the theory of \(\mathcal{D}_{X}\)-modules sketched above for complex varieties.
In order to find such a description, the first step is to identify the analogue of the de Rham complex for crystals. This task was accomplished in the monumental work [25] of Illusie (following ideas of Bloch [10] and Deligne). The required complex is called the de Rham-Witt complex, denoted \(W\Omega^{i}_{X}\). This is a complex of sheaves on the formal scheme of Witt vectors of \(X\), \(W(X)\). As \(W(X)\) is a functorially defined lift of \(X\) to mixed characteristic, it had been suspected (since at least [36]) that it might be a good place to look for cohomology theories which take values in \(W(k)\), and the de Rham-Witt complex realizes this vision. The theory of this complex is
extensively developed in [25], but for now let us only mention the fact that there is a canonical isomorphism
\[\mathbb{H}^{\cdot}_{\mathrm{crys}}(X)\tilde{\to}\mathbb{H}^{\cdot}(W\Omega^{\cdot}_{X})\]
Furthermore, Etesse (in [20]) has constructed a functor from crystals to modules equipped with a de Rham-Witt connection, and has shown that, when the underlying \(\mathcal{O}\)-module \(\mathcal{M}\) is a vector bundle, there is a comparison
\[\mathbb{H}^{\cdot}_{\mathrm{crys}}(\mathcal{M})\tilde{\to}\mathbb{H}^{\cdot}_ {dRW}(\mathcal{M})\]
where on the right we have the hypercohomology of the de Rham-Witt complex with values in \(\mathcal{M}\).
It therefore seems natural to look for the analogue of the above theory of \(\mathcal{D}_{X}\)-modules and try to interpret crystals in terms of some kind of \(\mathcal{D}\)-modules on \(W(X)\); one would hope also for the analogues of the basic functorialities discussed above, as well as suitable comparisons between crystalline cohomology, the \(\mathcal{D}\)-module push-forward, and the de Rham-Witt cohomology.
This paper accomplishes exactly such a construction. In the remainder of this introduction, we will explain the idea behind our theory, as well as stating the main theorems of the paper and giving an outline of the contents.
Let us begin by supposing that \(X=\operatorname{Spec}(A)\) is affine, and that \(A\) admits local coordinates \(\{T_{1},\ldots T_{n}\}\), i.e., there is an etale map \(\mathrm{Spec}(A)\to\mathbb{A}^{n}_{k}\). Then one has a concrete description of the ring of (\(k\)-linear) differential operators \(\mathcal{D}_{A}\) as the subring of \(\mathrm{End}_{k}(A)\) generated by multiplication by elements of \(A\) and operators of the form \(\partial^{[j]}_{i}\), where \(\partial^{[j]}_{i}\) is the unique differential operator on \(A\) satisfying
\[\partial^{[j]}_{i}(T^{m}_{i})=\binom{m}{j}T^{m-j}_{i}\]
and \(\partial^{[j]}_{i}(T^{m}_{l})=0\) for \(l\neq i\). The collection of operators \(\partial^{[j]}_{i}\) is very well behaved; for instance, we have for all \(j\) the relation
\[\partial^{[pj]}_{i}\circ F=F\circ\partial^{[j]}_{i}\]
where \(F:A\to A\) is the Frobenius. The subring of \(\mathcal{D}_{A}\) generated by \(A\) and the operators \(\{\partial^{[j]}_{i}\}_{j\leq p^{m}}\) is denoted\({}^{2}\) \(\overline{\mathcal{D}}^{(m)}_{A}\), and the above relation is the key to showing that the functor of pullback by Frobenius can be upgraded to a functor
Footnote 2: The notation in this area is not entirely standardized; see directly below for our conventions on rings of differential operators
\[F^{*}:\overline{\mathcal{D}}^{(m)}_{A}-\mathrm{mod}\to\overline{\mathcal{D}}^ {(m+1)}_{A}-\mathrm{mod}\]
In fact, this functor is an equivalence of categories.
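To see where the key relation comes from, one can check it directly on monomials (a standard computation, recorded here for convenience): since \(F(T_{i}^{m})=T_{i}^{pm}\), Lucas' congruence \(\binom{pm}{pj}\equiv\binom{m}{j}\ (\mathrm{mod}\ p)\) gives

\[\partial_{i}^{[pj]}(F(T_{i}^{m}))=\binom{pm}{pj}T_{i}^{pm-pj}=\binom{m}{j}(T_{i}^{m-j})^{p}=F(\partial_{i}^{[j]}(T_{i}^{m}))\]

which is exactly the relation \(\partial_{i}^{[pj]}\circ F=F\circ\partial_{i}^{[j]}\) evaluated on monomials.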
Now consider the ring \(W(A)\). It is a \(W(k)\) algebra, equipped with a lift of Frobenius \(F\) and an operator \(V\) which acts as \(p\cdot F^{-1}\). The filtration \(V^{i}(W(A))\) is a decreasing filtration by ideals, with respect to which \(W(A)\) is complete. One has that \(p^{i}\in V^{i}(W(A))\), and \(W(A)\) is also \(p\)-adically complete. There is a canonical isomorphism
\[W(A)/V(W(A))\tilde{\to}A \tag{1.2}\]
On this algebra we construct a family of operators, denoted \(\{\partial_{i}\}_{\lambda}\), where \(\lambda\in\mathbb{Q}^{+}\) is a positive rational number of the form \(j/p^{r}\) with \(j\) a positive integer and \(r\geq 0\). For all \(\lambda\) the operators \(\{\partial_{i}\}_{\lambda}\) preserve the filtration \(\{V^{i}(W(A))\}\). If \(\lambda\in\mathbb{Z}\) then \(\{\partial_{i}\}_{\lambda}\) is a lift of \(\partial_{i}^{[\lambda]}\), via the isomorphism (1.2). If \(\operatorname{val}_{p}(\lambda)=-r<0\) then \(\{\partial_{i}\}_{\lambda}(W(A))\subset V^{r}(W(A))\); this means that for any such \(\lambda\) and any \(\alpha\in W(A)\) we can define the operator \(F^{-r}(\alpha)\cdot\{\partial_{i}\}_{\lambda}:W(A)\to W(A)\) (this is because, while \(F^{-r}(\alpha)\notin W(A)\) in general, its action on \(V^{r}(W(A))\) is well-defined).
Most importantly, the operators satisfy the relations
\[\{\partial_{i}\}_{\lambda}F=F\{\partial_{i}\}_{\lambda/p}\]
and therefore
\[\{\partial_{i}\}_{\lambda}V=V\{\partial_{i}\}_{p\lambda}\]
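Let us note that the second relation is a formal consequence of the first (a manipulation valid wherever \(F^{-1}\) makes sense): substituting \(p\lambda\) for \(\lambda\) in the first relation gives \(\{\partial_{i}\}_{p\lambda}F=F\{\partial_{i}\}_{\lambda}\), whence \(\{\partial_{i}\}_{\lambda}F^{-1}=F^{-1}\{\partial_{i}\}_{p\lambda}\); since \(V\) acts as \(p\cdot F^{-1}\), we obtain

\[\{\partial_{i}\}_{\lambda}V=p\cdot F^{-1}\{\partial_{i}\}_{p\lambda}=V\{\partial_{i}\}_{p\lambda}.\]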
As in the case of \(\partial_{i}^{[j]}\), there is a formula for the action of \(\{\partial_{i}\}_{\lambda}\) on \(W(A)\) in terms of coordinates; more important than this is the fact that there is an intrinsic construction of these operators in terms of Hasse-Schmidt derivations on \(W(A)\) (this is detailed in Theorem 2.5 below). In particular, the algebra of \(W(k)\)-linear endomorphisms of \(W(A)\) generated\({}^{3}\) by \(W(A)\) itself and terms of the form \(F^{-v}(\alpha)\cdot\{\partial_{i}\}_{\lambda}\) (where \(v=\max\{0,-\operatorname{val}_{p}(\lambda)\}\)) can be defined independently of any choice of coordinates. We term this algebra the Witt-differential operators of \(W(A)\), denoted \(\widehat{\mathcal{D}}_{W(A)}\). For each integer \(m\) (positive or negative), we have also the differential operators of level \(\leq m\); this is the subalgebra of \(\widehat{\mathcal{D}}_{W(A)}\) generated\({}^{4}\) by operators \(F^{-v}(\alpha)\cdot\{\partial_{i}\}_{\lambda}\) for which \(\operatorname{val}_{p}(\lambda)\leq m\). This algebra, denoted \(\widehat{\mathcal{D}}_{W(A)}^{(m)}\), can be defined independently of the choice of coordinates (all of this is detailed in 2.1 below).
Footnote 3: strictly speaking, we need to work with a kind of completion of this algebra
Footnote 4: as above, generated in the topological sense
At this point the reader might (quite reasonably) ask: what is the justification for this definition? And what is the relationship between modules over this ring and crystals (or flat connections)? Both of these questions are answered by the following construction: as \(A\) is affine, it admits a lift to a \(p\)-adically complete, \(W(k)\)-flat algebra \(\mathcal{A}\). We may choose an endomorphism \(F:\mathcal{A}\to\mathcal{A}\) which lifts the Frobenius map (in fact, as \(A\) admits local coordinates, we can even demand that \(\mathcal{A}\) admits local coordinates \(\{T_{1},\ldots T_{n}\}\) for which \(F(T_{i})=T_{i}^{p}\); we call such lifts _coordinatized_). By a basic property of the Witt vectors, such a lift induces a morphism
\[\Phi:\mathcal{A}\to W(A)\]
which intertwines the Frobenius lift \(F\) with the Witt vector Frobenius on \(W(A)\). On \(\mathcal{A}\) we have, for each \(m\geq 0\), Berthelot's ring of arithmetic differential operators \(\widehat{\mathcal{D}}_{\mathcal{A}}^{(m)}\). For the purpose of this introduction we focus on the case \(m=0\), in which case \(\widehat{\mathcal{D}}_{\mathcal{A}}^{(0)}\) is simply the \(p\)-adic completion of the subalgebra of \(\operatorname{End}_{W(k)}(\mathcal{A})\) generated by \(\mathcal{A}\) and the continuous, \(W(k)\)-linear derivations of \(\mathcal{A}\). This algebra acts faithfully on \(\mathcal{A}\) itself.
Then we have
**Theorem 1.5**.: _Consider the \((\widehat{\mathcal{D}}_{W(A)}^{(0)},\widehat{\mathcal{D}}_{\mathcal{A}}^{(0)})\)-bimodule Hom\({}_{W(k)}(\mathcal{A},W(A))\); this is a bimodule via the obvious actions of \(\widehat{\mathcal{D}}_{W(A)}^{(0)}\) on \(W(A)\) and \(\widehat{\mathcal{D}}_{\mathcal{A}}^{(0)}\) on \(\mathcal{A}\). Let \(\Phi^{*}\widehat{\mathcal{D}}_{\mathcal{A}}^{(0)}\) denote the completion of \(W(A)\otimes_{\mathcal{A}}\widehat{\mathcal{D}}_{\mathcal{A}}^{(0)}\) along the filtration \(V^{i}(W(A))\otimes_{\mathcal{A}}\widehat{\mathcal{D}}_{\mathcal{A}}^{(0)}\). Then there is an embedding_
\[\Phi^{*}\widehat{\mathcal{D}}_{\mathcal{A}}^{(0)}\to\text{Hom}_{W(k)}(\mathcal{ A},W(A))\]
_which takes a tensor \(\alpha\otimes P\) to the map \(a\to\alpha\cdot\Phi(P(a))\). The image of this embedding is exactly the \((\widehat{\mathcal{D}}^{(0)}_{W(A)},\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}})\)-bisubmodule generated by \(\Phi\). The object \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) is faithfully flat as a right \(\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\)-module, and projective as a left \(\widehat{\mathcal{D}}^{(0)}_{W(A)}\)-module; in fact it is a summand of \(\widehat{\mathcal{D}}^{(0)}_{W(A)}\) itself._
This is a combination of 3.2 and Theorem 3.9 below. This bimodule allows us to closely relate modules over \(\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) (which are, essentially, \(\mathcal{A}\)-modules with continuous flat connection over \(\mathcal{A}\)) to modules over \(\widehat{\mathcal{D}}^{(0)}_{W(A)}\). Before giving the exact statement, let us mention that these results are directly inspired by Berthelot's Frobenius descent. To state it, recall that, for each \(m\geq 0\), Berthelot puts the structure of a \((\widehat{\mathcal{D}}^{(m+1)}_{\mathcal{A}},\widehat{\mathcal{D}}^{(m)}_{ \mathcal{A}})\) bimodule on \(F^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\). This bimodule induces an equivalence of categories
\[\mathcal{M}\to F^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\otimes_{ \widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}}\mathcal{M}:=F^{*}\mathcal{M}\]
from \(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}-\mathrm{mod}\) to \(\widehat{\mathcal{D}}^{(m+1)}_{\mathcal{A}}-\mathrm{mod}\). The bimodule \(F^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) can be identified with the \((\widehat{\mathcal{D}}^{(m+1)}_{\mathcal{A}},\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})\) bi-submodule of \(\mathrm{End}_{W(k)}(\mathcal{A})\) generated by \(F\). So this theory shows that taking an \(m\)th power Frobenius map corresponds to increasing the level of the differential operators by \(m\). Roughly speaking, the map \(\Phi:\mathcal{A}\to W(A)\) is something like a perfection map, or an infinite-power Frobenius map. The Witt differential operators, then, are exactly the (infinite order) differential operators that correspond to Berthelot's arithmetic differential operators after taking the pullback by \(\Phi\). This is, in fact, how the definition of these operators was originally arrived at.
Now let us explain some important consequences of Theorem 1.5. Reducing everything \(\mathrm{mod}\ p\), we obtain a \((\widehat{\mathcal{D}}^{(0)}_{W(A)}/p,\mathcal{D}^{(0)}_{A})\)-bimodule \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}/p\) (here \(\mathcal{D}^{(0)}_{A}\) are the pd-differential operators on \(A\), which by definition are the reduction \(\mathrm{mod}\ p\) of the arithmetic differential operators of level \(0\)). We have
**Theorem 1.6**.: _Let \(\Phi_{1},\Phi_{2}\) be two coordinatized lifts of Frobenius on \(\mathcal{A}\)._
_1) There is a canonical isomorphism of \((\widehat{\mathcal{D}}^{(0)}_{W(A)}/p,\mathcal{D}^{(0)}_{A})\)-bimodules_
\[\epsilon_{12}:\Phi_{1}^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}/p\tilde{\to}\Phi_{2}^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}/p\]
_If \(\Phi_{3}\) is a third such lift we have the cocycle condition \(\epsilon_{23}\circ\epsilon_{12}=\epsilon_{13}\)._
_2) Suppose \(p>2\). Then there is in fact a canonical isomorphism of \((\widehat{\mathcal{D}}^{(0)}_{W(A)},\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}})\)-bimodules \(\epsilon_{12}:\Phi_{1}^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\tilde{\to}\Phi_{2}^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\). For a third lift \(\Phi_{3}\) we have the cocycle condition \(\epsilon_{23}\circ\epsilon_{12}=\epsilon_{13}\)._
The second statement of this theorem is proved in Theorem 3.6, by showing that \(\Phi_{1}-\Phi_{2}\) is itself a differential operator of the required type; this requires \(p>2\). To prove the first statement (for all \(p\)) we use a totally different technique, which relies on the fact that \(W(A)/p\to A\) is a square zero infinitesimal extension. Such extensions are ubiquitous in the theory of flat connections, because, as observed by Grothendieck, giving a flat connection on a sheaf is essentially the same as giving a canonical extension of that sheaf to every square zero extension (c.f. [7], intro, for a detailed discussion of this). Our proof of part 1) of the above theorem is essentially a rephrasing of this argument (see 5.12 and 5.15 for details).
Now let us suppose that \(X\) is an arbitrary smooth scheme over \(k\). We have the formal scheme \(W(X)\), as well as the formal scheme \(W(X)_{p=0}\), whose structure
sheaf is given by \(\mathcal{O}_{W(X)}/p\) (in all cases the topological space is \(X\) itself). From the construction, there is a sheaf of algebras \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\), whose sections over an open affine \(\operatorname{Spec}(A)\) which admits local coordinates agree with \(\widehat{\mathcal{D}}^{(0)}_{W(A)}\). In the case where \(X=\operatorname{Spec}(A)\) is as above, we let \(\mathfrak{X}=\operatorname{Specf}(\mathcal{A})\). Then \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) sheafifies to \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}\), a sheaf of \((\widehat{\mathcal{D}}^{(0)}_{W(X)},\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}})\) bimodules.
So, globalizing all of the above yields
**Corollary 1.7**.: _1) There is a sheaf of \((\widehat{\mathcal{D}}^{(0)}_{W(X)}/p,\mathcal{D}^{(0)}_{X})\) bimodules on \(X\), denoted \(\mathcal{B}^{(0)}_{X}\), whose sections over an open affine \(\operatorname{Spec}(A)\) which admits local coordinates are equal to \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}/p\). The functor \(\mathcal{M}\to\mathcal{B}^{(0)}_{X}\otimes_{\mathcal{D}^{(0)}_{X}}\mathcal{M}\) from \(\mathcal{D}^{(0)}_{X}-\text{mod}\) to \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p-\text{mod}\) is exact and fully faithful, and the derived functor_
\[\mathcal{B}^{(0)}_{X}\otimes_{\mathcal{D}^{(0)}_{X}}^{L}:D(\mathcal{D}^{(0)}_ {X}-\text{mod})\to D(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p-\text{mod})\]
_is fully faithful as well. A complex \(\mathcal{M}^{\cdot}\in D(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p-\text{mod})\) is called **accessible** if it is of the form \(\mathcal{B}^{(0)}_{X}\otimes_{\mathcal{D}^{(0)}_{X}}^{L}\mathcal{N}^{\cdot}\) for some \(\mathcal{N}^{\cdot}\in D(\mathcal{D}^{(0)}_{X}-\text{mod})\)._
_2) Let \(\mathcal{M}^{\cdot}\in D(\widehat{\mathcal{D}}^{(0)}_{W(X)}-\text{mod})\) be a cohomologically complete5 complex. Then \(\mathcal{M}^{\cdot}\) is said to be **accessible** if \(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k\) is accessible inside \(D(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p-\text{mod})\). The complex \(\mathcal{M}^{\cdot}\) is accessible iff, for any open affine \(\operatorname{Spec}(A)\subset X\), which admits local coordinates, and any coordinatized lift of Frobenius \(\Phi\), we have_
Footnote 5: This is a technical condition, which can be considered the analogue for complexes of being \(p\)-adically complete. It ensures that the functor \(\otimes_{W(k)}^{L}k\) is well behaved
\[\mathcal{M}^{\cdot}\tilde{\to}\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}}\mathcal{N}^{\cdot}\]
_for a cohomologically complete complex \(\mathcal{N}^{\cdot}\in D(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}-\text{mod})\) (here, the symbol \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}}\) denotes the derived tensor product, followed by the cohomological completion). In particular, the latter condition is independent of the choice of \(\Phi\). The inclusion functor \(D_{acc}(\widehat{\mathcal{D}}^{(0)}_{W(X)}-\text{mod})\to D_{cc}(\widehat{\mathcal{D}}^{(0)}_{W(X)}-\text{mod})\) admits a right adjoint._
_3) Suppose \(p>2\), and suppose that \(\mathfrak{X}\) is a smooth formal scheme over \(W(k)\) whose special fibre is \(X\) (it might not exist in general). Then there is an equivalence of categories_
\[D_{cc}(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}-\text{mod})\to D_{acc}( \widehat{\mathcal{D}}^{(0)}_{W(X)}-\text{mod})\]
_where on the left, \(D_{cc}\) stands for cohomologically complete complexes, and on the right we have the accessible complexes inside \(D(\widehat{\mathcal{D}}^{(0)}_{W(X)}-\text{mod})\)._
These facts are proved below in 3.1 (and for the existence of the right adjoint see 4.5). There, one will also find analogues of the above for abelian categories, at least under certain circumstances. For instance, one may work with \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\text{mod}\) (for some \(r\geq 1\)) and in this situation there is an abelian category of accessible modules, denoted \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\text{mod}_{\text{acc}}\), and inside that there are categories of quasicoherent6 and coherent modules. There is also an abelian category of coherent accessible modules over \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\) (c.f. 3.20). These results have some other natural variants
as well. Most importantly, one can replace \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\) with a certain completion, \(\widehat{\mathcal{D}}^{(0)}_{W(X),\mathrm{crys}}\). We have also the algebras \(\mathcal{D}^{(0)}_{X,\mathrm{crys}}\) and \(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X},\mathrm{crys}}\), modules over which correspond to sheaves with topologically nilpotent flat connection. The analogue of 1.7 holds in this context, and in fact the restriction \(p>2\) is unnecessary in part 3) (c.f. Theorem 3.6 below). In particular, we obtain an embedding of the category of quasi-coherent crystals on \(X\) (over \(W_{r}(k)\)) into the category of accessible \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\)-modules, and the same holds at the derived level (this is detailed in 3.2 below).
Another useful variant is the category of complete accessible modules over
\(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\); these are modules which are given as the completion (along the topology defined by \(V^{i}(\mathcal{O}_{W(X)})\)) of an accessible module; we show (c.f. Lemma 3.24) that this filtration is extremely well-behaved on accessible modules, and therefore that the resulting completion functor is exact and conservative; there is therefore a version in the derived category as well.
With the basic categories defined, the rest of the paper is devoted to the construction of the basic operations (the left-right interchange, pullback and pushforward), the analogue of the de Rham resolution (which we call the de Rham-Witt resolution), and the functor from accessible modules to de Rham-Witt connections. We give an overview of these sections now.
The first basic operation we discuss is the left-right interchange on accessible modules. To do so, we need to define the notion of an accessible _right_ module over \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\). Starting in the local case, we need the right handed version of the bimodule \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\); one may swiftly define it as \(\Phi^{!}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}:=\mathrm{Hom}_{\widehat{\mathcal{D}}^{(0)}_{W(A)}}(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}},\widehat{\mathcal{D}}^{(0)}_{W(A)})\). There is also a characterization of it as a certain submodule of \(\mathrm{Hom}_{\mathcal{A}}(W(A),\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}})\) (in line with what one would predict from Grothendieck duality; c.f. 4.1 below). It is a \((\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}},\widehat{\mathcal{D}}^{(0)}_{W(A)})\) bimodule and the analogues of Theorem 1.6 and 1.7 hold without any essential change. This already shows (via the left-right interchange on \(\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\)-modules) that there is a _local_ left-right interchange; to obtain the global interchange we need a little more.
First of all, we have
\[\omega_{\mathcal{A}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(0)}_{ \mathcal{A}}}\Phi^{!}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\tilde{=} \omega_{\mathcal{A}}\otimes_{\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}}\Phi^{! }\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\tilde{\to}W\omega_{A}\]
where \(W\omega_{A}\) is the highest nonzero entry in the de Rham-Witt complex. In particular \(W\omega_{A}\) inherits the structure of a right \(\widehat{\mathcal{D}}^{(0)}_{W(A)}\)-module, which can be shown to be independent of all the choices involved.
Next, we make the important
**Definition 1.8**.: (c.f. 4.5 below) Let \(\widehat{\mathcal{D}}^{(0)}_{W(X),\mathrm{acc}}\) be the image of \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\) under the right adjoint to the inclusion functor \(D_{acc}(\widehat{\mathcal{D}}^{(0)}_{W(X)}-\mathrm{mod})\to D_{cc}(\widehat{\mathcal{D}}^{(0)}_{W(X)}-\mathrm{mod})\). This sheaf admits the structure of a \((\widehat{\mathcal{D}}^{(0)}_{W(X)},\widehat{\mathcal{D}}^{(0)}_{W(X)})\) bimodule.
Then, when \(X=\mathrm{Spec}(A)\) admits local coordinates, we have
\[\widehat{\mathcal{D}}^{(0)}_{W(X),\mathrm{acc}}=\Phi^{*}\widehat{\mathcal{D}}^ {(0)}_{\mathfrak{X}}\widehat{\otimes}_{\widehat{\mathcal{D}}^{(0)}_{\mathfrak{ X}}}\Phi^{!}\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}\]
(here \(\widehat{\otimes}\) denotes the \(p\)-adic completion). This formula was inspired by Berthelot's isomorphism \(\widehat{\mathcal{D}}^{(m+1)}_{\mathfrak{X}}\tilde{=}F^{!}F^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\tilde{=}F^{*}F^{!}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\) in his Frobenius descent theory. So one may say that this bimodule, rather than \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\) itself, can be considered the fundamental object of the accessible theory7. Now, using this fact, one may put two commuting structures of a right \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\)-module on the object
Footnote 7: There are interesting aspects to Witt-differential operator theory beyond the accessible realm; however, we don’t discuss them much in this paper, as they relate mostly to the case where the level of differential operator is negative
\[W\omega_{X}\widehat{\otimes}_{\mathcal{O}_{W(X)}}\widehat{\mathcal{D}}^{(0)}_{W(X),\mathrm{acc}}\]
(where the hat denotes suitable completion, c.f. 4.10 for details). Then we have
**Corollary 1.9**.: _The functor \(\mathcal{M}^{\cdot}\to(W\omega_{X}\widehat{\otimes}_{\mathcal{O}_{W(X)}}\widehat{\mathcal{D}}^{(0)}_{W(X),\mathrm{acc}})\otimes^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}}\mathcal{M}^{\cdot}\) yields an equivalence from left accessible to right accessible modules over \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\). If \(X=\text{Spec}(A)\) admits local coordinates, then this equivalence is compatible with the classical equivalence from left to right \(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}\)-modules._
This is proved in 4.11 below.
Now we turn to pullback and pushforward. Let \(\varphi:X\to Y\) be a morphism of smooth varieties over \(k\). There is, by the functoriality of the Witt vectors, a canonical morphism \(W\varphi:W(X)\to W(Y)\), which lifts \(\varphi\). As in the classical case, the operations are defined by suitable bimodules; we in fact have both a \((\widehat{\mathcal{D}}^{(0)}_{W(X)},W\varphi^{-1}(\widehat{\mathcal{D}}^{(0)}_{W(Y)}))\)-bimodule \(\widehat{\mathcal{D}}^{(0)}_{W(X)\to W(Y),\mathrm{acc}}\) and a \((W\varphi^{-1}(\widehat{\mathcal{D}}^{(0)}_{W(Y)}),\widehat{\mathcal{D}}^{(0)}_{W(X)})\) bimodule, \(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),\mathrm{acc}}\) (which is obtained via the left right swap). The definitions are slightly involved; however, one can say that they are closely modeled on the analogous definitions in the classical case (c.f. 3.2 below). We have the following
**Definition 1.10**.: The functor \(LW\varphi^{*}:D(\widehat{\mathcal{D}}^{(0)}_{W(Y)}-\text{mod})\to D(\widehat{ \mathcal{D}}^{(0)}_{W(X)}-\text{mod})\) is defined as
\[\mathcal{M}^{\cdot}\to\widehat{\mathcal{D}}^{(0)}_{W(X)\to W(Y),\mathrm{acc}} \otimes^{L}_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(0)}_{W(Y)})}W\varphi^{-1}( \mathcal{M}^{\cdot})\]
The basic properties of this functor are given in the
**Theorem 1.11**.: _1) \(LW\varphi^{*}\) takes \(D_{\mathrm{acc}}(\widehat{\mathcal{D}}^{(0)}_{W(Y)}-\text{mod})\) to \(D_{\mathrm{acc}}(\widehat{\mathcal{D}}^{(0)}_{W(X)}-\text{mod})\)._
_2) Let \(LW\varphi^{*}:D(\widehat{\mathcal{D}}^{(0)}_{W(Y)}/p-\text{mod})\to D(\widehat{ \mathcal{D}}^{(0)}_{W(X)}/p-\text{mod})\) denote the functor defined by the bimodule \(\widehat{\mathcal{D}}^{(0)}_{W(X)\to W(Y),\mathrm{acc}}/p\). Then, under the equivalence of categories_
\[D(\mathcal{D}^{(0)}_{X}-\text{mod})\tilde{\to}D_{\mathrm{acc}}(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p-\text{mod})\]
_(and the analogous one for \(Y\), c.f. 1.7, part \(1\)) above), the functor \(LW\varphi^{*}\) corresponds to \(L\varphi^{*}\), the usual \(\mathcal{D}^{(0)}\)-module pullback._
_3) Let \(p>2\), and suppose that \(\mathfrak{X}\) and \(\mathfrak{Y}\) are smooth formal schemes (over \(W(k)\)), lifting \(X\) and \(Y\), respectively, and let \(\varphi:\mathfrak{X}\to\mathfrak{Y}\) be a morphism lifting \(\varphi\). Then, under the equivalence of categories_
\[D_{cc}(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}-\text{mod})\tilde{\to}D_{\mathrm{acc}}(\widehat{\mathcal{D}}^{(0)}_{W(X)}-\text{mod})\]
(and the analogous one for \(Y\), c.f. 1.7, part \(3\)) above), the functor \(LW\varphi^{*}\) corresponds to \(L\varphi^{*}\), the usual \(\widehat{\mathcal{D}}^{(0)}\)-module pullback._
These facts are proved in 4.2 below. Various standard consequences, such as the fact that pullback commutes with composition, are also derived there; there is also a version of part 3) for lifts of \(X\) and \(Y\) to \(W_{r}(k)\) for some \(r>1\). If \(X\) and \(Y\) are both affine, and we have coordinatized lifts of Frobenius on both of them, there is an "explicit" description of \(\widehat{\mathcal{D}}^{(0)}_{W(X)\to W(Y),\mathrm{acc}}\), which is 4.15 (this is the key to proving the above theorem).
Now let us turn to pushforward, where the situation is (mostly) similar.
**Definition 1.12**.: The functor \(\int_{W\varphi}:\)\(D_{\mathrm{acc}}(\widehat{\mathcal{D}}^{(0)}_{W(X)}-\mathrm{mod})\to D_{ \mathrm{acc}}(\widehat{\mathcal{D}}^{(0)}_{W(Y)}-\mathrm{mod})\) is defined by
\[\int_{W\varphi}\mathcal{M}^{\cdot}:=R(W\varphi)_{*}(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),\mathrm{acc}}\otimes^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}}\mathcal{M}^{\cdot})_{\mathrm{acc}}\]
where \(()_{\mathrm{acc}}\) denotes the right adjoint to the inclusion from accessible \(\widehat{\mathcal{D}}^{(0)}_{W(Y)}\)-modules to all \(\widehat{\mathcal{D}}^{(0)}_{W(Y)}\)-modules. There is also, for \(r\geq 1\), the analogous functor
\[\int_{W\varphi}\mathcal{M}^{\cdot}:=R(W\varphi)_{*}(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),\mathrm{acc}}/p^{r}\otimes^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}\mathcal{M}^{\cdot})_{\mathrm{acc}}\]
Unlike in the case of pullback, I don't have a proof that the object
\[R(W\varphi)_{*}(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),\mathrm{acc}}\otimes^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}}\mathcal{M}^{\cdot})\]
is accessible when \(\mathcal{M}^{\cdot}\) is; this question involves tricky issues related to when the projection formula holds for some extremely large objects. At any rate, with this definition we have the following analogue of Theorem 1.11:
**Theorem 1.13**.: _1) Under the equivalence of categories_
\[D(\mathcal{D}^{(0)}_{X}-\mathrm{mod})\tilde{\to}D_{acc}(\widehat{\mathcal{D} }^{(0)}_{W(X)}/p-\mathrm{mod})\]
_(and the analogous one for \(Y\), c.f. 1.7, part \(1\)) above), the functor \(\int_{W\varphi}\) corresponds to \(\int_{\varphi}\), the usual \(\mathcal{D}^{(0)}\)-module pushforward._
_2) Let \(p>2\), and suppose that \(\mathfrak{X}\) and \(\mathfrak{Y}\) are smooth formal schemes (over \(W(k)\)), lifting \(X\) and \(Y\), respectively, and let \(\varphi:\mathfrak{X}\to\mathfrak{Y}\) be a morphism lifting \(\varphi\). Then, under the equivalence of categories_
\[D_{cc}(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}-\mathrm{mod})\tilde{\to}D_{acc}(\widehat{\mathcal{D}}^{(0)}_{W(X)}-\mathrm{mod})\]
_(and the analogous one for \(Y\), c.f. 1.7, part \(3\)) above), the functor \(\int_{W\varphi}\) corresponds to \(\int_{\varphi}\), the usual \(\widehat{\mathcal{D}}^{(0)}\)-module pushforward._
_3) Fix \(r\geq 1\) and consider the functor_
\[\widehat{\int}_{W\varphi}\mathcal{M}^{\cdot}:=R(W\varphi)_{*}(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\text{acc}}/p^{r}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}^{\cdot}})\]
_where on the right we have the (derived) completion of \(\mathcal{M}^{\cdot}\) with respect to \(V^{i}(\mathcal{O}_{W(X)}/p^{r})\), and \(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\text{acc}}/p^{r}\) is a suitable completion of \(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),\text{acc}}/p^{r}\). Then \(\widehat{\int}_{W\varphi}\mathcal{M}^{\cdot}\) is isomorphic to the (derived) completion of \(\int_{W\varphi}\mathcal{M}^{\cdot}\) with respect to \(V^{i}(\mathcal{O}_{W(Y)}/p^{r})\). In other words, the completed pushforward is given by the "naive" formula._
As in the case of pullback, we can deduce many of the basic properties of pushforward from this theorem (this is all done in the section on the pushforward below).
Now, let us discuss how these functors are related. The main result is
**Theorem 1.14**.: _1) Let \(\varphi:X\to Y\) be smooth of relative dimension \(d\), and let \(W\varphi^{!}=LW\varphi^{*}[d]\). Then there is an isomorphism_
\[R\varphi_{*}R\mathcal{H}\text{om}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}}(W \varphi^{!}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})\tilde{\to}R\mathcal{H} \text{om}_{\widehat{\mathcal{D}}^{(0)}_{W(Y)}}(\mathcal{N}^{\cdot},\int_{W \varphi}\mathcal{M}^{\cdot})\]
_for any \(\mathcal{N}^{\cdot}\in D_{\text{acc}}(\widehat{\mathcal{D}}^{(0)}_{W(Y)}-\text{mod})\) and any \(\mathcal{M}^{\cdot}\in D_{\text{acc}}(\widehat{\mathcal{D}}^{(0)}_{W(X)}-\text{mod})\). In particular, \(\int_{W\varphi}\) is the right adjoint to \(W\varphi^{!}\)._
_2) Fix \(r\geq 1\), and suppose \(\mathcal{M}\in\widehat{\mathcal{D}}^{(0)}_{W(X)}-\text{mod}_{\text{qcoh}}\) is an accessible quasicoherent \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\)-module which is nilpotent. Let \(\tilde{\mathcal{M}}\) denote the associated crystal on \(X\) over \(W_{r}(k)\). Then, for each \(i\), \(\mathcal{H}^{i}(\int_{W\varphi}\mathcal{M})\) is an accessible quasicoherent \(\widehat{\mathcal{D}}^{(0)}_{W(Y)}\)-module which is nilpotent, and there is a functorial isomorphism_
\[\mathcal{H}^{i-d}(\int_{W\varphi}\mathcal{M})\tilde{\to}R^{i}\varphi_{*,\text {crys}}(\tilde{\mathcal{M}})\]
This is Theorems 4.32 and 4.34 below.
To finish things off, let us discuss the de Rham-Witt theory. By a theorem of Etesse, there is a natural functor from crystals over \(W_{r}(k)\) to de Rham-Witt connections over \(W_{r}\Omega^{1}_{X}\); and one may easily show that it extends to a functor to continuous de Rham-Witt connections over \(W\Omega^{1}_{X}/p^{r}\) (this object, being flat over \(W_{r}(k)\), is better behaved). Using this theory we put a canonical continuous de Rham-Witt connection on the object \(\widehat{\mathcal{D}}^{(0)}_{W(X),c-\text{acc}}/p^{r}\) (this is the completion of \(\widehat{\mathcal{D}}^{(0)}_{W(X),\text{acc}}/p^{r}\) along \(V^{i}(\mathcal{O}_{W(X)}/p^{r})\)). From this we deduce
**Theorem 1.15**.: _There is an equivalence of categories from the (abelian) category of accessible quasicoherent \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\) modules to the category of quasicoherent modules over \(W(X)_{p^{r}=0}\) with continuous flat connection over \(W\Omega^{1}_{X}/p^{r}\)._
The fact that this functor is fully faithful is actually quite straightforward; the essential surjectivity is trickier. In fact, the case of vector bundles is handled by the paper of Bloch [11], whose central lemma also forms the core of our proof (c.f. Lemma 5.18 below).
Finally, let us present the de Rham-Witt resolution. If \(\varphi:X\to Y\) is a smooth morphism of relative dimension \(d\), we have the relative de Rham-Witt complex of Langer-Zink, as developed in [32], denoted \(W\Omega_{X/Y}\). The de Rham-Witt connection
on \(\widehat{\mathcal{D}}^{(0)}_{W(X),c-\mathrm{acc}}/p^{r}\) yields also a continuous connection map
\[\nabla:\widehat{\mathcal{D}}^{(0)}_{W(X),c-\mathrm{acc}}/p^{r}\to\widehat{ \mathcal{D}}^{(0)}_{W(X),c-\mathrm{acc}}/p^{r}\widehat{\otimes}_{\mathcal{O}_{ W(X)}/p^{r}}W\Omega^{1}_{X/Y}/p^{r}\]
(as above the hat denotes completion with respect to \(V^{i}(\mathcal{O}_{W(X)}/p^{r})\)), and therefore an associated de Rham-Witt complex \((\widehat{\mathcal{D}}^{(0)}_{W(X),c-\mathrm{acc}}/p^{r}\widehat{\otimes}_{ \mathcal{O}_{W(X)}/p^{r}}W\Omega_{X/Y}/p^{r},\nabla)\). We have
**Theorem 1.16**.: _The complex \((\widehat{\mathcal{D}}^{(0)}_{W(X),c-\mathrm{acc}}/p^{r}\widehat{\otimes}_{\mathcal{O}_{W(X)}/p^{r}}W\Omega_{X/Y}/p^{r},\nabla)\) is exact except at the right-most term (which is the \(d\)th) and we have_
\[\mathcal{H}^{d}((\widehat{\mathcal{D}}^{(0)}_{W(X),c-\mathrm{acc}}/p^{r} \widehat{\otimes}_{\mathcal{O}_{W(X)}/p^{r}}W\Omega_{X/Y}/p^{r},\nabla)) \tilde{\to}\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\mathrm{acc}}/p^ {r}\]
_where \(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\mathrm{acc}}/p^{r}\) is the completion of \(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),\mathrm{acc}}/p^{r}\) along \(V^{i}(\mathcal{O}_{W(X)}/p^{r})\); when \(Y\) is a point it is simply \(W\omega_{X}/p^{r}\). It follows that, for any \(\mathcal{M}\in\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\mathrm{mod}_{acc}\) there is an isomorphism_
\[\widehat{\int}_{W\varphi_{p^{r}}=0}\mathcal{M}\tilde{\to}R\varphi_{*}( \widehat{\mathcal{M}}\widehat{\otimes}_{\mathcal{O}_{W(X)}/p^{r}}W\Omega^{ \cdot}_{X/Y}/p^{r})\]
_of sheaves over \(\mathcal{O}_{W(Y)}/p^{r}\); as above all of the completions are along \(V^{i}(\mathcal{O}_{W(X)}/p^{r})\)._
This is Theorem 4.39 below. The second part of the theorem therefore implies that crystalline cohomology for an arbitrary quasicoherent crystal can be computed via the de Rham-Witt complex mod \(p^{r}\). This result has antecedents in the work of Etesse [20] and Berthelot [6], who worked instead with the de Rham-Witt complex \(W_{r}\Omega_{X}\). To get a comparison theorem when working with that complex forces one to consider crystals which are flat over \(W_{r}(k)\) (Etesse considered only vector bundles). Despite these differences, the proofs (including the one in this paper) all rely essentially on the basic fact that the de Rham-Witt complex mod \(p\) is quasi-isomorphic to the de Rham complex of \(X\). In our strategy, though, evaluating the complex on a single object (namely \(\widehat{\mathcal{D}}^{(0)}_{W(X),c-\mathrm{acc}}/p^{r}\)) allows one to deduce the result for the entire category, including for those accessible modules which are not nilpotent. This appears to be new.
### Notations and conventions
Throughout the entire paper a perfect field \(k\) of characteristic \(p>0\) is fixed; as indicated in the introduction, some of the results depend on whether \(p=2\); we will indicate this when it occurs. We let \(W(k)\) denote the \(p\)-typical Witt vectors of \(k\).
Letters \(X,Y,Z\) denote smooth varieties over \(k\). Letters \(A,B,C\) denote smooth \(k\)-algebras, i.e., finite type \(k\)-algebras such that the induced morphism \(\mathrm{Spec}(A)\to\mathrm{Spec}(k)\) is smooth. Gothic letters \(\mathfrak{X},\mathfrak{Y},\mathfrak{Z}\) denote smooth formal schemes over \(W(k)\), whose special fibres are denoted by the corresponding standard letters. We use \(\mathcal{A}\), \(\mathcal{B}\), \(\mathcal{C}\) to denote smooth algebras over \(W(k)\); we use \(\mathfrak{X}_{r}\), \(\mathfrak{Y}_{r}\), \(\mathfrak{Z}_{r}\) to denote smooth schemes over \(W_{r}(k)\), and \(\mathcal{A}_{r}\), \(\mathcal{B}_{r}\), \(\mathcal{C}_{r}\) smooth algebras over \(W_{r}(k)\).
We shall make essential use of Berthelot's arithmetic differential operators; c.f. [4]. In particular, for \(m\geq 0\), \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\) denotes the arithmetic differential operators of level \(m\), a \(W(k)\)-flat and \(p\)-adically complete sheaf of algebras whose reduction mod \(p\) is denoted \(\mathcal{D}^{(m)}_{X}\). When \(m=0\) this algebra is the usual PD differential operators of [7]; for a quick definition and exposition of the basic properties of these algebras we recommend [8], chapter 2. In addition, the construction has been extended by Shiho to define algebras \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\) for \(m<0\); c.f. [38] for details.
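For the reader's orientation, let us also record the familiar coordinate picture in the simplest situation (a sketch only, assuming \(\mathfrak{X}=\operatorname{Specf}(\mathcal{A})\) is affine with local coordinates \(T_{1},\dots,T_{n}\) and corresponding derivations \(\partial_{1},\dots,\partial_{n}\)): sections of \(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}\) may then be written as sums

\[\sum_{\alpha\in\mathbb{N}^{n}}a_{\alpha}\partial^{\alpha},\qquad\partial^{\alpha}:=\partial_{1}^{\alpha_{1}}\cdots\partial_{n}^{\alpha_{n}}\]

with \(a_{\alpha}\in\mathcal{A}\) and \(a_{\alpha}\to 0\) \(p\)-adically as \(|\alpha|\to\infty\).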
For all such \(X\) over \(k\) we have the formal scheme of \(p\)-typical Witt vectors \(W(X)\), whose underlying topological space is \(X\) and whose topology is defined by a collection of ideals \(V^{i}(\mathcal{O}_{W(X)})\) (c.f. [13],[14] for a detailed account). None of these are locally finitely generated, but we have the finite type scheme \(W_{m}(X)\) whose structure sheaf is \(\mathcal{O}_{W(X)}/V^{m}(\mathcal{O}_{W(X)})\). We also have the formal schemes \(W(X)_{p^{r}=0}\) for any \(r\geq 1\), whose underlying sheaf of rings is given by \(\mathcal{O}_{W(X)}/p^{r}\). By a quasicoherent sheaf on this scheme we mean an inverse limit of quasicoherent sheaves on the finite type schemes \(W_{m}(X)_{p^{r}=0}\). The morphisms in this category are continuous maps of \(\mathcal{O}_{W(X)}/p^{r}\)-modules. This category is additive, though not abelian, and otherwise terribly behaved.
Finally, let us take a moment to discuss the notion of cohomological completeness. This notion, which also goes under the name derived completeness, has been developed extensively in, e.g., [28], [39], Tag 091N, and [18]. We will use as a reference [28], chapter 1, as it develops the notion in the context we need: a noncommutative sheaf of rings \(\mathcal{R}\) over a topological space \(X\), so that \(\mathcal{R}\) possesses a global central element \(p\) (called \(h\) in [28]) and such that \(\mathcal{R}\) has no nonzero \(p\)-torsion. For us \(\mathcal{R}\) will in fact be a flat \(W(k)\)-module. In this case a complex \(\mathcal{M}^{\cdot}\in D(\mathcal{R}-\mathrm{mod})\) is called cohomologically complete if
\[R\mathcal{H}om_{\mathcal{R}}(\mathcal{R}[p^{-1}],\mathcal{M}^{\cdot})=R\mathcal{H}om_{W(k)}(W(k)[p^{-1}],\mathcal{M}^{\cdot})=0\]
A \(p\)-torsion-free sheaf of modules is cohomologically complete iff it is \(p\)-adically complete in the usual sense (c.f. [28], lemma 1.5.4). Further, any complex in the image of the natural functor \(D(\mathcal{R}/p^{n}-\mathrm{mod})\to D(\mathcal{R}-\mathrm{mod})\) is cohomologically complete (for any \(n\geq 1\)). The collection of such complexes forms a thick triangulated subcategory, which we denote \(D_{cc}(\mathcal{R}-\mathrm{mod})\). The main facts about \(D_{cc}(\mathcal{R}-\mathrm{mod})\) that we need are, firstly, the Nakayama lemma: if \(\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{R}-\mathrm{mod})\), then \(\mathcal{M}^{\cdot}\otimes^{L}_{W(k)}k=0\) iff \(\mathcal{M}^{\cdot}=0\) (this is [28], proposition 1.5.8). Secondly, we shall use the fact that the inclusion functor \(D_{cc}(\mathcal{R}-\mathrm{mod})\to D(\mathcal{R}-\mathrm{mod})\) admits a right adjoint, the derived completion functor (c.f. [28], proposition 1.5.6); concretely, this functor is given by
\[\mathcal{M}^{\cdot}\to R\mathcal{H}om_{\mathcal{R}}(\mathcal{R}[p^{-1}]/\mathcal{R}[-1],\mathcal{M}^{\cdot})=:\widehat{\mathcal{M}^{\cdot}}\]
For complexes over the various \(W(k)\)-torsion-free rings appearing in this paper, the symbol \(\widehat{?}\) will denote cohomological completion. We should mention that this symbol is also sometimes used for complexes which are annihilated by \(p^{r}\), where it stands for the (derived) completion with respect to \(V^{i}(\mathcal{O}_{W(X)}/p^{r})\) (c.f. 3.25 and the discussion directly above). As complexes annihilated by \(p^{r}\) are automatically cohomologically complete (in the sense used in this paper), the two notions are essentially disjoint, and hopefully this will not cause undue confusion.
### Acknowledgements
I would like to thank Michel Gros for inviting me to speak about this work at Rennes, and for helpful conversations afterwards. I would also like to thank Bernard Le Stum for friendly conversations about related topics. I would like to gratefully acknowledge the support of the NSF.
## 2. Higher Derivations on Witt Vectors
In this section we define, for each \(m\in\mathbb{Z}\), the sheaf \(\mathcal{E}_{W(X)}^{(m)}\) of higher derivations on the formal scheme \(W(X)\) (of level \(m\)), as well as the algebra \(\widehat{\mathcal{D}}_{W(X)}^{(m)}\) of Witt-differential operators, which is (the completion of) the sheaf of algebras generated by \(\mathcal{O}_{W(X)}\) and \(\mathcal{E}_{W(X)}^{(m)}\). The key to defining \(\mathcal{E}_{W(X)}^{(m)}\) is the observation that Hasse-Schmidt derivations on \(X\) can be lifted canonically to the Witt-vectors.
In order to make all this precise, we recall the
**Definition 2.1**.: Let \(R\) be a commutative ring, and let \(D_{0}:B\to A\) be a morphism of commutative \(R\)-algebras. A Hasse-Schmidt derivation (or higher derivation) of length \(n\) from \(B\) to \(A\) over \(R\) is a sequence of \(R\)-linear operators \(D=(D_{0},\ldots,D_{n})\) (we allow \(n=\infty\) for such a sequence indexed by \(\mathbb{N}\)) such that
\[D_{l}(xy)=\sum_{i+j=l}D_{i}(x)D_{j}(y)\]
If \(A=B\), we shall suppose that \(D_{0}=Id\) unless otherwise specified. In that case we simply refer to a Hasse-Schmidt derivation of \(A\) (over \(R\)). The operator \(D_{i}\) is referred to as the \(i\)th component of \(D\).
We refer the reader to [40] and [33] for more details and some interesting applications (e.g., to jet spaces in algebraic geometry). At any rate it follows immediately from the definition that \(D_{1}\in\mathrm{Der}_{R}(B,A)\), and then by a quick induction it follows that each \(D_{i}\) is an \(R\)-linear differential operator from \(B\) to \(A\) of order \(\leq i\).
In the sequel, it will be convenient to use the elementary fact that a Hasse-Schmidt derivation is equivalent to a map of \(R\)-algebras
\[\varphi_{D}:B\to A[t]/t^{n+1}\]
where the sequence \(D=(D_{0},\ldots,D_{n})\) corresponds to
\[\varphi_{D}(b)=\sum_{i=0}^{n}D_{i}(b)t^{i}\]
(here, if \(n=\infty\), then the target is taken to be the power series ring \(A[[t]]\)). For instance, from this characterization and the infinitesimal lifting property (c.f. [15], ch. 2, proposition 6), we see immediately that if \(B\to B^{\prime}\) is an etale morphism of \(R\)-algebras such that the structure morphism \(D_{0}:B\to A\) extends to a morphism \(D_{0}^{\prime}:B^{\prime}\to A\), then any Hasse-Schmidt derivation \(D=(D_{0},\ldots,D_{n})\) from \(B\) to \(A\) over \(R\) extends uniquely to \(D^{\prime}=(D_{0}^{\prime},\ldots,D_{n}^{\prime})\) from \(B^{\prime}\) to \(A\) over \(R\). In particular, a Hasse-Schmidt derivation of \(A\) over \(R\) extends uniquely to any localization \(A_{g}\) (for \(g\in A\)).
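The basic example to keep in mind is the family of divided-power derivatives. Take \(R=k\), \(A=B=k[T]\), and define \(D_{i}=\partial^{[i]}\) by \(\partial^{[i]}(T^{n})=\binom{n}{i}T^{n-i}\). Then

\[\varphi_{D}(f)=f(T+t)\in A[[t]]\]

and the Leibniz rule of Definition 2.1 is simply the multiplicativity of \(f\to f(T+t)\); so \((\mathrm{Id},\partial^{[1]},\partial^{[2]},\dots)\) is a Hasse-Schmidt derivation of \(A\) over \(k\) of length \(\infty\). Note that in characteristic \(p\) the higher components are genuinely extra data: \((\partial^{[1]})^{p}=0\), while \(\partial^{[p]}(T^{p})=1\).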
It turns out that, after applying appropriate Frobenius twists, higher derivations are very well-behaved on Witt vectors. To explain this, we start with the following basic result:
**Lemma 2.2**.: _Let \(A\) be a smooth \(k\)-algebra. For any \(r\geq 0\), let \(\mathcal{A}_{r+1}\) be a flat \(W_{r+1}(k)\)-algebra such that \(\mathcal{A}_{r+1}/p\tilde{=}A\). Then there is an embedding_
\[W_{r+1}(A)\to\mathcal{A}_{r+1}\]
_which is \(F^{r}\)-semilinear over \(W_{r+1}(k)\). The map is given as follows: if \((f_{1},f_{2},\ldots,f_{r+1})\in W_{r+1}(A)\) we choose any lifts \(\tilde{f}_{i}\) of \(f_{i}\) in \(\mathcal{A}_{r+1}\) and send_
\[(f_{1},f_{2},\ldots,f_{r+1})\to\tilde{f}_{1}^{p^{r}}+p\tilde{f}_{2}^{p^{r-1}}+\cdots+p^{r}\tilde{f}_{r+1}\]
_Further, if \(F\) is any lift of the absolute Frobenius map to \(\mathcal{A}_{r+1}\), then the restriction of \(F\) to the image of \(W_{r+1}(A)\) agrees with the Witt-vector Frobenius; in particular, we have_
\[F(\tilde{f}_{1}^{p^{r}}+p\tilde{f}_{2}^{p^{r-1}}+\cdots+p^{r}\tilde{f}_{r+1})= \tilde{f}_{1}^{p^{r+1}}+p\tilde{f}_{2}^{p^{r}}+\cdots+p^{r}\tilde{f}_{r+1}^{p}\]
_For each \(i\geq 0\), the ideal \(V^{i}(W_{r+1}(A))\) is given by the intersection of \(p^{i}\mathcal{A}_{r+1}\) with \(W_{r+1}(A)\)._
_If \(\varphi^{\#}:B\to A\) is a morphism of smooth \(k\)-algebras, and we consider any lift to \(\varphi^{\#}_{r+1}:\mathcal{B}_{r+1}\to\mathcal{A}_{r+1}\), then \(\varphi^{\#}_{r+1}(W_{r+1}(B))\subset W_{r+1}(A)\), and the induced map_
\[\varphi^{\#}_{r+1}:W_{r+1}(B)\to W_{r+1}(A)\]
_agrees with the functorial map \(W\varphi^{\#}\) coming from Witt vector theory._
Proof.: By definition, the ghost map
\[w_{r}:W_{r+1}(\mathcal{A}_{r+1})\to\mathcal{A}_{r+1}\]
given by
\[(\tilde{f}_{1},\tilde{f}_{2}\ldots,\tilde{f}_{r+1})\to\tilde{f}_{1}^{p^{r}}+p \tilde{f}_{2}^{p^{r-1}}+\cdots+p^{r}\tilde{f}_{r+1}\]
is a ring homomorphism, and since \(p^{r+1}\) annihilates \(\mathcal{A}_{r+1}\), we see that \(w_{r}\) factors through \(W_{r+1}(\mathcal{A}_{r+1}/p)=W_{r+1}(A)\). Call the image of this map \(W^{\prime}\). I claim that the surjection \(W_{r+1}(A)\to W^{\prime}\) is also injective. For, if
\[\tilde{f}_{1}^{p^{r}}+p\tilde{f}_{2}^{p^{r-1}}+\cdots+p^{r}\tilde{f}_{r+1}=0\]
then, taking the image in \(A\), we see that \(f_{1}^{p^{r}}=0\), which implies \(f_{1}=0\) and so \(\tilde{f}_{1}=p\tilde{g}_{1}\); but then \(\tilde{f}_{1}^{p^{r}}=0\). Therefore \(p\tilde{f}_{2}^{p^{r-1}}+\cdots+p^{r}\tilde{f}_{r+1}=0\) in \(\mathcal{A}_{r+1}\); but since \(\mathcal{A}_{r+1}\) is flat over \(W_{r+1}(k)\), this implies \(\tilde{f}_{2}^{p^{r-1}}+\cdots+p^{r-1}\tilde{f}_{r+1}=0\) in \(\mathcal{A}_{r}=\mathcal{A}_{r+1}/p^{r}\). Thus an induction on \(r\) shows that each \(f_{i}=0\) (where as above \(f_{i}\) is the image of \(\tilde{f}_{i}\) in \(A\)) as required.
Now let us show that the map \(w_{r}\) is \(F^{r}\)-semilinear. Choose a lift of Frobenius on \(\mathcal{A}_{r+1}\) whose restriction to \(W_{r+1}(k)\) is the Witt-vector Frobenius. By a universal property of Witt-vectors (c.f., e.g., [37] proposition 1.1.23) there is a unique map
\[s:\mathcal{A}_{r+1}\to W_{r+1}(\mathcal{A}_{r+1})\]
so that \(w_{r}\circ s=F^{r}\). By our choice of \(F\) the map \(s\) is a morphism of \(W_{r+1}(k)\)-algebras; therefore \(w_{r}\) is \(F^{r}\)-semilinear over \(W_{r+1}(k)\) as claimed.
Next let \(F\) be any lift of Frobenius on \(\mathcal{A}_{r+1}\). As \(F\) is a map of algebras we have
\[F(\tilde{f}_{1}^{p^{r}}+p\tilde{f}_{2}^{p^{r-1}}+\cdots+p^{r}\tilde{f}_{r+1})= F(\tilde{f}_{1})^{p^{r}}+pF(\tilde{f}_{2})^{p^{r-1}}+\cdots+p^{r}F(\tilde{f}_{r+1})\]
so that \(F(W^{\prime})\subset W^{\prime}\); further, by what we have just proved the term \(F(\tilde{f}_{1})^{p^{r}}+pF(\tilde{f}_{2})^{p^{r-1}}+\cdots+p^{r}F(\tilde{f}_{ r+1})\) depends only on the images \(F(f_{i})=f_{i}^{p}\) in \(A\); whence the statement. The fact that
\[V^{i}(W_{r+1}(A))=p^{i}\mathcal{A}_{r+1}\cap W_{r+1}(A)\]
follows immediately from the definition of \(V^{i}(W_{r+1}(A))\).
Finally, consider a map \(\varphi^{\#}:B\to A\), which, by the infinitesimal lifting property, can be lifted to a map \(\varphi^{\#}_{r+1}:\mathcal{B}_{r+1}\to\mathcal{A}_{r+1}\). Then
\[\varphi^{\#}(\tilde{f}_{1}^{p^{r}}+p\tilde{f}_{2}^{p^{r-1}}+\dots+p^{r}\tilde{ f}_{r+1})=\varphi^{\#}(\tilde{f}_{1})^{p^{r}}+\dots+p^{r}\varphi^{\#}(\tilde{f}_{r+1})\]
from which the result follows immediately.
In the sequel, we shall make extensive use of this map, writing \(W_{r+1}(A^{(r)})\subset\mathcal{A}_{r+1}\) for the image.
An important special case of this lemma is given by the
**Example 2.3**.: Suppose \(A=k[T_{1},\dots,T_{n}]\), and \(\mathcal{A}_{r+1}=W_{r+1}(k)[T_{1},\dots,T_{n}]\). Then the image \(W_{r+1}(A^{(r)})\subset\mathcal{A}_{r+1}\) is the \(W_{r+1}(k)\)-submodule spanned by
\[\{p^{j}\prod_{i=1}^{n}T_{i}^{a_{i}p^{r-j}}\}\]
where \(0\leq j\leq r\) and the \(\{a_{i}\}\) are any natural numbers. Indeed, it is clear that all of the displayed monomials are contained in \(W_{r+1}(A^{(r)})\). To see the converse, we use induction on \(r\), the case \(r=0\) being trivial. Consider a term of the form \(p^{j}f^{p^{r-j}}\in W_{r+1}(A^{(r)})\), for \(j<r\) (when \(j=r\) the result is obvious). By induction we have an expression
\[p^{j}f^{p^{r-1-j}}=\sum_{l=0}^{r-1}\sum_{a}b_{l,a}\,p^{l}\prod_{i=1}^{n}T_{i}^{a_{i}p^{r-1-l}}\ \mathrm{mod}\ (p^{r})\]
(the inner sum running over finitely many multi-indices \(a=(a_{1},\dots,a_{n})\), with coefficients \(b_{l,a}\in W_{r+1}(k)\)).
We may consider the lift of Frobenius which takes \(T_{i}\to T_{i}^{p}\). Applying this map to the displayed equality yields
\[p^{j}f^{p^{r-j}}=\sum_{l=0}^{r-1}\sum_{a}b_{l,a}\,p^{l}\prod_{i=1}^{n}T_{i}^{a_{i}p^{r-l}}\ \mathrm{mod}\ (p^{r})\]
which implies
\[p^{j}f^{p^{r-j}}=\sum_{l=0}^{r-1}\sum_{a}b_{l,a}\,p^{l}\prod_{i=1}^{n}T_{i}^{a_{i}p^{r-l}}+\sum_{a}c_{a}\,p^{r}\prod_{i=1}^{n}T_{i}^{a_{i}}\]
for some \(c_{a}\in W_{r+1}(k)\),
as claimed.
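For instance, take \(p=2\), \(n=1\), and \(r=1\): inside \(\mathbb{Z}/4[T]\), the subring \(W_{2}(\mathbb{F}_{2}[T]^{(1)})\) is the \(\mathbb{Z}/4\)-span of the monomials \(T^{2a}\) and \(2T^{a}\) (concretely, \((f,g)\in W_{2}(\mathbb{F}_{2}[T])\) is sent to \(\tilde{f}^{2}+2\tilde{g}\)). One checks directly that this span is closed under multiplication; e.g. \(T^{2a}\cdot 2T^{b}=2T^{2a+b}\) is again a monomial of the second kind, and \(2T^{a}\cdot 2T^{b}=0\).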
We will construct our canonical lifts of Hasse-Schmidt derivations using the above construction of \(W_{r+1}(A^{(r)})\); to apply it, we make use of the following straightforward application of the infinitesimal lifting property:
**Lemma 2.4**.: _For any \(r\geq 0\), let \(\mathcal{A}_{r+1},\mathcal{B}_{r+1}\) be flat \(W_{r+1}(k)\)-algebras such that \(\mathcal{A}_{r+1}/p\tilde{=}A\) and \(\mathcal{B}_{r+1}/p\tilde{=}B\). Let \(D=(D_{0},\dots,D_{n})\) be any Hasse-Schmidt derivation from \(B\) to \(A\), over \(k\). Then there is a lift of \(D\) to a Hasse-Schmidt derivation \(\tilde{D}=(\tilde{D}_{0},\dots,\tilde{D}_{n})\), over \(W_{r+1}(k)\), from \(\mathcal{B}_{r+1}\) to \(\mathcal{A}_{r+1}\)._
In general, there will be many lifts of a given Hasse-Schmidt derivation. However, upon restricting to Witt vectors we have the following essential result:
**Theorem 2.5**.: _Let \(B,A\) be smooth \(k\)-algebras, and let \(D=(D_{0},D_{1},\dots):B\to A\) be a Hasse-Schmidt derivation (of any length). Let \(\tilde{D}=(\tilde{D}_{0},\tilde{D}_{1},\dots)\) be a lift of \(D\) to a Hasse-Schmidt derivation from \(\mathcal{B}_{r+1}\) to \(\mathcal{A}_{r+1}\) (where, as above, these are flat lifts of \(B\) and \(A\), respectively). Then \(\tilde{D}\) takes the subalgebra \(W_{r+1}(B^{(r)})\) into \(W_{r+1}(A^{(r)})\). Further, the induced map \(\tilde{D}:W_{r+1}(B^{(r)})\to W_{r+1}(A^{(r)})\) is independent of the choice of lift \(\tilde{D}\)._
Proof.: We begin by showing that \(\tilde{D}\) takes \(W_{r+1}(B^{(r)})\) into \(W_{r+1}(A^{(r)})\). We recall the following formula for the action of Hasse-Schmidt derivations on powers, which is easily checked by induction:
\[\tilde{D}_{j}(f^{n})=\sum_{i_{1}+\cdots+i_{n}=j}\tilde{D}_{i_{1}}(f)\cdots\tilde {D}_{i_{n}}(f)\]
so that we have
\[\tilde{D}_{j}(p^{l}f^{p^{r-l}})=p^{l}\sum_{i_{1}+\cdots+i_{p^{r-l}}=j}\tilde{D}_ {i_{1}}(f)\cdots\tilde{D}_{i_{p^{r-l}}}(f) \tag{2.1}\]
Now, the set
\[\{(i_{1},\ldots,i_{p^{r-l}})\in\mathbb{N}^{p^{r-l}}|i_{1}+\cdots+i_{p^{r-l}}=j\}\]
is acted upon by the symmetric group \(S_{p^{r-l}}\), and, after grouping like terms together, the coefficient of a term \(\tilde{D}_{i_{1}}(f)\cdots\tilde{D}_{i_{p^{r-l}}}(f)\) in (2.1) is precisely the size of the \(S_{p^{r-l}}\) orbit of \((i_{1},\ldots,i_{p^{r-l}})\).
To calculate this size, suppose that there are \(m\) distinct numbers occurring in \((i_{1},\ldots,i_{p^{r-l}})\), call them \(\{j_{1},\ldots,j_{m}\}\). Let \(\{C_{n}\}_{n=1}^{m}\) denote the corresponding \(m\) subsets of \(\{1,\ldots,p^{r-l}\}\), so that
\[C_{n}=\{t|i_{t}=j_{n}\}\]
The stabilizer of the action of \(S_{p^{r-l}}\) on \((i_{1},\ldots,i_{p^{r-l}})\) is then the group
\[S_{C_{1}}\times\cdots\times S_{C_{m}}\]
of permutations which preserve each \(C_{i}\). If we let \(c_{i}=|C_{i}|\) then we deduce that
\[|\mathcal{O}_{(i_{1},\ldots,i_{p^{r-l}})}|=\frac{(p^{r-l})!}{c_{1}!\cdots c_{ m}!}\]
where the left hand side denotes the size of the orbit. Summing up, we deduce that
\[\tilde{D}_{j}(p^{l}f^{p^{r-l}})=\sum_{\mathcal{O}}p^{l}\frac{(p^{r-l})!}{c_{1 }!\cdots c_{m}!}(\tilde{D}_{j_{1}}(f))^{c_{1}}\cdots(\tilde{D}_{j_{m}}(f))^{c_ {m}} \tag{2.2}\]
where \(\mathcal{O}\) ranges over the set of orbits of \(S_{p^{r-l}}\) on \(\{(i_{1},\ldots,i_{p^{r-l}})\in\mathbb{N}^{p^{r-l}}|i_{1}+\cdots+i_{p^{r-l}}=j\}\).
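(For example, take \(p^{r-l}=4\) and \(j=2\). The orbit of \((1,1,0,0)\) has \(m=2\), \(j_{1}=1\), \(j_{2}=0\), \(c_{1}=c_{2}=2\), hence size \(4!/(2!\,2!)=6\); the orbit of \((2,0,0,0)\) has \(c_{1}=1\), \(c_{2}=3\), hence size \(4!/(1!\,3!)=4\).)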
So, we must show that
\[p^{l}\frac{(p^{r-l})!}{c_{1}!\cdots c_{m}!}(\tilde{D}_{j_{1}}(f))^{c_{1}} \cdots(\tilde{D}_{j_{m}}(f))^{c_{m}}\in W_{r+1}(A^{(r)})\]
Consider the map \(W_{r+1}(k)[T_{1},\ldots,T_{m}]\to\mathcal{A}_{r+1}\) given by sending \(T_{i}\to\tilde{D}_{j_{i}}(f)\). This map takes \(W_{r+1}(k[T_{1},\ldots,T_{m}]^{(r)})\) to \(W_{r+1}(A^{(r)})\), and so it suffices to prove that
\[p^{l}\frac{(p^{r-l})!}{c_{1}!\cdots c_{m}!}(T_{1})^{c_{1}}\cdots(T_{m})^{c_{m }}\in W_{r+1}(k[T_{1},\ldots,T_{m}]^{(r)})\]
On the other hand, the multinomial formula tells us that
\[p^{l}(T_{1}+\cdots+T_{m})^{p^{r-l}}=\sum_{k_{1}+\cdots+k_{m}=p^{r-l}}p^{l}\frac{(p^{r-l})!}{k_{1}!\cdots k_{m}!}T_{1}^{k_{1}}\cdots T_{m}^{k_{m}}\]
and by 2.3 all of the terms of this sum are in \(W_{r+1}(k[T_{1},\ldots,T_{m}]^{(r)})\), since the left hand side is. So putting \(k_{i}=c_{i}\) implies the result.
To see the uniqueness, we note that we have just proved an equality
\[p^{l}\frac{(p^{r-l})!}{c_{1}!\cdots c_{m}!}(\tilde{D}_{j_{1}}(f))^{c_{1}}\cdots( \tilde{D}_{j_{m}}(f))^{c_{m}}=b_{c_{1},\ldots,c_{m}}p^{\alpha}(\tilde{D}_{j_{1} }(f))^{c_{1}^{\prime}p^{r-\alpha}}\cdots(\tilde{D}_{j_{m}}(f))^{c_{m}^{\prime}p ^{r-\alpha}}\]
where
\[\alpha=\text{val}(p^{l}\frac{(p^{r-l})!}{c_{1}!\cdots c_{m}!})\]
and \(b_{c_{1},\ldots,c_{m}}\) is the image of \(p^{l}\frac{(p^{r-l})!}{c_{1}!\cdots c_{m}!}p^{-\alpha}\) in \(\mathbb{Z}/p^{r+1}\), and \(c_{i}=c_{i}^{\prime}p^{r-\alpha}\) for all \(i\). But by Lemma 2.2, such a term depends only on the image of \(\{\tilde{D}_{j_{1}}(f),\ldots,\tilde{D}_{j_{m}}(f)\}\) in \(A\), i.e., it depends only on the Hasse-Schmidt derivation \(D\). Since \(\tilde{D}_{j}(p^{l}f^{p^{r-l}})\) is a sum of such terms, we see that it is independent of the choice of lift \(\tilde{D}\) as claimed.
If we unpack the proof of the theorem, we in fact obtain a formula for the Hasse-Schmidt derivation as a sequence of maps \(W_{r+1}(B^{(r)})\to W_{r+1}(A^{(r)})\); if we then apply the isomorphisms \(A\tilde{=}A^{(r)}\) and \(B\tilde{=}B^{(r)}\) we obtain a Hasse-Schmidt derivation \(W_{r+1}(B)\to W_{r+1}(A)\). It will be useful to record this explicitly:
**Corollary 2.6**.: _Let \(A,B\) be smooth \(k\)-algebras and \(D\) a Hasse-Schmidt derivation from \(B\) to \(A\) as above. Let \(j\geq 1\). For \(l\in\{0,\ldots,r\}\), we have the set \(\mathcal{S}_{l}=\{(i_{1},\ldots,i_{p^{r-l}})\in\mathbb{N}^{p^{r-l}}|i_{1}+ \cdots+i_{p^{r-l}}=j\}\). To each element \(v\in\mathcal{S}_{l}\), we may associate a partition \(\{C_{i}\}_{i=1}^{m}\) of \(\{1,\ldots,p^{r-l}\}\). Let \(c_{i}=|C_{i}|\). The number \(c_{i}\) depends only on the associated orbit \(\mathcal{O}\) of the action of the symmetric group \(S_{p^{r-l}}\); as does the set \(\{j_{1},\ldots,j_{m}\}\) of distinct numbers8 occurring in \(v\). Then we have_
Footnote 8: We index so that each element of \(C_{i}\) is \(j_{i}\)
_1) \(\alpha_{\mathcal{O}}:=\text{val}(p^{l}\frac{(p^{r-l})!}{c_{1}!\cdots c_{m}!}) \geq l\) and if \(\alpha_{\mathcal{O}}\leq r\) we have \(p^{r-\alpha_{\mathcal{O}}}|c_{i}\) for each \(i\in\{1,\ldots m\}\)._
_2) Let \(c_{i}=c_{i}^{\prime}p^{r-\alpha_{\mathcal{O}}}\), and let \(b_{\mathcal{O}}\) be the image of \(p^{l}\frac{(p^{r-l})!}{c_{1}!\cdots c_{m}!}p^{-\alpha}\) in \(\mathbb{F}_{p}\). Define_
\[\tilde{D}_{j}(0,\ldots f_{l+1},\ldots 0)=\sum_{\mathcal{O}}(0,\ldots,b_{ \mathcal{O}}\prod_{i=1}^{m}D_{j_{i}}(f_{l+1})^{c_{i}^{\prime}},\ldots,0)\]
_where on the left hand side the term \(f_{l+1}\) is in the \(l+1\) position, and on the right the term \(b_{\mathcal{O}}\prod_{i=1}^{m}D_{j_{i}}(f_{l+1})^{c_{i}^{\prime}}\) occurs in the \(\alpha_{\mathcal{O}}+1\) position (the term is taken to be \(0\) if \(\alpha_{\mathcal{O}}>r\)). If we extend this map to all of \(W_{r+1}(B)\) by addition of components, then \(\tilde{D}\) is a \(W_{r+1}(k)\)-linear Hasse-Schmidt derivation from \(W_{r+1}(B)\) to \(W_{r+1}(A)\)._
Proof.: Everything is immediate from the proof of the previous theorem except (perhaps) the \(W_{r+1}(k)\)-linearity; for a Hasse-Schmidt derivation this is equivalent to demanding that \(\tilde{D}_{j}(W_{r+1}(k))=0\) for all \(j\geq 1\). But this is immediate from the formula; by definition we have \(\sum_{i=1}^{m}c_{i}j_{i}=j\), so that if \(j\geq 1\) then at least one \(j_{i}\) is \(\geq 1\) as well; and we have \(D_{j_{i}}(\lambda)=0\) for \(\lambda\in k\) and \(j_{i}\geq 1\).
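Let us unwind this formula in the simplest nontrivial case, as a sanity check. Take \(r=1\), \(A=B\), and \(D=(\mathrm{Id},\partial)\) with \(\partial\) an ordinary derivation. For \(j=1\) and \(l=0\) there is a single orbit, with \(j_{1}=1\), \(c_{1}=1\), \(j_{2}=0\), \(c_{2}=p-1\); thus \(\alpha_{\mathcal{O}}=\text{val}(p!/(1!(p-1)!))=1\) and \(b_{\mathcal{O}}=1\), and the formula reads

\[\tilde{D}_{1}(f,0)=(0,f^{p-1}\partial(f))\]

This is consistent with Lemma 2.2: under the embedding there, \((f,0)\) corresponds to \(\tilde{f}^{p}\), any lift \(\tilde{\partial}\) of \(\partial\) sends \(\tilde{f}^{p}\) to \(p\tilde{f}^{p-1}\tilde{\partial}(\tilde{f})\), and the latter is the image of \((0,f^{p-1}\partial(f))\).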
_Remark 2.7_.: I am unaware of a _direct_ proof that the output of the formula recorded above actually is a Hasse-Schmidt derivation; one deduces this implicitly in the
theorem from the fact that the lift \(\tilde{D}\) is a Hasse-Schmidt derivation. In fact, it seems non-obvious that the maps written above are even additive.
**Definition 2.8**.: The Hasse-Schmidt derivations occurring in 2.6 are called _canonical_ Hasse-Schmidt derivations.
_Remark 2.9_.: In fact, the construction can be extended to the case where \(A\) and \(B\) are of finite type over a noetherian \(k\)-algebra \(R\); in this case, one obtains (by the same formula) \(W_{r+1}(R)\)-linear HS derivations \(W_{r+1}(B)\to W_{r+1}(A)\). We won't use this generalization in this work.
Until further notice we fix a morphism \(B\to A\) of algebras, which are each smooth over \(k\). Let us show that the canonical Hasse-Schmidt derivations enjoy many favorable properties.
**Proposition 2.10**.: _Let \((D_{0},\dots,D_{n})\) be a Hasse-Schmidt derivation from \(B\) to \(A\) (over \(k\)). Then for any \(j\leq n\) we have_
\[\tilde{D}_{j}F=F\tilde{D}_{j/p}\]
_where \(\tilde{D}\) denotes the canonical lift of \(D\), \(F\) is the Witt-vector Frobenius (on \(W_{r+1}(B)\) and \(W_{r+1}(A)\), respectively), and the right hand side is defined to be \(0\) if \(p\) does not divide \(j\)._
Proof.: Let us first show this result when \(r=0\). In this case we have
\[D_{j}(F(f))=D_{j}(f^{p})=\sum_{i_{1}+\dots i_{p}=j}D_{i_{1}}(f)\cdots D_{i_{p} }(f)\]
For each term in the sum, either \(i_{1}=i_{2}=\dots=i_{p}\), or there are at least two distinct indices. In the latter case, grouping like terms together and arguing as in the proof of Theorem 2.5 gives a coefficient of the form
\[\frac{p!}{c_{1}!\cdots c_{m}!}D_{j_{1}}(f)^{c_{1}}\dots D_{j_{m}}(f)^{c_{m}}=0\]
since each \(c_{i}<p\) and \(p=0\) in \(A\). Thus we obtain
\[D_{j}(f^{p})=D_{j/p}(f)^{p}\]
as required (where the right hand side is taken to be \(0\) if \(p\) does not divide \(j\)).
Now let us consider the case of the action of \(\tilde{D}_{j}\) from \(W_{r+1}(B)\) to \(W_{r+1}(A)\). From 2.6, we obtain the formula
\[\tilde{D}_{j}(0,\dots F(f_{l+1}),\dots 0)=\sum_{\mathcal{O}}(0,\dots,b_{ \mathcal{O}}\prod_{i=1}^{m}D_{j_{i}}(F(f_{l+1}))^{c_{i}^{\prime}},\dots,0)\]
where the notation is as above. In particular, with \(c_{i}=c_{i}^{\prime}p^{r-\alpha_{\mathcal{O}}}\) we have \(\sum_{i=1}^{m}c_{i}j_{i}=j\). From the case \(r=0\) of the proposition we therefore obtain an equality
\[\sum_{\mathcal{O}}(0,\dots,b_{\mathcal{O}}\prod_{i=1}^{m}D_{j_{i} }(F(f_{l+1}))^{c_{i}^{\prime}},\dots,0)=\sum_{\mathcal{O}}(0,\dots,b_{ \mathcal{O}}\prod_{i=1}^{m}F(D_{j_{i}/p}(f_{l+1}))^{c_{i}^{\prime}},\dots,0)\] \[=F(\sum_{\mathcal{O}}(0,\dots,b_{\mathcal{O}}\prod_{i=1}^{m}D_{j _{i}/p}(f_{l+1})^{c_{i}^{\prime}},\dots,0))\]
By definition any term in which \(p\) does not divide all the \(j_{i}\) is zero. So, if \(p\) does not divide \(j\), then from the equality \(\sum_{i=1}^{m}c_{i}j_{i}=j\) we see that \(p\) cannot divide all the \(j_{i}\), so that the sum is zero.
If \(p\) does divide \(j\), then multiplication by \(p\) induces a bijection between \(\{(i_{1},\ldots,i_{p^{r-l}})\in\mathbb{N}^{p^{r-l}}|i_{1}+\cdots+i_{p^{r-l}}=j /p\}\) and the subset of \(\{(i_{1},\ldots,i_{p^{r-l}})\in\mathbb{N}^{p^{r-l}}|i_{1}+\cdots+i_{p^{r-l}}=j\}\) in which \(p\) divides every term; and this bijection preserves the numbers \(\{c_{1},\ldots,c_{m}\}\) and \(\{c_{1}^{\prime},\ldots,c_{m}^{\prime}\}\) associated to each orbit. Thus from the displayed formula we see that \(\tilde{D}_{j}F=F\tilde{D}_{j/p}\) as desired.
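(As a concrete instance of the case \(r=0\): for the divided-power Hasse-Schmidt derivation \((\mathrm{Id},\partial^{[1]},\partial^{[2]},\dots)\) on \(k[T]\) with \(p=2\), we have \(D_{2}(T^{2})=\partial^{[2]}(T^{2})=1=(\partial^{[1]}(T))^{2}\), while \(D_{1}(T^{2})=2T=0\), in accordance with \(D_{j}(f^{p})=D_{j/p}(f)^{p}\).)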
We would like to consider the interaction of the canonical Hasse-Schmidt derivations with the natural quotient maps on Witt vectors. To that end we let \(R:W_{r+1}(A)\to W_{r}(A)\) denote the natural projection.
Then we have
**Proposition 2.11**.: _Let \(D=(D_{0},\ldots,D_{n})\) be a Hasse-Schmidt derivation from \(B\) to \(A\), and let \(\tilde{D}\) be the associated canonical Hasse-Schmidt derivation from \(W_{r+1}(B)\) to \(W_{r+1}(A)\). Then \(\tilde{D}\) takes \(V^{i}(W_{r+1}(B))\) into \(V^{i}(W_{r+1}(A))\). Furthermore,_
_a) Under the isomorphisms_
\[R:W_{r+1}(A)/V^{r}(W_{r+1}(A))\tilde{\to}W_{r}(A)\]
_and \(R:W_{r+1}(B)/V^{r}(W_{r+1}(B))\tilde{\to}W_{r}(B)\) we have \(R\circ\tilde{D}_{j}=\tilde{D}_{j/p}\circ R\), where the right hand side is zero if \(p\) does not divide \(j\)._
_b) Now suppose \(\text{val}(j)=a\leq r\). Then_
\[\tilde{D}_{j}(W_{r+1}(B))\subset V^{r-a}(W_{r+1}(A))\]
Proof.: The fact that \(\tilde{D}\) takes \(V^{i}(W_{r+1}(B))\) into \(V^{i}(W_{r+1}(A))\) is immediate from 2.6.
Next, to prove \(a)\), we first prove that if \(p\) does not divide \(j\), then \(R\circ\tilde{D}_{j}=0\). To see this, we use the formula
\[\tilde{D}_{j}(0,\ldots f_{l+1},\ldots 0)=\sum_{\mathcal{O}}(0,\ldots,b_{ \mathcal{O}}\prod_{i=1}^{m}D_{j_{i}}(f_{l+1})^{c_{i}^{\prime}},\ldots,0)\]
Since, in this formula, \(\sum c_{i}j_{i}=j\), the assumption that \(p\) does not divide \(j\) implies that, for each term in the sum, at least one \(c_{i}\) is not divisible by \(p\). Therefore, since \(p^{r-\alpha_{\mathcal{O}}}\) divides each \(c_{i}\) whenever \(\alpha_{\mathcal{O}}\leq r\), we must have \(\alpha_{\mathcal{O}}\geq r\) for each orbit \(\mathcal{O}\); this shows that on the right hand side each nonzero term is in the \(r+1\) position, and therefore \(R\) annihilates it.
Now suppose \(p\) divides \(j\). Let \(r>l\). For an \(S_{p^{r-l}}\) orbit inside \(\mathcal{S}_{l}=\{(i_{1},\ldots,i_{p^{r-l}})\in\mathbb{N}^{p^{r-l}}|i_{1}+ \cdots+i_{p^{r-l}}=j\}\), we write \(p|\mathcal{O}\) if each of the numbers \(c_{i}\) is divisible by \(p\). If we choose a member of \(\mathcal{O}\) for which each of the sets \(C_{i}\) is an interval in \(\{1,\ldots,p^{r-l}\}\) then \(p|\mathcal{O}\) implies that each \(C_{i}\) is an interval of the form \(\{pm_{i}+1,\ldots,pm_{i+1}\}\). Thus to the orbit \(\mathcal{O}\) we may associate the partition of \(\{1,\ldots,p^{r-l-1}\}\) where \(C_{i}^{\prime}=\{m_{i}+1,\ldots,m_{i+1}\}\). Further, to each \(C_{i}\) is associated the number \(j_{i}\) such that \(\sum_{i=1}^{m}c_{i}j_{i}=j\); this implies \(\sum_{i=1}^{m}(c_{i}/p)j_{i}=j/p\), so if we associate \(j_{i}\) to each \(C_{i}^{\prime}\) we obtain an element of \(\tilde{\mathcal{S}}_{l}=\{(i_{1},\ldots,i_{p^{r-l-1}})\in\mathbb{N}^{p^{r-l-1} }|i_{1}+\cdots+i_{p^{r-l-1}}=j/p\}\). Clearly,
if we choose a different member of \(\mathcal{O}\) the resulting elements of \(\tilde{\mathcal{S}}_{l}\) will be conjugate. Thus we obtain a map
\[\{S_{p^{r-l}}\text{ orbits on }\mathcal{S}_{l}\text{ with }p|\mathcal{O}\}\to\{S_{p^{r-l-1}}\text{ orbits on }\tilde{\mathcal{S}}_{l}\}\]
and this map is a bijection. Indeed, given an orbit \(\mathcal{O}^{\prime}\) in \(\tilde{\mathcal{S}}_{l}\) we may choose an element for which the members of the partition \(C^{\prime}_{i}\) consist of intervals \(\{m_{i}+1,\dots,m_{i+1}\}\); and to this partition we attach the partition \(\{pm_{i}+1,\dots,pm_{i+1}\}\) of \(\{1,\dots,p^{r-l}\}\), with the same associated number \(j_{i}\) as before.
Further, by the claim directly below, we have
\[p^{l}\frac{(p^{r-l})!}{c_{1}!\cdots c_{m}!}\equiv p^{l}\frac{(p^{r-l-1})!}{(c_{ 1}/p)!\cdots(c_{m}/p)!}\text{ mod }p^{r}\]
which tells us that \(\alpha_{\mathcal{O}}=\alpha_{\mathcal{O}^{\prime}}\) if \(\alpha_{\mathcal{O}}<r\), and in this case, \(b_{\mathcal{O}}=b_{\mathcal{O}^{\prime}}\). Recall that the numbers \(\{c_{i}\}\) satisfy \(c_{i}=c^{\prime}_{i}p^{r-\alpha_{\mathcal{O}}}\); thus in this case we also have \(c_{i}/p=c^{\prime}_{i}p^{r-1-\alpha_{\mathcal{O}^{\prime}}}\).
Now consider \(\tilde{D}_{j}\) acting on \((0,\dots f_{l+1},\dots 0)\); if \(l=r\) then \(R\) annihilates this term so suppose \(r>l\). As above, if \(\mathcal{O}\) is an orbit such that \(p\) does not divide \(c_{i}\) for some \(i\), then
\[R\circ(0,\dots,b_{\mathcal{O}}\prod_{i=1}^{m}D_{j_{i}}(f_{l+1})^{c^{\prime}_{ i}},\dots,0)=0\]
and the same is true for any orbit \(\mathcal{O}\) in which \(\alpha_{\mathcal{O}}\geq r\). Thus we have
\[R\circ\tilde{D}_{j}(0,\dots f_{l+1},\dots 0)=\sum_{p|\mathcal{O}}R\circ(0, \dots,b_{\mathcal{O}}\prod_{i=1}^{m}D_{j_{i}}(f_{l+1})^{c^{\prime}_{i}},\dots,0)\]
\[=\sum_{\{p|\mathcal{O},\alpha_{\mathcal{O}}<r\}}R\circ(0,\dots,b_{\mathcal{O}} \prod_{i=1}^{m}D_{j_{i}}(f_{l+1})^{c^{\prime}_{i}},\dots,0)\]
Now, we have
\[\sum_{\{p|\mathcal{O},\alpha_{\mathcal{O}}<r\}}R\circ(0,\dots,b_{\mathcal{O}} \prod_{i=1}^{m}D_{j_{i}}(f_{l+1})^{c^{\prime}_{i}},\dots,0)=\sum_{\{\mathcal{O }^{\prime},\alpha_{\mathcal{O}^{\prime}}<r\}}(0,\dots,b_{\mathcal{O}^{\prime} }\prod_{i=1}^{m}D_{j_{i}}(f_{l+1})^{c^{\prime}_{i}},\dots,0)\]
\[=\sum_{\mathcal{O}^{\prime}}(0,\dots,b_{\mathcal{O}^{\prime}}\prod_{i=1}^{m}D_ {j_{i}}(f_{l+1})^{c^{\prime}_{i}},\dots,0)\]
where the last equality follows because \(\alpha_{\mathcal{O}^{\prime}}\geq r\) implies that the term \((0,\dots,b_{\mathcal{O}^{\prime}}\prod_{i=1}^{m}D_{j_{i}}(f_{l+1})^{c^{\prime }_{i}},\dots,0)\) is zero in \(W_{r}(A)\). But the above discussion implies that this last sum is \(\tilde{D}_{j/p}\circ R(0,\dots f_{l+1},\dots 0)\) as claimed.
Finally, part \(b)\) follows immediately from \(a)\) and induction on \(r\).
To finish the proof we need the
_Claim 2.12_.: Let \(r>l\) and let \(\{c_{1},\dots,c_{m}\}\) be positive integers such that \(\sum c_{i}=p^{r-l}\). Suppose \(c_{i}\) is divisible by \(p\) for each \(i\). We have
\[p^{l}\frac{(p^{r-l})!}{c_{1}!\cdots c_{m}!}\equiv p^{l}\frac{(p^{r-l-1})!}{(c_{ 1}/p)!\cdots(c_{m}/p)!}\text{ mod }p^{r}\]
Proof.: Consider the polynomial ring \(\mathbb{Z}/p^{r+1}[T_{1},\dots,T_{m}]\). The multinomial formula yields
\[p^{l}(T_{1}+\cdots+T_{m})^{p^{r-l}}=\sum_{k_{1}+\cdots+k_{m}=p^{r-l}}p^{l}\frac{(p^{r-l})!}{k_{1}!\cdots k_{m}!}T_{1}^{k_{1}}\cdots T_{m}^{k_{m}} \tag{2.3}\]
If we take the image of \(p^{l}(T_{1}+\cdots+T_{m})^{p^{r-l}}\) in \(\mathbb{Z}/p^{r}[T_{1},\ldots,T_{m}]\), then we have
\[p^{l}(T_{1}+\cdots+T_{m})^{p^{r-l}}=p^{l}((T_{1}+\cdots+T_{m})^{p})^{p^{r-1-l}}= F_{1}(p^{l}(T_{1}+\cdots+T_{m})^{p^{r-1-l}})\]
where \(F_{1}\) is an appropriately chosen lift of Frobenius. Since \(p^{l}(T_{1}+\cdots+T_{m})^{p^{r-1-l}}\in W_{r}(\mathbb{F}_{p}[T_{1},\ldots T_{m }]^{(r-1)})\subset\mathbb{Z}/p^{r}[T_{1},\ldots,T_{m}]\), the same equation holds if we replace \(F_{1}\) with any lift of Frobenius, by Lemma 2.2. Now, applying the multinomial formula again yields
\[p^{l}(T_{1}+\cdots+T_{m})^{p^{r-1-l}}=\sum_{j_{1}+\cdots+j_{m}=p^{r-l-1}}p^{l}\frac{(p^{r-l-1})!}{j_{1}!\cdots j_{m}!}T_{1}^{j_{1}}\cdots T_{m}^{j_{m}}\]
and so if we apply the map \(F\) for which \(F(T_{i})=T_{i}^{p}\) we obtain
\[F(p^{l}(T_{1}+\cdots+T_{m})^{p^{r-1-l}})=\sum_{j_{1}+\cdots+j_{m}=p^{r-l-1}}p^{l}\frac{(p^{r-l-1})!}{j_{1}!\cdots j_{m}!}T_{1}^{pj_{1}}\cdots T_{m}^{pj_{m}}\]
and if we consider the term where \(j_{i}=c_{i}/p\) and compare with (2.3) we obtain the result.
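As a concrete check of the claim: take \(p=2\), \(r=2\), \(l=0\), and \((c_{1},c_{2})=(2,2)\); then

\[\frac{4!}{2!\,2!}=6\equiv 2=\frac{2!}{1!\,1!}\text{ mod }4\]

as predicted.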
With these preliminaries in hand, we can give the definition of Witt-differential operators over the algebra \(A\). We shall be working with the space
\(\operatorname{Hom}_{W(k)}^{V}(W(B),W(A))\) of continuous, \(W(k)\)-linear maps from \(W(B)\) to \(W(A)\) which take \(V^{i}(W(B))\) into \(V^{i}(W(A))\) for each \(i\geq 0\). We note that there is a map \(W(A)\to\operatorname{Hom}_{W(k)}^{V}(W(B),W(A))\) which takes \(a\in W(A)\) to \(a\cdot\varphi^{\#}\), and a map \(W(B)\to\operatorname{Hom}_{W(k)}^{V}(W(B),W(A))\) which takes \(b\) to \(\varphi^{\#}(b\,\cdot)\).
For each \(r\geq 0\), we define
\[\operatorname{Hom}_{W(k)}^{V}(W_{r+1}(B),W_{r+1}(A))\subset\operatorname{Hom }_{W(k)}(W_{r+1}(B),W_{r+1}(A))\]
to be the subspace of \(W(k)\)-linear maps from \(W_{r+1}(B)\) to \(W_{r+1}(A)\) which take \(V^{i}(W_{r+1}(B))\) into \(V^{i}(W_{r+1}(A))\) for each \(i\).
We have the natural restriction map
\[R:\operatorname{Hom}_{W(k)}^{V}(W_{r+1}(B),W_{r+1}(A))\to\operatorname{Hom}_{ W(k)}^{V}(W_{r}(B),W_{r}(A))\]
and from the definitions it follows directly that
\[\operatorname{Hom}_{W(k)}^{V}(W(B),W(A))\tilde{=}\lim_{r}\operatorname{Hom}_{ W(k)}^{V}(W_{r+1}(B),W_{r+1}(A))\]
We have the following elementary observation:
**Lemma 2.13**.: _For each \(j\geq 0\), and each \(r\geq 0\) (including \(r=\infty\)) there is a well-defined operation_
\[W_{r+1}(A)\times V^{j}(W_{r+1}(A))\to V^{j}(W_{r+1}(A))\]
_given by_
\[(a,V^{j}(b))\mapsto F^{-j}(a)\cdot V^{j}(b):=V^{j}(a\cdot b)\]
_and this operation is \(F^{-j}\)-semilinear over \(W_{r+1}(k)\), and preserves the filtration \(V^{i}(W_{r+1}(A))\)._
To justify the notation, note that when \(r=\infty\), the product \(F^{-j}(x)\cdot y\) actually makes sense inside \(W(A)[p^{-1}]\), and agrees with the operation as defined above.
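For instance, for \(j=1\) the operation reads \(F^{-1}(a)\cdot V(b)=V(a\cdot b)\); since \(p=V(1)\) in \(W(A)\) (as \(A\) is an \(\mathbb{F}_{p}\)-algebra), taking \(b=1\) gives \(F^{-1}(a)\cdot p=V(a)\). Thus multiplication by the (generally non-existent) element \(F^{-1}(a)\) is nonetheless well-defined on \(V(W(A))\).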
**Definition 2.14**.: Let \(m\in\mathbb{Z}\), and \(r\geq-m\). An element
\[\varphi\in\operatorname{Hom}_{W(k)}^{V}(W_{r+1}(B),W_{r+1}(A))\]
is an elementary Witt-differential operator of level \(\leq m\) if there is a canonical Hasse-Schmidt derivation \(\tilde{D}=(\tilde{D}_{0},\ldots,\tilde{D}_{i})\) of length \(0<i\leq p^{r+m}\) from \(W_{r+1}(B)\) to \(W_{r+1}(A)\) so that \(\varphi=F^{\operatorname{val}(i)-r}(a)\cdot\tilde{D}_{i}\) for some \(a\in W_{r+1}(A)\) (this is a well-defined operator by 2.11).
We define \(\mathcal{E}W_{r+1}^{(m)}(B;A)\) to be the \(W_{r+1}(A)\)-submodule of
\(\operatorname{Hom}_{W(k)}^{V}(W_{r+1}(B),W_{r+1}(A))\) generated by the elementary Witt-differential operators of level \(\leq m\). By 2.11, we have that the map
\[R:\mathcal{E}W_{r+1}^{(m)}(B;A)\to\operatorname{Hom}_{W(k)}^{V}(W_{r}(B),W_{r }(A))\]
has image contained in \(\mathcal{E}W_{r}^{(m)}(B;A)\); thus we may define
\[\mathcal{E}W^{(m)}(B;A):=\lim_{r}\mathcal{E}W_{r+1}^{(m)}(B;A)\subset \operatorname{Hom}_{W(k)}^{V}(W(B),W(A))\]
When \(A=B\) and \(\varphi^{\#}\) is the identity map we write \(\mathcal{E}W_{r+1}^{(m)}(A)\) and \(\mathcal{E}W^{(m)}(A)\) for \(\mathcal{E}W_{r+1}^{(m)}(B;A)\) and \(\mathcal{E}W^{(m)}(B;A)\), respectively. In this case, the \(W(A)\)-module \(\mathcal{E}W^{(m)}(A)\) is naturally filtered by
\[\mathcal{E}W^{(m),r}(A)=\{\varphi\in\mathcal{E}W^{(m)}(A)|\varphi(W(A))\subset V ^{r}(W(A))\}\]
Now we give the
**Definition 2.15**.: The ring \(\mathcal{D}_{W(A)}^{(m)}\) of (uncompleted) Witt-differential operators of level \(\leq m\) on \(A\) is defined to be the subalgebra of continuous \(W(k)\)-linear endomorphisms of \(W(A)\) generated by multiplication by elements of \(W(A)\) and the elements of \(\mathcal{E}W_{A}^{(m)}\). This ring possesses a filtration by two sided ideals
\[I^{(r)}:=(V^{r}(W(A)),\mathcal{E}W^{(m),r}(A))\]
and we define \(\mathcal{D}_{W_{r}(A)}^{(m)}:=\mathcal{D}_{W(A)}^{(m)}/I^{(r)}\).
We define \(\widehat{\mathcal{D}}_{W(A)}^{(m)}\), the ring of Witt-differential operators on \(A\), of level \(\leq m\), to be the completion of \(\mathcal{D}_{W(A)}^{(m)}\) along \(\{I^{(r)}\}\); i.e., the inverse limit of the \(\mathcal{D}_{W_{r}(A)}^{(m)}\). Since each element of \(I^{(r)}\) takes \(W(A)\) to \(V^{r}(W(A))\), this ring also acts on \(W(A)\).
Finally, we let \(\widehat{\mathcal{D}}_{W(A)}\) be the inductive limit of \(\widehat{\mathcal{D}}_{W(A)}^{(m)}\) under the obvious maps \(\widehat{\mathcal{D}}_{W(A)}^{(m)}\to\widehat{\mathcal{D}}_{W(A)}^{(m+1)}\); similarly \(\mathcal{D}_{W(A)}\) is the inductive limit of \(\mathcal{D}_{W(A)}^{(m)}\).
This definition is inspired by Berthelot's definition of arithmetic differential operators of level \(\leq m\) (cf. [4], [5], as well as [17] for a more operator-theoretic point of view) and, as we shall see below, yields closely related categories of modules, at least when \(m\geq 0\).
### Local Coordinates
In this subsection we shall give a presentation of the modules \(\mathcal{E}W^{(m)}(B;A)\) and the algebra \(\widehat{\mathcal{D}}_{W(A)}^{(m)}\) for \(m\in\mathbb{Z}\) in terms of local coordinates on \(B\) (or \(A\) in the latter case) and deduce some basic structural results. Therefore, throughout this subsection, we shall suppose that there is an etale
map \(k[T_{1},\ldots,T_{n}]\to B\). This map yields Hasse-Schmidt derivations \(B\to A\), which we shall denote by \(\partial_{l}^{[i]}\); these are defined on \(k[T_{1},\ldots,T_{n}]\) as
\[\partial_{l}^{[i]}(T_{l}^{j})=\binom{j}{i}\varphi^{\#}(T_{l}^{j-i})\]
and
\[\partial_{l}^{[i]}(T_{l^{\prime}}^{j})=0\]
for \(l\neq l^{\prime}\) and \(i>0\). By [40] these operators extend uniquely to Hasse-Schmidt derivations on \(B\). When \(B=A\) and \(\varphi^{\#}=Id\) these Hasse-Schmidt derivations are iterative; i.e., we have
\[\partial_{l}^{[i]}\partial_{l}^{[j]}=\binom{i+j}{i}\partial_{l}^{[i+j]}\]
Further, one sees immediately from the definition of these operators that they all mutually commute; i.e.,
\[\partial_{l}^{[i]}\partial_{l^{\prime}}^{[j]}=\partial_{l^{\prime}}^{[j]} \partial_{l}^{[i]}\]
for all \(l,l^{\prime},i,j\).
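For example, in one variable \(\partial^{[2]}(T^{5})=\binom{5}{2}T^{3}=10T^{3}\), while (in the case \(B=A\), \(\varphi^{\#}=Id\)) iterativity gives \(\partial^{[1]}\partial^{[1]}=\binom{2}{1}\partial^{[2]}=2\partial^{[2]}\); in particular, when \(p=2\) the operator \(\partial^{[2]}\) cannot be recovered from \(\partial^{[1]}\), which is why one works with the full system of divided-power operators in characteristic \(p\).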
If \(I=(i_{1},\ldots,i_{n})\) is a multi-index, we shall also use the notation \(\partial^{[I]}=\partial_{1}^{[i_{1}]}\cdots\partial_{n}^{[i_{n}]}\). If \(J\) is another multi-index, then we also have \(\partial^{J}=\partial_{1}^{j_{1}}\cdots\partial_{n}^{j_{n}}\) and \((\partial^{[I]})^{J}=(\partial_{1}^{[i_{1}]})^{j_{1}}\cdots(\partial_{n}^{[i_ {n}]})^{j_{n}}\).
Now we give the
**Definition 2.16**.: Let \(\alpha\in W(A)\) and \(r,j\in\mathbb{Z}\) such that \(j>0\). We define the operator \(\{\partial_{l}\}_{jp^{r}}\) as follows: it is equal to \(\varphi^{\#}(\partial_{l}^{[jp^{r^{\prime}+r}]}):W_{r^{\prime}+1}(B)\to W_{r^{\prime}+1}(A)\), where \(\partial_{l}^{[jp^{r^{\prime}+r}]}\) denotes the canonical lift to \(W_{r^{\prime}+1}(B)\) of the HS derivation \(\partial_{l}^{[jp^{r^{\prime}+r}]}\) on \(B\). The inverse limit of these operators is well defined on \(W(B)\) by 2.11.
Now suppose \(r>0\) and let \(v_{j}:=\operatorname{val}(j)\). Then we define the operator
\[F^{v_{j}-r}(\alpha)\cdot\{\partial_{l}\}_{j/p^{r}}\]
as the operator \(F^{v_{j}-r}(\alpha)\cdot\varphi^{\#}(\partial_{l}^{[jp^{r^{\prime}-r}]})\) from \(W_{r^{\prime}+1}(B)\to W_{r^{\prime}+1}(A)\); here, we denote by \(\partial_{l}^{[jp^{r^{\prime}-r}]}\) the canonical lift of \(\partial_{l}^{[jp^{r^{\prime}-r}]}\) to \(W_{r^{\prime}+1}(B)\), for any \(r^{\prime}\) such that \(r^{\prime}\geq r-\operatorname{val}(j)\); for \(r^{\prime}<r-\operatorname{val}(j)\) this operator is taken to be \(0\). By 2.11, we have
\[\partial_{l}^{[jp^{r^{\prime}-r}]}(W_{r^{\prime}+1}(B))\subset V^{r-v_{j}}(W_ {r^{\prime}+1}(A))\]
so that this operator is well-defined on \(W_{r^{\prime}+1}(B)\).
Further, we note that by 2.11, \(a)\), we have, for \(R:W_{r^{\prime}+1}(A)\to W_{r^{\prime}}(A)\),
\[R(F^{v_{j}}(\alpha)\partial_{l}^{[jp^{r^{\prime}-r}]})=F^{v_{j}}(\alpha) \partial_{l}^{[jp^{r^{\prime}-1-r}]}R\]
so that the inverse limit of these operators is in fact well-defined on \(W(B)\). We note that when \(1\leq j\leq p^{r}\), we have \(F^{v_{j}-r}(\alpha)\cdot\{\partial_{l}\}_{j/p^{r}}\in\mathcal{E}W(A)\). We may extend this definition to the case \(j=0\) by defining \(v_{0}=r\); then we have
\[F^{v_{j}-r}(\alpha)\cdot\{\partial_{l}\}_{j/p^{r}}=\alpha\]
for \(j=0\) and any \(r\geq 0\).
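To illustrate in the simplest case, take \(j=1\) and \(r=1\). Then \(\{\partial_{l}\}_{1/p}\) acts on \(W_{r^{\prime}+1}(B)\) (for \(r^{\prime}\geq 1\)) via the canonical lift of the divided-power derivation \(\partial_{l}^{[p^{r^{\prime}-1}]}\), and by the above its image lies in \(V^{1}(W_{r^{\prime}+1}(A))\); informally, dividing the index by \(p\) costs one step of the \(V\)-filtration.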
If \(J=(j_{1},\ldots,j_{n})\) denotes a multi-index with each \(j_{l}\in\mathbb{N}\), let \(v_{J}=\min_{1\leq l\leq n}\operatorname{val}(j_{l})\). Suppose this minimum is realized at index \(l^{\prime}\). Then we define
\[F^{v_{J}-r}(\alpha)\cdot\{\partial\}_{J/p^{r}}:=F^{v_{J}-r}(\alpha)\prod_{l=1}^ {n}\{\partial_{l}\}_{j_{l}/p^{r}}=F^{v_{l^{\prime}}-r}(\alpha)\{\partial_{l^{ \prime}}\}_{j_{l^{\prime}}/p^{r}}\cdot\prod_{l=1,l\neq l^{\prime}}^{n}\{ \partial_{l}\}_{j_{l}/p^{r}}\]
and we note that this product is in \(\mathcal{D}_{W(A)}\) whenever each \(j_{i}\leq p^{r}\). If \(J=(0,\ldots,0)\) then we set \(v_{J}=r\), this gives \(F^{v_{J}-r}(\alpha)\cdot\{\partial\}_{J/p^{r}}=\alpha\) as above.
We shall show that (products of) such operators form a basis of \(\widehat{\mathcal{D}}_{W(A)}^{(0)}\), in a suitable sense. Namely,
**Theorem 2.17**.: _Let \(m\in\mathbb{Z}\). Let \(Q\in\widehat{\mathcal{D}}_{W(A)}^{(m)}\). Then we may write_
\[Q=\sum_{I}(\sum_{r=1}^{\infty}\sum_{J_{I}}F^{-r}(\alpha_{J_{I}})\{\partial\}_{ J_{I}/p^{r}}+\sum_{K_{I}}\alpha_{K_{I}}\{\partial\}_{K_{I}})\{\partial\}_{p^{m}}^{I}\]
_where \(\alpha_{J_{I}},\alpha_{K_{I}}\in W(A)\), \(I=(i_{1},\ldots,i_{n})\in\mathbb{N}^{n}\), \(\{\partial\}_{p^{m}}^{I}:=\{\partial_{1}\}_{p^{m}}^{i_{1}}\cdots\{\partial_{n }\}_{p^{m}}^{i_{n}}\), \(J_{I}=(j_{1},\ldots,j_{n})\) satisfies \(0\leq j_{l}<p^{r}\) for each \(j_{l}\) and \((p,j_{l})=1\) for at least one \(l\); unless \(r=1\) where we allow \(J_{I}=0\), and \(K_{I}=(k_{1},\ldots,k_{n})\) satisfies \(0\leq k_{i}<p^{m}\) with at least one \(k_{i}\neq 0\) (if \(m\leq 0\) this sum is taken to be empty). We demand that the terms_
\[\sum_{r=1}^{\infty}\sum_{J_{I}}F^{-r}(\alpha_{J_{I}})\{\partial\}_{J_{I}/p^{r }}+\sum_{K_{I}}\alpha_{K_{I}}\{\partial\}_{K_{I}}\]
_approach \(0\) (in the topology of \(\widehat{\mathcal{D}}_{W(A)}^{(m)}\)) as \(I\to\infty\). Furthermore, the elements \(\alpha_{J_{I}}\) and \(\alpha_{K_{I}}\) are unique._
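For orientation: when \(n=1\) and \(m=0\) the \(K_{I}\) sums are empty, and the theorem says that every \(Q\) is a convergent sum of operators \(F^{-r}(\alpha)\{\partial\}_{j/p^{r}}\{\partial\}_{1}^{i}\) with \(0\leq j<p^{r}\) and \((p,j)=1\) (allowing \(j=0\) when \(r=1\)); this generalizes the usual presentation of differential operators in local coordinates.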
We shall break the proof of this up into several steps, beginning by showing that we can write the elementary Witt-differential operators (of level \(0\)) in the required form.
**Lemma 2.18**.: _Let \(\tilde{D}\) be a canonical Hasse-Schmidt derivation of length \(\leq p^{r}\) from \(W_{r+1}(B)\) to \(W_{r+1}(A)\). Then, for any \(0<j\leq p^{r}\) the associated canonical Hasse-Schmidt derivation \(\tilde{D}_{j}\) on \(W_{r+1}(A)\) is equal to_
\[\sum_{I}F^{v_{I}-r}(\beta_{I})\cdot\{\partial\}_{I/p^{r}}\]
_where \(I=(i_{1},\ldots,i_{n})\in\mathbb{N}^{n}\backslash\{(0,\ldots,0)\}\) satisfies \(\sum_{l=1}^{n}i_{l}\leq p^{r}\), and \(\beta_{I}\in W_{r+1}(A)\). Furthermore, if \(v_{I}\geq\text{val}(j)\), then we have \(\beta_{I}\in V^{v_{I}-\text{val}(j)}(W_{r+1}(A))\)._
Proof.: We recall that the operator \(\tilde{D}_{j}\) is obtained by choosing any lift of the Hasse-Schmidt derivation \(D\) to a Hasse-Schmidt derivation \(\mathcal{B}_{r+1}\to\mathcal{A}_{r+1}\) and restricting to \(W_{r+1}(B^{(r)})\). If we write
\[D_{j}=\sum_{I}\alpha_{I}\partial^{[I]}\]
for \(\alpha_{I}\in A\) (and \(\partial^{[I]}=\varphi^{\#}(\partial_{1}^{[i_{1}]}\cdots\partial_{n}^{[i_{n}]})\)) then since \(D_{j}\) is a differential operator of order \(j\leq p^{r}\) we obtain \(|I|\leq p^{r}\) for all \(I\) such that \(\alpha_{I}\neq 0\).
Now, write
\[\tilde{D}_{j}=\sum_{I}\tilde{\alpha}_{I}\partial^{[I]}\]
for \(\tilde{\alpha}_{I}\in A_{r+1}\) lifting \(\alpha_{I}\). To prove the lemma, we must show that, after possibly altering \(\tilde{\alpha}_{I}\) in a way that does not change the action on \(W_{r+1}(A^{(r)})\), we have \(F^{r-v_{I}}(\tilde{\alpha}_{I})\in W_{r+1}(A^{(r)})\) (where \(F:\mathcal{A}_{r+1}\to\mathcal{A}_{r+1}\) is a lift of Frobenius); and that \(\tilde{\alpha}_{I}\in p^{v_{I}-\operatorname{val}(j)}\mathcal{A}_{r+1}\) if \(v_{I}>\operatorname{val}(j)\).
Let \(r\geq 0\). We'll first show that \(\tilde{\alpha}_{I}\in p^{v_{I}-\operatorname{val}(j)}\mathcal{A}_{r+1}\) if \(v_{I}>\operatorname{val}(j)\). If this does not hold, let \(I_{0}\) be the least element (in the lexicographic ordering) amongst \(I\) such that \(\tilde{\alpha}_{I}\notin p^{v_{I}-\operatorname{val}(j)}\mathcal{A}_{r+1}\) and \(v_{I}>\operatorname{val}(j)\).
Then we have \(\partial^{[I_{0}]}(p^{r-v_{I_{0}}}T^{I_{0}})=p^{r-v_{I_{0}}}\), while \(\partial^{[J]}(p^{r-v_{I_{0}}}T^{I_{0}})=0\) for all other \(J\) with \(\tilde{\alpha}_{J}\notin p^{v_{J}-\operatorname{val}(j)}\mathcal{A}_{r+1}\) and \(v_{J}>\operatorname{val}(j)\). For \(K\) such that \(v_{K}\leq\operatorname{val}(j)\) we have \(\partial^{[K]}(p^{r-v_{I_{0}}}T^{I_{0}})\in V^{r-v_{K}}(W_{r+1}(A^{(r)}))\subset V^{r-\operatorname{val}(j)}(W_{r+1}(A^{(r)}))\) (by 2.11 applied to the HS derivations \(\partial_{l}^{[i]}\)). So, in order for \(\tilde{D}_{j}(p^{r-v_{I_{0}}}T^{I_{0}})\in V^{r-\operatorname{val}(j)}(W_{r+1}(A^{(r)}))\) to hold we must have
\[p^{r-v_{I_{0}}}\tilde{\alpha}_{I_{0}}\in p^{r-\operatorname{val}(j)}\mathcal{A }_{r+1}\]
which yields \(\tilde{\alpha}_{I_{0}}\in p^{v_{I_{0}}-\operatorname{val}(j)}\mathcal{A}_{r+1}\) after all.
Now we check that, after possibly altering \(\tilde{\alpha}_{I}\) in a way that does not change the action on \(W_{r+1}(A^{(r)})\), we have \(F^{r-v_{I}}(\tilde{\alpha}_{I})\in W_{r+1}(A^{(r)})\) for each \(I\) such that \(v_{I}\leq r\). Note that every term of the form \(\tilde{\alpha}_{I}\partial^{[I]}\), for which \(F^{r-v_{I}}(\tilde{\alpha}_{I})\in W_{r+1}(A^{(r)})\), takes \(W_{r+1}(B^{(r)})\) to \(W_{r+1}(A^{(r)})\) (as explained above in 2.16). So let \(I_{0}\) be the least element (in the lexicographic ordering) amongst \(I\) such that \(F^{r-v_{I}}(\tilde{\alpha}_{I})\notin W_{r+1}(A^{(r)})\). Then
\[\tilde{D}_{j}(p^{r-v_{I_{0}}}T^{I_{0}})=\sum_{I}\tilde{\alpha}_{I}\partial^{[I ]}(p^{r-v_{I_{0}}}T^{I_{0}})\]
\[=\sum_{\{I|F^{r-v_{I}}(\tilde{\alpha}_{I})\in W_{r+1}(A^{(r)})\}}\tilde{\alpha}_{I}\partial^{[I]}(p^{r-v_{I_{0}}}T^{I_{0}})+\sum_{\{I|F^{r-v_{I}}(\tilde{\alpha}_{I})\notin W_{r+1}(A^{(r)})\}}\tilde{\alpha}_{I}\partial^{[I]}(p^{r-v_{I_{0}}}T^{I_{0}})=\beta+p^{r-v_{I_{0}}}\tilde{\alpha}_{I_{0}}\]
where \(\beta\in W_{r+1}(A^{(r)})\); the last equality comes from noting that each \(\tilde{\alpha}_{I}\partial^{[I]}(p^{r-v_{I_{0}}}T^{I_{0}})\) in the first sum is contained in \(W_{r+1}(A^{(r)})\), and in the second sum each term is zero except the one coming from \(I_{0}\). Since \(\tilde{D}_{j}\) itself takes \(W_{r+1}(B^{(r)})\) to \(W_{r+1}(A^{(r)})\), we see that \(\beta+p^{r-v_{I_{0}}}\tilde{\alpha}_{I_{0}}\in W_{r+1}(A^{(r)})\), which forces \(p^{r-v_{I_{0}}}\tilde{\alpha}_{I_{0}}\in W_{r+1}(A^{(r)})\). Writing
\[p^{r-v_{I_{0}}}\tilde{\alpha}_{I_{0}}=\sum_{i=r-v_{I_{0}}}^{r}p^{i}a^{p^{r-i}}\]
we obtain
\[\tilde{\alpha}_{I_{0}}=\sum_{i=0}^{v_{I_{0}}}p^{i}a^{p^{v_{I_{0}}-i}}+p^{v_{I_ {0}}+1}\gamma\]
for some \(\gamma\in\mathcal{A}_{r+1}\). Therefore
\[F^{r-v_{I_{0}}}(\tilde{\alpha}_{I_{0}})=\sum_{i=0}^{v_{I_{0}}}p^{i}a^{p^{r-i}}+ p^{v_{I_{0}}+1}F^{r-v_{I_{0}}}(\gamma)\]
And since \(\partial^{[I_{0}]}(W_{r+1}(B^{(r)}))\subset V^{r-v_{I_{0}}}(W_{r+1}(A^{(r)}))\), we see that \(p^{v_{I_{0}}+1}\gamma\partial^{[I_{0}]}\) annihilates \(W_{r+1}(B^{(r)})\). Therefore, altering \(\tilde{\alpha}_{I_{0}}\) to \(\tilde{\alpha}_{I_{0}}-p^{v_{I_{0}}+1}\gamma\), we have that
\(F^{r-v_{I_{0}}}(\tilde{\alpha}_{I_{0}})\in W_{r+1}(A^{(r)})\). Continuing up the lexicographic ordering in this way, we obtain the result.
Next we have
**Lemma 2.19**.: _Every element \(\phi\in\mathcal{E}W_{r+1}^{(0)}(B;A)\) admits a representation_
\[\phi=\sum_{I}F^{v_{I}-r}(\alpha_{I})\{\partial\}_{I/p^{r}}\]
_where \(\alpha_{I}\in W_{r+1}(A)\), \(I=(i_{1},\ldots,i_{n})\in\mathbb{N}^{n}\backslash\{(0,\ldots,0)\}\), with \(\sum_{l=1}^{n}i_{l}\leq p^{r}\); and, as above, \(v_{I}=\text{min}\{\text{val}(i_{l})\}\). Furthermore,_
\[\sum_{I}F^{v_{I}-r}(\alpha_{I})\{\partial\}_{I/p^{r}}=\sum_{I}F^{v_{I}-r}( \alpha_{I}^{\prime})\{\partial\}_{I/p^{r}}\]
_iff \(\alpha_{I}^{\prime}-\alpha_{I}\in V^{v_{I}+1}(W_{r+1}(A))\) for all \(I\)._
Proof.: In the previous lemma we showed that such a representation exists for \(\phi\) if \(\phi\) is itself a canonical Hasse-Schmidt derivation; more precisely, if \(\phi=\tilde{D}_{j}\) for some \(j\leq p^{r}\) then
\[\phi=\sum_{I}F^{v_{I}-r}(\beta_{I})\{\partial\}_{I/p^{r}}\]
where the sum ranges over \(I\) such that \(\sum_{l=1}^{n}i_{l}\leq j\), and we have \(\beta_{I}\in V^{v_{I}-\text{val}(j)}(W_{r+1}(A^{(r)}))\) whenever \(v_{I}>\text{val}(j)\). We must show that a similar representation exists for operators of the form \(F^{\text{val}(j)-r}(\alpha)\cdot\phi\) for some \(\alpha\in W_{r+1}(A)\). We write
\[F^{\text{val}(j)-r}(\alpha)\cdot\phi\] \[=\sum_{\{I|v_{I}\leq\text{val}(j)\}}F^{\text{val}(j)-r}(\alpha)F^{v_{I}-r}(\beta_{I})\{\partial\}_{I/p^{r}}+\sum_{\{I|v_{I}>\text{val}(j)\}}F^{\text{val}(j)-r}(\alpha)F^{v_{I}-r}(\beta_{I})\{\partial\}_{I/p^{r}}\] \[=\sum_{\{I|v_{I}\leq\text{val}(j)\}}F^{v_{I}-r}(F^{\text{val}(j)-v_{I}}(\alpha)\cdot\beta_{I})\{\partial\}_{I/p^{r}}+\sum_{\{I|v_{I}>\text{val}(j)\}}F^{\text{val}(j)-r}(\alpha)F^{v_{I}-r}(\beta_{I})\{\partial\}_{I/p^{r}}\]

Now, the first sum is already in the required form, since we may write

\[F^{\text{val}(j)-r}(\alpha)\cdot F^{v_{I}-r}(\beta_{I})=F^{v_{I}-r}(F^{\text{val}(j)-v_{I}}(\alpha)\cdot\beta_{I})\]

and \(\text{val}(j)-v_{I}\geq 0\), so that \(F^{\text{val}(j)-v_{I}}(\alpha)\cdot\beta_{I}\in W_{r+1}(A)\).
On the other hand, for \(I\) such that \(v_{I}>\text{val}(j)\), we may, by the previous lemma, write \(\beta_{I}=V^{v_{I}-\text{val}(j)}(\beta_{I}^{\prime})\) so that
\[\sum_{\{I|v_{I}>\text{val}(j)\}}F^{\text{val}(j)-r}(\alpha)F^{v_{I}-r}( \beta_{I})\{\partial\}_{I/p^{r}}\] \[=\sum_{\{I|v_{I}>\text{val}(j)\}}F^{\text{val}(j)-r}(\alpha)F^{v_{ I}-r}(V^{v_{I}-\text{val}(j)}(\beta_{I}^{\prime}))\{\partial\}_{I/p^{r}}\] \[\qquad=\sum_{\{I|v_{I}>\text{val}(j)\}}F^{v_{I}-r}(V^{v_{I}-\text{ val}(j)}(\alpha\beta_{I}^{\prime}))\{\partial\}_{I/p^{r}}\]
which is a sum of the required form; therefore, the claimed representation exists.
Now we deal with the uniqueness. First note that, if \(\alpha_{I}^{\prime}-\alpha_{I}\in V^{v_{I}+1}(W_{r+1}(A))\), then, since the operator \(\{\partial\}_{I/p^{r}}\) takes \(W_{r+1}(B)\) into \(V^{r-v_{I}}(W_{r+1}(A))\) (by 2.11), we
have that \(F^{v_{I}-r}(\alpha^{\prime}_{I}-\alpha_{I})\cdot\{\partial\}_{I/p^{r}}\) acts as \(0\). So we must show the other implication; namely, that if an operator
\[\sum_{I}F^{v_{I}-r}(\beta_{I})\{\partial\}_{I/p^{r}}\]
acts as \(0\), then \(\beta_{I}\in V^{v_{I}+1}(W_{r+1}(A))\). This is equivalent to the assertion that, if a differential operator
\[\sum_{I}\tilde{\beta}_{I}\partial^{[I]}\]
(with \(F^{r-v_{I}}(\tilde{\beta}_{I})\in W_{r+1}(A^{(r)})\)) is zero on \(W_{r+1}(B^{(r)})\), then
\[F^{r-v_{I}}(\tilde{\beta}_{I})\in V^{v_{I}+1}(W_{r+1}(A^{(r)}))\]
for all \(I\); we shall in fact show that \(\tilde{\beta}_{I}\in p^{v_{I}+1}\mathcal{A}_{r+1}\), which immediately implies the above statement.
Now, for any \(I\) we may consider \(p^{r-v_{I}}T^{I}\in W_{r+1}(B^{(r)})\). If \(I_{1}\) is the least (in the lexicographic ordering) index such that \(\tilde{\beta}_{I_{1}}\neq 0\), we have
\[0=(\sum_{I}\tilde{\beta}_{I}\partial^{[I]})(p^{r-v_{I_{1}}}T^{I_{1}})=p^{r-v_{ I_{1}}}\tilde{\beta}_{I_{1}}\]
which implies \(\tilde{\beta}_{I_{1}}\in p^{v_{I_{1}}+1}\mathcal{A}_{r+1}\) as claimed. Further, by the forward implication, this implies that the term \(\tilde{\beta}_{I_{1}}\partial^{[I_{1}]}\) acts as zero on \(W_{r+1}(B^{(r)})\); hence the operator \(\sum_{I>I_{1}}\tilde{\beta}_{I}\partial^{[I]}\) acts as zero on \(W_{r+1}(B^{(r)})\) as well. Thus we may replace \(\sum_{I}\tilde{\beta}_{I}\partial^{[I]}\) by \(\sum_{I>I_{1}}\tilde{\beta}_{I}\partial^{[I]}\) and run the argument again; continuing in this way implies the result.
From this, we deduce
**Corollary 2.20**.: _Every element \(\phi\in\mathcal{E}W^{(0)}(B;A)\) may be written as_
\[\phi=\sum_{r=0}^{\infty}\sum_{I}F^{-r}(\alpha_{I})\{\partial\}_{I/p^{r}}\]
_where \(\alpha_{I}\in W(A)\), \(I=(i_{1},\dots,i_{n})\in\mathbb{N}^{n}\backslash\{(0,\dots,0)\}\), with \(\sum_{l=1}^{n}i_{l}\leq p^{r}\), and \(v_{I}=0\) for all \(I\). Furthermore, this representation is unique._
Proof.: By definition
\[\mathcal{E}W^{(0)}(B;A)=\lim_{r}\mathcal{E}W^{(0)}_{r+1}(B;A)\]
and, if \(\phi_{r}\) is the image of \(\phi\) in \(\mathcal{E}W^{(0)}_{r+1}(B;A)\), then, by slightly rewriting the previous lemma, we have an expression
\[\phi_{r}=\sum_{r^{\prime}=0}^{r}\sum_{I}F^{-r^{\prime}}(\alpha_{I,r})\{ \partial\}_{I/p^{r^{\prime}}}\]
with \(v_{I}=0\) for all \(I\) (in this expression, terms of the form \(F^{-r^{\prime}}(\alpha_{I,r})\{\partial\}_{I/p^{r^{\prime}}}\) correspond to terms with \(v_{I}=r-r^{\prime}\) in the statement of the previous lemma).
Thus, by the uniqueness statement of the previous lemma, the image
\[\overline{\alpha_{I,r}}\in W_{r+1}(A)/V^{r-r^{\prime}+1}(W_{r+1}(A))\]
is well-defined. So, fixing an index \(I/p^{r^{\prime}}\), we obtain an element
\[\alpha_{I}:=\lim_{r}\overline{\alpha}_{I,r}\in\lim_{r}W_{r+1}(A)/V^{r-r^{\prime} +1}(W_{r+1}(A))\tilde{=}W(A)\]
so that the action of \(\sum_{r=0}^{\infty}\sum_{I}F^{v_{I}-r}(\alpha_{I})\{\partial\}_{I/p^{r}}\) agrees with \(\phi\) on \(W(A)\).
To see that this representation is unique, suppose that a term of the form \(\sum_{r=0}^{\infty}\sum_{I}F^{v_{I}-r}(\alpha_{I})\{\partial\}_{I/p^{r}}\) acts as zero on \(W(A)\). Consider a term \(F^{v_{I}-r^{\prime}}(\alpha_{I})\{\partial\}_{I/p^{r^{\prime}}}\) for some index \(r^{\prime}\). Then, for each \(r\geq r^{\prime}\), the uniqueness statement of the previous lemma implies that \(\alpha_{I}\in V^{r-r^{\prime}+1}(W_{r+1}(A))\). Since this is true for all \(r\), we see \(\alpha_{I}=0\) as claimed.
Now we can deduce from this the analogous presentation for \(\mathcal{E}W^{(m)}(B;A)\):
**Corollary 2.21**.: _Let \(m\in\mathbb{Z}\). Every element \(\phi\in\mathcal{E}W^{(m)}(B;A)\) may be written as_
\[\phi=\sum_{r=0}^{\infty}\sum_{I}F^{-r}(\alpha_{I})\{\partial\}_{I/p^{r}}+\sum _{J}\alpha_{J}\{\partial\}_{J}\]
_where, in the first sum, \(\alpha_{I}\in W(A)\), \(I=(i_{1},\ldots,i_{n})\in\mathbb{N}^{n}\backslash\{(0,\ldots,0)\}\), with \(\sum_{l=1}^{n}i_{l}\leq p^{r}\), and \(v_{I}=0\) for all \(I\), and, in the second sum, we have \(\alpha_{J}\in W(A)\) and \(\sum_{l=1}^{n}j_{l}\leq p^{m}\). Furthermore, this representation is unique._
Proof.: By 2.10 we have \(\tilde{D}_{p^{m}j}F^{m}=F^{m}\tilde{D}_{j}\) for any canonical Hasse-Schmidt derivation from \(W_{r+1}(B)\) to \(W_{r+1}(A)\). Therefore if \(\phi\) is a canonical Hasse-Schmidt derivation of length \(\leq p^{m}\), we have \(\phi F^{m}=F^{m}\psi\) where \(\psi\in\mathcal{E}W^{(0)}_{r+1}(B;A)\). So, by Lemma 2.19 we have
\[\phi\circ F^{m}=F^{m}(\sum_{I}F^{v_{I}-r}(\alpha_{I})\{\partial\}_{I/p^{r}})= \sum_{I}F^{m+v_{I}-r}(\alpha_{I})\{\partial\}_{p^{m}I/p^{r}}\circ F^{m}\]
where the notation is as in loc. cit. So for an arbitrary element of the form \(\sum_{i}F^{m_{i}}(\alpha_{i})\phi_{i}\in\mathcal{E}W^{(m)}_{r+1}(B;A)\) (where \(\phi_{i}\) is the \(i\)th component of a canonical Hasse-Schmidt derivation, and \(m_{i}=\min\{0,\operatorname{val}(i)-r\}\)) we have
\[\sum_{i}F^{m_{i}}(\alpha_{i})\phi_{i}\circ F^{m}=\sum_{i}\sum_{I}F^{m_{i}}( \alpha_{i})\cdot F^{m+v_{I}-r}(\alpha_{I,i})\{\partial\}_{p^{m}I/p^{r}}\circ F ^{m}\]
As in the proof of Lemma 2.19, taking the inverse limit and re-indexing yields an expression
\[\phi\circ F^{m}=(\sum_{r=0}^{\infty}\sum_{I}F^{-r}(\alpha_{I})\{\partial\}_{I/ p^{r}}+\sum_{J}\alpha_{J}\{\partial\}_{J})\circ F^{m}\]
for \(\phi\in\mathcal{E}W^{(m)}(B;A)\); this easily implies the result (as \(F^{m}\) is an isomorphism on \(W(A)[p^{-1}]\)).
Now we turn to proving the full statement of Theorem 2.17. To do this, we need to also analyze products. We start with the elementary
**Lemma 2.22**.: _Inside \(\mathcal{D}_{W(A)}\) for any \(i\geq 1\), \(r\geq 0\), we have_
\[\{\partial_{l}\}_{1/p^{r}}^{i}=u\cdot i!\{\partial_{l}\}_{i/p^{r}}\]
_where \(u\in\mathbb{Z}_{p}\) is a unit (which depends on \(r\) and \(i\)). In particular, for \(I=(i_{1},\ldots,i_{n})\) with each \(i_{j}<p^{r}\) we have_
\[\{\partial\}_{I/p^{r}}=u^{\prime}\prod_{l,r^{\prime}}\{\partial_{l}\}_{1/p^{r ^{\prime}}}^{i_{r^{\prime}}}\]
_where \(r^{\prime}\in\{1,\ldots,r-1\}\), \(0\leq i_{r^{\prime}}<p\), and \(u^{\prime}\) is a unit in \(\mathbb{Z}_{p}\)._
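For a concrete instance, take \(p=2\), \(i=2\), and \(r^{\prime}=r+1\) in the proof below, so that \(p^{r^{\prime}-r}=2\): then \((\partial_{l}^{[2]})^{2}=\binom{4}{2}\partial_{l}^{[4]}=6\,\partial_{l}^{[4]}=3\cdot 2!\,\partial_{l}^{[4]}\), so that here \(u\equiv 3\), a \(2\)-adic unit.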
Proof.: Let \(r^{\prime}\geq r\). Then \(\{\partial_{l}\}_{i/p^{r}}\) acts on \(W_{r^{\prime}+1}(A)\) via the action of \(\partial_{l}^{[ip^{r^{\prime}-r}]}\) on \(W_{r^{\prime}+1}(A^{(r^{\prime})})\subset\mathcal{A}_{r^{\prime}+1}\). But we have
\[(\partial_{l}^{[p^{r^{\prime}-r}]})^{i}=\prod_{j=1}^{i}\binom{jp^{r^{\prime}- r}}{p^{r^{\prime}-r}}\partial_{l}^{[ip^{r^{\prime}-r}]}\]
(this follows easily from the fact that \(\{\partial_{l}^{[i]}\}\) form an iterative HS derivation). Now, we have that
\[\binom{jp^{r^{\prime}-r}}{p^{r^{\prime}-r}}=\prod_{m=0}^{p^{r^{\prime}-r}-1} \frac{(jp^{r^{\prime}-r}-m)}{(p^{r^{\prime}-r}-m)}=j\cdot\prod_{m=1}^{p^{r^{ \prime}-r}-1}\frac{(jp^{r^{\prime}-r}-m)}{(p^{r^{\prime}-r}-m)}\]
For each \(m\in\{1,\ldots,p^{r^{\prime}-r}-1\}\), we have \(\operatorname{val}(p^{r^{\prime}-r}-m)=\operatorname{val}(m)=\operatorname{ val}(jp^{r^{\prime}-r}-m)\). Therefore
\[\operatorname{val}\binom{jp^{r^{\prime}-r}}{p^{r^{\prime}-r}}=\operatorname{ val}(j)\]
and we see that
\[(\partial_{l}^{[p^{r^{\prime}-r}]})^{i}=u_{r^{\prime}}\cdot i!\partial_{l}^{[ ip^{r^{\prime}-r}]}\]
where \(u_{r^{\prime}}\) is a unit in \(\mathbb{Z}/p^{r^{\prime}+1}\). The fact that \(u_{r^{\prime}+1}\equiv u_{r^{\prime}}\) mod \(p^{r^{\prime}}\) follows from the claim directly below, therefore we may set \(u:=\lim_{r^{\prime}}u_{r^{\prime}}\in\mathbb{Z}_{p}\) to prove the result.
To finish the proof, we need to show
_Claim 2.23_.: We have the identity
\[\binom{lp^{r}}{p^{j}}\equiv\binom{lp^{r-1}}{p^{j-1}}\mod\;p^{r} \tag{2.4}\]
for all natural numbers \(l\), \(r\), and \(j\).
Proof.: This is not difficult to check directly. However, since \(d^{[p^{j}]}(T^{lp^{r}})=\binom{lp^{r}}{p^{j}}T^{lp^{r}-p^{j}}\) inside \(\mathbb{Z}/p^{r+1}[T]\), and \(d^{[p^{j-1}]}(T^{lp^{r-1}})=\binom{lp^{r-1}}{p^{j-1}}T^{lp^{r-1}-p^{j-1}}\) inside \(\mathbb{Z}/p^{r}[T]\), this equality also follows from \(R\circ d^{[p^{j}]}=d^{[p^{j-1}]}\circ R\), which is 2.11.
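For example, with \(p=3\), \(l=1\), \(r=2\), and \(j=1\), the identity (2.4) reads \(\binom{9}{3}=84\equiv 3=\binom{3}{1}\text{ mod }9\), which one checks directly (\(84-3=81\)).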
Over the course of the next few lemmas, we write \(\widehat{\mathcal{D}}_{W(A),b}^{(m)}\) for the set of elements in \(\widehat{\mathcal{D}}_{W(A)}^{(m)}\) which have a representation as in Theorem 2.17. If
\[Q=\sum_{I}(\sum_{r=1}^{\infty}\sum_{J_{I}}F^{-r}(\alpha_{J_{I}})\{\partial\}_{J_{I}/p^{r}}+\sum_{K_{I}}\alpha_{K_{I}}\{\partial\}_{K_{I}})\{\partial\}_{p^{m}}^{I}\]
is an element of \(\widehat{\mathcal{D}}_{W(A),b}^{(m)}\), we say that \(Q\in\widehat{\mathcal{D}}_{W(A),b,i}^{(m)}\) if \(Q\) admits a representation as above so that for each \(r<i\) and each associated \(J_{I}\), we have \(\alpha_{J_{I}}\in V^{i-r}(W(A))\); we demand also that \(\alpha_{K_{I}}\in V^{i}(W(A))\) for all \(K_{I}\). Note that if we have for each \(s\) an element \(\phi_{s}\in\widehat{\mathcal{D}}^{(m)}_{W(A),b,s}\) then
\[\sum_{s=i}^{\infty}\phi_{s}\in\widehat{\mathcal{D}}_{W(A),b,i}^{(m)}\]
for all \(i\geq 0\).
**Lemma 2.24**.: _Fix \(m\in\mathbb{Z}\). Suppose \(\phi\in\mathcal{E}W^{(m)}(A)\), and \(Q\in\widehat{\mathcal{D}}_{W(A),b}^{(m)}\). Then \(\phi\cdot Q\in\widehat{\mathcal{D}}_{W(A),b}^{(m)}\). Further, if \(\phi\in\mathcal{E}W^{(m),r}(A)\), then \(\phi\cdot Q\in\widehat{\mathcal{D}}_{W(A),b,r}^{(m)}\). If \(Q\in\widehat{\mathcal{D}}_{W(A),b,r}^{(m)}\), then \(\phi\cdot Q\in\widehat{\mathcal{D}}_{W(A),b,r}^{(m)}\) for any \(\phi\). Similarly, \(a\cdot Q\in\widehat{\mathcal{D}}_{W(A),b,r}^{(m)}\) for any \(Q\) if \(a\in V^{r}(W(A))\)._
Proof.: By 2.21, if \(\phi\in\mathcal{E}W^{(m)}(A)\) we can write
\[\phi=\sum_{r=0}^{\infty}\sum_{I}F^{-r}(\alpha_{I})\{\partial\}_{I/p^{r}}+\sum_ {J}\alpha_{J}\{\partial\}_{J}\in\widehat{\mathcal{D}}_{W(A),b}^{(m)}\]
Therefore we must compute
\[\phi\cdot Q=(\sum_{r=0}^{\infty}\sum_{I}F^{-r}(\alpha_{I})\{\partial\}_{I/p^{r}}+\sum_{J}\alpha_{J}\{\partial\}_{J})\cdot Q \tag{2.5}\]
In order to compute this term, we first note that if \(D=(D_{0},\ldots,D_{j})\) is any Hasse-Schmidt derivation (on an arbitrary ring \(R\)), we have an equality of operators
\[[D_{j},f]=\sum_{i=1}^{j}D_{i}(f)D_{j-i}=\sum_{i=0}^{j-1}D_{j-i}(f)D_{i}\]
(where \(f\in R\) denotes the operator \(g\to fg\)). Now choose any \(j\) so that \(j\leq p^{r+m}\). We have, for any \(r^{\prime}\geq r-\text{val}(j)\) and \(\alpha\in W_{r^{\prime}+1}(A)\),
\[[\partial_{l}^{[jp^{r^{\prime}-r}]},\alpha]=\sum_{i=0}^{jp^{r^{\prime}-r}-1} \partial_{l}^{[jp^{r^{\prime}-r}-i]}(\alpha)\partial_{l}^{[i]}\]
which may be rewritten as
\[[\{\partial_{l}\}_{j/p^{r}},\alpha]=\sum_{i=0}^{jp^{r^{\prime}-r}-1}\{\partial _{l}\}_{(jp^{r^{\prime}-r}-i)/p^{r^{\prime}}}(\alpha)\cdot\{\partial_{l}\}_{i/ p^{r^{\prime}}}\]
So that we obtain the equality
\[[\{\partial_{l}\}_{j/p^{r}},\alpha]=\{\partial_{l}\}_{j/p^{r}}(\alpha)+\sum_{r ^{\prime}\geq r-\text{val}(j)}^{\infty}\sum_{i=1}^{jp^{r^{\prime}-r}-1}\{ \partial_{l}\}_{(jp^{r^{\prime}-r}-i)/p^{r^{\prime}}}(\alpha)\cdot\{\partial_{ l}\}_{i/p^{r^{\prime}}}\]
where on the right hand side we sum over \(i\) with \((i,p)=1\). This is an equality of operators on \(W(A)\), with both sides in \(\mathcal{D}^{(m)}_{W(A)}\) (in fact the right hand side is in \(\mathcal{E}W_{A}\)). Thus we see from this expression that \([\{\partial_{l}\}_{j/p^{r}},\alpha]\in\widehat{\mathcal{D}}^{(m)}_{W(A),b,r}\), and consequently that \(F^{-r}(\alpha)\{\partial\}_{I/p^{r}}\cdot\alpha_{I_{0}}\in\widehat{\mathcal{D}}^{(m)}_{W(A),b,r}\) for any \(\alpha,\alpha_{I_{0}}\in W(A)\).
Next, we consider a product of the form \(F^{-r}(\alpha)\{\partial_{l}\}_{j/p^{r}}\cdot F^{-s}(\beta)\{\partial_{t}\}_{j^{\prime}/p^{s}}\), where \(j^{\prime}\) is not divisible by \(p\), and \(s\geq r\) (the case \(s<r\) is essentially identical, as is the case where we need to multiply \(\alpha_{J}\{\partial\}_{J}\cdot F^{-s}(\beta)\{\partial_{t}\}_{j^{\prime}/p^{s}}\)). To compute this term, we may, by extending linearly, regard both \(\{\partial_{l}\}_{j/p^{r}}\) and \(F^{-s}(\beta)\{\partial_{t}\}_{j^{\prime}/p^{s}}\) as endomorphisms of \(W(A)[p^{-1}]\). Since \(F^{-s}(\beta)\in W(A)[p^{-1}]\), the previous equality implies an equality
\[[\{\partial_{l}\}_{j/p^{r}},F^{-s}(\beta)]\]
\[=\{\partial_{l}\}_{j/p^{r}}(F^{-s}(\beta))+\sum_{r^{\prime}\geq r-\mathrm{val} (j)}^{\infty}\sum_{i=1}^{jp^{r^{\prime}-r}-1}\{\partial_{l}\}_{(jp^{r^{\prime} -r}-i)/p^{r^{\prime}}}(F^{-s}(\beta))\cdot\{\partial_{l}\}_{i/p^{r^{\prime}}}\]
of operators on \(W(A)[p^{-1}]\). Therefore
\[[\{\partial_{l}\}_{j/p^{r}},F^{-s}(\beta)\{\partial_{t}\}_{j^{\prime}/p^{s}}] \tag{2.6}\]
\[=\{\partial_{l}\}_{j/p^{r}}(F^{-s}(\beta))\{\partial_{t}\}_{j^{\prime}/p^{s}}\]
\[+\sum_{r^{\prime}\geq r-\mathrm{val}(j)}^{\infty}\sum_{i=1}^{jp^{r^{\prime}-r }-1}\{\partial_{l}\}_{(jp^{r^{\prime}-r}-i)/p^{r^{\prime}}}(F^{-s}(\beta)) \cdot\{\partial_{l}\}_{i/p^{r^{\prime}}}\{\partial_{t}\}_{j^{\prime}/p^{s}}\]
(using that \(\{\partial_{l}\}_{j/p^{r}}\) and \(\{\partial_{t}\}_{i/p^{s}}\) commute); as above we suppose \((i,p)=1\) in the right hand sum.
Now, 2.10, applied to the operators \(\{\partial_{l}\}_{j/p^{r}}\), yields a relation
\[\{\partial_{l}\}_{j/p^{r}}F=F\{\partial_{l}\}_{j/p^{r+1}}\]
which implies
\[F^{-1}\{\partial_{l}\}_{j/p^{r}}=\{\partial_{l}\}_{j/p^{r+1}}F^{-1}\]
in \(W(A)[p^{-1}]\). We note that this makes sense for all \(j,r\in\mathbb{Z}\) such that \(j>0\) and \(r\geq 0\); in particular we have
\[\{\partial_{l}\}_{(jp^{r^{\prime}-r}-i)/p^{r^{\prime}}}(F^{-s}(\beta))=F^{-s}(\{\partial_{l}\}_{(jp^{r^{\prime}-r}-i)p^{s}/p^{r^{\prime}}}(\beta))\]
So that
\[F^{-r}(\alpha)\{\partial_{l}\}_{j/p^{r}}\cdot F^{-s}(\beta)\{\partial_{t}\}_{j^{\prime}/p^{s}}=F^{-s}(\beta F^{s-r}(\alpha))\{\partial_{t}\}_{j^{\prime}/p^{s}}\{\partial_{l}\}_{j/p^{r}}\]
\[+F^{-s}(\{\partial_{l}\}_{p^{s}j/p^{r}}(\beta)\cdot F^{s-r}(\alpha))\{\partial _{t}\}_{j^{\prime}/p^{s}}\]
\[+\sum_{r^{\prime}\geq r-\mathrm{val}(j)}^{\infty}\sum_{i=1}^{jp^{r^{\prime}-r} -1}F^{-s}(\{\partial_{l}\}_{(jp^{r^{\prime}-r}-i)p^{s}/p^{r^{\prime}}}(\beta)F^ {s-r}(\alpha))\cdot\{\partial_{l}\}_{i/p^{r^{\prime}}}\{\partial_{t}\}_{j^{ \prime}/p^{s}}\]
as operators on \(W(A)\). We note that all of the terms involved in both sides of this equality are in \(\mathcal{D}^{(m)}_{W(A)}\); indeed, since \(\mathcal{D}^{(m)}_{W(A)}\) is an algebra, this is clear for the product \(F^{-r}(\alpha)\{\partial_{l}\}_{j/p^{r}}\cdot F^{-s}(\beta)\{\partial_{t}\}_{j^{\prime}/p^{s}}\), and it is also clear for the terms \(F^{-s}(\beta F^{s-r}(\alpha))\{\partial_{t}\}_{j^{\prime}/p^{s}}\{\partial_{l}\}_{j/p^{r}}\) and \(F^{-s}(\{\partial_{l}\}_{p^{s}j/p^{r}}(\beta)\cdot F^{s-r}(\alpha))\{\partial_{t}\}_{j^{\prime}/p^{s}}\), therefore it is true for the last term as well.
So, since \(\mathcal{D}^{(m)}_{W(A)}\subset\operatorname{End}_{W(k)}(W(A))\), the equality of (2.6) is in fact an equality of elements of \(\mathcal{D}^{(m)}_{W(A)}\), and so we obtain that the image of \(F^{-r}(\alpha)\{\partial_{l}\}_{j/p^{r}}\cdot F^{-s}(\beta)\{\partial_{t}\}_{j^{\prime}/p^{s}}\) in \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\) is contained in \(\widehat{\mathcal{D}}^{(m)}_{W(A),b}\).
Now, we claim that \(F^{-r}(\alpha)\{\partial_{l}\}_{j/p^{r}}\cdot F^{-s}(\beta)\{\partial_{t}\}_{j^{\prime}/p^{s}}\) is actually contained in \(\widehat{\mathcal{D}}^{(m)}_{W(A),b,s}\). This is obvious in the above sum unless \(r^{\prime}=s\) and \(l=t\). In that case, we have
\[\{\partial_{l}\}_{i/p^{s}}\{\partial_{l}\}_{j^{\prime}/p^{s}}=u\cdot{i+j^{ \prime}\choose i}\{\partial_{l}\}_{(i+j^{\prime})/p^{s}}\]
by Lemma 2.22. Let \(\nu=\operatorname{val}(i+j^{\prime})\). Then we need to show that
\[\operatorname{val}{i+j^{\prime}\choose i}\geq\nu\]
This only has content if \(\nu>0\), so we assume this from now on; let \(\alpha=(i+j^{\prime})/p\). To see this, we consider the element \((T_{1}+T_{2})^{\alpha}\) inside \(\mathbb{Z}/p^{\nu}[T_{1},T_{2}]\). Since \(\operatorname{val}(\alpha)=\nu-1\), we have \((T_{1}+T_{2})^{\alpha}\in W_{\nu}(\mathbb{F}_{p}[T_{1},T_{2}]^{(\nu-1)})\). Therefore \(F(T_{1}+T_{2})^{\alpha}=(T_{1}+T_{2})^{i+j^{\prime}}\) for any lift of Frobenius on \(\mathbb{Z}/p^{\nu}[T_{1},T_{2}]\); choosing \(F\) to be the lift which takes \(T_{i}\to T_{i}^{p}\), we see that the coefficient of \(T_{1}^{a}T_{2}^{b}\) in \((T_{1}+T_{2})^{i+j^{\prime}}\) is zero if either \(a\) or \(b\) is coprime to \(p\). As the coefficient of \(T_{1}^{i}T_{2}^{j^{\prime}}\) is \({i+j^{\prime}\choose i}\), and as \(i\) is coprime to \(p\), we deduce that \({i+j^{\prime}\choose i}\) is zero in \(\mathbb{Z}/p^{\nu}\) as required.
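For example, with \(p=2\), \(i=1\), \(j^{\prime}=3\) we have \(\nu=\operatorname{val}(4)=2\), and indeed \(\binom{4}{1}=4\equiv 0\text{ mod }4\).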
Now the result follows, via additivity, from the remark above that \(\sum_{s=i}^{\infty}\phi_{s}\in\widehat{\mathcal{D}}^{(m)}_{W(A),b,i}\) whenever \(\phi_{s}\in\widehat{\mathcal{D}}^{(m)}_{W(A),b,s}\).
From this we deduce
**Corollary 2.25**.: _Every element of \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\) is contained in \(\widehat{\mathcal{D}}^{(m)}_{W(A),b}\). Further, the ideal \(\widehat{I^{(i)}}\) is equal to \(\widehat{\mathcal{D}}^{(m)}_{W(A),b,i}\)._
Proof.: Since \(W(A)\subset\widehat{\mathcal{D}}^{(m)}_{W(A),b}\) and \(\mathcal{E}W^{(m)}(A)\subset\widehat{\mathcal{D}}^{(m)}_{W(A),b}\), we see that the image of \(\mathcal{D}^{(m)}_{W(A)}\to\widehat{\mathcal{D}}^{(m)}_{W(A)}\) is contained in \(\widehat{\mathcal{D}}^{(m)}_{W(A),b}\). By the last statement of Lemma 2.24, the image of \(I^{(i)}\) is contained in \(\widehat{\mathcal{D}}^{(m)}_{W(A),b,i}\). Since an arbitrary sum \(\sum_{s=0}^{\infty}Q_{s}\) converges if \(Q_{s}\in\widehat{\mathcal{D}}^{(m)}_{W(A),b,s}\), we deduce that \(\widehat{\mathcal{D}}^{(m)}_{W(A)}=\widehat{\mathcal{D}}^{(m)}_{W(A),b}\), with \(\widehat{I^{(i)}}\subset\widehat{\mathcal{D}}^{(m)}_{W(A),b,i}\). But the containment \(\widehat{\mathcal{D}}^{(m)}_{W(A),b,i}\subset\widehat{I^{(i)}}\) is clear from the definitions.
Finally, we need to address uniqueness. It will follow from a generalization of Lemma 2.19; to state this, set, for \(i\in\mathbb{N}\), \(f_{i}:=\operatorname{val}(i!)\), and, for a multi-index \(I\), set \(f_{I}:=\sum_{j=1}^{n}f_{i_{j}}\).
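For example, when \(p=2\) we have \(f_{2}=\operatorname{val}(2!)=1\), \(f_{3}=\operatorname{val}(6)=1\), and \(f_{4}=\operatorname{val}(24)=3\); in general \(f_{i}=(i-s_{p}(i))/(p-1)\) by Legendre's formula, where \(s_{p}(i)\) denotes the sum of the base-\(p\) digits of \(i\).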
**Lemma 2.26**.: _An element_
\[Q=\sum_{I}(\sum_{r=1}^{\infty}\sum_{J_{I}}F^{-r}(\alpha_{J_{I}})\{\partial\}_{J _{I}/p^{r}}+\sum_{K_{I}}\alpha_{K_{I}}\{\partial\}_{K_{I}})\{\partial\}_{p^{m}} ^{I}\]
_(where the notation is as in Theorem 2.17) acts trivially on \(W_{r^{\prime}+1}(A)\) iff we have \(\alpha_{J_{I}}\in V^{r^{\prime}+1-r-f_{I}}(W(A))\) for each \(I\) and \(J_{I}\) (where we take the superscript to be \(0\) if \(r^{\prime}+1-r-f_{I}<0\)), and \(\alpha_{K_{I}}\in V^{r^{\prime}+1}(W(A))\) for all \(K_{I}\)._
Proof.: We consider the operator
\[\sum_{I}(\sum_{r=-m}^{r^{\prime}}\sum_{J_{I}}F^{-r}(\alpha_{J_{I}})\partial^{[ p^{r^{\prime}-r}J_{I}]}+\sum_{K_{I}}\alpha_{K_{I}}\partial^{[p^{r^{\prime}}K_{I}] })(\partial^{[p^{r^{\prime}+m}]})^{I}\]
we note that by our conventions on sums in Theorem 2.17 we have \(\alpha_{J_{I}},\alpha_{K_{I}}\to 0\) as \(I\to\infty\), so for the purposes of considering the action on \(W_{r^{\prime}+1}(A)\) we may regard this as being a finite sum. We claim that
\[((\partial^{[p^{r^{\prime}+m}]})^{I})(W_{r^{\prime}+1}(A^{(r^{\prime})})) \subset V^{f_{I}}(W_{r^{\prime}+1}(A^{(r^{\prime})}))\]
To show this it suffices to show \((\partial_{l}^{[p^{r^{\prime}+m}]})^{i}(W_{r^{\prime}+1}(A^{(r^{\prime})}))\subset p^{f_{i}}(W_{r^{\prime}+1}(A^{(r^{\prime})}))\) for each \(l\) and \(i\). We have, by Lemma 2.22, the relation
\[(\partial_{l}^{[p^{r^{\prime}+m}]})^{i}=up^{f_{i}}\partial_{l}^{[ip^{r^{\prime }+m}]}\]
for an appropriate unit \(u\in\mathbb{Z}_{p}\); the operator \(\partial_{l}^{[ip^{r^{\prime}}]}\) is the canonical lift of the Hasse-Schmidt derivation \(\partial_{l}^{[ip^{r^{\prime}}]}\) and hence preserves \(W_{r^{\prime}+1}(A^{(r^{\prime})})\), so that \((\partial_{l}^{[p^{r^{\prime}}]})^{i}\) must take \(W_{r^{\prime}+1}(A^{(r^{\prime})})\) to \(p^{f_{i}}(W_{r^{\prime}+1}(A^{(r^{\prime})}))\) as required.
Next, we claim that \(F^{-r}(\alpha_{J_{I}})\partial^{[p^{r^{\prime}-r}J_{I}]}(V^{f_{I}}(W_{r^{ \prime}+1}(A^{(r^{\prime})})))=0\) if
\(\alpha_{J_{I}}\in V^{r^{\prime}-r+1-f_{I}}(W(A))\); this follows directly from the definitions as in Lemma 2.19. This, along with the fact that \(\alpha_{K_{I}}\partial^{[p^{r^{\prime}}K_{I}]}(W_{r^{\prime}+1}(A^{(r^{\prime })}))=0\) if
\(\alpha_{K_{I}}\in V^{r^{\prime}+1}(W_{r^{\prime}+1}(A^{(r^{\prime})}))\), gives us the reverse direction of the lemma.
For the forward direction, consider the (finite) set of indices \(\{p^{r^{\prime}-r}J_{I}+p^{r^{\prime}+m}I|\alpha_{J_{I}}\neq 0\}\), union with \(\{p^{r^{\prime}}K_{I}+p^{r^{\prime}+m}I|\alpha_{K_{I}}\neq 0\}\). Suppose the least (in the lexicographic ordering) index in this set has the form \(p^{r^{\prime}-r}J_{I_{1}}+p^{r^{\prime}+m}I_{1}\) (the case where the least element has the form \(p^{r^{\prime}}K_{I}+p^{r^{\prime}+m}I\) is similar, but simpler). Then we have
\[0=\sum_{I}(\sum_{r=-m}^{r^{\prime}}\sum_{J_{I}}F^{-r}(\alpha_{J_{I}})\partial^{[p^{r^{\prime}-r}J_{I}]}+\sum_{K_{I}}\alpha_{K_{I}}\partial^{[p^{r^{\prime}}K_{I}]})(\partial^{[p^{r^{\prime}+m}]})^{I}(p^{r}T^{p^{r^{\prime}-r}J_{I_{1}}+p^{r^{\prime}+m}I_{1}})\]
\[=u\cdot p^{r+f_{I_{1}}}F^{-r}(\alpha_{J_{I_{1}}})\]
for some \(u\) such that \(\operatorname{val}(u)=0\); here we have used that \(\partial^{[p^{r^{\prime}-r}J_{I}]}(\partial^{[p^{r^{\prime}+m}]})^{I}(T^{I^{\prime}})=0\) whenever \(I^{\prime}<p^{r^{\prime}-r}J_{I}+p^{r^{\prime}+m}I\), and that \(\partial^{[p^{r^{\prime}-r}J_{I_{1}}]}T^{p^{r^{\prime}-r}J_{I_{1}}}=1\) while \((\partial^{[p^{r^{\prime}+m}]})^{I_{1}}T^{p^{r^{\prime}+m}I_{1}}=up^{f_{I_{1}}}\). For this to hold we must have \(\alpha_{J_{I_{1}}}\in V^{r^{\prime}+1-r-f_{I_{1}}}(W_{r^{\prime}+1}(A^{(r^{\prime})}))\); this shows, using the direction of the lemma already proved, that \(F^{-r}(\alpha_{J_{I_{1}}})\partial^{[p^{r^{\prime}-r}J_{I_{1}}]}(\partial^{[p^{r^{\prime}+m}]})^{I_{1}}\) acts as zero on \(W_{r^{\prime}+1}(A^{(r^{\prime})})\). Thus we may subtract this element from
\[\sum_{I}\sum_{r=-m}^{r^{\prime}}\sum_{J_{I}}F^{-r}(\alpha_{J_{I}})\partial^{[p^{r^{\prime}-r}J_{I}]}(\partial^{[p^{r^{\prime}+m}]})^{I}\]
and obtain another operator which acts as zero; continuing in this way shows that \(\alpha_{J_{I}}\in V^{r^{\prime}+1-r-f_{I}}(W_{r^{\prime}+1}(A^{(r^{\prime})}))\) for all \(J_{I}\) as required.
Now we put everything together for the
Proof.: (of Theorem 2.17) The existence, for each \(Q\in\widehat{\mathcal{D}}_{W(A)}^{(m)}\), of the claimed representation is given by 2.25. As for the uniqueness, we note that if any element of the form
\[\sum_{I}(\sum_{r=1}^{\infty}\sum_{J_{I}}F^{-r}(\alpha_{J_{I}})\{\partial\}_{J_{I}/p^{r}}+\sum_{K_{I}}\alpha_{K_{I}}\{\partial\}_{K_{I}})\{\partial\}_{p^{m}}^{I}\]
acts as zero on \(W(A)\), then by the previous lemma we have, for each index \(J_{I}\), \(\alpha_{J_{I}}\in V^{r^{\prime}-r+1-f_{I}}(W(A))\) for all \(r^{\prime}\geq 0\); but this implies \(\alpha_{J_{I}}=0\) (and similarly \(\alpha_{K_{I}}=0\)).
The proof actually gave us a little more:
**Corollary 2.27**.: _The natural map \(\widehat{\mathcal{D}}_{W(A)}^{(m)}\to\text{End}_{W(k)}(W(A))\) is injective._
In addition, using the description of \(I^{(1)}\) in 2.25, we obtain
**Corollary 2.28**.: _For any \(m\geq 0\) there is an isomorphism \(\widehat{\mathcal{D}}_{W(A)}^{(m)}/I^{(1)}\tilde{\to}\mathcal{D}_{A}^{(m)}\)._
As well as
**Corollary 2.29**.: _For each \(m\) the obvious map \(\widehat{\mathcal{D}}_{W(A)}^{(m)}\to\widehat{\mathcal{D}}_{W(A)}^{(m+1)}\) is injective. For each \(m\) the natural map \(\widehat{\mathcal{D}}_{W(A)}^{(m)}\to\widehat{\mathcal{D}}_{W(A)}\) is injective._
To close out this section, we would like to record for later use the analogue of Theorem 2.17 when we have a morphism \(\varphi^{\#}:B\to A\) of smooth \(k\)-algebras. By the functoriality of the Witt vectors there is a morphism \(W\varphi^{\#}:W(B)\to W(A)\). Letting \(X=\operatorname{Spec}(A)\) and \(Y=\operatorname{Spec}(B)\), we obtain a morphism of affine formal schemes \(W\varphi:W(X)\to W(Y)\).
**Definition 2.30**.: Let \(\mathcal{D}_{W(X)\to W(Y)}^{(m)}\) be the \((\mathcal{D}_{W(A)},\mathcal{D}_{W(B)})\) bi-submodule of
\(\operatorname{Hom}_{W(k)}(W(B),W(A))\) generated by \(\mathcal{E}W^{(m)}(B;A)\). Let \(\widehat{\mathcal{D}}_{W(X)\to W(Y)}^{(m)}\) be the completion of this bimodule along the filtration
\[F^{l}(\mathcal{D}_{W(X)\to W(Y)}^{(m)})=\{I^{(i)}\mathcal{D}_{W(X)\to W(Y)}^{( m)}I^{(j)}\}_{i+j\geq l}\]
Suppose that \(B\) possesses local coordinates \(\{T_{1},\dots,T_{d}\}\). Then we have
**Corollary 2.31**.: _Every element of \(\widehat{\mathcal{D}}_{W(X)\to W(Y)}^{(m)}\) can be uniquely expressed as_
\[\sum_{I}(\sum_{r=0}^{\infty}\sum_{J_{I}}F^{-r}(\alpha_{J_{I}})\cdot W\varphi^{ \#}(\{\partial\}_{J_{I}/p^{r}})\{\partial\}^{I})\]
_where \(\alpha_{J_{I}}\in W(A)\), \(\{\partial\}_{J_{I}/p^{r}}\) and \(\{\partial\}^{I}\) are in \(\widehat{\mathcal{D}}_{W(B)}\), and the convergence conditions are as in Theorem 2.17. In particular, for any \(m\geq 0\) we have_
\[I^{(1)}\backslash\widehat{\mathcal{D}}_{W(X)\to W(Y)}/I^{(1)}\tilde{\to}A \otimes_{B}\mathcal{D}_{B}^{(m)}\]
_The latter object is the usual transfer bimodule in the theory of \(\mathcal{D}^{(m)}\)-modules._
Proof.: It follows directly from the description of \(\mathcal{E}W^{(m)}(B;A)\) in 2.21 that the set of such sums is contained in \(\widehat{\mathcal{D}}_{W(X)\to W(Y)}\); denote this set by \(\widehat{\mathcal{D}}_{W(X)\to W(Y),b}\) and as above we say that \(Q\in\widehat{\mathcal{D}}_{W(X)\to W(Y),b,i}\) if \(Q\) admits a representation as above so that for each \(r<i\) and each associated \(J_{I}\), we have \(\alpha_{J_{I}}\in V^{i-r}(W(A))\). So, applying Theorem 2.17, we have to show that for any element \(\phi\) of \(\mathcal{E}W^{(0),i}(A)\), we have that \(\phi\circ W\varphi^{\#}\) is contained in \(\widehat{\mathcal{D}}_{W(X)\to W(Y),b,i}\).
Using the local coordinates on \(B\), we see that the argument(s) of Lemma 2.18, Lemma 2.19, and 2.20 carry over to this situation with (essentially) no change, and we conclude that \(\widehat{\mathcal{D}}_{W(X)\to W(Y)}=\widehat{\mathcal{D}}_{W(X)\to W(Y),b}\).
The uniqueness of the representation of an element of \(\widehat{\mathcal{D}}_{W(X)\to W(Y)}\) as a sum follows exactly as in Lemma 2.26, and the last sentence follows immediately from the description and the definition of the ideal \(I^{(1)}\).
To finish out this section we show how to turn \(\widehat{\mathcal{D}}_{W(A)}\) into a sheaf on the etale site of \(X=\operatorname{Spec}(A)\). This immediately leads to a definition of \(\widehat{\mathcal{D}}_{W(X)}\) for any smooth \(X\) over \(k\). We need to show:
**Proposition 2.32**.: _1) Let \(\psi^{\#}:B\to A\) be an etale morphism of smooth \(k\)-algebras, where \(B\) admits local coordinates. Then there is an isomorphism_
\[W(A)\widehat{\otimes}_{W(B)}\widehat{\mathcal{D}}_{W(B)}\widetilde{\to} \widehat{\mathcal{D}}_{W(A)}\]
_where on the left hand side the \(\widehat{\otimes}\) denotes completion with respect to the filtration \(V^{i}(W(A))\). In particular, we have for each \(r\geq 0\)_
\[W_{r+1}(A)\otimes_{W_{r+1}(B)}\mathcal{D}_{W_{r+1}(B)}\overset{\sim}{\to} \mathcal{D}_{W_{r+1}(A)}\]
_2) Let \(\varphi^{\#}:C\to B\) be any morphism, and \(\psi^{\#}:B\to A\) as above. Letting \(X=\operatorname{Spec}(B)\), \(Y=\operatorname{Spec}(C)\), and \(U=\operatorname{Spec}(A)\), we obtain morphisms of affine formal schemes \(W\varphi:W(X)\to W(Y)\) and \(W\psi:W(U)\to W(X)\), as well as the composition \(W(U)\to W(Y)\). Then there is an isomorphism_
\[W(A)\widehat{\otimes}_{W(B)}\widehat{\mathcal{D}}_{W(X)\to W(Y)}\overset{\sim }{\to}\widehat{\mathcal{D}}_{W(U)\to W(Y)}\]
Proof.: 1) As HS derivations extend canonically over etale morphisms, one sees easily that there is a morphism \(\mathcal{E}W_{B}\to\mathcal{E}W_{A}\) which extends to a morphism \(\widehat{\mathcal{D}}_{W(B)}\to\widehat{\mathcal{D}}_{W(A)}\); extending this by linearity and completing gives the morphism of the statement. The fact that it is an isomorphism follows directly from Theorem 2.17 for both \(\widehat{\mathcal{D}}_{W(A)}\) and \(\widehat{\mathcal{D}}_{W(B)}\); noting the fact that
\[W(A)\otimes_{W(B)}F^{-r}(W(B))\overset{\sim}{\to}F^{-r}(W(A))\]
where \(F^{-r}(W(B))\) and \(F^{-r}(W(A))\) are regarded as sub-modules of \(W(B)[p^{-1}]\) and \(W(A)[p^{-1}]\), respectively; this fact, in turn, can be easily checked using local coordinates for \(B\) and the fact that \(A\) is etale over \(B\). Now the last sentence follows from the identification of \(\widehat{I^{(r+1)}}\) with \(\widehat{\mathcal{D}}_{W(A),b,r+1}\).
2) This follows exactly as in 1) but using 2.31 instead of Theorem 2.17.
This leads to the
**Definition 2.33**.: 1) Let \(X\) be a smooth scheme over \(k\). We define \(\widehat{\mathcal{D}}_{W(X)}\) as the unique sheaf of rings (in the Zariski topology of \(X\)) which assigns to each open affine \(\operatorname{Spec}(A)\subset X\) the ring \(\widehat{\mathcal{D}}_{W(A)}\). By the previous proposition this is an inverse limit of quasicoherent sheaves on \(\{W_{r+1}(X)\}\).
2) Let \(\varphi:X\to Y\) a morphism of smooth schemes. We define \(\widehat{\mathcal{D}}_{W(X)\to W(Y)}\) as the unique sheaf of \((\widehat{\mathcal{D}}_{W(X)},W\varphi^{-1}(\widehat{\mathcal{D}}_{W(Y)}))\) bimodules which assigns, to each open affine \(\operatorname{Spec}(A)\subset X\) such that \(\varphi(\operatorname{Spec}(A))\subset\operatorname{Spec}(B)\subset Y\), the bimodule \(\widehat{\mathcal{D}}_{W(\operatorname{Spec}(A))\to W(\operatorname{Spec}(B))}\). This is an inverse limit of quasicoherent sheaves on \(\{W_{r+1}(X)\}\).
### Complement: The algebra \(\widehat{\mathcal{D}}^{(0)}_{W(X),\operatorname{crys}}\)
In this subsection we wish to discuss the analogue of the above results and constructions for a slightly different algebra, which will turn out to be a completion of \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\). As above we start in the case where \(X=\operatorname{Spec}(A)\).
**Definition 2.34**.: The operator filtration on \(\widehat{\mathcal{D}}^{(0)}_{W(A)}\) is the filtration defined by
\[F^{i}(\widehat{\mathcal{D}}^{(0)}_{W(A)})=\{P\in\widehat{\mathcal{D}}^{(0)}_{W(A)}|P(W(A))\subset V^{i}(W(A))\}\]
This is a filtration by two-sided ideals, and we denote the completion along it by \(\widehat{\mathcal{D}}^{(0)}_{W(A),\operatorname{crys}}\).
There is a description of \(\widehat{\mathcal{D}}^{(0)}_{W(A),\operatorname{crys}}\) in local coordinates:
**Proposition 2.35**.: _Suppose \(A\) admits local coordinates, and let the notation be as in Theorem 2.17. Then an element_
\[\sum_{I}(\sum_{r=0}^{\infty}\sum_{J_{I}}F^{-r}(\alpha_{J_{I}})\{\partial\}_{J _{I}/p^{r}})\{\partial\}^{I}\in\widehat{\mathcal{D}}^{(0)}_{W(A)}\]
_is contained in \(F^{i}(\widehat{\mathcal{D}}^{(0)}_{W(A)})\) iff, whenever \(f_{I}<i\) and \(r<i\), we have \(\alpha_{J_{I}}\in V^{i-(r+f_{I})}(W(A))\) (where we set \(V^{i-(r+f_{I})}(W(A))=W(A)\) for \(i-(r+f_{I})<0\))._
_Therefore, every element of \(\widehat{\mathcal{D}}^{(0)}_{W(A),\operatorname{crys}}\) has a unique expression_
\[\sum_{I}(\sum_{r=0}^{\infty}\sum_{J_{I}}F^{-r}(\alpha_{J_{I}})\{\partial\}_{J _{I}/p^{r}})\{\partial\}^{I}\]
_where \(I=(i_{1},\dots,i_{n})\in\mathbb{N}^{n}\), \(\{\partial\}^{I}:=\{\partial_{1}\}_{1}^{i_{1}}\cdots\{\partial_{n}\}_{1}^{i_{n}}\), and \(J_{I}=(j_{1},\dots,j_{n})\) satisfies \(0\leq j_{l}<p^{r}\) for each \(j_{l}\) and \((p,j_{l})=1\) for at least one \(l\); unless \(r=0\) and \(J_{I}=0\)._
Proof.: This follows from Lemma 2.26 above.
Exactly as above, this implies that the sheafification \(\widehat{\mathcal{D}}^{(0)}_{W(X),\operatorname{crys}}\) has the property that \(\Gamma(U,\widehat{\mathcal{D}}^{(0)}_{W(X),\operatorname{crys}})=\widehat{ \mathcal{D}}^{(0)}_{W(A),\operatorname{crys}}\) whenever \(U=\operatorname{Spec}(A)\) has local coordinates. We also see that
**Corollary 2.36**.: _The completion map \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\to\widehat{\mathcal{D}}^{(0)}_{W(X), \operatorname{crys}}\) is injective._
## 3. Accessibility
In this chapter we will define the main category of interest in this paper, the category of accessible modules over \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\) (here, \(m\geq 0\)). In particular, we shall prove Theorems 1.5, 1.6, and 1.7, as well as several other related results. To get things off the ground, we need to define and study the fundamental bimodule \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\), where \(\mathcal{A}\) is, as usual, a smooth \(W(k)\)-algebra lifting \(A\) which admits local coordinates. We let \(F:\mathcal{A}\to\mathcal{A}\) be some coordinatized lift of Frobenius and \(\Phi:\mathcal{A}\to W(A)\) the associated map to Witt vectors.
**Lemma 3.1**.: _1) The action map \(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\to\mathcal{E}nd_{W(k)}(\mathcal{A})\) is injective._
_2) Let \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) denote the completion of \(W(A)\otimes_{\mathcal{A}}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) along the filtration \(V^{i}(W(A))\otimes_{\mathcal{A}}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\). Then there is an embedding_
\[\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\to\mathcal{H}om_{W(k)}( \mathcal{A},W(A))\]
_which takes \(1\otimes 1\) to \(\Phi\). This embedding preserves the natural right \(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\)-module structures on both sides._
Proof.: 1) This is a well-known fact, whose proof boils down to a (vastly) simplified version of Lemma 2.26.
2) There is a map \(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\to\mathcal{H}om_{W(k)}(\mathcal{A}, W(A))\) given by taking an element \(Q\in\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) to \(\Phi\circ Q:\mathcal{A}\to W(A)\) (where we regard \(Q\) as acting on \(\mathcal{A}\)). Since \(\Phi\) is injective this map is injective also; in addition, the right action of \(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) on itself clearly corresponds to the right action of \(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) on \(\mathcal{H}om_{W(k)}(\mathcal{A},W(A))\) through the action on \(\mathcal{A}\). Further, the (left) action of \(W(A)\) on itself gives \(\mathcal{H}om_{W(k)}(\mathcal{A},W(A))\) the structure of a \(W(A)\)-module, so that there is an induced map
\[\iota:W(A)\otimes_{\mathcal{A}}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\to \mathcal{H}om_{W(k)}(\mathcal{A},W(A))\]
and, for elements \(a_{i}\in V^{i}(W(A))\otimes_{\mathcal{A}}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\), the map \(\iota(a_{i})\) has image in \(V^{i}(W(A))\), so that the sum \(\sum_{i}\iota(a_{i})\) is well-defined in \(\mathcal{H}om_{W(k)}(\mathcal{A},W(A))\). Thus there is an induced map, which we also call \(\iota\), \(\iota:\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\to\mathcal{H}om_{W(k)}(\mathcal{A},W(A))\), which clearly preserves the right \(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\)-module structures.
To show that this map is injective, let \(\{T_{1},\ldots,T_{n}\}\) be local coordinates on \(\mathcal{A}\). Then any element of \(W(A)\) can be written uniquely as \(\sum_{r=0}^{\infty}\sum_{I}V^{r}(T^{I})\cdot\Phi(a_{I})\) where, for any \(r\), \(I=(i_{1},\ldots,i_{n})\) ranges over elements of \(\mathbb{N}^{n}\) such that \(0\leq i_{j}<p^{r}\) and \((p,i_{j})=1\) for at least one index \(j\); and \(a_{I}\in\mathcal{A}\). Thus any element \(Q\in\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) can be written uniquely as \(\sum_{r=0}^{\infty}\sum_{I}V^{r}(T^{I})\cdot Q_{I}\) where \(Q_{I}\in\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) and \(I\) satisfies the same conditions as above. But such an element acts as \(0\) on \(\mathcal{A}\) iff each \(Q_{I}\) acts as zero on \(\mathcal{A}\), which implies that \(Q=0\); therefore \(\iota\) is injective as claimed.
The natural action of \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\) on \(W(A)\) endows the module \(\mathcal{H}om_{W(k)}(\mathcal{A},W(A))\) with a left action by \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\).
**Proposition 3.2**.: _The submodule \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) of \(\mathcal{H}om_{W(k)}(\mathcal{A},W(A))\) is preserved under the action of \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\). Thus \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) has the structure of a \((\widehat{\mathcal{D}}^{(m)}_{W(A)},\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})\)-bimodule. For any \(m\geq 0\), the map \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\to\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) which takes \(1\to\Phi\) is onto._
Proof.: Let us consider the first statement. We shall suppose that \(m=0\), the other cases being extremely similar. We begin by considering an element of the form \(\{\partial\}_{J/p^{r}}\), where \(r\geq 0\) and \(0\leq j_{i}\leq p^{r}\) for all \(i\in\{1,\ldots,n\}\). We shall show that \(\{\partial\}_{J/p^{r}}\circ\Phi\in\operatorname{Hom}_{W(k)}(\mathcal{A},W(A))\) agrees with an element of \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\).
The map \(\{\partial\}_{J/p^{r}}\circ\Phi\) induces, for each \(r^{\prime}\geq r\), a map \(\mathcal{A}/p^{r^{\prime}+1}\to W_{r^{\prime}+1}(A)\). Further, \(\mathcal{A}/p^{r^{\prime}+1}\) is etale over a polynomial ring \(W_{r^{\prime}+1}(k)[T_{1},\ldots,T_{n}]\). Therefore we
must compute the action of \(\{\partial\}_{J/p^{r}}\) on a monomial \(T_{1}^{k_{1}}\cdots T_{n}^{k_{n}}\in W_{r^{\prime}+1}(k)[T_{1},\ldots,T_{n}]\). We have
\[\{\partial\}_{J/p^{r}}(T_{1}^{k_{1}}\cdots T_{n}^{k_{n}})=\prod_{i=1}^{n}\binom{k_{i}p^{r^{\prime}}}{j_{i}p^{r^{\prime}-r}}\cdot T_{i}^{k_{i}-j_{i}/p^{r}}\in W_{r^{\prime}+1}(k[T_{1},\ldots,T_{n}])\]
When \(j_{i}\neq 0\) we have the identity
\[\binom{k_{i}p^{r^{\prime}}}{j_{i}p^{r^{\prime}-r}}=\prod_{l=1}^{j_{i}p^{r^{\prime}-r}}\frac{k_{i}p^{r^{\prime}}-j_{i}p^{r^{\prime}-r}+l}{l}=\prod_{l=1}^{j_{i}^{\prime}p^{r^{\prime}-r+v_{i}}}\frac{k_{i}p^{r^{\prime}}-j_{i}^{\prime}p^{r^{\prime}-r+v_{i}}+l}{l}\]
where we have written \(j_{i}=j_{i}^{\prime}p^{v_{i}}\) where \(v_{i}=\operatorname{val}(j_{i})\). Our aim is to show that this expression is a polynomial in \(k_{i}\), with coefficients in \(\mathbb{Z}/p^{r^{\prime}+1}\). If \(j_{i}=p^{r}\), then we have
\[\binom{k_{i}p^{r^{\prime}}}{j_{i}p^{r^{\prime}-r}}=\binom{k_{i}p^{r^{\prime}} }{p^{r^{\prime}}}=\prod_{l=1}^{p^{r^{\prime}}}\frac{(k_{i}-1)p^{r^{\prime}}+l }{l}\]
\[=k_{i}\cdot\prod_{l=1}^{p^{r^{\prime}}-1}((k_{i}-1)\frac{p^{r^{\prime}}}{l}+1)\]
which is evidently a polynomial in \(k_{i}\), with coefficients in \(\mathbb{Z}/p^{r^{\prime}+1}\). We remark that if \(q_{i,r^{\prime}}(t)\) denotes this polynomial, then \(t|q_{i,r^{\prime}}(t)\).
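To illustrate with small numbers: take \(p=3\) and \(r^{\prime}=1\), so that \(\binom{3k}{3}=k\cdot(3(k-1)+1)\cdot(\tfrac{3(k-1)}{2}+1)\), i.e. \(q_{i,1}(t)=5t(3t-2)(3t-1)\in\mathbb{Z}/9[t]\) (using \(2^{-1}=5\) in \(\mathbb{Z}/9\)); for \(k=2\) both sides give \(\binom{6}{3}=20\equiv 2\) mod \(9\), and visibly \(t|q_{i,1}(t)\).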
Next, if \(j_{i}<p^{r}\) then \(v_{i}<r\) and we have
\[\frac{k_{i}p^{r^{\prime}}-j_{i}^{\prime}p^{r^{\prime}-r+v_{i}}+l}{l}=\frac{p^{r^{\prime}-r+v_{i}}(k_{i}p^{r-v_{i}}-j_{i}^{\prime})+l}{l} \tag{3.1}\]
and since \(\operatorname{val}(j_{i}^{\prime})=0\) we have \(\operatorname{val}(k_{i}p^{r-v_{i}}-j_{i}^{\prime})=0\) and so, if \(\operatorname{val}(l)\leq r^{\prime}-r+v_{i}\) we have
\[\frac{p^{r^{\prime}-r+v_{i}}(k_{i}p^{r-v_{i}}-j_{i}^{\prime})+l}{l}=\frac{p^{r^{\prime}-r+v_{i}}}{l}(k_{i}p^{r-v_{i}}-j_{i}^{\prime})+1\in\mathbb{Z}/p^{r^{\prime}+1}\]
However, if \(\operatorname{val}(l)>r^{\prime}-r+v_{i}\) then the term (3.1), considered as a element of \(\mathbb{Q}\), is not in \(\mathbb{Z}_{p}\). For each such \(l\) we define \(l_{j^{\prime}}:=j^{\prime}p^{r^{\prime}-r+v_{i}}-l\); since \(1\leq l\leq j^{\prime}p^{r^{\prime}-r+v_{i}}\) the same is true of \(l_{j^{\prime}}\), and we have \(\operatorname{val}(l_{j^{\prime}})=r^{\prime}-r+v_{i}\). We have that
\[\frac{k_{i}p^{r^{\prime}}-j_{i}^{\prime}p^{r^{\prime}-r+v_{i}}+l}{l}\cdot\frac{k_{i}p^{r^{\prime}}-j_{i}^{\prime}p^{r^{\prime}-r+v_{i}}+l_{j^{\prime}}}{l_{j^{\prime}}}\]
\[=\frac{k_{i}p^{r^{\prime}}-l_{j^{\prime}}}{l_{j^{\prime}}}\cdot\frac{k_{i}p^{r^ {\prime}}-l}{l}=(k_{i}\frac{p^{r^{\prime}}}{l_{j^{\prime}}}-1)(k_{i}\frac{p^{r^ {\prime}}}{l}-1)\]
So, in sum, we have
\[\binom{k_{i}p^{r^{\prime}}}{j_{i}p^{r^{\prime}-r}}=\prod(\frac{p^{r^{\prime}-r+v_{i}}}{l}(k_{i}p^{r-v_{i}}-j_{i}^{\prime})+1)\cdot\prod_{\{l|\operatorname{val}(l)>r^{\prime}-r+v_{i}\}}(k_{i}\frac{p^{r^{\prime}}}{l_{j^{\prime}}}-1)(k_{i}\frac{p^{r^{\prime}}}{l}-1)\]
where the first product ranges over \(l\in\{1,\ldots,j_{i}^{\prime}p^{r^{\prime}-r+v_{i}}\}\) such that \(\operatorname{val}(l)\leq r^{\prime}-r+v_{i}\) and \(l\) is not of the form \(l_{j^{\prime}}\). This expression is evidently a polynomial in \(k_{i}\), with coefficients in \(\mathbb{Z}/p^{r^{\prime}+1}\); as above we denote this polynomial by \(q_{i,r^{\prime}}(t)\). When \(l=j_{i}^{\prime}p^{r^{\prime}-r+v_{i}}\) we get the term \(k_{i}\frac{p^{r-v_{i}}}{j_{i}^{\prime}}\), so that again \(t|q_{i,r^{\prime}}(t)\). By inspection one sees that the image of \(q_{i,r^{\prime}}(t)\) in \(\mathbb{Z}/p^{r^{\prime}}[t]\) is \(q_{i,r^{\prime}-1}(t)\).
Now we note that, for any polynomial \(q^{\prime}(t)\in\mathbb{Z}/p^{r^{\prime}+1}[t]\), and for a fixed index \(i\in\{1,\ldots n\}\), the endomorphism of \(W_{r^{\prime}+1}(k)[T_{1},\ldots,T_{n}]\) given by
\[\prod_{j=1}^{n}T_{j}^{k_{j}}\to k_{i}q^{\prime}(k_{i})\prod_{j\neq i}T_{j}^{k_ {j}}\cdot T_{i}^{k_{i}-1}\]
is given by the action of an operator in \(D^{(0)}_{\mathcal{A}/p^{r^{\prime}+1}}\); indeed we have that \(\frac{\partial}{\partial T_{i}}\cdot q^{\prime}(T_{i}\frac{\partial}{\partial T _{i}})(\prod_{j=1}^{n}T_{j}^{k_{j}})=k_{i}q^{\prime}(k_{i})\cdot\prod_{j\neq i }T_{j}^{k_{j}}\cdot T_{i}^{k_{i}-1}\). So we have
\[\{\partial\}_{J/p^{r}}(T_{1}^{k_{1}}\cdots T_{n}^{k_{n}})=(\prod_{\{i|j_{i} \neq 0\}}p^{r-v_{i}}T_{i}^{1-j_{i}/p^{r}}\cdot q_{i,r^{\prime}}(k_{i}))(T_{1 }^{k_{1}-\epsilon_{1}}\cdots T_{n}^{k_{n}-\epsilon_{n}})\]
(where \(\epsilon_{i}=1\) if \(j_{i}\neq 0\) and \(\epsilon_{i}=0\) if \(j_{i}=0\)), which, since \(t|q_{i,r^{\prime}}(t)\), is given by the application of an operator
\[\prod_{\{i|j_{i}\neq 0\}}p^{r-v_{i}}T_{i}^{1-j_{i}/p^{r}}\cdot Q_{i,r^{\prime}}\]
for appropriate \(Q_{i,r^{\prime}}\in D^{(0)}_{\mathcal{A}/p^{r^{\prime}+1}}\); since \(\mathcal{A}/p^{r^{\prime}+1}\) is etale over \(W_{r^{\prime}+1}(k)[T_{1},\ldots,T_{n}]\) and both \(\{\partial\}_{J/p^{r}}\) and \(\prod_{\{i|j_{i}\neq 0\}}p^{r-v_{i}}T_{i}^{1-j_{i}/p^{r}}\cdot Q_{i,r^{ \prime}}\) are differential operators, we deduce that they are equal on all of \(\mathcal{A}/p^{r^{\prime}+1}\). Furthermore the image of \(Q_{i,r^{\prime}}\) in \(D^{(0)}_{\mathcal{A}/p^{r^{\prime}}}\) is \(Q_{i,r^{\prime}-1}\) since this is true for the polynomials \(q_{i,r^{\prime}}\). Therefore, we have that the operator
\[\prod_{\{i|j_{i}\neq 0\}}p^{r-v_{i}}T_{i}^{1-j_{i}/p^{r}}\cdot Q_{i}=\lim_{r ^{\prime}\to\infty}\prod_{\{i|j_{i}\neq 0\}}p^{r-v_{i}}T_{i}^{1-j_{i}/p^{r}} \cdot Q_{i,r^{\prime}}\]
acts as \(\{\partial\}_{J/p^{r}}\) on \(\mathcal{A}\), which shows that \(\{\partial\}_{J/p^{r}}\circ\Phi\) agrees with an element of \(\Phi^{*}\mathcal{D}^{(0)}_{\mathcal{A}}\).
Now suppose, in addition, that \(J=(j_{1},\ldots,j_{n})\) has the property that at least one \(j_{i}\), say \(j_{i^{\prime}}\), satisfies \(\operatorname{val}(j_{i^{\prime}})=0\). Then we may consider the operator \(F^{-r}(\alpha)\cdot\{\partial\}_{J/p^{r}}\), and the previous discussion shows that this operator acts as
\[V^{r}(\alpha T^{p^{r}-j_{i^{\prime}}})Q_{i^{\prime}}\cdot\prod_{i\neq i^{ \prime}}p^{r-v_{i}}T_{i}^{1-j_{i}/p^{r}}\cdot Q_{i}\in\Phi^{*}\mathcal{D}^{(0)} _{\mathcal{A}}\]
This implies further that, for any \(Q\in\mathcal{D}^{(0)}_{\mathcal{A}}\), the operator \(F^{-r}(\alpha)\{\partial\}_{J/p^{r}}\circ\Phi\circ Q\) agrees with \(V^{r}(\alpha T^{p^{r}-j_{i^{\prime}}})Q_{i^{\prime}}\cdot\prod_{i\neq i^{ \prime}}p^{r-v_{i}}T_{i}^{1-j_{i}/p^{r}}\cdot Q_{i}\cdot Q\), as an operator on \(\mathcal{A}\), and is therefore also contained in \(\Phi^{*}\mathcal{D}^{(0)}_{\mathcal{A}}\). Thus we see that, for any sum \(\sum_{r=1}^{\infty}F^{-r}(\alpha)\cdot\{\partial\}_{J/p^{r}}\), we have
\[\sum_{r=1}^{\infty}F^{-r}(\alpha)\cdot\{\partial\}_{J/p^{r}}\circ\Phi\circ Q= \sum_{r=1}^{\infty}V^{r}(\alpha T^{p^{r}-j_{i^{\prime}}})Q_{i^{\prime}}\cdot \prod_{i\neq i^{\prime}}p^{r-v_{i}}T_{i}^{1-j_{i}/p^{r}}\cdot Q_{i}\cdot Q\]
which is a convergent sum in \(\Phi^{*}\mathcal{D}^{(0)}_{\mathcal{A}}\).
Next consider any element of the form \(\{\partial\}^{I}=\{\partial_{1}\}^{i_{1}}\cdots\{\partial_{n}\}^{i_{n}}\). Suppose, inductively, that \(\{\partial_{1}\}^{i_{1}-1}\cdots\{\partial_{n}\}^{i_{n}}\circ\Phi\in\Phi^{*} \mathcal{D}^{(0)}_{\mathcal{A}}\), and write
\[\{\partial_{1}\}^{i_{1}-1}\cdots\{\partial_{n}\}^{i_{n}}\circ\Phi=\sum_{r=0}^{ \infty}V^{r}(\alpha_{r})\cdot\Phi\circ Q_{r}\]
for \(Q_{r}\in\mathcal{D}^{(0)}_{\mathcal{A}}\). Then
\[\{\partial_{1}\}\sum_{r=0}^{\infty}V^{r}(\alpha_{r})\cdot\Phi\circ Q_{r}=\sum_{r=0}^{\infty}[\{\partial_{1}\},V^{r}(\alpha_{r})]\cdot\Phi\circ Q_{r}+\sum_{r=0}^{\infty}V^{r}(\alpha_{r})\{\partial_{1}\}\circ\Phi\circ Q_{r}\]
and since \([\{\partial_{1}\},V^{r}(\alpha_{r})]=\sum_{l=1}^{\infty}\sum_{j\leq p^{l}}F^{-l}(\beta_{jl})\{\partial\}_{j/p^{l}}\) (as in the proof of (2.5)), the previous remarks imply that \(\{\partial\}^{I}\circ\Phi\in\Phi^{*}\mathcal{D}^{(0)}_{\mathcal{A}}\). As above this implies directly that \(\{\partial\}^{I}\circ\Phi\circ Q\in\Phi^{*}\mathcal{D}^{(0)}_{\mathcal{A}}\) for all \(Q\in\mathcal{D}^{(0)}_{\mathcal{A}}\), and then, using the representation of an element of \(\widehat{\mathcal{D}}^{(0)}_{W(A)}\) as in Theorem 2.17, the first result follows.
Now we consider the second statement, again when \(m=0\) (\(m>0\) is essentially identical). We may, by the fact that \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) is \(p\)-adically complete, prove the analogous fact for the reduction \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}/p\); let us call this object \(\Phi^{*}\mathcal{D}^{(0)}_{A}\). We have, for each \(1\leq i\leq n\), that \(\{\partial_{i}\}\cdot(1\otimes 1)=1\otimes\partial_{i}\) inside \(\Phi^{*}\mathcal{D}^{(0)}_{A}\) (this follows directly from the above description of the action of \(\{\partial_{i}\}\)). So we see that the image of \(\widehat{\mathcal{D}}^{(0)}_{W(A)}\) acting on \(1\otimes 1\in\Phi^{*}\mathcal{D}^{(0)}_{A}\) contains every element of the form \(\alpha\otimes P\) for \(P\in\mathcal{D}^{(0)}_{\mathcal{A}}\). As every element of \(\Phi^{*}\mathcal{D}^{(0)}_{A}\) is a (convergent) sum of such elements, the result follows.
Let us note here that this bimodule sheafifies well:
**Lemma 3.3**.: _Let \(\mathcal{A}\) and \(\Phi:\mathcal{A}\to W(A)\) be as above, and let \(X=\text{Spec}(A)\), \(\mathfrak{X}=\text{Spec}(\mathcal{A})\). For each affine open \(U\subset X\) we obtain, by complete localization of the map \(\Phi\), a map \(\Phi:\mathcal{O}_{\mathfrak{X}}(U)\to W(\mathcal{O}_{X}(U))\), and, therefore, a bimodule \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{O}_{\mathfrak{X}}(U)}\); these glue to a sheaf \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}\). Thus, regarding \(\Phi\) as a morphism \(\mathcal{O}_{\mathfrak{X}}\to W(\mathcal{O}_{X})\), this sheaf is the sheaf of bi-sub-modules of \(\mathcal{H}om_{W(k)}(\mathcal{O}_{\mathfrak{X}},\mathcal{O}_{W(X)})\) locally generated by \(\Phi\). It is also the inverse limit of the quasicoherent sheaves \((W_{r}(A)\otimes_{\mathcal{A}}\mathcal{D}^{(0)}_{\mathcal{A}_{r}})^{\sim}\) on \(\text{Spec}(W_{r}(A))\), and hence quasicoherent on \(W(X)\)._
Proof.: Everything is clear from the above except, perhaps, the quasicoherence statement. For that, recall that, for \(\{\mathcal{F}_{n}\}\) a surjective system of quasicoherent sheaves and \(U\subset X\) any open affine, we have \(\lim_{n}\mathcal{F}_{n}(U)\tilde{=}(\lim_{n}\mathcal{F}_{n})(U)\) (by, e.g., [28], lemma 1.1.6). But for each such \(U\) the theorem above implies
\[\lim_{r}(W_{r}(A)\otimes_{\mathcal{A}}\mathcal{D}^{(0)}_{\mathcal{A}_{r}})^{\sim}(U)\tilde{=}\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{O}_{\mathfrak{X}}(U)}\]
and the result follows.
Before proceeding, we would like to give the crystalline version of these results. Fix \(r\geq 1\), and consider \(\mathfrak{X}_{r}\) a smooth scheme over \(W_{r}(k)\) with special fibre \(X\). We recall that the differential operators \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\) possess a two-sided ideal \(\mathcal{I}_{r}\) consisting of sections \(P\) which annihilate \(\mathcal{O}_{\mathfrak{X}_{r}}\). If \(\mathfrak{X}_{r}\) is affine and possesses local coordinates \(\{T_{1},\dots,T_{n}\}\), then this ideal is generated by sections of the form \(\{p^{j}\partial_{i}^{p^{r-j}}\}\) for
\(1\leq i\leq n\) and \(0\leq j\leq r-1\). The completion of \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\) along the powers of \(\mathcal{I}_{r}\) is denoted \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r},\text{crys}}\); if \(\mathfrak{X}_{r}=\text{Spec}(\mathcal{A}_{r})\) then we denote \(\mathcal{D}^{(0)}_{\mathcal{A}_{r},\text{crys}}\) the global sections of \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r},\text{crys}}\). Taking the inverse limit over \(r\), we obtain the algebra \(\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{crys}}\), which is a further completion of \(\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\); similarly we have the sheaf \(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X},\text{crys}}\). One can show without much difficulty that elements of this ring have unique expressions of the form
\[\sum_{I}a_{I}\partial^{I}\]
where \(a_{I}\in\mathcal{A}\). Since \(\partial^{I}\to 0\) as \(I\to\infty\) as operators on \(\mathcal{A}\), this algebra acts on \(\mathcal{A}\), and the action is faithful.
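Indeed, \(\partial^{j}(T^{k})=j!\binom{k}{j}T^{k-j}\) and \(\operatorname{val}_{p}(j!)\to\infty\); so, modulo any fixed power of \(p\), all but finitely many of the \(\partial^{I}\) act as zero, and sums of the displayed form converge as operators on \(\mathcal{A}\).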
Now, repeating the proof of the above theorems essentially verbatim, we obtain
**Theorem 3.4**.: _1) Let \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{crys}}\) denote the completion of \(W(A)\otimes_{\mathcal{A}}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{crys}}\) along the filtration \(V^{i}(W(A))\otimes_{\mathcal{A}}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}, \text{crys}}\). Then there is an embedding_
\[\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{crys}}\to\mathcal{H}om_ {W(k)}(\mathcal{A},W(A))\]
_which takes \(1\otimes 1\) to \(\Phi\). This embedding preserves the natural right \(\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{crys}}\)-module structures on both sides._
_2) The submodule \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{crys}}\) of \(\mathcal{H}om_{W(k)}(\mathcal{A},W(A))\) is preserved under the action of \(\widehat{\mathcal{D}}^{(0)}_{W(A)}\). Thus \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{crys}}\) has the structure of a \((\widehat{\mathcal{D}}^{(0)}_{W(A)},\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}, \text{crys}})\)-bimodule._
_3) Let \(X=\text{Spec}(A)\), \(\mathfrak{X}=\text{Spec}(\mathcal{A})\). Then there is a \((\widehat{\mathcal{D}}^{(0)}_{W(X)},\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X},\text{crys}})\)-bimodule \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X},\text{crys}}\), which is an inverse limit of quasicoherent sheaves on \(W(X)\)._
Now we want to compare two bimodules \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) and \(\Psi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) for two morphisms \(\Phi,\Psi:\mathcal{A}\to W(A)\) coming from two lifts of Frobenius on \(\mathcal{A}\). In fact, assuming \(p>2\) the situation is as nice as possible. We begin by showing
**Lemma 3.5**.: _Let \(\Phi\) and \(\Psi\) be two morphisms \(\mathcal{A}\to W(A)\), coming from coordinatized lifts of Frobenius on \(\mathcal{A}\). Make \(W(A)\) into an \(\mathcal{A}\)-module via \(\Phi\). Then, for any \(n\geq 1\), the map \(\delta_{n}:\mathcal{A}/p^{n}\to W(A)/p^{n}\), given by the reduction mod \(p^{n}\) of \(\delta=\Phi-\Psi\), is a differential operator of degree \(\leq n\) over \(\mathcal{A}\)._
Proof.: We start by noting the formula
\[\delta(ab)=a\delta(b)+b\delta(a)-\delta(a)\delta(b)\]
for any \(a,b\in\mathcal{A}\). Indeed, we have
\[\delta(ab)=\Phi(ab)-\Psi(ab)=\Phi(a)\Phi(b)-\Psi(a)\Psi(b)\]
\[=\Phi(a)\Phi(b)-\Phi(a)\Psi(b)+\Phi(a)\Psi(b)-\Psi(a)\Psi(b)\]
\[=\Phi(a)\delta(b)+\delta(a)\Psi(b)=\Phi(a)\delta(b)+\delta(a)(-\delta(b)+ \Phi(b))\]
\[=a\delta(b)+b\delta(a)-\delta(a)\delta(b)\]
as claimed. Now, the map \(\delta\) takes values in the ideal \(V(W(A))\), as the reduction mod \(V(W(A))\) of both \(\Phi\) and \(\Psi\) is, by construction, the natural quotient map \(\mathcal{A}\to A\). So, after reduction to \(W(A)/p\) we have \(\delta(a)\delta(b)=0\), since \(V(W(A))\cdot V(W(A))\subseteq pW(A)\); in other words, \(\delta_{1}:A\to W(A)/p\) is a derivation.
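For example, \(\delta(a^{2})=\Phi(a)^{2}-\Psi(a)^{2}=\delta(a)(\Phi(a)+\Psi(a))=2a\delta(a)-\delta(a)^{2}\) (with \(a\) acting on \(W(A)\) through \(\Phi\)); reducing mod \(p\), this becomes the Leibniz rule for \(\delta_{1}\) on squares.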
Now we suppose \(n\geq 2\). For any \(a\in\mathcal{A}/p^{n}\) we have the operator \(\delta_{n,a}:\mathcal{A}/p^{n}\to W(A)/p^{n}\) defined by \(\delta_{n,a}(b)=\delta_{n}(ab)-a\cdot\delta_{n}(b)\). Iterating, we obtain for any
sequence \((a_{1},\ldots a_{r})\) in \(\mathcal{A}/p^{n}\) an operator \(\delta_{n,a_{1},\ldots,a_{r}}:\mathcal{A}/p^{n}\to W(A)/p^{n}\) defined inductively by
\[\delta_{n,a_{1},\ldots,a_{r}}(b)=\delta_{n,a_{1},\ldots,a_{r-1}}(a_{r}\cdot b)- a_{r}\cdot\delta_{n,a_{1},\ldots,a_{r-1}}(b)\]
To show that \(\delta_{n}\) is a differential operator of order \(\leq n\) we must show that \(\delta_{n,a_{1},\ldots,a_{n+1}}=0\) for any sequence \((a_{1},\ldots a_{n+1})\) of length \(n+1\).
On the other hand, for the sequence \((a_{1},\ldots,a_{r})\) we can define the operator \(\epsilon_{a_{1},\ldots a_{r}}:\mathcal{A}/p^{n}\to W(A)/p^{n}\)
\[\epsilon_{a_{1},\ldots a_{r}}(b)=\delta_{n}(a_{1})\cdots\delta_{n}(a_{r}) \delta_{n}(b)\]
I claim that for each \(r\geq 1\), the map
\[\delta_{a_{1},\ldots,a_{r}}+(-1)^{r+1}\epsilon_{a_{1},\ldots a_{r}}:\mathcal{ A}/p^{n}\to W(A)/p^{n}\]
is an \(\mathcal{A}/p^{n}\)-linear map. We proceed by induction on \(r\); when \(r=1\) this follows from
\[\delta_{n,a}(b):=\delta_{n}(ab)-a\cdot\delta_{n}(b)=b\cdot\delta_{n}(a)- \delta_{n}(a)\delta_{n}(b)\]
Supposing the result holds for \(r\), we have
\[\delta_{n,a_{1},\ldots,a_{r+1}}(b):=\delta_{n,a_{1},\ldots,a_{r}}(a_{r+1}\cdot b )-a_{r+1}\cdot\delta_{n,a_{1},\ldots,a_{r}}(b)\]
By induction, we have that
\[\delta_{n,a_{1},\ldots,a_{r}}=(-1)^{r+1}\epsilon_{a_{1},\ldots,a_{r}}+\varphi\]
where \(\varphi:\mathcal{A}/p^{n}\to W(A)/p^{n}\) is an \(\mathcal{A}/p^{n}\)-linear map. So
\[\delta_{n,a_{1},\ldots,a_{r}}(a_{r+1}\cdot b)=(-1)^{r+1}\epsilon_{a_{1}, \ldots,a_{r}}(a_{r+1}\cdot b)+a_{r+1}\varphi(b)\]
and
\[a_{r+1}\cdot\delta_{n,a_{1},\ldots,a_{r}}(b)=(-1)^{r+1}a_{r+1}\cdot\epsilon_{ a_{1},\ldots,a_{r}}(b)+a_{r+1}\varphi(b)\]
So we see that
\[\delta_{n,a_{1},\ldots,a_{r+1}}(b)=(-1)^{r+1}(\epsilon_{a_{1},\ldots,a_{r}}(a _{r+1}\cdot b)-a_{r+1}\cdot\epsilon_{a_{1},\ldots,a_{r}}(b))\]
But
\[\epsilon_{a_{1},\ldots,a_{r}}(a_{r+1}\cdot b)=\delta_{n}(a_{1})\cdots\delta_{ n}(a_{r})\delta_{n}(a_{r+1}b)\]
\[=b\cdot\delta_{n}(a_{1})\cdots\delta_{n}(a_{r})\delta_{n}(a_{r+1})+a_{r+1} \cdot\delta_{n}(a_{1})\cdots\delta_{n}(a_{r})\delta_{n}(b)-\delta_{n}(a_{1}) \cdots\delta_{n}(a_{r})\delta_{n}(a_{r+1})\delta_{n}(b)\]
\[=b\cdot\delta_{n}(a_{1})\cdots\delta_{n}(a_{r})\delta_{n}(a_{r+1})+a_{r+1} \epsilon_{a_{1},\ldots,a_{r}}(b)-\epsilon_{a_{1},\ldots,a_{r+1}}(b)\]
so that
\[\epsilon_{a_{1},\ldots,a_{r}}(a_{r+1}\cdot b)-a_{r+1}\cdot\epsilon_{a_{1}, \ldots,a_{r}}(b)=b\cdot\delta_{n}(a_{1})\cdots\delta_{n}(a_{r})\delta_{n}(a_{r +1})-\epsilon_{a_{1},\ldots,a_{r+1}}(b)\]
Therefore
\[\delta_{n,a_{1},\ldots,a_{r+1}}(b)=(-1)^{r+1}(\epsilon_{a_{1}, \ldots,a_{r}}(a_{r+1}\cdot b)-a_{r+1}\cdot\epsilon_{a_{1},\ldots,a_{r}}(b))\] \[\qquad=(-1)^{r+1}(b\cdot\delta_{n}(a_{1})\cdots\delta_{n}(a_{r}) \delta_{n}(a_{r+1})-\epsilon_{a_{1},\ldots,a_{r+1}}(b))\] \[\qquad=(-1)^{r+2}\epsilon_{a_{1},\ldots,a_{r+1}}(b)+(-1)^{r+1}b \cdot\delta_{n}(a_{1})\cdots\delta_{n}(a_{r})\delta_{n}(a_{r+1})\]
as required.
Finally, to finish the proof, we note that
\[\epsilon_{a_{1},\ldots a_{n}}(b)=\delta_{n}(a_{1})\cdots\delta_{n}(a_{n}) \delta_{n}(b)=0\]
for all \(b\), as the product of \(n+1\) elements of the form \(V(\alpha)\) (in \(W(A)\)) is contained in the ideal generated by \(p^{n}\): indeed, iterating the Witt-vector identity \(V(x)V(y)=pV(xy)\) gives \(V(\alpha_{0})\cdots V(\alpha_{n})=p^{n}V(\alpha_{0}\cdots\alpha_{n})\). Thus \(\delta_{n,a_{1},\ldots,a_{n}}\) is an \(\mathcal{A}/p^{n}\)-linear operator, and the result follows.
Using this, we can prove
**Theorem 3.6**.: _Let \(\Phi\) and \(\Psi\) be two morphisms \(\mathcal{A}\to W(A)\), coming from coordinatized lifts of Frobenius on \(\mathcal{A}\)._
_1) Suppose \(p>2\). Then there is a canonical isomorphism of \((\widehat{\mathcal{D}}^{(0)}_{W(A)},\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}})\)-bimodules \(\epsilon_{\Phi,\Psi}:\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\bar{ \rightarrow}\Psi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\). If \(\Xi\) is a third such lift then we have the cocycle condition \(\epsilon_{\Xi,\Phi}\circ\epsilon_{\Phi,\Psi}=\epsilon_{\Xi,\Psi}\)._
_2) For any \(p\), there is a canonical isomorphism of \((\widehat{\mathcal{D}}^{(0)}_{W(A),\text{crys}},\widehat{\mathcal{D}}^{(0)}_{ \mathcal{A},\text{crys}})\)-bimodules \(\epsilon_{\Phi,\Psi}:\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{ crys}}\bar{\rightarrow}\Psi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{ crys}}\). If \(\Xi\) is a third such lift then we have the cocycle condition \(\epsilon_{\Xi,\Phi}\circ\epsilon_{\Phi,\Psi}=\epsilon_{\Xi,\Psi}\). When \(p>2\) this isomorphism is a completion of the one constructed in part \(1)\) above._
_3) Let \(m>0\). Then for all \(p\) there is a canonical isomorphism of \((\widehat{\mathcal{D}}^{(m)}_{W(A)},\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})\)-bimodules \(\epsilon_{\Phi,\Psi}:\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\bar{ \rightarrow}\Psi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\); we again have the cocycle condition for a third such lift._
Proof.: 1) By construction (c.f. 3.2), we have canonical inclusions \(\Psi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}},\Phi^{*}\widehat{\mathcal{D }}^{(0)}_{\mathcal{A}}\subset\operatorname{Hom}_{W(k)}(\mathcal{A},W(A))\). We shall show that, when \(p>2\), these inclusions have the same image; this immediately proves part 1).
To proceed, we shall show that \(\Psi:\mathcal{A}\to W(A)\) is contained in the image of \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) in \(\operatorname{Hom}_{W(k)}(\mathcal{A},W(A))\). This implies directly that the image of \(\Psi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) is contained in image of \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\). By symmetry, we also have the reverse inclusion, and hence the required equality.
Since \(\Phi:\mathcal{A}\to W(A)\) is contained in the image of \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) by definition, it suffices to show that \(\delta=\Phi-\Psi\) is contained in the image of \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\). By the previous lemma, the reduction \(\delta_{n}:\mathcal{A}/p^{n}\to W(A)/p^{n}\) is a differential operator of order \(\leq n\) (as in that lemma, we regard \(W(A)\) as being a \(\mathcal{A}\)-module via \(\Phi\)). Therefore it suffices to construct a sequence of operators \((\gamma_{n})\) in the image of \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) in \(\operatorname{Hom}_{W(k)}(\mathcal{A},W(A))\), such that \(\gamma_{n}-\delta_{n}=0\) on \(\mathcal{A}/p^{n}\) and such that \((\gamma_{n})\) is convergent in \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\). Then \(\gamma:=\lim\gamma_{n}\) will agree with \(\delta\) and the result will follow.
We construct \(\gamma_{n}\) by induction on \(n\). We know that \(\delta_{1}:A\to W(A)/p\) is a derivation. So, we can write
\[\delta_{1}=\sum_{i=1}^{n}\sum_{(I,r)}\bar{a}_{I,i}(\overline{p^{r}T^{I/p^{r}}} )\cdot\partial_{i}\]
where the second sum is over pairs \((I,r)\) where \(I\) is a multi-index, each of whose entries is \(<p^{r}\), and at least one entry of which is coprime to \(p\); and \(\bar{a}_{I,i}\in A\) (as usual we write \(p^{r}T^{I/p^{r}}\in W(A)\) for \(V^{r}(T^{I})\)). Choosing lifts of the \(\bar{a}_{I,i}\) to \(a_{I,i}\in\mathcal{A}\), we obtain a derivation \(\gamma_{1}:\mathcal{A}\to W(A)\)
\[\gamma_{1}=\sum_{i=1}^{n}\sum_{(I,r)}a_{I,i}p^{r}T^{I/p^{r}}\partial_{i}\]
so \(\gamma_{1}-\delta_{1}=0\) on \(\mathcal{A}/p\), and \(\gamma_{1}\in\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) by construction.
Suppose we have constructed \((\gamma_{1},\gamma_{2},\ldots,\gamma_{n})\). Then the operator \(\gamma_{n}-\delta:\mathcal{A}\to W(A)\) takes values in \(p^{n}W(A)\). Looking at the reduction mod \(p^{n+1}\) of this operator, we obtain a map
\[\gamma_{n}-\delta:\mathcal{A}/p^{n+1}\to W(A)/p^{n+1}W(A)\]
Since \(\gamma_{n}\in\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) has order \(\leq n\) by induction, and \(\delta_{n+1}\) is a differential operator of order \(\leq n+1\), we see that this map takes the form
\[\gamma_{n}-\delta=\sum_{|J|\leq n+1}\sum_{(I,r)}p^{n}\bar{a}_{I,J}(\overline{p^ {r}T^{I/p^{r}}})\partial^{[J]}\]
where the notation is as above. Now,
\[\partial^{[J]}=\frac{\partial_{1}^{j_{1}}}{j_{1}!}\cdots\frac{\partial_{d}^{j_ {d}}}{j_{d}!}\]
with \(j_{1}+\cdots+j_{d}\leq n+1\). Recalling the formula
\[\operatorname{val}_{p}(j!)=\frac{j-s_{p}(j)}{p-1}\]
where \(s_{p}(j)\) is the sum of the digits in the \(p\)-adic expansion of \(j\), we see that
\[\operatorname{val}_{p}(j_{1}!\cdots j_{d}!)=\sum_{i=1}^{d}\operatorname{val}_ {p}(j_{i}!)=\sum_{i=1}^{d}\frac{j_{i}-s_{p}(j_{i})}{p-1}\leq\frac{n}{p-1}\]
where in the last inequality we used that \(j_{1}+\cdots+j_{d}\leq n+1\). Therefore we have
\[p^{n}\partial^{[J]}=\alpha\cdot p^{n-\operatorname{val}_{p}(j_{1}!\cdots j_{d }!)}\partial^{J}\]
for some \(\alpha\in\mathbb{Z}_{p}\). So we have
\[\gamma_{n}-\delta=\sum_{|J|\leq n+1}\sum_{(I,r)}p^{n-\operatorname{val}_{p}(j_{1}!\cdots j_{d}!)}\alpha a_{I,J}(p^{r}T^{I/p^{r}})\partial^{J}\]
and we can thus define \(\gamma_{n+1}\) as
\[\gamma_{n}-\sum_{|J|\leq n+1}\sum_{I}p^{n-\operatorname{val}_{p}(j_{1}!\cdots j _{d}!)}\alpha a_{I,J}(p^{r}T^{I/p^{r}})\partial^{J}\]
which makes sense as a map \(\mathcal{A}\to W(A)\) (and is clearly an element of \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\)).
Further, since \(p>2\), we see that
\[n-\operatorname{val}_{p}(j_{1}!\cdots j_{d}!)\to\infty\]
as \(n\to\infty\). Thus we see that, in this case, the sequence \((\gamma_{n})\) is convergent inside \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) as claimed.
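Quantitatively, since \(j_{1}+\cdots+j_{d}\leq n+1\), the estimate above gives \(n-\operatorname{val}_{p}(j_{1}!\cdots j_{d}!)\geq n-\frac{n}{p-1}\geq\frac{n}{2}\) whenever \(p\geq 3\), which makes the rate of convergence explicit.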
2) The proof proceeds identically to that of part 1), until we need to evaluate \(n-\operatorname{val}_{p}(j_{1}!\cdots j_{d}!)\). At that point, we simply have the estimate
\[\operatorname{val}_{p}(j_{1}!\cdots j_{d}!)\leq n\]
(indeed, they can be equal in characteristic 2). Thus the sequence of operators \((\gamma_{n})\) may not converge in \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\), but only in \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\operatorname{crys}}\). Thus we obtain the isomorphism \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\operatorname{crys}}\tilde{ \to}\Psi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\operatorname{crys}}\) as desired.
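For instance, when \(p=2\) and \(J=(j_{1})\) with \(j_{1}=n+1=2^{s}\), we have \(\operatorname{val}_{2}((n+1)!)=2^{s}-1=n\), so the exponent \(n-\operatorname{val}_{p}(j_{1}!)\) vanishes along this subsequence and there is no convergence in \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) itself.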
3) We again repeat the proof, but note that we can now use operators of the form
\[\sum_{|J|\leq n+1}\sum_{(I,r)}p^{n-\operatorname{val}_{p}(j_{1}!\cdots j_{d}!)}\alpha a_{I,J}(p^{r}T^{I/p^{r}})(\frac{\partial^{p^{m}}}{p^{m}!})^{J}\]
and when \(m\geq 1\) we obtain the required convergence even when \(p=2\).
Later on, we shall use this result to explain the relationship between \(\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}\)-modules on a smooth scheme \(\mathfrak{X}_{r}\) over \(W(k)/p^{r}\) and modules over \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\). To that end, we shall record the following result, which follows directly from the construction:
**Corollary 3.7**.: _Let \(m\geq 0\), and let \(p>2\) if \(m=0\). Let \(\Phi\) and \(\Psi\) be two morphisms \(\mathcal{A}\to W(A)\), coming from coordinatized lifts of Frobenius on \(\mathcal{A}\). Suppose that these maps agree after reduction mod \(p^{r}\). Then the reduction mod \(p^{r}\) of the canonical isomorphism \(\epsilon_{\Phi,\Psi}:\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}/p^{r}\tilde{\rightarrow}\Psi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}/p^{r}\) constructed above agrees with the obvious identification \(\Phi^{*}(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}/p^{r})\tilde{\rightarrow}\Psi^{*}(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}/p^{r})\) coming from the fact that \(\Phi=\Psi\) on \(\mathcal{A}/p^{r}\)._
We should also remark here on the relation between these constructions and some facts from the usual crystalline cohomology theory. Recall that the algebra \(\mathcal{D}^{(0)}_{\mathcal{A}_{r},\text{crys}}\) is a major player in that story, under the name of the HPD differential operators. They are constructed as the dual of the divided power envelope of the diagonal of \(\mathfrak{X}_{r}\) (c.f. [7], chapter 4 for details).
We consider the two maps
\[\Phi,\Psi:\mathcal{A}_{r}\to W(A)/p^{r}\]
Via the standard divided power structure on \(W(A)\), both of these maps may be considered as (inverse limits of) divided power thickenings. Therefore the theory of the crystalline site (or, in this case, the description of \(\mathcal{D}^{(0)}_{\mathcal{A}_{r},\text{crys}}\) as the HPD differential operators) yields a canonical isomorphism
\[\eta_{r}:\Phi^{*}\mathcal{D}^{(0)}_{\mathcal{A}_{r},\text{crys}}\tilde{ \rightarrow}\Psi^{*}\mathcal{D}^{(0)}_{\mathcal{A}_{r},\text{crys}}\]
Taking the inverse limit over \(r\) yields an isomorphism
\[\eta:\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{crys}}\tilde{ \rightarrow}\Psi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{crys}}\]
and we have
**Corollary 3.8**.: _The isomorphism \(\eta\) agrees with the isomorphism \(\epsilon\) constructed above in Theorem 3.6._
Proof.: This will follow if we know that the maps \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{crys}}\rightarrow\text {Hom}_{W(k)}(\mathcal{A},W(A))\) and \(\Psi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{crys}}\rightarrow\text {Hom}_{W(k)}(\mathcal{A},W(A))\) coming from the action of differential operators on \(\mathcal{A}\) are compatible with \(\eta\). But the analogous fact is true for \(\eta_{r}\), simply from the construction of \(\eta_{r}\) which is given by factoring \(\Phi\times\Psi\) through a divided power neighborhood of the diagonal (c.f. [7], chapters 3 and 4). Taking the inverse limit over \(r\) we see that this is true for \(\eta\) as well.
To finish off this section, we shall develop another fundamental property of the bimodules \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\), namely
**Theorem 3.9**.: _1) As a right \(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\)-module, \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) is faithfully flat._
_2) Let \(m\geq 0\). As a left \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\)-module, \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) is projective and cyclic; in particular, it is a summand of \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\) itself._
_The analogous statements hold for \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A},\text{crys}}\) as a \((\widehat{\mathcal{D}}^{(m)}_{W(A),\text{crys}},\Phi^{*}\widehat{\mathcal{D} }^{(m)}_{\mathcal{A},\text{crys}})\) bimodule._
This result will allow us, in the next section, to define categories of \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-modules which are surprisingly well behaved.
Proof.: (of part 1) As \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) is \(p\)-torsion free and \(p\)-adically complete, by [28], corollary 1.6.7 it suffices to show that \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}/p\) is faithfully flat over \(\mathcal{D}^{(m)}_{A}\). However, \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}/p\) is an inverse limit of free modules over \(\mathcal{D}^{(m)}_{A}\), namely modules of the form
\[\bigoplus_{I<p^{r}}V^{r}(T^{I})\cdot\mathcal{D}^{(m)}_{A}\]
where \(I=(i_{1},\ldots,i_{n})\) is a multi-index and \(I<p^{r}\) means that each \(i_{j}<p^{r}\). As the maps in this inverse system are surjective, one deduces immediately that \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}/p\) is faithfully flat over \(\mathcal{D}^{(m)}_{A}\) as needed. An identical proof works for the crystalline version of the theory.
To prove part 2), we will have to work a bit harder. We start with the special case in which \(A=k[T]\); the lift of this algebra is given by \(\mathcal{A}=W(k)<<T>>\), and we shall use the standard Frobenius lift \(T\to T^{p}\). In this context we write \(d=\dfrac{d}{dT}\). For \(i\in\{0,\ldots,p^{r}-1\}\), we write \(\pi_{(i,p^{r})}:k[T]\to k[T]\) for the \(k\)-linear projection operator which takes \(T^{i+ap^{r}}\) to itself (for all \(a\)) and which takes all other monomials to zero. Then
**Lemma 3.10**.: _The operator \(\pi_{(i,p^{r})}\) is a differential operator of order \(<p^{r}\), namely_
\[\pi_{(i,p^{r})}=T^{i}d^{[p^{r}-1]}T^{p^{r}-1-i}=\sum_{l=i}^{p^{r}-1}\binom{p^ {r}-1-i}{p^{r}-1-l}T^{l}\cdot d^{[l]}\]
Proof.: We begin with the case \(i=0\). Then, for any \(a\geq 0\) we have
\[d^{[p^{r}-1]}T^{p^{r}-1}(T^{p^{r}a})=\binom{p^{r}a+p^{r}-1}{p^{r}-1}T^{p^{r}a}\]
But
\[\binom{p^{r}a+p^{r}-1}{p^{r}-1}=\prod_{m=1}^{p^{r}-1}\dfrac{p^{r}a+m}{m}=\prod _{m=1}^{p^{r}-1}(1+a\dfrac{p^{r}}{m})=1\]
since \(\operatorname{val}_{p}(m)<r\) for all terms in the product; so we see \(d^{[p^{r}-1]}T^{p^{r}-1}(T^{p^{r}a})=T^{p^{r}a}\). On the other hand, for any natural number of the form \(p^{r}a+b\) where \(1\leq b\leq p^{r}-1\), we have
\[d^{[p^{r}-1]}T^{p^{r}-1}(T^{p^{r}a+b})=T^{p^{r}(a+1)}\cdot d^{[p^{r}-1]}(T^{b -1})=0\]
where we have used the well known identity (over \(k\)) \([d^{[p^{r}-1]},T^{p^{r}(a+1)}]=0\). This proves the result when \(i=0\).
Now we consider the general case. Working for a moment over the ring \(k[T,T^{-1}]\), it follows from the case \(i=0\) that the operator \(\pi_{(i,p^{r})}\) has the form
\(T^{i-p^{r}}(d^{[p^{r}-1]}T^{p^{r}-1})T^{p^{r}-i}\). Now write \(2p^{r}-1-i=p^{r}+b\) for \(b\in\{0,\ldots,p^{r}-2\}\). Then
\[T^{i-p^{r}}(d^{[p^{r}-1]}T^{p^{r}-1})T^{p^{r}-i}=T^{i-p^{r}}d^{[p^{r}-1]}T^{p^ {r}+b}=T^{i}d^{[p^{r}-1]}T^{b}\]
is the operator of the required form, where in the last equality we have used \([d^{[p^{r}-1]},T^{p^{r}}]=0\).
Finally to prove the equality
\[T^{i}d^{[p^{r}-1]}T^{p^{r}-1-i}=\sum_{l=i}^{p^{r}-1}\binom{p^{r}-1-i}{p^{r}-1-l}T^ {l}\cdot d^{[l]}\]
we note that by the Hasse-Schmidt property we have
\[[d^{[p^{r}-1]},T^{p^{r}-1-i}]=\sum_{l=0}^{p^{r}-2}d^{[p^{r}-1-l]}(T^{p^{r}-1-i} )\cdot d^{[l]}\]
\[=\sum_{l=0}^{p^{r}-2}\binom{p^{r}-1-i}{p^{r}-1-l}T^{l-i}\cdot d^{[l]}\]
from which the equality follows directly.
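In the smallest nontrivial case \(p=2\), \(r=1\), \(i=0\), the lemma reads \(\pi_{(0,2)}=d^{[1]}T=1+Td\); indeed \((1+Td)(T^{a})=(1+a)T^{a}\), which in characteristic \(2\) is \(T^{a}\) for \(a\) even and \(0\) for \(a\) odd.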
Our goal is to study the actions of these operators when lifted to \(W_{r+1}(A)\); we start with the
**Definition 3.11**.: Let \(f:\mathbb{Z}_{\geq 0}\to\mathbb{Z}/p^{r+1}\mathbb{Z}\) be a function, and let \(j>0\) be some natural number. We say that \(f\) is polynomial in arithmetic progressions (of length \(j\)) if, for each \(1\leq a\leq j\), there is a polynomial \(p_{a}\in(\mathbb{Z}/p^{r+1}\mathbb{Z})[x]\) such that \(f(a+bj)=p_{a}(b)\) for all \(b\in\mathbb{Z}_{\geq 0}\).
We note that if \(f\) is polynomial in arithmetic progressions (of length \(j\)), then \(f\) is also polynomial in arithmetic progressions of length \(j^{\prime}\) for all \(j^{\prime}\) which are divisible by \(j\).
Then the result we need is
**Proposition 3.12**.: _Let \(1\leq i<p^{n}\). Fix \(r\geq 0\). Then each operator of the form \(T^{i}d^{[i]}\) on \(W_{r+1}(k)[T]\) takes the form_
\[T^{i}d^{[i]}(T^{a})=f(a)T^{a}\]
_where \(f:\mathbb{Z}_{\geq 0}\to\mathbb{Z}/p^{r+1}\mathbb{Z}\) is polynomial in arithmetic progressions of length \(p^{n-1}\)._
Proof.: We begin with the case of the operator \(T^{p^{m}}d^{[p^{m}]}\) for \(m<n\). Fix \(a\in\{0,\dots,p^{m}-1\}\). In this case, we have
\[T^{p^{m}}d^{[p^{m}]}(T^{a+bp^{m}})=\binom{a+bp^{m}}{p^{m}}T^{a+bp^{m}}\]
We shall show that \(\binom{a+bp^{m}}{p^{m}}\) is polynomial in \(b\) with coefficients in \(\mathbb{Z}_{p}\), and therefore \(f\) is polynomial in arithmetic progressions of length \(p^{m}\) (as remarked above, this implies that it is also polynomial in arithmetic progressions of length \(p^{n-1}\)). We have
\[\binom{a+bp^{m}}{p^{m}}=\prod_{l=1}^{p^{m}}\frac{a+(b-1)p^{m}+l}{l}\]
For each \(l\in\{1,\dots,p^{m}\}\) set
\[a+l=t+\epsilon_{t}p^{m}\]
where \(t\in\{1,\dots,p^{m}\}\) and \(\epsilon_{t}\in\{0,1\}\). Note that, as \(l\) ranges over \(\{1,\dots,p^{m}\}\), \(t\) also ranges over \(\{1,\dots,p^{m}\}\). Therefore we have
\[\prod_{l=1}^{p^{m}}\frac{a+(b-1)p^{m}+l}{l}=\prod_{t=1}^{p^{m}}\frac{(b-1)p^{m}+t+\epsilon_{t}p^{m}}{t}\]
\[=\prod_{t=1}^{p^{m}}\frac{(b-1+\epsilon_{t})p^{m}+t}{t}=b\prod_{t=1}^{p^{m}-1} \frac{(b-1+\epsilon_{t})p^{m}+t}{t}\]
Now, in the last expression, we have \(\operatorname{val}_{p}(t)=\operatorname{val}_{p}((b-1+\epsilon_{t})p^{m}+t)\). It follows that this product is a polynomial in \(b\) with coefficients in \(\mathbb{Z}_{p}\) as desired.
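As a consistency check, reducing mod \(p\) (i.e. taking \(r=0\)), Lucas' theorem gives \(\binom{a+bp^{m}}{p^{m}}\equiv b\) mod \(p\) for \(0\leq a<p^{m}\); that is, mod \(p\) the polynomial is simply \(p_{a}(b)=b\).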
In order to handle the general case, we recall that, if \(i<p^{n}\) has \(p\)-adic expansion \(i=\sum_{j=0}^{m}a_{j}p^{j}\), then we have
\[d^{[i]}=u\cdot(d^{[p^{m}]})^{a_{m}}\cdots(d)^{a_{0}}\]
where \(u\) is a unit in \(\mathbb{Z}_{p}\). From this, and the standard relation
\[d^{[m]}T^{m}\cdot d^{[n]}T^{n}=\] \[\sum_{l=0}^{m}\binom{n}{m-l}\binom{l+n}{l}T^{(l+n)}d^{[l+n]}\]
we see that \(T^{i}d^{[i]}\) is itself a polynomial (with coefficients in \(\mathbb{Z}_{p}\)) in the operators \(\{T^{p^{m}}d^{[p^{m}]}\}_{m=0}^{n-1}\), and the result follows.
**Corollary 3.13**.: _Let \(\Xi:k[T]\to k[T]\) be an operator which satisfies \(\Xi(T^{a})=f(a)T^{a}\), where \(f:\mathbb{Z}_{\geq 0}\to\mathbb{Z}/p\mathbb{Z}\) is polynomial in arithmetic progressions of length \(p^{n-1}\). Then \(\Xi\) is given by a differential operator of the form_
\[\Xi=\sum_{i=0}^{p^{n}-1}g_{i}(T^{i}d^{[i]})\]
_where \(g_{i}\) are polynomials in \(\mathbb{Z}/p\mathbb{Z}[T]\)._
Proof.: By (the proof of) the previous proposition, we have that
\[T^{p^{m}}d^{[p^{m}]}T^{(a+bp^{m})}=b\cdot T^{(a+bp^{m})}\]
for each \(0\leq a<p^{m}\). Thus, if \(g\) is any polynomial with coefficients in \(\mathbb{Z}/p\mathbb{Z}\), the operator \(g(T^{p^{m}}d^{[p^{m}]})\) takes \(T^{a+bp^{m}}\) to \(g(b)T^{a+bp^{m}}\). Furthermore, the projector \(\pi_{(a,p^{m})}\) is an operator of the form \(\sum_{i=0}^{p^{n}-1}c_{i}T^{i}d^{[i]}\) for \(c_{i}\in\mathbb{Z}\) (by Lemma 3.10). So the composition \(g(T^{p^{m}}d^{[p^{m}]})\cdot\pi_{(a,p^{m})}\) acts with eigenvalue \(g(b)\) on each element of the form \(T^{a+bp^{m}}\) and is zero on all other monomials; as above we see that this product is a polynomial in the \(\{T^{i}d^{[i]}\}\). Adding up such operators over \(a\in\{0,\dots,p^{m}-1\}\) yields the result.
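For example, take \(p=2\) and \(n=1\), and let \(\Xi(T^{a})=aT^{a}\); then \(f(a)=a\) mod \(2\) is polynomial in arithmetic progressions of length \(p^{n-1}=1\), and indeed \(\Xi=g_{1}(Td^{[1]})\) with \(g_{1}(t)=t\), i.e. \(\Xi=Td\).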
Now, for any \(r\geq 0\), consider for any \(i\in\{0,\dots,p^{r}-1\}\) the projection operator \(\pi_{i/p^{r}}\) on \(W_{r+1}(A)\) which takes \(p^{m}T^{a/p^{m}}\) to itself whenever \(p^{r-m}a\equiv i\) mod \(p^{r}\) and \(0\) otherwise. Then we have
**Proposition 3.14**.: _Let \(r\geq 0\). For each \(i\in\{1,\dots,p^{r}-1\}\) the projector \(\pi_{i/p^{r}}\) (on \(W_{r+1}(A)\)) is given by the action of an element in \(\mathcal{D}^{(0)}_{W(A)}\), of the form_
\[\psi_{i/p^{r}}=\sum_{j=1}^{p^{r}-1}g_{j}(T^{j/p^{r}}\{d\}_{j/p^{r}})\]
_where \(g_{j}\in\mathbb{Z}_{p}[x]\). We have that \(\psi_{i/p^{r}}\in V^{r-\text{val}_{p}(i)}(\mathcal{D}^{(0)}_{W(A)})\). Therefore, the element_
\[\pi_{r}=1-\sum_{i=1}^{p^{r}-1}\psi_{i/p^{r}}\]
_acts as a projector from \(W_{r+1}(A)\) to \(\mathcal{A}_{r}\). Further, one has \(\pi_{r}\equiv\pi_{r-1}\) mod \(V^{r}(\widehat{\mathcal{D}}^{(0)}_{W(A)})\). Therefore the sequence \(\{\pi_{r}\}\) approaches a limit, \(\pi\in\widehat{\mathcal{D}}^{(0)}_{W(A)}\), which acts as a projector from \(W(A)\) to \(\mathcal{A}\)._
Proof.: We proceed by induction on \(r\), the case \(r=0\) being trivial. Suppose the theorem holds at level \(r-1\). Consider any \(i\in\{1,\dots,p^{r}-1\}\) with \(\text{val}(i)=0\). We will work now with the copy of \(W_{r+1}(A^{(r)})\) contained in \(\mathcal{A}_{r+1}=W_{r+1}(k)[T]\) and construct the projector there.
The projector \(\pi_{(i,p^{r})}\) on \(k[T]\) is given by an expression of the form \(\sum_{j=0}^{p^{r}-1}c_{j}T^{j}d^{[j]}\) (for \(c_{j}\in\mathbb{Z}\)); where, since \(\text{val}(i)=0\), we have \(c_{j}=0\) whenever \(\text{val}(j)\neq 0\) (to see this, note that the restriction of the operator to the subalgebra \(k[T^{p}]\) is \(0\)). Now, consider the action of the term \(\sum_{j=0}^{p^{r}-1}c_{j}T^{j}d^{[j]}\) on \(W_{r+1}(A^{(r)})\subset\mathcal{A}_{r+1}\). As \(\text{val}(j)=0\) for all nonzero \(c_{j}\), this operator takes every term of the form \(p^{m}T^{ap^{r-m}}\) to a term of the form \(f_{m}(a)\cdot p^{r}T^{ap^{r-m}}\) for some \(f_{m}(a)\in\mathbb{Z}/p\mathbb{Z}\). To analyze the functions \(f_{m}\), we shall apply 3.12. In particular, we see that \(a\to f_{m}(a)\) is polynomial on arithmetic progressions of length \(p^{m}\), and is equal to zero on all terms of the form \(a=pa^{\prime}\) (when \(m>0\)).
Applying the previous result, we see that there is an operator of the form
\[\sum_{i=0}^{p^{r+1}-1}g_{i,m}(p^{r-\text{val}(i)}T^{i}d^{[i]})\]
where \(g_{i,m}\in\mathbb{Z}_{p}[X]\), which takes \(p^{m}T^{ap^{r-m}}\to f_{m}(a)\cdot p^{r}T^{ap^{r-m}}\) for all \(a\). By construction this operator is contained in \(V^{r}(\mathcal{D}^{(0)}_{W(A)})\), and we have
\[\psi_{i/p^{r}}=\sum_{j=0}^{p^{r}-1}c_{j}T^{j}d^{[j]}-\sum_{m=0}^{r-1}\sum_{i=0}^{p^{r+1}-1}g_{i,m}(p^{r-\text{val}(i)}T^{i}d^{[i]})\]
Now suppose \(\text{val}(i)>0\). Write \(i=pi^{\prime}\). By induction we have already constructed the operator \(\psi_{i^{\prime}/p^{r-1}}\) on \(W_{r}(A)\), and it is an operator of the form \(\sum_{j=0}^{p^{r}-1}h_{j}(T^{j/p^{r-1}}\{d\}_{j/p^{r-1}})\) for polynomials \(h_{j}\in\mathbb{Z}_{p}[X]\). Translating to \(W_{r}(A^{(r-1)})\subset W_{r}(k)[T]\) it is an operator of the form \(\sum_{j=0}^{p^{r}-1}h_{j}(T^{j}d^{[j]})\). Lifting to \(W_{r+1}(A^{(r)})\subset W_{r+1}(k)[T]\) yields the
operator \(\sum_{j=0}^{p^{r}-1}h_{j}(T^{pj}d^{[pj]})\). By construction, the operator
\[\psi_{i/p^{r}}-\sum_{j=0}^{p^{r}-1}h_{j}(T^{pj}d^{[pj]})\]
takes each term of the form \(p^{m}T^{ap^{r-m}}\) to a term of the form \(f_{m}(a)\cdot p^{r}T^{ap^{r-m}}\) for some \(f_{m}(a)\in\mathbb{Z}/p\mathbb{Z}\) which is polynomial on arithmetic progressions of length \(p^{m}\) (again by 3.12). Arguing as above, we find a term in \(V^{r}(\mathcal{D}^{(0)}_{W(A)})\) which acts as \(\psi_{i/p^{r}}-\sum_{j=0}^{p^{r}-1}h_{j}(T^{pj}d^{[pj]})\) on \(W_{r+1}(A^{(r)})\), and thus construct \(\psi_{i/p^{r}}\) as required.
Now we can give the
**Corollary 3.15**.: _Let \(A\) be any smooth affine algebra which is equipped with local coordinates, and let \(\Phi:\mathcal{A}\to W(A)\) be the map coming from a coordinatized Frobenius lift. Then there is an element \(\pi\in\widehat{\mathcal{D}}^{(m)}_{W(A)}\) which acts as a projector onto \(\Phi(\mathcal{A})\), i.e., \(\pi^{2}=\pi\) and \(\pi(W(A))=\Phi(\mathcal{A})\)._
_Further, the map \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\to\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) which takes \(1\to\Phi\) induces an isomorphism \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\cdot\pi\tilde{\to}\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\); therefore, \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) is a summand of \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\). The analogous facts are true for \(\widehat{\mathcal{D}}^{(0)}_{W(A),\text{crys}}\) and \(\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A},\text{crys}}\)._
Note that this immediately proves part 2) of Theorem 3.9.
Proof.: If \(A=k[T_{1},\dots,T_{d}]\), then we have inclusions \(\widehat{\mathcal{D}}^{(0)}_{W(k[T_{i}])}\to\widehat{\mathcal{D}}^{(0)}_{W(A)}\) coming from sending \(p^{r}T_{i}^{a/p^{r}}\to p^{r}T_{i}^{a/p^{r}}\) and \(\{d\}_{a/p^{r}}\to\{\partial_{i}\}_{a/p^{r}}\); and we can set \(\pi=\prod_{i=1}^{d}\pi_{i}\) where \(\pi_{i}\in\widehat{\mathcal{D}}^{(0)}_{W(k[T_{i}])}\) is the element constructed directly above. In this case we see that \(\pi:W(A)\to\mathcal{A}=W(k)<<T_{1},\dots,T_{d}>>\) is the standard coordinate projection; i.e., the continuous map which takes \(p^{r}T^{I/p^{r}}\) to \(0\) whenever \(r\geq 1\) and \(T^{I}\to T^{I}\) for \(I\in(\mathbb{Z}^{\geq 0})^{d}\).
For the general case, the choice of local coordinates yields an etale map
\(k[T_{1},\dots,T_{d}]\to A\). By the etale local nature of Witt-differential operators (2.32), we see that \(\pi\) extends uniquely to \(\widehat{\mathcal{D}}^{(0)}_{W(A)}\); this is the required element.
To obtain the last statement, note that the element \(\Phi\in\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\) has annihilator consisting of \(\{\varphi\in\widehat{\mathcal{D}}^{(0)}_{W(A)}|\varphi\cdot\Phi(\mathcal{A})=0\}\); this is also the left annihilator of \(\pi\); this yields the isomorphism \(\widehat{\mathcal{D}}^{(0)}_{W(A)}\cdot\pi\tilde{\to}\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathcal{A}}\). Since \(\pi\) is an idempotent the rest of the statement follows directly. We deduce the statement for higher \(m\) by looking at the image of \(\pi\) in \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\).
Similarly, to handle the crystalline version, we recall that there is an injective map \(\widehat{\mathcal{D}}^{(0)}_{W(A)}\to\widehat{\mathcal{D}}^{(0)}_{W(A),\text{ crys}}\), and so the image of \(\pi\) in \(\widehat{\mathcal{D}}^{(0)}_{W(A),\text{crys}}\) does the job.
### Accessible modules
In this section we use the fundamental isomorphism of Theorem 3.6 to define and study our categories of interest.
Let us begin in positive characteristic. Translating 3.7 to this situation (c.f. 5.15 for a proof that works in all characteristics) yields
**Proposition 3.16**.: _Let \(m\geq 0\). There is a well-defined \((\widehat{\mathcal{D}}^{(m)}_{W(X)}/p,\mathcal{D}^{(m)}_{X})\)-bimodule, denoted \(\mathcal{B}^{(m)}_{X}\), which is locally projective over \(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p\), and locally faithfully flat as a right \(\mathcal{D}^{(m)}_{X}\)-module. The associated functor_
\[\mathcal{B}^{(m)}_{X}{\otimes}_{\mathcal{D}^{(m)}_{X}}?:\mathcal{D}^{(m)}_{X}-\text{mod}\to\widehat{\mathcal{D}}^{(m)}_{W(X)}/p-\text{mod}\]
_is exact and fully faithful, and admits an exact right adjoint \(\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p}(\mathcal{B}^{(m)}_{X},?)\). We have_
\[\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p}(\mathcal{B}^{(m)}_{X}, \mathcal{B}^{(m)}_{X}\otimes_{\mathcal{D}^{(m)}_{X}}\mathcal{M})\bar{\to} \mathcal{M}\]
_for all \(\mathcal{M}\in\mathcal{D}^{(m)}_{X}-\text{mod}\). Therefore, the functor_
\[\mathcal{B}^{(m)}_{X}{\otimes}^{L}_{\mathcal{D}^{(m)}_{X}}?:D(\mathcal{D}^{(m)}_{X}-\text{mod})\to D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p-\text{mod})\]
_is fully faithful and we have_
\[R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p}(\mathcal{B}^{(m)}_{X},\mathcal{B}^{(m)}_{X}\otimes^{L}_{\mathcal{D}^{(m)}_{X}}\mathcal{M}^{\cdot})\tilde{\to}\mathcal{M}^{\cdot}\]
_for all \(\mathcal{M}^{\cdot}\in D(\mathcal{D}^{(m)}_{X}-\text{mod})\)._
Proof.: The existence of the bimodule follows immediately from Theorems 3.6 and 3.7 (or 5.15). By Theorem 3.9, we see that it is locally faithfully flat as a right \(\mathcal{D}^{(m)}_{X}\)-module and locally projective over \(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p\). The rest follows directly once we prove that the adjunction map
\[\mathcal{M}\to\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p}(\mathcal{ B}^{(m)}_{X},\mathcal{B}^{(m)}_{X}\otimes_{\mathcal{D}^{(m)}_{X}}\mathcal{M}) \tag{3.2}\]
is an isomorphism for all \(\mathcal{M}\in\mathcal{D}^{(m)}_{X}-\text{mod}\). To see this, we may work locally and assume \(X=\text{Spec}(A)\). Let \(\Phi:\mathcal{A}\to W(A)\) be the morphism coming from a coordinatized lift of Frobenius on \(\mathcal{A}\), and \(\pi:W(A)\to\mathcal{A}\) the associated projector. Now note that, as a right \(\mathcal{D}^{(m)}_{X}\)-module, we have
\[\mathcal{B}^{(m)}_{X}=(1-\pi)\mathcal{B}^{(m)}_{X}\oplus\pi\mathcal{B}^{(m)}_ {X}=(1-\pi)\mathcal{B}^{(m)}_{X}\oplus\mathcal{D}^{(m)}_{X}\]
so that
\[\mathcal{B}^{(m)}_{X}\otimes_{\mathcal{D}^{(m)}_{X}}\mathcal{M}=((1-\pi) \mathcal{B}^{(m)}_{X}\otimes_{\mathcal{D}^{(m)}_{X}}\mathcal{M})\oplus \mathcal{M}\]
As the functor \(\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p}(\mathcal{B}^{(m)}_{X}, \mathcal{N})\) agrees with \(\{n\in\mathcal{N}|(1-\pi)n=0\}\), we see that (3.2) is an isomorphism as required.
This yields the
**Definition 3.17**.: 1) A module \(\mathcal{N}\in\widehat{\mathcal{D}}^{(m)}_{W(X)}/p-\text{mod}\) is accessible if it is of the form \(\mathcal{B}^{(m)}_{X}\otimes_{\mathcal{D}^{(m)}_{X}}\mathcal{M}\) for some \(\mathcal{M}\in\mathcal{D}^{(m)}_{X}-\text{mod}\).
2) A complex \(\mathcal{N}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p-\text{mod})\) is accessible if it is of the form \(\mathcal{B}^{(m)}_{X}\otimes^{L}_{\mathcal{D}^{(m)}_{X}}\mathcal{M}^{\cdot}\) for some \(\mathcal{M}^{\cdot}\in D(\mathcal{D}^{(m)}_{X}-\text{mod})\).
3) Let \(r\geq 1\). A complex \(\mathcal{N}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\) is accessible if \(\mathcal{N}^{\cdot}\otimes^{L}_{W_{r}(k)}k\) is accessible in \(D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p-\text{mod})\). Similarly, a complex \(\mathcal{N}^{\cdot}\in D_{cc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\) is accessible if \(\mathcal{N}^{\cdot}\otimes^{L}_{W(k)}k\) is accessible in \(D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p-\text{mod})\).
As the full subcategory of accessible complexes in \(D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p-\text{mod})\) is a thick triangulated subcategory, the same is true for the accessible complexes in \(D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\) and \(D_{cc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\). We denote these categories by
\(D_{\text{acc}}(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\) and \(D_{\text{acc}}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\), respectively.
In the presence of a lift of Frobenius, we have the following characterization of accessibility. Before giving it, let us set some notation: let \(A\) be a smooth affine algebra which possesses local coordinates and let \(\Phi:\mathcal{A}\to W(A)\) be the morphism coming from a coordinatized lift of Frobenius. For \(\mathcal{M}^{\cdot}\in D(\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}-\text{mod})\), let \(\Phi^{*}\mathcal{M}^{\cdot}:=\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}\otimes^{L}_{\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}}\mathcal{M}^{\cdot}\). For \(\mathcal{M}^{\cdot}\in D_{cc}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}-\text{mod})\), let \(\Phi^{*}\mathcal{M}^{\cdot}:=\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}\mathcal{M}^{\cdot}\), the cohomological completion of \(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\otimes^{L}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}\mathcal{M}^{\cdot}\). Then we have
**Theorem 3.18**.: _Let \(A\) and \(\mathcal{A}\) be as above, and let \(\Phi:\mathcal{A}\to W(A)\) be the morphism coming from a coordinatized lift of Frobenius. Let \(X=\text{Spec}(A)\). Then \(\mathcal{N}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\) is accessible iff \(\mathcal{N}^{\cdot}\cong\Phi^{*}\mathcal{M}^{\cdot}\) for some \(\mathcal{M}^{\cdot}\in D(\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}-\text{mod})\) (where \(\mathfrak{X}_{r}=\text{Spec}(\mathcal{A}/p^{r})\))._
_Similarly, \(\mathcal{N}^{\cdot}\in D_{cc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\) is accessible iff \(\mathcal{N}^{\cdot}\cong\Phi^{*}\mathcal{M}^{\cdot}\) for some \(\mathcal{M}^{\cdot}\in D_{cc}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}-\text{mod})\)._
Proof.: Let \(\mathcal{N}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p-\text{mod})\). It follows from 3.16 that \(\mathcal{N}^{\cdot}\) is accessible iff the adjunction
\[\mathcal{N}^{\cdot}\to\mathcal{B}^{(m)}_{X}\otimes^{L}_{\mathcal{D}^{(m)}_{X}}R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p}(\mathcal{B}^{(m)}_{X},\mathcal{N}^{\cdot})\]
is an isomorphism.
Now let \(\mathcal{N}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\). Suppose \(\mathcal{N}^{\cdot}\) is accessible. We have the bimodule \(\mathcal{B}^{(m)}_{\mathfrak{X}_{r}}=\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}\), which, by Theorem 3.9, is locally projective as a left \(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}\)-module and is locally faithfully flat as a right \(\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}\)-module. Therefore
\[R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}}(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}_{r}},\mathcal{N}^{\cdot})\otimes^{L}_{W_{r}(k)}k\tilde{\to}R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p}(\mathcal{B}^{(m)}_{X},\mathcal{N}^{\cdot}\otimes^{L}_{W_{r}(k)}k)\]
Therefore,
\[(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}\otimes^{L}_{\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}}R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}}(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}_{r}},\mathcal{N}^{\cdot}))\otimes^{L}_{W_{r}(k)}k\]
\[\tilde{\to}\mathcal{B}^{(m)}_{X}\otimes^{L}_{\mathcal{D}^{(m)}_{X}}R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p}(\mathcal{B}^{(m)}_{X},\mathcal{N}^{\cdot}\otimes^{L}_{W_{r}(k)}k)\]
so that the adjunction map
\[\mathcal{N}^{\cdot}\to\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}\otimes^{L}_{\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}}R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}}(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}_{r}},\mathcal{N}^{\cdot})\]
becomes an isomorphism after applying \(\otimes^{L}_{W_{r}(k)}k\), and therefore is already an isomorphism by Nakayama. Thus we have \(\mathcal{N}^{\cdot}\cong\Phi^{*}\mathcal{M}^{\cdot}\) for some \(\mathcal{M}^{\cdot}\in D(\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}-\text{mod})\); the converse direction is clear.
The statement for \(\mathcal{N}^{\cdot}\in D_{cc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\) is essentially identical, using the Nakayama lemma for cohomologically complete complexes and the fact that
\[R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}},\mathcal{N}^{\cdot})\]
is cohomologically complete if \(\mathcal{N}^{\cdot}\) is (since this sheaf is a summand of \(\mathcal{N}^{\cdot}\)).
From this, we obtain
**Corollary 3.19**.: _Let \(D_{acc,qcoh}(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\) denote the full subcategory of \(D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\) consisting of complexes \(\mathcal{N}^{\cdot}\) such that_
\[\mathcal{N}^{\cdot}\otimes_{W_{r}(k)}^{L}k\tilde{\to}\mathcal{B}^{(m)}_{X} \otimes_{\mathcal{D}^{(m)}_{X}}^{L}\mathcal{M}^{\cdot}\]
_where \(\mathcal{H}^{i}(\mathcal{M}^{\cdot})\in\text{Qcoh}(\mathcal{D}^{(m)}_{X})\) for all \(i\). Then, if \(X=\text{Spec}(A)\) as above, we have \(\mathcal{N}^{\cdot}\in D_{acc,qcoh}(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}- \text{mod})\) iff \(\mathcal{N}^{\cdot}\tilde{=}\Phi^{*}\mathcal{M}^{\cdot}\) for some \(\mathcal{M}^{\cdot}\in D_{qcoh}(\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}-\text{ mod})\)._
Proof.: By the previous theorem, it suffices to show the following: let \(\mathcal{M}^{\cdot}\in D(\mathcal{O}_{\mathfrak{X}_{r}}-\text{mod})\). Then \(\mathcal{M}^{\cdot}\in D(\text{Qcoh}(\mathfrak{X}_{r}))\) iff \(\mathcal{M}^{\cdot}\otimes_{W_{r}(k)}^{L}k\in D(\text{Qcoh}(\mathfrak{X}_{r}))\). The forward direction is obvious. For the reverse, consider the short exact sequence
\[0\to W_{r-1}(k)\to W_{r}(k)\to k\to 0\]
where on the left we identify \(pW_{r}(k)\tilde{=}W_{r-1}(k)\). This yields a distinguished triangle
\[\mathcal{M}^{\cdot}\otimes_{W_{r}(k)}^{L}W_{r-1}(k)\to\mathcal{M}^{\cdot} \to\mathcal{M}^{\cdot}\otimes_{W_{r}(k)}^{L}k\]
As \((\mathcal{M}^{\cdot}\otimes_{W_{r}(k)}^{L}W_{r-1}(k))\otimes_{W_{r-1}(k)}^{L}k \tilde{\to}\mathcal{M}^{\cdot}\otimes_{W_{r}(k)}^{L}k\), by induction we may assume \(\mathcal{M}^{\cdot}\otimes_{W_{r}(k)}^{L}W_{r-1}(k)\in D(\text{Qcoh}( \mathfrak{X}_{r-1}))\); hence both edges of the triangle can be regarded as members of \(D(\text{Qcoh}(\mathfrak{X}_{r}))\), therefore \(\mathcal{M}^{\cdot}\in D(\text{Qcoh}(\mathfrak{X}_{r}))\) as well.
We shall also find it useful to consider the category \(D_{\text{acc},qcoh}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\) consisting of complexes \(\mathcal{N}^{\cdot}\) such that
\[\mathcal{N}^{\cdot}\otimes_{W(k)}^{L}k\tilde{\to}\mathcal{B}^{(m)}_{X}\otimes_{\mathcal{D}^{(m)}_{X}}^{L}\mathcal{M}^{\cdot}\]
where \(\mathcal{H}^{i}(\mathcal{M}^{\cdot})\in\text{Qcoh}(\mathcal{D}^{(m)}_{X})\) for all \(i\). I don't know a local characterization of it in terms of the functor \(\Phi^{*}\) as above. However, we do have:
**Corollary 3.20**.: _Let \(D^{b}_{acc,coh}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\) denote the full subcategory of \(D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\) consisting of bounded complexes \(\mathcal{N}^{\cdot}\) such that_
\[\mathcal{N}^{\cdot}\otimes_{W(k)}^{L}k\tilde{\to}\mathcal{B}^{(m)}_{X} \otimes_{\mathcal{D}^{(m)}_{X}}^{L}\mathcal{M}^{\cdot}\]
_where \(\mathcal{H}^{i}(\mathcal{M}^{\cdot})\in\text{Coh}(\mathcal{D}^{(m)}_{X})\) for all \(i\). Then, if \(X=\text{Spec}(A)\) as above, we have \(\mathcal{N}^{\cdot}\in D^{b}_{acc,coh}(\widehat{\mathcal{D}}^{(m)}_{W(X)}- \text{mod})\) iff \(\mathcal{N}^{\cdot}\tilde{=}\Phi^{*}\mathcal{P}^{\cdot}\) for some \(\mathcal{P}^{\cdot}\in D^{b}_{coh}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}- \text{mod})\). Furthermore, a bounded complex \(\mathcal{N}^{\cdot}\) is contained in \(D^{b}_{acc,coh}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\) iff each \(\mathcal{H}^{i}(\mathcal{N}^{\cdot})\), considered as a complex in degree \(0\), is contained in \(D^{b}_{acc,coh}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\). In particular, the elements of \(D^{b}_{acc,coh}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\) in homological degree zero form an abelian subcategory of \(D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\)._
Proof.: Note that, as the functor \(\otimes_{W(k)}^{L}k\) has finite homological dimension, we have that the complex \(\mathcal{M}^{\cdot}\) in the statement is bounded. Further, \(\mathcal{N}^{\cdot}\tilde{=}\Phi^{*}\mathcal{P}^{\cdot}\) for some \(\mathcal{P}^{\cdot}\in D^{b}_{\text{coh}}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}-\text{mod})\) clearly implies
\[\mathcal{N}^{\cdot}\otimes_{W(k)}^{L}k\tilde{\to}\mathcal{B}^{(m)}_{X}\otimes_{ \mathcal{D}^{(m)}_{X}}^{L}(\mathcal{P}^{\cdot}\otimes_{W(k)}^{L}k)\]
which gives the converse direction of the if and only if. For the forward direction, note that \(\mathcal{N}^{\cdot}\tilde{=}\Phi^{*}\mathcal{P}^{\cdot}\) for some cohomologically complete complex \(\mathcal{P}^{\cdot}\) which satisfies
\(\mathcal{P}^{\cdot}\otimes_{W(k)}^{L}k\in D^{b}_{\mathrm{coh}}(\mathcal{D}^{(m)}_{X}-\mathrm{mod})\); therefore, applying [28], theorem 1.6.4, we have \(\mathcal{P}^{\cdot}\in D^{b}_{\mathrm{coh}}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}-\mathrm{mod})\).
Now, we have that \(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\widehat{\otimes}_{\mathcal{D}^{(m)}_{ \mathfrak{X}}}^{L}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\) is the cohomological completion of
\[\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\otimes_{\mathcal{D}^{(m)}_{ \mathfrak{X}}}^{L}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}=\Phi^{*}\mathcal{ D}^{(m)}_{\mathfrak{X}}\]
which, being \(p\)-torsion-free and \(p\)-adically complete, is already cohomologically complete. Thus, for any locally free coherent \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\)-module \(\mathcal{P}\), we have
\[\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\widehat{\otimes}_{\mathcal{D}^{(m)}_{\mathfrak{X}}}^{L}\mathcal{P}\tilde{=}\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\otimes_{\mathcal{D}^{(m)}_{\mathfrak{X}}}^{L}\mathcal{P}=\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\otimes_{\mathcal{D}^{(m)}_{\mathfrak{X}}}\mathcal{P}\]
As \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\) has finite homological dimension, we have that any coherent \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\)-module \(\mathcal{M}\) is quasi-isomorphic to a finite complex of locally free coherent \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\)-modules, and so, as \(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\) is flat over \(\mathcal{D}^{(m)}_{\mathfrak{X}}\), we see
\[\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\widehat{\otimes}_{\mathcal{D}^{(m)}_{\mathfrak{X}}}^{L}\mathcal{M}\tilde{=}\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\otimes_{\mathcal{D}^{(m)}_{\mathfrak{X}}}\mathcal{M}\]
for all such \(\mathcal{M}\); by induction on the cohomological length we see that
\[\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\widehat{\otimes}_{\mathcal{D}^{(m)}_{\mathfrak{X}}}^{L}\mathcal{M}^{\cdot}\tilde{=}\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\otimes_{\mathcal{D}^{(m)}_{\mathfrak{X}}}^{L}\mathcal{M}^{\cdot}\]
for all \(\mathcal{M}^{\cdot}\in D^{b}_{\mathrm{coh}}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}-\mathrm{mod})\). Therefore, a bounded complex \(\mathcal{N}^{\cdot}\) is contained in \(D^{b}_{\mathrm{acc},\mathrm{coh}}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\mathrm{mod})\) iff we have \(\mathcal{H}^{i}(\mathcal{N}^{\cdot})=\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\otimes_{\mathcal{D}^{(m)}_{\mathfrak{X}}}\mathcal{M}\) for some coherent \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\)-module \(\mathcal{M}\); indeed, the forward direction follows immediately from the above discussion, and the converse follows by induction on the cohomological length. The rest of the corollary follows immediately.
_Remark 3.21_.: In the course of the above proof we showed that
\[\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\widehat{\otimes}_{\mathcal{D}^{(m)}_{\mathfrak{X}}}^{L}\mathcal{M}\tilde{=}\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\otimes_{\mathcal{D}^{(m)}_{\mathfrak{X}}}\mathcal{M}\]
for any coherent \(\mathcal{D}^{(m)}_{\mathfrak{X}}\)-module \(\mathcal{M}\). In fact this isomorphism holds whenever \(\mathcal{M}\) is a cohomologically complete \(\mathcal{D}^{(m)}_{\mathfrak{X}}\)-module with bounded \(p\)-torsion; i.e., there is some \(N\in\mathbb{N}\) such that, if a section \(m\) is killed by a power of \(p\), then it is killed by \(p^{N}\). Let \(\mathcal{M}_{\mathrm{tors}}\) denote this subsheaf. Then, since modules of bounded torsion are cohomologically complete, we see that \(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\widehat{\otimes}_{\mathcal{D}^{(m)}_ {\mathfrak{X}}}^{L}\mathcal{M}_{\mathrm{tors}}=\Phi^{*}\mathcal{D}^{(m)}_{ \mathfrak{X}}\otimes_{\mathcal{D}^{(m)}_{\mathfrak{X}}}^{L}\mathcal{M}_{ \mathrm{tors}}=\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\otimes_{\mathcal{D}^{(m )}_{\mathfrak{X}}}\mathcal{M}_{\mathrm{tors}}\). Further, since \(\mathcal{M}/\mathcal{M}_{\mathrm{tors}}\) is \(p\)-torsion-free, so is \(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\otimes_{\mathcal{D}^{(m)}_{\mathfrak{X}}} \mathcal{M}/\mathcal{M}_{\mathrm{tors}}\) (by the flatness of \(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\)) and we have that \(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\widehat{\otimes}_{\mathcal{D}^{(m)}_ {\mathfrak{X}}}^{L}\mathcal{M}/\mathcal{M}_{\mathrm{tors}}\) is simply the \(p\)-adic completion of \(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}}\otimes_{\mathcal{D}^{(m)}_{\mathfrak{X}}} \mathcal{M}/\mathcal{M}_{\mathrm{tors}}\), which lives in homological degree \(0\). Therefore the result follows for \(\mathcal{M}\) from the short exact sequence
\[0\rightarrow\mathcal{M}_{\mathrm{tors}}\rightarrow\mathcal{M}\rightarrow \mathcal{M}/\mathcal{M}_{\mathrm{tors}}\to 0\]
When working mod \(p^{r}\) we can do even better:
**Corollary 3.22**.: _1) Let \(\mathcal{N}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\). Then \(\mathcal{N}^{\cdot}\) is accessible iff \(\mathcal{H}^{i}(\mathcal{N}^{\cdot})\) (considered as a complex concentrated in degree \(0\)) is accessible for all \(i\). Thus the full subcategory of \(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod}\) consisting of accessible modules (when considered as complexes in degree \(0\)) is abelian. The same holds for accessible quasicoherent and coherent modules._
Proof.: The statement is local, so we can apply 3.19 using the exactness of \(\Phi^{*}\) and the functor \(\mathcal{H}om_{\widehat{\mathcal{D}}_{W(X)}^{(m)}/p^{r}}(\Phi^{*}\mathcal{D}_{\mathfrak{X}_{r}}^{(m)},?)\).
We denote by \(\widehat{\mathcal{D}}_{W(X)}^{(m)}/p^{r}-\operatorname{mod}_{\operatorname{acc }}\), \(\widehat{\mathcal{D}}_{W(X)}^{(m)}/p^{r}-\operatorname{mod}_{\operatorname{acc },\operatorname{qcoh}}\), and \(\widehat{\mathcal{D}}_{W(X)}^{(m)}/p^{r}-\operatorname{mod}_{\operatorname{acc },\operatorname{coh}}\) the abelian category of accessible modules, accessible quasicoherent modules, and accessible coherent modules, respectively.
Now let us note what happens in the presence of a global smooth lift \(\mathfrak{X}_{r}\) of \(X\) (over \(W_{r}(k)\); we allow \(r=\infty\) here to cover the case of a smooth formal \(\mathfrak{X}\) over \(W(k)\)).
**Corollary 3.23**.: _Suppose that \(p>2\) if \(m=0\). There is a \((\widehat{\mathcal{D}}_{W(X)}^{(m)}/p^{r},\mathcal{D}_{\mathfrak{X}_{r}}^{(m )})\)-bimodule \(\mathcal{B}_{\mathfrak{X}_{r}}^{(m)}\) which is locally isomorphic to \(\Phi^{*}\mathcal{D}_{\mathfrak{X}_{r}}^{(m)}\) whenever we have \(X=\text{Spec}(A)\) as above. This bimodule induces an equivalence of categories_
\[\mathcal{D}_{\mathfrak{X}_{r}}^{(m)}-\text{mod}\to\widehat{\mathcal{D}}_{W(X) }^{(m)}/p^{r}-\text{mod}_{\operatorname{acc}}\]
_as well as a derived equivalence_
\[D(\mathcal{D}_{\mathfrak{X}_{r}}^{(m)}-\text{mod})\to D_{\operatorname{acc}}( \widehat{\mathcal{D}}_{W(X)}^{(m)}/p^{r}-\text{mod})\]
_and, when \(r=\infty\), an equivalence_
\[D_{cc}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}-\text{mod})\to D_{ \operatorname{acc}}(\widehat{\mathcal{D}}_{W(X)}^{(m)}-\text{mod})\]
This follows immediately from the above discussions together with 3.7 and Theorem 3.6.
To finish off this section we make some remarks about the completions in this theory, which will be useful when discussing right \(\mathcal{D}\)-modules and the de Rham-Witt resolution. Let us begin by noting the
**Lemma 3.24**.: _Let \(A\) be a smooth affine algebra which possesses local coordinates, and let \(\Phi:\mathcal{A}\to W(A)\) be the morphism coming from a coordinatized lift of Frobenius. Let \(X=\text{Spec}(A)\). For \(r\geq 1\) consider the filtrations \(\{V^{m}(W(A)/p^{r})\}_{m\geq 1}\) and_
\[F^{m}(W(A)/p^{r})=\{\sum_{(I,p)=1,\,s\geq 0}a_{I,s}\,p^{s}T^{I/p^{s}}\ |\ a_{I,s}=0\text{ for }s<m\}\]
_(for \(m\geq 1\)). These filtrations satisfy \(F^{m}(W(A)/p^{r})\subset V^{m}(W(A)/p^{r})\) (for all \(m\)) and \(V^{m+r-1}(W(A)/p^{r})\subset F^{m}(W(A)/p^{r})\) (for all \(m\))._
Proof.: The first inclusion is obvious since \(p^{s}T^{I/p^{s}}\in V^{m}\) whenever \(s\geq m\). For the second, set \(s=r+m-1\) and note that for \(I\in\mathbb{Z}_{\geq 0}^{d}\)
\[V^{s}(T^{I})=p^{s}T^{I/p^{s}}\]
Write \(T^{I}=T^{I_{1}}T^{I_{2}}\) where the valuation of each entry in \(I_{1}\) is \(\geq s\) and the valuation of each entry in \(I_{2}\) is \(<s\) (if there are no entries in the second category we take \(I_{2}=\emptyset\)). Let \(J_{1}=I_{1}/p^{s}\). Then
\[p^{s}T^{I/p^{s}}=T^{J_{1}}\cdot p^{s}T^{I_{2}/p^{s}}\]
If \(I_{2}=\emptyset\) then \(p^{s}T^{I/p^{s}}=p^{s}T^{J_{1}}=0\) (as \(s\geq r\)). If \(I_{2}\neq\emptyset\) then let \(t=\min\{\operatorname{val}_{p}(i)|i\in I_{2}\}\), and set \(J_{2}=I_{2}/p^{t}\). Then
\[p^{s}T^{I_{2}/p^{s}}=p^{t}p^{s-t}T^{J_{2}/p^{s-t}}\]
and if \(t<r\) then \(s-t\geq m\). So we have either \(p^{s}T^{I_{2}/p^{s}}=0\) (because \(t\geq r\)) or \(p^{s}T^{I_{2}/p^{s}}\in F^{m}(W(A)/p^{r})\); so we see that \(V^{r+m-1}(T^{I})\in F^{m}(W(A)/p^{r})\) for all \(I\). As \(W(A)/p^{r}\) is the completion of the span over \(\mathcal{A}/p^{r}\) of terms of the form \(T^{I}\) and \(V^{j}(T^{I})\) for some \(j>0\), the result follows.
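For orientation, here is the lemma in its simplest case: take \(A=k[T]\) (one variable), \(r=2\), \(m=1\), so that \(s=r+m-1=2\). For \(i\) prime to \(p\) we have \(I_{2}=(i)\), \(t=0\), and
\[V^{2}(T^{i})=p^{2}T^{i/p^{2}}\in F^{1}(W(A)/p^{2})\]
since \(s-t=2\geq m\); while for \(I=(p^{2}j)\) we have \(I_{2}=\emptyset\) and \(V^{2}(T^{p^{2}j})=p^{2}T^{j}=0\) in \(W(A)/p^{2}\). Both cases land in \(F^{1}(W(A)/p^{2})\), as the proof predicts.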
This allows us to consider limits in the theory. Recall that if \(\mathcal{F}_{i}\) is a sequence of objects in a triangulated category with maps \(\mathcal{F}_{i}\to\mathcal{F}_{i-1}\) an object \(\mathcal{L}\) is said to be a homotopy limit of the \(\mathcal{F}_{i}\) if there is a distinguished triangle
\[\mathcal{L}\to\prod_{i}\mathcal{F}_{i}\to\prod_{i}\mathcal{F}_{i}\]
where the second map is \(\operatorname{Id}-\operatorname{Shift}\). Even if they exist, homotopy limits are not functorial in a general triangulated category. However, in this paper all of our triangulated categories will be the derived category of modules over some sheaf of rings on \(X\), or a full subcategory thereof. In this setting, functorial cones and limits do exist; indeed, if \(\mathcal{R}\) is a sheaf of rings on \(X\) then, by a fundamental result of Spaltenstein, we have
\[D(\mathcal{R}-\operatorname{mod}){\widetilde{\to}}K(\operatorname{k-inj}( \mathcal{R}))\]
where \(\operatorname{k-inj}(\mathcal{R})\) is the category of k-injective complexes in \(\mathcal{R}-\operatorname{mod}\), and \(K(\,)\) is the homotopy category thereof. In other words, we may replace any element \(\mathcal{M}^{\cdot}\in D(\mathcal{R}-\operatorname{mod})\) with a k-injective complex, and this replacement is unique up to homotopy. As the category \(K(\operatorname{k-inj}(\mathcal{R}))\) admits functorial cones, so too does the category \(D(\mathcal{R}-\operatorname{mod})\). So in this setting we will write \(\operatorname{holim}(\mathcal{F}_{i})\) for the functorially defined homotopy limit of the \(\mathcal{F}_{i}\).
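A model example, which will reappear in the completions considered next: if \(\mathcal{F}\) is a \(p\)-torsion-free and \(p\)-adically complete sheaf of modules, placed in degree \(0\), then
\[\operatorname{holim}(\mathcal{F}/p^{i}\mathcal{F})\tilde{\to}\mathcal{F}\]
Indeed, comparing with the towers in the exact sequences \(0\to p^{i}\mathcal{F}\to\mathcal{F}\to\mathcal{F}/p^{i}\mathcal{F}\to 0\), the constant tower has homotopy limit \(\mathcal{F}\), while for the tower \((p^{i}\mathcal{F})\) the map \(\operatorname{Id}-\operatorname{Shift}\) on \(\prod_{i}p^{i}\mathcal{F}\) is an isomorphism (its inverse is given by the \(p\)-adically convergent sums \((g_{i})\to(\sum_{j\geq i}g_{j})\)), so its homotopy limit vanishes.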
Specializing further, suppose \(\mathcal{M}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\operatorname{mod})\). Then we define the derived completion of \(\mathcal{M}^{\cdot}\) along \(V^{i}(\mathcal{O}_{W(X)}/p^{r})\) to be
\[\widehat{\mathcal{M}}^{\cdot}:=\operatorname{holim}(\mathcal{M}^{\cdot}\otimes_{\mathcal{O}_{W(X)}/p^{r}}^{L}(\mathcal{O}_{W(X)}/p^{r})/V^{i}(\mathcal{O}_{W(X)}/p^{r}))\]
In the accessible, quasicoherent case, this object admits a more direct description:
**Proposition 3.25**.: _Let \(r\geq 1\). Let \((\mathcal{M}^{j},d)\) be a complex of \(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}\)-modules, and suppose its image \(\mathcal{M}^{\cdot}\) in \(D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\operatorname{mod})\) is accessible and quasicoherent. Form the complex \(\mathcal{N}^{\cdot}\) whose terms \(\mathcal{N}^{j}\) are the completions_
\[\widehat{\mathcal{M}}^{j}:=\lim_{i}\mathcal{M}^{j}/V^{i}(\mathcal{O}_{W(X)}/p^{r})\]
_Then there is an isomorphism_
\[\mathcal{N}^{\cdot}\tilde{\to}\widehat{\mathcal{M}}^{\cdot}\]
_In particular, if \(X=\text{Spec}(A)\) possesses local coordinates and \(\Phi:\mathcal{A}_{r}\to W(A)/p^{r}\) is as above, then for any \(\mathcal{M}^{\cdot}\in D_{\text{acc}}(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\operatorname{mod})\), if \(\mathcal{M}^{\cdot}=\Phi^{*}\mathcal{N}^{\cdot}\) then_
\[R\lim_{i}(\mathcal{M}^{\cdot}\otimes_{\mathcal{O}_{W(X)}/p^{r}}^{L}(\mathcal{O}_{W(X)}/p^{r})/V^{i})\tilde{=}\widehat{\Phi}^{*}\mathcal{N}^{\cdot}\]
_where \(\widehat{\Phi}^{*}\) denotes pullback followed by completion. For each \(\mathcal{N}^{i}\), \(\widehat{\Phi}^{*}\mathcal{N}^{i}\) is an inverse limit of direct sums of copies of \(\mathcal{N}^{i}\)._
Proof.: Let \(\mathcal{M}^{\cdot}\to\mathcal{F}^{\cdot}\) be a K-flat resolution in \(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\operatorname{mod}\). Then \(\mathcal{M}^{\cdot}\otimes_{\mathcal{O}_{W(X)}/p^{r}}^{L}(\mathcal{O}_{W(X)}/p^{r})/V^{i}\) is represented by the complex \(\mathcal{F}^{\cdot}\otimes_{\mathcal{O}_{W(X)}/p^{r}}(\mathcal{O}_{W(X)}/p^{r})/V^{i}\), which is the complex whose terms are \(\mathcal{F}^{j}/V^{i}(\mathcal{O}_{W(X)}/p^{r})\). Thus we obtain a map of complexes, given termwise by \(\mathcal{M}^{j}/V^{i}(\mathcal{O}_{W(X)}/p^{r})\to\mathcal{F}^{j}/V^{i}(\mathcal{O}_{W(X)}/p^{r})\). Taking the inverse limit we obtain a map of complexes
\[\mathcal{N}^{\cdot}\to(\widehat{\mathcal{F}})^{\cdot}\]
and the latter is a complex representing \(\widehat{\mathcal{M}}^{\cdot}\). To show this is an isomorphism, we may work locally and assume \(X=\operatorname{Spec}(A)\) possesses local coordinates. Choose a quasi-isomorphism \(\mathcal{F}^{\cdot}\to\mathcal{H}^{\cdot}\), where \(\mathcal{H}^{\cdot}\) is a K-flat complex whose terms are direct sums of \(\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}\) (this is possible as \(\mathcal{M}^{\cdot}\) is accessible and quasicoherent). It suffices to show that the induced map \((\widehat{\mathcal{M}})^{\cdot}\to(\widehat{\mathcal{H}})^{\cdot}\) is an isomorphism.
To see it, apply \(\otimes^{L}_{W_{r}(k)}k\). We can evaluate it using the fact that each \(\mathcal{M}^{i}\tilde{=}\Phi^{*}\mathcal{N}^{i}\), and that the completion along \(V^{i}\) is equal to the completion along the filtration \(F^{i}\) (by the lemma above); this makes \(\widehat{\mathcal{M}}^{i}\) an inverse limit of a surjective system of modules, each of which is a direct sum of copies of \(\mathcal{M}^{i}\). This shows that
\[(\widehat{\mathcal{M}})^{\cdot}\otimes^{L}_{W_{r}(k)}k\tilde{=}\widehat{\Phi}^{*}(\mathcal{N}^{\cdot}\otimes^{L}_{W_{r}(k)}k)\]
By the same token, if \(\mathcal{H}^{i}=\bigoplus_{J_{i}}\Phi^{*}\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}\), then
\[(\widehat{\mathcal{H}})^{i}\otimes^{L}_{W_{r}(k)}k\tilde{=}\widehat{\Phi}^{*}(\bigoplus_{J_{i}}\mathcal{D}^{(m)}_{X})\]
and since \(\widehat{\Phi}^{*}\) is exact and conservative we obtain the result.
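As the simplest illustration of the proposition, take \(\mathcal{N}=\mathcal{O}_{\mathfrak{X}_{r}}\) in degree \(0\), so that \(\mathcal{M}=\Phi^{*}\mathcal{N}\) has underlying sheaf \(\mathcal{O}_{W(X)}/p^{r}\); since \(W(A)/p^{r}\) is already complete along the \(V\)-filtration (it is the completion of the span described in the proof of Lemma 3.24), in this case the proposition simply returns \(\widehat{\Phi}^{*}\mathcal{N}=\Phi^{*}\mathcal{N}\).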
### The crystalline version
To relate the theory considered here to crystals, we run the above program with \(\widehat{\mathcal{D}}^{(m)}_{W(X),crys}\) in place of \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\). All of the theorems above go through without any change, and we obtain
**Theorem 3.26**.: _1) There is a well-defined \((\widehat{\mathcal{D}}^{(0)}_{W(X),\,crys}/p,\mathcal{D}^{(0)}_{X,\,crys})\) bimodule, denoted \(\mathcal{B}^{(0)}_{X,\,crys}\), which is locally projective over \(\widehat{\mathcal{D}}^{(0)}_{W(X),\,crys}/p\), and locally faithfully flat as a right \(\mathcal{D}^{(0)}_{X,\,crys}\)-module. The associated functor_
\[\mathcal{B}^{(0)}_{X,\,crys}\otimes_{\mathcal{D}^{(0)}_{X,\,crys}}?:\mathcal{D }^{(0)}_{X,\,crys}-\text{mod}\to\widehat{\mathcal{D}}^{(0)}_{W(X),\,crys}/p- \text{mod}\]
_is exact and fully faithful, and admits an exact right adjoint \(\mathcal{H}om_{\widehat{\mathcal{D}}^{(0)}_{W(X),\,crys}/p}(\mathcal{B}^{(0)} _{X,\,crys},?)\). We have_
\[\mathcal{H}om_{\widehat{\mathcal{D}}^{(0)}_{W(X),\,crys}/p}(\mathcal{B}^{(0)} _{X,\,crys},\mathcal{B}^{(0)}_{X,\,crys}\otimes_{\mathcal{D}^{(0)}_{X,\,crys}} \mathcal{M})\tilde{\to}\mathcal{M}\]
_for all \(\mathcal{M}\in\mathcal{D}^{(0)}_{X,\,crys}-\text{mod}\). Therefore, the functor_
\[\mathcal{B}^{(0)}_{X,\,crys}\otimes^{L}_{\mathcal{D}^{(0)}_{X,\,crys}}?:D(\mathcal{D}^{(0)}_{X,\,crys}-\text{mod})\to D(\widehat{\mathcal{D}}^{(0)}_{W(X),\,crys}/p-\text{mod})\]
_is fully faithful and we have_
\[R\mathcal{H}om_{\widehat{\mathcal{D}}^{(0)}_{W(X),\,crys}/p}(\mathcal{B}^{(0)}_{X,\,crys},\mathcal{B}^{(0)}_{X,\,crys}\otimes^{L}_{\mathcal{D}^{(0)}_{X,\,crys}}\mathcal{M}^{\cdot})\tilde{\to}\mathcal{M}^{\cdot}\]
_for all \(\mathcal{M}^{\cdot}\in D(\mathcal{D}^{(0)}_{X,\,crys}-\text{mod})\)._
_2) A complex \(\mathcal{M}^{\cdot}\in D(\widehat{\mathcal{D}}^{(0)}_{W(X),\,crys}/p-\text{mod})\) is called accessible if it is of the form \(\mathcal{B}^{(0)}_{X,\,crys}\otimes^{L}_{\mathcal{D}^{(0)}_{X,\,crys}}\mathcal{N}^{\cdot}\) for some \(\mathcal{N}^{\cdot}\in D(\mathcal{D}^{(0)}_{X,\,crys}-\text{mod})\). Let \(\mathcal{M}^{\cdot}\in D_{cc}(\widehat{\mathcal{D}}^{(0)}_{W(X),\,crys}-\text{mod})\). Then \(\mathcal{M}^{\cdot}\) is said to be accessible if \(\mathcal{M}^{\cdot}\otimes^{L}_{W(k)}k\) is accessible inside \(D(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p-\text{mod})\); similarly \(\mathcal{M}^{\cdot}\in D(\widehat{\mathcal{D}}^{(0)}_{W(X),\,\text{crys}}/p^{r}-\text{mod})\) is said to be accessible if \(\mathcal{M}^{\cdot}\otimes^{L}_{W_{r}(k)}k\) is accessible inside \(D(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p-\text{mod})\)._
_The complex \(\mathcal{M}^{\cdot}\) is accessible iff, for any open affine \(\text{Spec}(A)\subset X\) which admits local coordinates, and any coordinatized lift of Frobenius \(\Phi\), we have_
\[\mathcal{M}^{\cdot}\tilde{\to}\Phi^{*}\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X},\text{crys}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X},\text{crys}}}\mathcal{N}^{\cdot}\]
_for a complex \(\mathcal{N}^{\cdot}\in D_{cc}(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}-\text{mod})\). In particular, the latter condition is independent of the choice of \(\Phi\); one has the analogous statement for \(D(\widehat{\mathcal{D}}^{(0)}_{W(X),\,\text{crys}}/p^{r}-\text{mod})\), in which case \(\mathcal{M}^{\cdot}\) is accessible iff \(\mathcal{H}^{i}(\mathcal{M}^{\cdot})\) is for all \(i\)._
_3) Suppose that \(\mathfrak{X}\) is a smooth formal scheme over \(W(k)\) whose special fibre is \(X\) (it might not exist in general). Then there is an equivalence of categories_
\[D_{cc}(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X},\text{crys}}-\text{mod}) \to D_{acc}(\widehat{\mathcal{D}}^{(0)}_{W(X),\,\text{crys}}-\text{mod})\]
_and the analogous fact holds for schemes \(\mathfrak{X}_{r}\) which are smooth over \(W_{r}(k)\)._
In addition, we have a new subcategory to consider:
**Definition 3.27**.: An object \(\mathcal{M}\) in \(\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}-\text{mod}_{\text{acc}}\) is said to be locally nilpotent if each local section is annihilated by \(F^{m}(\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r})\) for all \(m\gg 0\) (here, \(F^{m}\) is the image of the operator filtration on \(\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}\)). If \(X=\text{Spec}(A)\) possesses local coordinates this is equivalent to \(\mathcal{M}\tilde{=}\Phi^{*}\mathcal{N}\) where \(\mathcal{N}\) is locally nilpotent over \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r},\text{crys}}\) in the usual sense (each section is killed by a power of the ideal \(\mathcal{I}_{r}\)). One makes the same definition over \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\), and the category of locally nilpotent accessible \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\)-modules is equivalent to the category of locally nilpotent accessible \(\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}\)-modules; indeed, the local nilpotence condition ensures that the \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\)-module structure extends uniquely to a \(\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}\)-module structure.
We use the subscript \(\ln\) to denote locally nilpotent objects in a given category. Then we have
**Proposition 3.28**.: _The map \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\to\widehat{\mathcal{D}}^{(0)}_{W(X), \text{crys}}/p^{r}\) yields a functor_
\[\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}-\text{mod}_{\text{acc},\, \text{qcoh},\ln}\to\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\text{mod}_{ \text{acc},\,\text{qcoh}}\]
_This functor is fully faithful, and its image consists of all sheaves \(\mathcal{M}\) which are accessible, quasicoherent, and locally nilpotent._
Proof.: The map of algebras \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\to\widehat{\mathcal{D}}^{(0)}_{W(X), \text{crys}}/p^{r}\) yields a forgetful functor \(D(\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}-\text{mod})\to D( \widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\text{mod})\). Suppose \(\mathcal{M}\) is an accessible, quasicoherent, and locally nilpotent module over \(\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}\). Restricting to \(X=\text{Spec}(A)\) for some \(A\) which possesses local coordinates, we may write
\[\mathcal{M}\!\tilde{\to}\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{X}_{r},\text{crys} }\otimes^{L}_{\mathcal{D}^{(0)}_{\mathfrak{X}_{r},\text{crys}}}\mathcal{N}\]
As \(\mathcal{N}\) is quasicoherent and locally nilpotent over \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r},\text{crys}}\), it is a union of its coherent \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r},\text{crys}}\)-submodules which are nilpotent (i.e. the entire module is annihilated by
some power of \(\mathcal{I}_{r}\)). Let \(\mathcal{N}^{\prime}\) be such a coherent submodule. Then the natural map
\[\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\otimes^{L}_{\mathcal{D}^{(0)}_{ \mathfrak{X}_{r}}}\mathcal{N}^{\prime}\to\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{ X}_{r},\text{crys}}\otimes^{L}_{\mathcal{D}^{(0)}_{\mathfrak{X}_{r},\text{crys}}} \mathcal{N}^{\prime}\]
is an isomorphism. To see this, note that \(\mathcal{N}^{\prime}\) is also coherent over \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\). Therefore both sides are isomorphic to \(\underset{\leftarrow}{\lim}(\mathcal{O}_{W(X)}/p^{r})/V^{m}\otimes_{\mathcal{O}_{\mathfrak{X}_{r}}}\mathcal{N}^{\prime}\) (use a presentation of \(\mathcal{N}^{\prime}\) over \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\) on the left hand side and over \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r},\text{crys}}\) on the right hand side). As the tensor product commutes with inductive limits, we see that
\[\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\otimes^{L}_{\mathcal{D}^{(0)}_{ \mathfrak{X}_{r}}}\mathcal{N}\tilde{\to}\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{ X}_{r},\text{crys}}\otimes^{L}_{\mathcal{D}^{(0)}_{\mathfrak{X}_{r},\text{crys}}} \mathcal{N}\]
In other words, \(\mathcal{M}\) is also accessible and quasicoherent when regarded as a module over \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\). Thus the natural functor
\[\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}-\text{mod}_{\text{acc}, \text{qcoh},\text{ln}}\to\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\text{mod} _{\text{acc},\text{qcoh}}\]
has image in \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\text{mod}_{\text{acc},\text{qcoh}}\) as claimed, and is clearly onto the category of accessible, quasicoherent, and locally nilpotent modules. For the full faithfulness, for any two objects \(\mathcal{M}_{1},\mathcal{M}_{2}\) in \(\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}-\text{mod}_{\text{acc}, \text{qcoh},\text{ln}}\), we consider the morphism of sheaves
\[\mathcal{H}om_{\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}}( \mathcal{M}_{1},\mathcal{M}_{2})\to\mathcal{H}om_{\widehat{\mathcal{D}}^{(0)}_ {W(X)}/p^{r}}(\mathcal{M}_{1},\mathcal{M}_{2})\]
we will be done if we can show it is an isomorphism; to do so, we can work locally and suppose \(X=\text{Spec}(A)\) and \(\mathcal{M}_{i}=\Phi^{*}\mathcal{N}_{i}\) (for \(i=1,2\)). Then we need to show that
\[\mathcal{H}om_{\mathcal{D}^{(0)}_{\mathfrak{X}_{r},\text{crys}}}(\mathcal{N} _{1},\mathcal{N}_{2})\to\mathcal{H}om_{\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}}( \mathcal{N}_{1},\mathcal{N}_{2})\]
is an isomorphism; but this is clear.
Now we give the relation with crystals in the usual sense. It reads
**Theorem 3.29**.: _Consider the category \(\text{Crys}_{W_{r}(k)}(X)\) of crystals on \(X\) over \(W_{r}(k)\). There is an exact functor_
\[\eta:\text{Crys}_{W_{r}(k)}(X)\to\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}-\text{mod}_{\text{acc}}\]
_which is fully faithful. Let_
\[\epsilon:\text{Crys}_{W_{r}(k)}(X)\to\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\text{mod}\]
_denote the composition of the restriction of \(\eta\) with the forgetful functor to \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\text{mod}\). Then, upon restriction to \(\text{Qcoh}(\text{Crys}_{W_{r}(k)}(X))\), \(\epsilon\) lands in \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\text{mod}_{\text{acc},\text{qcoh}}\). The functor_
\[\epsilon:D_{\text{qcoh}}(\text{Crys}_{W_{r}(k)}(X))\to D_{\text{acc},\text{ qcoh}}(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\text{mod})\]
_is also fully faithful. The image of \(\epsilon\) consists of all complexes \(\mathcal{M}^{\cdot}\) such that, for each \(i\), \(\mathcal{H}^{i}(\mathcal{M}^{\cdot})\) is accessible, quasicoherent, and locally nilpotent._
Proof.: Let \(\tilde{\mathcal{M}}\) be an element of \(\text{Crys}_{W_{r}(k)}(X)\). For every open subset of the form \(U=\text{Spec}(A)\) (which possesses local coordinates), and every flat lift \(\mathcal{A}_{r}\) to \(W_{r}(k)\), \(\tilde{\mathcal{M}}\) produces a canonically defined sheaf with locally nilpotent flat connection on \(\mathcal{A}_{r}\), denoted \((\mathcal{N},\nabla)\). For any map \(\Phi:\mathcal{A}_{r}\to W(A)/p^{r}\) coming from a coordinatized
lift of Frobenius, we have \(\widehat{\Phi}^{*}\mathcal{N}\), the completion (along \(V^{i}(\mathcal{O}_{W(X)}/p^{r})\)) of \(\Phi^{*}\mathcal{N}\); this is exactly the sheaf over \(W(X)_{p^{r}=0}\) that the theory of the crystalline site attaches to \(\mathcal{N}\) (coming from the fact that \(\Phi:\mathcal{A}_{r}\to W(A)/p^{r}\) is an inverse limit of pd thickenings).
For any other such map \(\Psi\), the crystalline theory yields a canonical isomorphism \(\widehat{\Phi}^{*}\mathcal{N}\tilde{\to}\widehat{\Psi}^{*}\mathcal{N}\), and, using 3.8, we have that this map agrees with the canonical isomorphism coming from the theory of accessible \(\widehat{\mathcal{D}}^{(0)}_{W(A),\text{crys}}/p^{r}\)-modules (this is because the morphism is realized via \(\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\widehat{\otimes}_{\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}}\mathcal{N}\tilde{\to}\Psi^{*}\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\widehat{\otimes}_{\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}}\mathcal{N}\)). So the sheaf \(\tilde{\mathcal{M}}_{W(X)_{p^{r}=0}}\) on \(W(X)_{p^{r}=0}\) which the theory of the crystalline site attaches to \(\tilde{\mathcal{M}}\) carries a canonical \(\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}\)-action. This sheaf is not accessible; however, there is an exact functor
\[\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}-\text{mod}\to\widehat{ \mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}-\text{mod}_{\text{acc}}\]
which is right adjoint to the inclusion functor (c.f. the discussion directly below 4.5); we denote it \(\mathcal{M}\to\mathcal{M}_{\text{acc}}\); if \(X=\text{Spec}(A)\) then \((\widehat{\Phi}^{*}\mathcal{N})_{\text{acc}}=\Phi^{*}\mathcal{N}\). Therefore we define the functor
\[\eta(\tilde{\mathcal{M}})=(\tilde{\mathcal{M}}_{W(X)_{p^{r}=0}})_{\text{acc}}\]
Clearly its restriction to quasicoherent crystals lands in quasicoherent accessible modules (this is the argument of the previous proposition). There is a canonical map \(\mathcal{H}om_{\text{crys}}(\tilde{\mathcal{M}}_{1},\tilde{\mathcal{M}}_{2}) \to\mathcal{H}om_{\widehat{\mathcal{D}}^{(0)}_{W(X),\text{crys}}/p^{r}}(\eta( \tilde{\mathcal{M}}_{1}),\eta(\tilde{\mathcal{M}}_{2}))\) of sheaves in the Zariski topology of \(X\); to show it is an isomorphism we may work locally, but there the claim reduces to the full-faithfulness of \(\Phi^{*}\).
Now, after restricting \(\eta\) to quasicoherent sheaves, we obtain \(\epsilon\), which is also fully faithful. Let us consider the derived version. Let \(\mathcal{M}_{i}\) (\(i=1,2\)) be elements of \(D_{\text{qcoh}}(\text{Crys}_{W_{r}(k)}(X))\), and replace \(\mathcal{M}_{2}\) with a K-injective resolution \(\mathcal{K}^{\cdot}\). Then we have
\[R\mathcal{H}om_{\text{crys}}(\mathcal{M}_{1},\mathcal{M}_{2})=\mathcal{H}om_{\text{crys}}(\mathcal{M}_{1},\mathcal{K}^{\cdot})\to\mathcal{H}om_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}(\epsilon(\mathcal{M}_{1}),\epsilon(\mathcal{K}^{\cdot}))\]
where the latter two \(\mathcal{H}om\) indicate Hom in the homotopy category of chain complexes. Now we have a map
\[R\mathcal{H}om_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}(\epsilon(\mathcal{M}_{1}),\epsilon(\mathcal{K}^{\cdot}))\to\mathcal{H}om_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}(\epsilon(\mathcal{M}_{1}),\epsilon(\mathcal{K}^{\cdot}))\]
and we will show that both of these maps are isomorphisms. This is a local question, which boils down to the following: if \(\mathcal{N}_{i}\) (\(i=1,2\)) are elements of the derived category of \(\text{Qcoh}_{\text{ln}}(\mathcal{D}^{(0)}_{\mathfrak{X}_{r}})\) (the category of quasicoherent \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\)-modules which are locally nilpotent), then the map
\[R\mathcal{H}om_{\text{Qcoh}_{\text{ln}}(\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}) }(\mathcal{N}_{1},\mathcal{N}_{2})\to R\mathcal{H}om_{\mathcal{D}^{(0)}_{ \mathfrak{X}_{r}}-\text{mod}}(\mathcal{N}_{1},\mathcal{N}_{2})\]
is an isomorphism. This follows by regarding a quasicoherent \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\)-module as a quasicoherent sheaf on the center \(\mathcal{Z}(\mathcal{D}^{(0)}_{\mathfrak{X}_{r}})\), and applying the functor of local cohomology with support along the zero section (compare, e.g. [8], lemma 3.1.7).
It is worth noting that we also have equivalences
\[D(\text{Qcoh}(\text{Crys}_{W_{r}(k)}(X)))\tilde{\to}D_{\text{qcoh}}(\text{Crys }_{W_{r}(k)}(X))\]
\[D(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\operatorname{mod}_{\operatorname{acc },\operatorname{qcoh},\operatorname{ln}})\tilde{\to}D_{\operatorname{acc}, \operatorname{qcoh},\operatorname{ln}}(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r }-\operatorname{mod})\]
These can be proved in a very similar way to the classical statement
\[D(\operatorname{QCoh}(X))\tilde{\to}D_{\operatorname{qcoh}}(\mathcal{O}_{X}- \operatorname{mod})\]
c.f. [12], corollary 5.5.
### Frobenius descent
In this subsection we explain how Berthelot's fundamental theorem on the independence of the Frobenius in arithmetic \(\mathcal{D}\)-module theory also follows from our results. To set things up, let us recall the main results of that theory:
**Theorem 3.30**.: _1) Let \(A\) be a smooth algebra which possesses local coordinates; let \(\mathcal{A}\) be a lift of \(A\) and let \(F:\mathcal{A}\to\mathcal{A}\) be a coordinatized lift of Frobenius. Consider the \((\widehat{\mathcal{D}}^{(m+1)}_{\mathcal{A}},\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})\) bisubmodule of \(\text{End}_{W(k)}(\mathcal{A})\) generated by \(F\). Then the natural map \(F^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\to\text{End}_{W(k)}(\mathcal{A})\) which takes \(a\otimes P\) to \(a\cdot F(P)\) is an isomorphism onto this bisubmodule. Furthermore, this bimodule induces an equivalence of categories_
\[\mathcal{M}\to F^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\otimes_{ \widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}}\mathcal{M}:=F^{*}\mathcal{M}\]
_from \(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}-\text{mod}\) to \(\widehat{\mathcal{D}}^{(m+1)}_{\mathcal{A}}-\text{mod}\)._
_2) If \(F_{1},F_{2}\) are two lifts of Frobenius as above, then (assuming \(p>2\) when \(m=0\)) there is a canonical isomorphism of bimodules_
\[F_{1}^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\tilde{\to}F_{2}^{*} \widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\]
_In particular, if \(X\) is an arbitrary smooth scheme over \(k\) and \(\mathfrak{X}\) is a lift, then there is a globally defined equivalence of categories \(F^{*}:\mathcal{D}^{(m)}_{\mathfrak{X}}-\text{mod}\to\mathcal{D}^{(m+1)}_{ \mathfrak{X}}-\text{mod}\)._
This theorem is proved in [5], section 2.3 (for part 1)) and [5], theorem 2.2.5 (for part 2)); c.f. also [19], corollary 13.3.8.
Part 1) can be proved rather rapidly from the methods of this paper. Our construction of \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) is a more elaborate version of the construction of \(F^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\), and the proof of Theorem 3.9 is a more elaborate version of the proof that \(F^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) is projective over \(\widehat{\mathcal{D}}^{(m+1)}_{\mathcal{A}}\). Once this is known, the crucial isomorphism
\[\widehat{\mathcal{D}}^{(m+1)}_{\mathcal{A}}\tilde{\to}\text{End}_{\widehat{ \mathcal{D}}^{(m),\text{opp}}_{\mathcal{A}}}(F^{*}\widehat{\mathcal{D}}^{(m) }_{\mathcal{A}})\]
can be proved after reduction mod \(p\), where it is an elementary computation (c.f. [17], proposition 4.18 for the case \(m=0\)).
Now let us address part 2). We in fact have the following for Witt differential operators:
**Lemma 3.31**.: _There is a morphism of sheaves of algebras \(F:\widehat{\mathcal{D}}^{(m)}_{W(X)}\to\widehat{\mathcal{D}}^{(m+1)}_{W(X)}\) whose restriction to \(\mathcal{O}_{W(X)}\) is the Witt-vector Frobenius map._
Proof.: It suffices to show this locally, so without loss of generality we assume \(X=\operatorname{Spec}(A)\) possesses local coordinates. Then for an operator \(P\in\widehat{\mathcal{D}}^{(m)}_{W(A)}\) we consider the operator \(F\circ P\circ F^{-1}\). When \(P\) is multiplication by an element \(a\in W(A)\), this operator is \(F(a)\). Using 2.10, we see that \(F\{\partial_{i}\}_{\lambda}F^{-1}=\{\partial_{i}\}_{p\lambda}\) for all \(\lambda\); so that Theorem 2.17 ensures us that \(F\circ P\circ F^{-1}\in\widehat{\mathcal{D}}^{(m+1)}_{W(A)}\) as required.
Now, we can prove
**Theorem 3.32**.: _Let \((\mathcal{A},F)\) be as above, and let \(X=\text{Spec}(A)\); let \(\Phi:\mathcal{A}\to W(A)\) be the associated map. Then for \(\mathcal{M}^{\cdot}\in D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\), if \(\mathcal{M}^{\cdot}=\Phi^{*}\mathcal{N}^{\cdot}\) then_
\[F^{*}\mathcal{M}^{\cdot}\tilde{\to}\Phi^{*}(F^{*}\mathcal{N}^{\cdot})\]
_where on the left we have the pullback with respect to the algebra morphism \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\to\widehat{\mathcal{D}}^{(m+1)}_{W(X)}\), and on the right we have Berthelot's Frobenius pullback. In particular, for two such morphisms \(F_{1},F_{2}\) we obtain a canonical isomorphism \(F_{1}^{*}\mathcal{N}^{\cdot}\tilde{\to}F_{2}^{*}\mathcal{N}^{\cdot}\)._
Proof.: By definition we have
\[F^{*}\mathcal{M}^{\cdot}\tilde{=}\widehat{\mathcal{D}}^{(m+1)}_{W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot}\]
where the action of \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\) on \(\widehat{\mathcal{D}}^{(m+1)}_{W(X)}\) is via the map \(F\). As \(\mathcal{M}^{\cdot}\tilde{=}\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}\mathcal{N}^{\cdot}\), we are left to ponder
\[\widehat{\mathcal{D}}^{(m+1)}_{W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{ D}}^{(m)}_{W(X)}}\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\]
As \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\) is locally projective over \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\) this is concentrated in a single degree, and is in fact a summand of \(\widehat{\mathcal{D}}^{(m+1)}_{W(X)}\). We are considering \(\widehat{\mathcal{D}}^{(m+1)}_{W(X)}\) as a \((\widehat{\mathcal{D}}^{(m+1)}_{W(X)},\widehat{\mathcal{D}}^{(m)}_{W(X)})\) bimodule; and it is exactly the bisubmodule of \(\mathcal{E}nd_{W(k)}(\mathcal{O}_{W(X)})\) which is locally generated by \(F\). As \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\) is the \((\widehat{\mathcal{D}}^{(m)}_{W(X)},\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}})\) bisubmodule of \(\mathcal{H}om_{W(k)}(\mathcal{O}_{\mathfrak{X}},\mathcal{O}_{W(X)})\) locally generated by \(\Phi\), we conclude that
\[\widehat{\mathcal{D}}^{(m+1)}_{W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{ D}}^{(m)}_{W(X)}}\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\]
is the \((\widehat{\mathcal{D}}^{(m+1)}_{W(X)},\widehat{\mathcal{D}}^{(m)}_{\mathfrak{ X}})\) bisubmodule of \(\mathcal{H}om_{W(k)}(\mathcal{O}_{\mathfrak{X}},\mathcal{O}_{W(X)})\) locally generated by \(\Phi\circ F\). As \(F\circ\Phi=\Phi\circ F\) we obtain that these two bimodules are isomorphic and so
\[\widehat{\mathcal{D}}^{(m+1)}_{W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot}\tilde{=}\widehat{\mathcal{D}}^{(m+1)}_{W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}\mathcal{N}^{\cdot}\]
\[\tilde{\to}\Phi^{*}\widehat{\mathcal{D}}^{(m+1)}_{\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m+1)}_{\mathfrak{X}}}F^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}\mathcal{N}^{\cdot}\]
\[=\Phi^{*}(F^{*}\mathcal{N}^{\cdot})\]
as required.
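For orientation, the commutation \(F\circ\Phi=\Phi\circ F\) used above can be checked by hand in the one-variable model: if \(F(T)=T^{p}\) is the standard coordinatized lift and \(\Phi(T)=[T]\) is the Teichmüller lift (as one expects for this choice of \(F\)), then
\[F(\Phi(T))=F([T])=[T^{p}]=\Phi(T^{p})=\Phi(F(T))\]
since the Witt-vector Frobenius takes \([a]\) to \([a^{p}]\).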
Restricting oneself to the subcategory of coherent accessible modules, we obtain an isomorphism of functors \(F_{1}^{*}\tilde{\to}F_{2}^{*}\) on that category. This recovers part 2) of Berthelot's theorem above. In addition, one may use this technique to get a rather quick proof that the category of coherent modules in Berthelot's theory of overconvergent \(\mathcal{D}\)-modules admits a Frobenius action. We will return to this topic elsewhere.
## 4. Operations on Accessible Modules
In this chapter we develop the basic operations on the accessible \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-modules; we assume throughout that the level \(m\geq 0\).
### Operations on modules: Right Modules and the left-right interchange
In this section, we'll construct the category of accessible _right_\(\mathcal{D}^{(m)}_{W(X)}\)-modules and explain the left-right interchange in this context. Before doing so, we give a quick review of the left-right interchange in the classical situation: on \(\mathfrak{X}\), the line bundle \(\omega_{\mathfrak{X}}\) carries the natural structure of a right \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\) module. One can define the structure of a sheaf of algebras on
\[\omega_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}}\widehat{\mathcal{D }}^{(m)}_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}}\omega_{\mathfrak{ X}}^{-1}\]
via
\[(s_{1}\otimes\Phi_{1}\otimes t_{1})\cdot(s_{2}\otimes\Phi_{2}\otimes t_{2})=s_{1}\otimes\Phi_{1}\langle t_{1},s_{2}\rangle\Phi_{2}\otimes t_{2}\]
and there is an isomorphism of sheaves of algebras
\[\omega_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}}\widehat{\mathcal{D }}^{(m)}_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}}\omega_{\mathfrak{ X}}^{-1}\tilde{=}\widehat{\mathcal{D}}^{(m),\mathrm{op}}_{\mathfrak{X}}\]
(this uses the right action of \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\) on \(\omega_{\mathfrak{X}}\)). One obtains from this the fact that \(\mathcal{M}\to\omega_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}} \mathcal{M}\) is an equivalence of categories from \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}-\mathrm{mod}\) to \(\widehat{\mathcal{D}}^{(m),\mathrm{op}}_{\mathfrak{X}}-\mathrm{mod}\). This extends to an equivalence on the derived categories, which preserves cohomologically complete objects on each side. We wish to extend this to the accessible \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-modules.
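Recall also the coordinate form of the right action underlying all of this (the standard action through the Lie derivative): for \(\omega=f\,dT_{1}\wedge\cdots\wedge dT_{n}\) in local coordinates,
\[\omega\cdot\partial_{i}=-\partial_{i}(f)\,dT_{1}\wedge\cdots\wedge dT_{n},\qquad\omega\cdot\partial_{i}^{[k]}=(-1)^{k}\partial_{i}^{[k]}(f)\,dT_{1}\wedge\cdots\wedge dT_{n}\]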
Let's begin with the local situation:
**Proposition 4.1**.: _Let \(X=\text{Spec}(A)\) where \(A\) possesses local coordinates; let \(F:\mathcal{A}\to\mathcal{A}\) be a coordinatized lift of Frobenius, let \(\Phi:\mathcal{A}\to W(A)\) be the associated morphism, and let \(\pi:W(A)\to\mathcal{A}\) be the associated projection as in 3.15. Then_
\[\Phi^{!}(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}):=\text{Hom}_{\widehat{ \mathcal{D}}^{(m)}_{W(A)}}(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}},\widehat{\mathcal{D}}^{(m)}_{W(A)})\tilde{=}\pi\cdot\widehat{\mathcal{D}}^{ (m)}_{W(A)}\]
_is a \((\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}},\widehat{\mathcal{D}}^{(m)}_{W(A)})\) bimodule, which is projective as a right \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\)-module. Further, there is an isomorphism of right \(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\)-modules_
\[\tilde{\text{Hom}}_{\mathcal{A}}(W(A),\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})\tilde{\to}\Phi^{!}(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})\]
_where \(\tilde{\text{Hom}}_{\mathcal{A}}(W(A),\widehat{\mathcal{D}}^{(m)}_{\mathcal{ A}})\subset\text{Hom}_{\mathcal{A}}(W(A),\widehat{\mathcal{D}}^{(m)}_{ \mathcal{A}})\) consists of those \(\mathcal{A}\)-linear morphisms \(\epsilon:W(A)\to\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) which satisfy \(\epsilon(V^{r}(W(A)))\subset p^{r}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\)._
Proof.: The first line follows immediately from \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}=\widehat{\mathcal{D}}^{(m)}_ {W(A)}\cdot\pi\); the \((\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}},\widehat{\mathcal{D}}^{(m)}_{W(A)})\)- bimodule structure is deduced from the right action of \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\) on
\(\text{Hom}_{\widehat{\mathcal{D}}^{(m)}_{W(A)}}(\Phi^{*}\widehat{\mathcal{D}} ^{(m)}_{\mathcal{A}},\widehat{\mathcal{D}}^{(m)}_{W(A)})\) and the right action of \(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) on \(\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\). So the only thing that needs to be done is to construct the isomorphism
\[\tilde{\text{Hom}}_{\mathcal{A}}(W(A),\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})\tilde{\to}\Phi^{!}(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})\]
First, define \(\tilde{\text{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\) to be the set of \(\mathcal{A}\) linear maps \(\epsilon:W(A)\to\mathcal{A}\) satisfying \(\epsilon(V^{r}(W(A)))\subset p^{r}\mathcal{A}\). By the lemma below we have that if \(\epsilon\in\tilde{\text{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\), then \(\Phi\circ\epsilon\in\pi\cdot\widehat{\mathcal{D}}^{(m)}_{W(A)}\); thus we obtain an inclusion \(\iota:\tilde{\text{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\subset\Phi^{!}( \widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})\). Now, I claim that there is an isomorphism
\[\tilde{\text{Hom}}_{\mathcal{A}}(W(A),\widehat{\mathcal{D}}^{(m)}_{\mathcal{A }})\tilde{=}\tilde{\text{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\widehat{ \otimes}_{\mathcal{A}}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}} \tag{4.1}\]
where \(\widehat{\otimes}\) here denotes the completion with respect to the filtration
\[F^{m}(\tilde{\text{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})):=\{\epsilon\in \tilde{\text{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})|\epsilon(W(A))\subseteq p^{ m}\mathcal{A}\}\]
To prove this note that, via the inclusion \(\mathcal{A}\subset\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) we have an inclusion \(\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\subset\tilde{ \operatorname{Hom}}_{\mathcal{A}}(W(A),\widehat{\mathcal{D}}^{(m)}_{\mathcal{A }})\), which induces a morphism
\[\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\otimes_{\mathcal{A} }\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\to\tilde{\operatorname{Hom}}_{ \mathcal{A}}(W(A),\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})\]
whose completion induces (4.1) (that it is an isomorphism is easily checked in local coordinates).
So, via the right action of \(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) on \(\Phi^{!}(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})\) we obtain a map
\[\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\otimes_{\mathcal{ A}}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\to\Phi^{!}(\widehat{\mathcal{D}}^{(m)} _{\mathcal{A}})\]
given by \((\epsilon,P)\to\iota(\epsilon)\cdot P\). As elements of \(F^{m}(\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\mathcal{A}))\) are contained in \(V^{m}(\widehat{\mathcal{D}}^{(0)}_{W(A)})\) (as shown in the lemma directly below), we may complete to obtain a map
\[a:\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\widehat{\otimes }_{\mathcal{A}}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\to\Phi^{!}(\widehat{ \mathcal{D}}^{(m)}_{\mathcal{A}})\]
Identifying \(\Phi^{!}(\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})\tilde{=}\pi\cdot\widehat {\mathcal{D}}^{(m)}_{W(A)}\) with a set of \(W(k)\)-linear morphisms from \(W(A)\) to \(\mathcal{A}\), this map is given by the composition
\[(\epsilon,P)\to P\circ\epsilon:W(A)\to\mathcal{A}\]
It follows directly that the map \(a\) is injective. To see the surjectivity, it suffices to show that the image of \(a\) is preserved under the right action of \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\) (since \(\pi\) is clearly contained in the image). The proof of this is given in the lemma directly below.
In the proof above, we used the
**Lemma 4.2**.: _1) Let \(\phi_{I/p^{r}}:W(A)\to\mathcal{A}\) be the \(\mathcal{A}\)-linear map which takes \(p^{r}T^{I/p^{r}}\) to \(p^{r}\) and which takes \(p^{r^{\prime}}T^{J/p^{r^{\prime}}}\) to \(0\) for all \(J\neq I\). Then \(\Phi\circ\phi_{I/p^{r}}\in\pi\cdot\widehat{\mathcal{D}}^{(0)}_{W(A)}\)._
_2) Let \(P\in\widehat{\mathcal{D}}^{(0)}_{W(A)}\). Then \(\phi_{I/p^{r}}(P\cdot):W(A)\to\mathcal{A}\) is contained in the image of \(\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\widehat{\otimes}_{ \mathcal{A}}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) in \(\operatorname{Hom}_{W(k)}(W(A),\mathcal{A})\)._
Proof.: These will be variants of the proofs of 3.15 and 3.2.
1) Start with the case \(A=k[T]\), \(\mathcal{A}=W(k)<<T>>\), where the Frobenius lift is the standard one. Let \(i\in\{1,\dots,p^{r}-1\}\) be such that \(\operatorname{val}_{p}(i)=0\). Let \(r^{\prime}\geq r\). As in 3.14, let \(\pi_{i/p^{r}}:W_{r^{\prime}+1}(A)\to W_{r^{\prime}+1}(A)\) denote the projection operator which takes any monomial of the form \(p^{r}T^{j+i/p^{r}}\) to itself (for each \(j\in\mathbb{Z}_{\geq 0}\)), and all other monomials to \(0\). By induction on \(r^{\prime}\) we shall construct an operator satisfying
\[\Phi_{r^{\prime}}\circ\phi_{i/p^{r}}^{r^{\prime}}:=\{d\}_{i/p^{r}}\circ\pi_{i/p^{r}}-\sum_{j=1}^{p^{r^{\prime}+1}-1}f_{j}(T^{j/p^{r^{\prime}}}\{d\}_{j/p^{r^{\prime}}})\circ\{d\}_{i/p^{r}}\circ\pi_{i/p^{r}} \tag{4.2}\]
for some \(f_{j}\in\mathbb{Z}_{p}[X]\), so that \(\phi_{i/p^{r}}^{r^{\prime}}\equiv\phi_{i/p^{r}}^{r^{\prime}-1}\) mod \(V^{r^{\prime}}(\widehat{\mathcal{D}}^{(0)}_{W(A)})\). The inverse limit of these operators is then the required \(\phi_{i/p^{r}}\).
We will work now with the copy of \(W_{r^{\prime}+1}(A^{(r^{\prime})})\) contained in \(\mathcal{A}_{r^{\prime}+1}=W_{r^{\prime}+1}(k)[T]\) and construct the operator there. We will proceed by induction; when \(r^{\prime}=r\) we have that
\[d^{[i]}\circ\pi_{i/p^{r}}\]
is already equal to \(\phi_{i/p^{r}}^{r}\). Assuming by induction that \(\phi_{i/p^{r}}^{r^{\prime}-1}\) satisfying (4.2) has been constructed, consider the operator
\[d^{[p^{r^{\prime}-r}i]}\circ\pi_{i/p^{r}}-\sum_{j=1}^{p^{r^{\prime}}-1}f_{j}(T^{j}d^{[j]})\circ d^{[p^{r^{\prime}-r}i]}\circ\pi_{i/p^{r}}\]
on \(\mathcal{A}_{r^{\prime}+1}\). Arguing exactly as in the proof of 3.12, and employing the induction hypothesis, we see that the evaluation of this operator on a term of the form \(p^{r}T^{p^{r^{\prime}-r}i}T^{ap^{r^{\prime}}}\) yields \(p^{r^{\prime}}h(a)T^{ap^{r^{\prime}}}\) where \(h\) is a polynomial in \(\mathbb{Z}_{p}[X]\). So we can add a term of the form
\[-p^{r^{\prime}}h(T^{p^{r^{\prime}+1}}d^{[p^{r^{\prime}+1}]})\circ d^{[p^{r^{ \prime}-r}i]}\circ\pi_{i/p^{r}}\]
to construct \(\phi_{i/p^{r}}^{r^{\prime}}\), as desired.
To construct \(\phi_{I/p^{r}}\) in general, use the inclusions \(\widehat{\mathcal{D}}_{W(k[T_{i}])}^{(0)}\to\widehat{\mathcal{D}}_{W(A)}^{(0)}\) (as noted in 3.15) and take products of operators in \(\widehat{\mathcal{D}}_{W(k[T_{i}])}^{(0)}\).
2) As \(\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\widehat{\otimes}_ {\mathcal{A}}\widehat{\mathcal{D}}_{\mathcal{A}}^{(m)}\to\operatorname{Hom}_{W (k)}(W(A),\mathcal{A})\) is injective, we identify \(\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\widehat{\otimes}_ {\mathcal{A}}\widehat{\mathcal{D}}_{\mathcal{A}}^{(m)}\) with its image. By an \(F\)-conjugation argument, we can assume \(m=0\) from now on. As \(A\) is etale over \(k[T_{1},\ldots,T_{n}]\), we can assume without loss of generality that \(A=k[T_{1},\ldots,T_{n}]\) with the coordinate lift of Frobenius.
We begin by considering \(\phi_{I/p^{m}}(\{\partial_{j}\}_{1/p^{r}}\cdot)\); without loss of generality suppose \(j=1\). Choose \(l\geq r,m\) and consider \(\phi_{I/p^{m}}\) acting on \(W_{l+1}(A)\); as usual we work with the copy of \(W_{l+1}(A^{(l)})\subset\mathcal{A}_{l+1}=W_{l+1}(k)[T_{1},\ldots,T_{n}]\). Then \(\phi_{I/p^{m}}\) is nonzero on monomials of the form \(T^{K}\) where \(K=(p^{l-m}i_{1}+p^{l}j_{1},\ldots,p^{l-m}i_{n}+p^{l}j_{n})\) with \(j_{i}\in\mathbb{Z}_{\geq 0}\) for all \(i\), and zero on all other monomials. In this context, \(\{\partial_{1}\}_{1/p^{r}}\) becomes the operator \(\partial_{1}^{[p^{l-r}]}\). The operator \(\phi_{I/p^{m}}(\partial_{1}^{[p^{l-r}]}\cdot)\) is nonzero on monomials of the form \(T^{K^{\prime}}\) where \(K^{\prime}=(p^{l-m}i_{1}+p^{l}j_{1}+p^{l-r},\ldots,p^{l-m}i_{n}+p^{l}j_{n})\), and zero on all the others.
We have
\[d_{1}^{[p^{l-r}]}T_{1}^{p^{l-m}i_{1}+p^{l}j_{1}+p^{l-r}}T_{2}^{p^{l-m}i_{2}+p^ {l}j_{2}}\cdots T_{n}^{p^{l-m}i_{n}+p^{l}j_{n}}\]
\[=\binom{p^{l-m}i_{1}+p^{l}j_{1}+p^{l-r}}{p^{l-r}}T_{1}^{p^{l-m}i_{1}+p^{l}j_{ 1}}T_{2}^{p^{l-m}i_{2}+p^{l}j_{2}}\cdots T_{n}^{p^{l-m}i_{n}+p^{l}j_{n}}\]
However,
\[\binom{p^{l-m}i_{1}+p^{l-r}(p^{r}j_{1}+1)}{p^{l-r}}=f_{l}(p^{r}j_{1}+1)\]
is a polynomial in \(j_{1}\) (by 3.12). So we deduce that
\[\phi_{I/p^{m}}(\partial_{1}^{[p^{l-r}]}\cdot)=p^{a}\lim_{l}f_{l}(1+p^{r}T_{1} \partial_{1})\circ\phi_{I^{\prime}/p^{b}}\]
where \(\lim_{l}f_{l}(1+p^{r}T_{1}\partial_{1})\) is understood as an element of \(\widehat{\mathcal{D}}_{\mathcal{A}}^{(0)}\), \(I^{\prime}/p^{b}\) is the term \(I/p^{m}+(1/p^{r},0,\ldots,0)\), and \(a=|b-\max\{r,m\}|\); thus we see that \(\phi_{I/p^{m}}(\partial_{1}^{[p^{l-r}]}\cdot)\in\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\widehat{\otimes}_{\mathcal{A}}\widehat{\mathcal{D}}_{\mathcal{A}}^{(0)}\). As any element of \(\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\widehat{\otimes}_{\mathcal{A}}\widehat{\mathcal{D}}_{\mathcal{A}}^{(0)}\) can be written as a sum of the form
\[\sum_{I,r}\phi_{I/p^{r}}\cdot P_{I}\]
we see that \(\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\widehat{\otimes}_{ \mathcal{A}}\widehat{\mathcal{D}}_{\mathcal{A}}^{(0)}\) is closed under the action of \(\{\partial_{j}\}_{1/p^{r}}\), and therefore under the action of every term of the form \(\{\partial\}_{J/p^{r}}\). By Theorem 2.17, it suffices to show that it is closed under the action of terms of the form \(T^{K/p^{r}}\{\partial\}_{J/p^{r}}\). That follows from the above by noting that the operator
\[\phi_{I/p^{m}}(T^{K/p^{r}}.):V^{r}(W(A))\to\mathcal{A}\]
is the restriction to \(V^{r}(W(A))\) of a sum of operators of the form \(\phi_{J/p^{l}}\) (and this is straightforward to verify).
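Note, for orientation, that part 1) in particular exhibits each \(\phi_{I/p^{r}}\) as an element of \(\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\mathcal{A})\): directly from the definitions,
\[\phi_{I/p^{r}}(V^{s}(W(A)))\subseteq p^{r}\mathcal{A}\subseteq p^{s}\mathcal{A}\quad(s\leq r),\qquad\phi_{I/p^{r}}(V^{s}(W(A)))=0\quad(s>r)\]
since the only spanning monomials of \(V^{s}(W(A))\) on which \(\phi_{I/p^{r}}\) is nonzero are sent to \(p^{r}\).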
From the above we obtain
**Definition 4.3**.: Set \(\mathcal{B}_{X}^{(m),r}:=\mathcal{H}om_{\widehat{\mathcal{D}}_{W(X)}^{(m)}/p}(\mathcal{B}_{X}^{(m)},\widehat{\mathcal{D}}_{W(X)}^{(m)}/p)\). This is a sheaf of \((\mathcal{D}_{X}^{(m)},\widehat{\mathcal{D}}_{W(X)}^{(m)}/p)\) bimodules, which is locally isomorphic to \(\Phi^{!}(\mathcal{D}_{X}^{(m)})\) for a coordinatized lift of Frobenius \(\Phi\).
In particular, this bimodule always exists when \(n=1\). We can thus copy over 3.17 and obtain
**Definition 4.4**.: 1) A module \(\mathcal{N}\in\widehat{\mathcal{D}}_{W(X)}^{(m),\operatorname{op}}/p-\operatorname{mod}\) is accessible if it is of the form \(\mathcal{M}\otimes_{\mathcal{D}_{X}^{(m)}}\mathcal{B}_{X}^{(m),r}\) for some \(\mathcal{M}\in\mathcal{D}_{X}^{(m),\operatorname{op}}-\operatorname{mod}\).
2) A complex \(\mathcal{N}^{\cdot}\in D(\widehat{\mathcal{D}}_{W(X)}^{(m),\operatorname{op}}/p-\operatorname{mod})\) is accessible if it is of the form \(\mathcal{M}^{\cdot}\otimes_{\mathcal{D}_{X}^{(m)}}^{L}\mathcal{B}_{X}^{(m),r}\) for some \(\mathcal{M}^{\cdot}\in D(\mathcal{D}_{X}^{(m),\operatorname{op}}-\operatorname{mod})\).
3) Let \(r\geq 1\). A complex \(\mathcal{N}^{\cdot}\in D(\widehat{\mathcal{D}}_{W(X)}^{(m),\operatorname{op}}/p^{r}-\operatorname{mod})\) is accessible if \(\mathcal{N}^{\cdot}\otimes_{W_{r}(k)}^{L}k\) is accessible in \(D(\widehat{\mathcal{D}}_{W(X)}^{(m),\operatorname{op}}/p-\operatorname{mod})\). Similarly, a complex \(\mathcal{N}^{\cdot}\in D_{cc}(\widehat{\mathcal{D}}_{W(X)}^{(m),\operatorname{op}}-\operatorname{mod})\) is accessible if \(\mathcal{N}^{\cdot}\otimes_{W(k)}^{L}k\) is accessible in \(D(\widehat{\mathcal{D}}_{W(X)}^{(m),\operatorname{op}}/p-\operatorname{mod})\).
The analogues of all the basic results on accessibility (Theorem 3.18 through 3.23) hold without change, and with identical proofs, for right modules.
With this in place, we turn to defining the fundamental bimodule \(\widehat{\mathcal{D}}_{W(X),\operatorname{acc}}^{(m)}\) and then describing the left-right swap for \(\widehat{\mathcal{D}}_{W(X)}^{(m)}\)-modules.
First suppose we are working locally, i.e., \(X=\operatorname{Spec}(A)\) possesses local coordinates. Then the functor
\[\mathcal{N}^{\cdot}\to\Phi^{*}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}}^{L}R\mathcal{H}om_{\widehat{\mathcal{D}}_{W(X)}^{(m)}}(\Phi^{*}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)},\mathcal{N}^{\cdot})\]
is computed by the bimodule \(\Phi^{*}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}}\Phi^{!}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)})\); for two coordinatized lifts of Frobenius \(\Phi\) and \(\Psi\) there is a canonical isomorphism
\[\Phi^{*}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}}\Phi^{!}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)})\tilde{\to}\Psi^{*}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}}\Psi^{!}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}) \tag{4.3}\]
so that these locally defined bimodules glue to a sheaf of bimodules on any smooth \(X\).
**Definition 4.5**.: The sheaf of bimodules constructed just above is called \(\widehat{\mathcal{D}}^{(m)}_{W(X),\mathrm{acc}}\).
As the natural map
\[\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}\Phi^{!}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\to\widehat{\mathcal{D}}^{(m)}_{W(X)}\]
coming from the adjunction is compatible with the isomorphism (4.3), we obtain a morphism of bimodules \(\widehat{\mathcal{D}}^{(m)}_{W(X),\mathrm{acc}}\rightarrow\widehat{\mathcal{D }}^{(m)}_{W(X)}\).
This, in turn, implies that the right adjoint to the inclusion \(D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\mathrm{mod})\to D_{cc}( \widehat{\mathcal{D}}^{(m)}_{W(X)}-\mathrm{mod})\) is defined on any \(X\), namely, it is given by the functor
\[\mathcal{N}^{\cdot}\to\widehat{\mathcal{D}}^{(m)}_{W(X),\mathrm{acc}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{N}^{\cdot}\]
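Spelled out, adjointness amounts to the statement that, for \(\mathcal{M}^{\cdot}\in D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\operatorname{mod})\) and \(\mathcal{N}^{\cdot}\in D_{cc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\operatorname{mod})\), the morphism of bimodules \(\widehat{\mathcal{D}}^{(m)}_{W(X),\mathrm{acc}}\rightarrow\widehat{\mathcal{D}}^{(m)}_{W(X)}\) constructed above induces an isomorphism
\[R\mathrm{Hom}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\mathcal{M}^{\cdot},\widehat{\mathcal{D}}^{(m)}_{W(X),\mathrm{acc}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{N}^{\cdot})\tilde{\rightarrow}R\mathrm{Hom}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\mathcal{M}^{\cdot},\mathcal{N}^{\cdot})\]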
Note that
\[\mathcal{N}^{\cdot}\to\mathcal{N}^{\cdot}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\widehat{\mathcal{D}}^{(m)}_{W(X),\mathrm{acc}}\]
is the right adjoint to the inclusion \(D_{acc}(\mathrm{mod}-\widehat{\mathcal{D}}^{(m)}_{W(X)})\to D_{cc}(\mathrm{mod}-\widehat{\mathcal{D}}^{(m)}_{W(X)})\) of the category of right accessible \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-modules (this discussion completes the proof of 1.7, part 2). Note that the same argument, applied to \(\widehat{\mathcal{D}}^{(m)}_{W(X),\mathrm{acc}}/p^{r}\), shows that the inclusion \(D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\mathrm{mod})\to D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\mathrm{mod})\) admits a right adjoint as well; in this case the functor also preserves the abelian subcategories of accessible modules. We note here also that replacing everywhere \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\) with \(\widehat{\mathcal{D}}^{(m)}_{W(X),\mathrm{crys}}\) yields the analogous results for those categories of modules.
Now, let's give an application of the construction of \(\Phi^{!}\):
**Proposition 4.6**.: _For each \(m\geq 0\) the sheaf \(W\omega_{X}\) admits the structure of an accessible right \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-module. If \(X=\text{Spec}(A)\) then choosing a coordinatized lift of Frobenius, and the corresponding map \(\Phi:\mathcal{A}\to W(A)\), we have \(W\omega_{X}{\tilde{=}}\Phi^{!}(\omega_{\mathfrak{X}})\). For each \(m\geq 0\) we have \(F^{!}W\omega_{X}{\tilde{=}}W\omega_{X}\)._
The proof of this takes a few steps. Echoing the construction of the functor \(F^{*}\) for left \(\mathcal{D}\)-modules, we have:
**Corollary 4.7**.: _Let \(\mathfrak{X}=\text{Specf}(\mathcal{A})\) be equipped with a coordinatized lift of Frobenius \(F\), and let \(\Phi:\mathcal{A}\to W(A)\) be the associated morphism. Let \(\mathcal{A}^{F}\) denote \(\mathcal{A}\) as an \(\mathcal{A}\)-module via \(F\) and let \(\pi:\mathcal{A}\to F(\mathcal{A})\) be an \(F(\mathcal{A})\)-linear projection._
_1) Let \(F^{!}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\) denote the \((\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}},\widehat{\mathcal{D}}^{(m+1)}_{ \mathcal{A}})\) bi-sub-module of \(\text{Hom}_{W(k)}(\mathcal{A},\mathcal{A})\) generated by \(\pi\). Then the natural map_
\[\text{Hom}_{\mathcal{A}}(\mathcal{A}^{F},\mathcal{A})\otimes_{\mathcal{A}} \widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\rightarrow\text{Hom}_{W(k)}( \mathcal{A},\mathcal{A})\]
_which takes \((\phi,P)\) to \(P\circ\phi\) induces isomorphisms_
\[\text{Hom}_{\mathcal{A}}(\mathcal{A}^{F},\widehat{\mathcal{D}}^{(m)}_{ \mathcal{A}}){\tilde{=}}\text{Hom}_{\mathcal{A}}(\mathcal{A}^{F},\mathcal{A}) \otimes_{\mathcal{A}}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}{\tilde{ \rightarrow}}F^{!}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\]
_Sheafifying to \(\mathfrak{X}=\text{Specf}(\mathcal{A})\), we obtain a sheaf of bimodules \(F^{!}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\). Thus there is a functor \(\mathcal{M}\to F^{!}\mathcal{M}:=\mathcal{M}\otimes_{\widehat{\mathcal{D}}^{(m )}_{\mathfrak{X}}}F^{!}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}})\) which is in fact an equivalence of categories \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}-\text{mod}\rightarrow\widehat{ \mathcal{D}}^{(m+1)}_{\mathfrak{X}}-\text{mod}\). When \(\mathcal{M}\in\text{Coh}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}})\) we have_
\[F^{!}\mathcal{M}{\tilde{\rightarrow}}\text{Hom}_{\mathcal{O}_{\mathfrak{X}}}(F _{*}\mathcal{O}_{\mathfrak{X}},\mathcal{M})\]
_2) Let \(F^{!}\widehat{\mathcal{D}}^{(m)}_{W(A)}\) denote the \((\widehat{\mathcal{D}}^{(m)}_{W(A)},\widehat{\mathcal{D}}^{(m+1)}_{W(A)})\) bi-sub-module of \(\text{Hom}_{W(k)}(W(A),W(A))\) generated by the projection \(\pi:W(A)\to\mathcal{A}^{(1)}\). Sheafifying, we obtain a sheaf of bimodules \(F^{!}\widehat{\mathcal{D}}^{(m)}_{W(X)}\), and the induced functor \(\mathcal{M}\to F^{!}\mathcal{M}:=\mathcal{M}\otimes_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}F^{!}\widehat{\mathcal{D}}^{(m)}_{W(X)}\) is an equivalence from right-accessible \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-modules to right-accessible \(\widehat{\mathcal{D}}^{(m+1)}_{W(X)}\)-modules. In fact, if \(\mathcal{M}\in\text{Qcoh}(\widehat{\mathcal{D}}^{(m)}_{W(X)})\) we have \(F^{!}\Phi^{!}_{m}\mathcal{M}\tilde{\to}\Phi^{!}_{m+1}(F^{!}\mathcal{M})\) where on the left \(F\) is the Witt-vector Frobenius and on the right it is the lift of Frobenius on \(\mathfrak{X}\)._
The proof of this result is extremely similar to those of Theorems 3.32 and 4.1, and will be left to the reader.
It will be necessary to also verify an additional compatibility. To state it, we recall a few basic facts from Grothendieck duality theory (c.f. [23], as well as [19], appendix for a useful discussion of the Cartier isomorphism in this context). Recall that if \(f:X\to Y\) is a proper morphism of locally noetherian separated schemes then we have a functor \(f^{!}:D^{b}\text{Coh}(Y)\to D^{b}\text{Coh}(X)\) which satisfies
\[R\mathcal{H}om_{\mathcal{O}_{Y}}(Rf_{*}\mathcal{N},\mathcal{M})\tilde{\to}Rf_ {*}R\mathcal{H}om_{\mathcal{O}_{X}}(\mathcal{N},f^{!}\mathcal{M})\]
for any \(\mathcal{M}\in D^{b}\text{Coh}(Y)\) and \(\mathcal{N}\in D^{b}\text{Coh}(X)\). If \(f\) is a finite morphism then we can in fact define
\[f^{!}(\mathcal{M}):=f^{-1}\mathcal{H}om_{\mathcal{O}_{Y}}(f_{*}\mathcal{O}_{X},\mathcal{M})\]
from \(\text{Coh}(Y)\) to \(\text{Coh}(X)\) (the \(\mathcal{O}_{X}\)-module structure is given by the action of \(\mathcal{O}_{X}\) on \(f^{-1}(f_{*}\mathcal{O}_{X})\)). In most cases of interest in this paper \(f\) will also be a topological isomorphism, in which case we will suppress the \(f^{-1}\) in the formula.
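For instance, if \(f\) corresponds to a finite ring map \(A\to B\) (so \(Y=\text{Spec}(A)\) and \(X=\text{Spec}(B)\); notation local to this example), then for an \(A\)-module \(M\) and a \(B\)-module \(N\) the duality above is simply the Hom-tensor adjunction
\[\text{Hom}_{A}(N,M)\tilde{=}\text{Hom}_{B}(N,\text{Hom}_{A}(B,M))\]
where \(B\) acts on \(\text{Hom}_{A}(B,M)\) via \((b\cdot\epsilon)(b'):=\epsilon(bb')\).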
Let \(\mathfrak{X}_{r}\) be a flat lift of \(X\) over \(W_{r}(k)\); denote the morphism to \(W_{r}(k)\) by \(p_{r}\). Then there is a canonical isomorphism \(p^{!}_{r}(W_{r}(k))\tilde{\to}\omega_{\mathfrak{X}_{r}}\). It follows that, if \(F:\mathfrak{X}_{r}\to\mathfrak{X}_{r}\) is a lift of Frobenius, we have
\[\mathcal{H}om_{\mathcal{O}_{\mathfrak{X}_{r}}}(F_{*}(\mathcal{O}_{\mathfrak{X} _{r}}),\omega_{\mathfrak{X}_{r}}):=F^{!}(\omega_{\mathfrak{X}_{r}})\tilde{\to} \omega_{\mathfrak{X}_{r}}\]
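As a basic illustration (with a particular choice of normalization, and notation local to this example): take \(\mathfrak{X}_{r}=\operatorname{Spec}(W_{r}(k)[x])\) with the Frobenius lift \(F(x)=x^{p}\). Then \(F_{*}(\mathcal{O}_{\mathfrak{X}_{r}})\) is free with basis \(1,x,\ldots,x^{p-1}\), and \(F^{!}(\omega_{\mathfrak{X}_{r}})\) is free of rank one for the \(\mathcal{O}_{\mathfrak{X}_{r}}\)-module structure coming from the source, generated by the \(\mathcal{O}_{\mathfrak{X}_{r}}\)-linear map \(e\) determined by
\[e(x^{i})=\delta_{i,p-1}\,dx\qquad(0\leq i\leq p-1)\]
The isomorphism \(F^{!}(\omega_{\mathfrak{X}_{r}})\tilde{\to}\omega_{\mathfrak{X}_{r}}\) then takes \(e\) to \(dx\), at least up to a unit; reducing mod \(p\) one recovers the classical normalization of the Cartier operator, \(x^{p-1}dx\mapsto dx\).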
**Corollary 4.8**.: _Let notation be as in the previous corollary._
_1) Let \(\mathcal{M}\in\text{mod}-\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\) be coherent and \(p\)-torsion free. Then there is a natural isomorphism_
\[\Phi^{!}(\mathcal{M})\tilde{\to}\tilde{\mathcal{H}om}_{\mathcal{O}_{X}}(\mathcal{O}_{W(X)},\mathcal{M})\]
_where \(\tilde{\mathcal{H}om}_{\mathcal{O}_{X}}(\mathcal{O}_{W(X)},\mathcal{M})\subset\mathcal{H}om_{\mathcal{O}_{X}}(\mathcal{O}_{W(X)},\mathcal{M})\) consists of those morphisms \(\epsilon\) which satisfy \(\epsilon(V^{m}(\mathcal{O}_{W(X)}))\subset p^{m}\mathcal{M}\) for all \(m\) (over every open set in \(X\)). In other words,_
\[\Phi^{!}(\mathcal{M})\tilde{\to}\lim_{m}\mathcal{H}om_{\mathcal{O}_{X_{m}}}(\mathcal{O}_{W_{m}(X)},\mathcal{M}/p^{m}\mathcal{M})\]
_2) Suppose \(\mathcal{N}\in\text{mod}-\widehat{\mathcal{D}}^{(m)}_{W(X)}\) is a right-accessible coherent module, which is \(p\)-torsion-free. Then_
\[F^{!}\mathcal{N}\tilde{=}\tilde{\mathcal{H}om}_{\mathcal{O}_{W(X)}}(F_{*}(\mathcal{O}_{W(X)}),\mathcal{N})\]
_where \(\tilde{\mathcal{H}om}_{\mathcal{O}_{W(X)}}(F_{*}(\mathcal{O}_{W(X)}),\mathcal{N})\subset\mathcal{H}om_{\mathcal{O}_{W(X)}}(F_{*}(\mathcal{O}_{W(X)}),\mathcal{N})\) consists of morphisms \(\epsilon\) satisfying \(\epsilon(V^{l}(\mathcal{O}_{W(X)}))\subset p^{l}\mathcal{N}\) for all \(l\) (over every open set in \(X\)). In other words,_
\[F^{!}\mathcal{N}\tilde{=}\lim_{l}\mathcal{H}om_{\mathcal{O}_{W_{l}(X)}}(F_{*}(\mathcal{O}_{W_{l}(X)}),\mathcal{N}/p^{l})=\lim_{l}F^{!}(\mathcal{N}/p^{l}\mathcal{N})\]
Proof.: 1) There is a natural map
\[\mathcal{M}\otimes_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}\tilde{\mathcal{H}om}_{\mathcal{O}_{X}}(\mathcal{O}_{W(X)},\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}})\rightarrow\tilde{\mathcal{H}om}_{\mathcal{O}_{X}}(\mathcal{O}_{W(X)},\mathcal{M})\]
taking \((m,\phi)\) to the morphism \(f\to m\cdot\phi(f)\). As everything in sight is \(p\)-torsion-free and \(p\)-adically complete, it suffices to show that the reduction mod \(p\)
\[\mathcal{M}/p\otimes_{\mathcal{D}^{(m)}_{X}}\tilde{\mathcal{H}om}_{\mathcal{O}_{X}}(\mathcal{O}_{W(X)},\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}})/p\rightarrow\tilde{\mathcal{H}om}_{\mathcal{O}_{X}}(\mathcal{O}_{W(X)},\mathcal{M})/p\]
is an isomorphism. Further, we have
\[\tilde{\mathcal{H}om}_{\mathcal{O}_{X}}(\mathcal{O}_{W(X)},\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}})/p=\prod_{(I,r)}p^{r}T^{I}\cdot\mathcal{D}^{(m)}_{X}\]
is an infinite product of copies of \(\mathcal{D}^{(m)}_{X}\), and similarly \(\tilde{\mathcal{H}om}_{\mathcal{O}_{X}}(\mathcal{O}_{W(X)},\mathcal{M})/p\) is an infinite product of copies of \(\mathcal{M}/p\). Since \(\mathcal{M}\) is finitely generated and \(\mathcal{D}^{(m)}_{X}\) is noetherian, the result follows from this.
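The last step uses only the following standard fact, recorded here for convenience: if \(R\) is a (possibly noncommutative) ring and \(M\) is a finitely presented right \(R\)-module, then the natural map
\[M\otimes_{R}\prod_{i\in I}R\to\prod_{i\in I}M,\qquad m\otimes(r_{i})_{i\in I}\mapsto(mr_{i})_{i\in I}\]
is an isomorphism; over a noetherian ring every finitely generated module is finitely presented, so this applies here.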
2) Write \(\mathcal{N}=\Phi^{!}_{m}\mathcal{M}\) for a coherent, \(p\)-torsion-free \(\mathcal{M}\). Then by the previous result we have
\[F^{!}\Phi^{!}_{m}\mathcal{M}\tilde{\rightarrow}\Phi^{!}_{m+1}F^{!}\mathcal{M}\]
as \(\mathcal{M}\) is \(p\)-torsion-free and complete we have
\[\Phi^{!}_{m+1}F^{!}\mathcal{M}=\lim_{l}\mathcal{H}om_{\mathcal{O}_{X_{l}}}( \mathcal{O}_{W_{l}(X)},\mathcal{H}om_{\mathcal{O}_{X_{l}}}(F_{*}\mathcal{O}_{ \mathfrak{X}_{l}},\mathcal{M}/p^{l}\mathcal{M}))\]
using part 1) to evaluate \(\Phi^{!}_{m+1}\). We wish to prove that
\[F^{!}\mathcal{N}\tilde{=}\lim_{l}\mathcal{H}om_{\mathcal{O}_{W_{l}(X)}}(F_{*}( \mathcal{O}_{W_{l}(X)}),\mathcal{N}/p^{l})\]
But again by part 1) we have
\[\mathcal{H}om_{\mathcal{O}_{W_{l}(X)}}(F_{*}(\mathcal{O}_{W_{l}(X)}), \mathcal{N}/p^{l})\tilde{=}\mathcal{H}om_{\mathcal{O}_{W_{l}(X)}}(F_{*}( \mathcal{O}_{W_{l}(X)}),\mathcal{H}om_{\mathcal{O}_{X_{l}}}(\mathcal{O}_{W_{l }(X)},\mathcal{M}/p^{l}\mathcal{M}))\]
So the result follows from the fact that \(\Phi\circ F=F\circ\Phi\), together with the compatibility of \((-)^{!}\) with composition of morphisms in Grothendieck duality.
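(Concretely, for finite morphisms the compatibility with composition invoked here reduces to an adjunction isomorphism; in the affine setting, for finite ring maps \(A\to B\to C\) and an \(A\)-module \(M\) (notation local to this aside), it reads
\[\operatorname{Hom}_{B}(C,\operatorname{Hom}_{A}(B,M))\tilde{=}\operatorname{Hom}_{A}(C,M)\]
which identifies the two ways of computing \((-)^{!}\) along a composition.)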
Let us continue with \(X=\operatorname{Spec}(A)\) as above. Recalling that \(\omega_{\mathfrak{X}}\) has the structure of a right \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\)-module for every \(m\geq 0\), the above isomorphism in Grothendieck duality theory yields \(F^{!}\omega_{\mathfrak{X}}\tilde{=}\omega_{\mathfrak{X}}\) as \(\widehat{\mathcal{D}}^{(m+1)}_{\mathfrak{X}}\)-modules. Thus \(\omega_{\mathfrak{X}}\) acquires the structure of a \(\widehat{\mathcal{D}}_{\mathfrak{X}}:=\lim\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\)-module. We have
**Lemma 4.9**.: _For \(X=\operatorname{Spec}(A)\) with Frobenius lift \(\Phi\), there exists, for each \(m\geq 0\), a map of \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-modules \(\eta_{m}:\Phi^{!}\omega_{\mathfrak{X}}\tilde{\rightarrow}W\omega_{X}\); the sequence of maps \(\eta_{m}\) is compatible with \(F^{!}\) in the sense that \(\eta_{m+1}=F^{!}\eta_{m}\). Therefore \(W\omega_{X}\) acquires the structure of a right \(\widehat{\mathcal{D}}_{W(X)}\)-module and there is an isomorphism \(\eta_{\infty}:\Phi^{!}\omega_{\mathfrak{X}}\tilde{\rightarrow}W\omega_{X}\) satisfying \(\eta_{\infty}F^{!}=F^{!}\eta_{\infty}\)._
Proof.: According to Illusie ([26], c.f. also [16], section 1.9) there is, for each \(l\geq 0\), a canonical isomorphism
\[p^{!}_{l}(W_{l}(k))\tilde{\rightarrow}W_{l}\omega_{X}\]
where \(p_{l}\) maps \(W_{l}(X)\) to a point. And therefore a canonical isomorphism
\[\Phi^{!}p^{!}_{l}(W_{l}(k))\tilde{\rightarrow}\Phi^{!}\omega_{\mathfrak{X}_{l}} \tilde{\rightarrow}W_{l}\omega_{X}\]
taking the inverse limit over \(l\) we obtain the required isomorphism at the level of coherent sheaves. Endowing \(\omega_{\mathfrak{X}}\) with its right \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\)-module structure as above gives
\(W\omega_{X}\) the structure of a right \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-module and using 4.8 shows the compatibility with \(F^{!}\) as required.
Now we can proceed to the
Proof.: (of 4.6) Without loss of generality we may suppose \(X\) is geometrically connected. Choose some \(U\subset X\) which is open affine, such that \(U=\operatorname{Spec}(A)\) possesses local coordinates; choose a coordinatized lift of Frobenius on \(\mathfrak{U}\) as above. Then the above yields an isomorphism \(\epsilon_{\mathfrak{U}}:\Phi^{!}\omega_{\mathfrak{U}}\widetilde{\to}W\omega_{U}\) which respects the Frobenius structure and the structure of a \(\widehat{\mathcal{D}}_{W(X)}\)-module. Choose and fix one such isomorphism.
Now let \(V\subset U\) be another such open affine; pick a Frobenius lift \(\Psi\) on \(\mathfrak{V}\subset\mathfrak{U}\). Then we have \(\epsilon_{\mathfrak{V}}:\Psi^{!}\omega_{\mathfrak{V}}\widetilde{\to}W\omega_{V}\), and so we obtain the isomorphism \(\epsilon_{\mathfrak{U}}\circ\epsilon_{\mathfrak{V}}^{-1}\) on \(W\omega_{V}\). This isomorphism commutes with \(F^{!}\). But it is easy to see that the set of such maps is simply \(\mathbb{Z}_{p}^{\times}\); in particular the isomorphism \(\Phi^{!}\omega_{\mathfrak{U}}\widetilde{\to}\Psi^{!}\omega_{\mathfrak{V}}\) must respect the \(\widehat{\mathcal{D}}_{W(X)}\)-module structure. Furthermore, we may rescale the map \(\epsilon_{\mathfrak{U}}\) and obtain \(\epsilon_{\mathfrak{U}}\circ\epsilon_{\mathfrak{V}}^{-1}=1\) on \(W\omega_{V}\). Since \(X\) is geometrically connected any open affine \(W\subset X\) intersects \(U\) nontrivially and the result follows directly.
Now we are ready to discuss the left-right interchange. Given the right \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-module structure on \(W\omega_{X}\), and the left \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-module structure on \(\mathcal{O}_{W(X)}\), we see that \(\mathcal{H}om_{W(k)}(\mathcal{O}_{W(X)},W\omega_{X})\) acquires the structure of a \((\widehat{\mathcal{D}}^{(m),\text{opp}}_{W(X)},\widehat{\mathcal{D}}^{(m), \text{opp}}_{W(X)})\)-bimodule, while \(\mathcal{H}om_{W(k)}(W\omega_{X},\mathcal{O}_{W(X)})\) acquires the structure of a \((\widehat{\mathcal{D}}^{(m)}_{W(X)},\widehat{\mathcal{D}}^{(m)}_{W(X)})\)-bimodule.
**Proposition 4.10**.: _1) Consider the sheaf \(W\omega_{X}\widehat{\otimes}_{\mathcal{O}_{W(X)}}\widehat{\mathcal{D}}^{(m)}_ {W(X),\,acc}\), which is the completion of \(W\omega_{X}\otimes_{\mathcal{O}_{W(X)}}\widehat{\mathcal{D}}^{(m)}_{W(X),\,acc}\) along the filtration9_
Footnote 9: Here, \(G^{j}\) denotes the standard filtration on \(W\omega_{X}\) coming from de Rham-Witt theory
\(\{V^{i}(\widehat{\mathcal{D}}^{(m)}_{W(X),\,acc})\otimes G^{j}(W\omega_{X})\} _{i+j\geq r}\). There is a natural injective map_
\(W\omega_{X}\widehat{\otimes}_{\mathcal{O}_{W(X)}}\widehat{\mathcal{D}}^{(m)}_ {W(X),\,acc}\to\mathcal{H}om_{W(k)}(\mathcal{O}_{W(X)},W\omega_{X})\)__
_the image of which is a \((\widehat{\mathcal{D}}^{(m),\text{opp}}_{W(X)},\widehat{\mathcal{D}}^{(m), \text{opp}}_{W(X)})\)-bi-sub-module; therefore \(W\omega_{X}\widehat{\otimes}_{\mathcal{O}_{W(X)}}\widehat{\mathcal{D}}^{(m)} _{W(X),\,acc}\) is itself a \((\widehat{\mathcal{D}}^{(m),\text{opp}}_{W(X)},\widehat{\mathcal{D}}^{(m), \text{opp}}_{W(X)})\)-bimodule._
_2) Consider the sheaf \(\tilde{\mathcal{H}om}_{\mathcal{O}_{W(X)}}(W\omega_{X},\widehat{\mathcal{D}}^{( m)}_{W(X),\,acc})\) where \(\tilde{\mathcal{H}om}\) denotes those morphisms which take \(G^{i}(W\omega_{X})\) to \(V^{i}(\widehat{\mathcal{D}}^{(m)}_{W(X),\,acc})\). There is a natural injective map_
\(\tilde{\mathcal{H}om}_{\mathcal{O}_{W(X)}}(W\omega_{X},\widehat{\mathcal{D}}^{ (m)}_{W(X),\,acc})\to\mathcal{H}om_{W(k)}(W\omega_{X},\mathcal{O}_{W(X)})\)__
_the image of which is a \((\widehat{\mathcal{D}}^{(m)}_{W(X)},\widehat{\mathcal{D}}^{(m)}_{W(X)})\)-bi-sub-module; therefore \(\tilde{\mathcal{H}om}_{\mathcal{O}_{W(X)}}(W\omega_{X},\widehat{\mathcal{D}}^{ (m)}_{W(X),\,acc})\) is itself a \((\widehat{\mathcal{D}}^{(m)}_{W(X)},\widehat{\mathcal{D}}^{(m)}_{W(X)})\)-bimodule._
Proof.: 1) The map is given as the completion of the map
\(W\omega_{X}\otimes_{\mathcal{O}_{W(X)}}\widehat{\mathcal{D}}^{(m)}_{W(X),\,acc }\to\mathcal{H}om_{W(k)}(\mathcal{O}_{W(X)},W\omega_{X})\)
which takes \(\delta\otimes P\rightarrow\delta\cdot P\) (where \(P\in\widehat{\mathcal{D}}^{(m)}_{W(X),\text{acc}}\) is regarded as an endomorphism of \(\mathcal{O}_{W(X)}\)). To verify the properties of this map, we work locally. Assume \(X=\text{Spec}(A)\), \(\Phi:\mathcal{A}\to W(A)\) and \(\pi\) are given. Then as \(\widehat{\mathcal{D}}^{(m)}_{W(X),\text{acc}}=\Phi^{*}\mathcal{D}^{(m)}_{\mathcal{A}}\widehat{\otimes}_{\mathcal{D}^{(m)}_{\mathcal{A}}}\Phi^{!}\mathcal{D}^{(m)}_{\mathcal{A}}\) we start by considering \(W\omega_{A}\widehat{\otimes}_{W(A)}\Phi^{*}\mathcal{D}^{(m)}_{\mathcal{A}}\), the completion of \(W\omega_{A}\otimes_{W(A)}\Phi^{*}\mathcal{D}^{(m)}_{\mathcal{A}}\) along \(\{V^{i}(W\omega_{A})\otimes_{W(A)}V^{j}(\Phi^{*}\mathcal{D}^{(m)}_{\mathcal{A}})\}_{i+j\geq r}\). We have
\[W\omega_{A}\widehat{\otimes}_{W(A)}\Phi^{*}\mathcal{D}^{(m)}_{\mathcal{A}} \tilde{=}W\omega_{A}\widehat{\otimes}_{\mathcal{A}}\mathcal{D}^{(m)}_{ \mathcal{A}}\]
where on the right the completion is along \(\{V^{i}(W\omega_{A})\otimes_{\mathcal{A}}p^{j}\mathcal{D}^{(m)}_{\mathcal{A}}\}_{i+j\geq r}\). We then have
\[W\omega_{A}\widehat{\otimes}_{\mathcal{A}}\mathcal{D}^{(m)}_{\mathcal{A}} \rightarrow\tilde{Hom}_{W(k)}(\mathcal{A},W\omega_{A})\]
where the last map is given by the completion of \(\delta\otimes Q\rightarrow\delta\cdot Q\). From the fact that \(W\omega_{A}\) is an inverse limit of free \(\mathcal{A}\)-modules, this is clearly injective, and we have that its image is closed under the right action of \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\) (the proof of this is extremely similar to 3.2). As \(\Phi^{!}\mathcal{D}^{(m)}_{\mathcal{A}}\subset\tilde{Hom}_{W(k)}(W(A), \mathcal{A})\) is preserved under the right action of \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\), the map
\[W\omega_{A}\widehat{\otimes}_{W(A)}\Phi^{*}\mathcal{D}^{(m)}_{\mathcal{A}} \widehat{\otimes}_{\mathcal{D}^{(m)}_{\mathcal{A}}}\Phi^{!}\mathcal{D}^{(m)}_ {\mathcal{A}}\rightarrow\mathcal{Hom}_{W(k)}(W(A),W\omega_{A})\]
defined by composition has image closed under the action of \((\widehat{\mathcal{D}}^{(m),\text{opp}}_{W(A)},\widehat{\mathcal{D}}^{(m), \text{opp}}_{W(A)})\).
2) This is extremely similar to 1), using instead that
\[\tilde{Hom}_{W(A)}(W\omega_{A},\Phi^{!}(\widehat{\mathcal{D}}^{(m)}_{\mathcal{ A}}))\tilde{=}\tilde{Hom}_{\mathcal{A}}(W\omega_{A},\widehat{\mathcal{D}}^{(m)}_ {\mathcal{A}})\rightarrow\tilde{Hom}_{W(k)}(W\omega_{A},\mathcal{A})\]
where the last map is given by noting
\[\tilde{Hom}_{\mathcal{A}}(W\omega_{A},\widehat{\mathcal{D}}^{(m)}_{\mathcal{ A}})\tilde{=}\tilde{Hom}_{\mathcal{A}}(W\omega_{A},\mathcal{A})\widehat{ \otimes}_{\mathcal{A}}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\]
and then embedding that module into \(\tilde{Hom}_{W(k)}(W\omega_{A},\mathcal{A})\) by composing maps. As above the image is closed under the left action of \(\widehat{\mathcal{D}}^{(m)}_{W(A)}\), and the result follows.
From this we deduce
**Corollary 4.11**.: _The functors_
\[(W\omega_{X}\widehat{\otimes}_{\mathcal{O}_{W(X)}}\widehat{\mathcal{D}}^{(m) }_{W(X),\text{acc}})\otimes^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\]
_and_
\[\otimes^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\tilde{Hom}_{\mathcal{O}_{W(X) }}(W\omega_{X},\widehat{\mathcal{D}}^{(m)}_{W(X),\text{acc}})\]
_give inverse equivalences of categories from left accessible to right accessible \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-modules._
Proof.: First let us examine the local situation.
We have
\[W\omega_{A}\widehat{\otimes}_{W(A)}\Phi^{*}\widehat{\mathcal{D}}^{(m)}_{ \mathcal{A}}\tilde{=}W\omega_{A}\widehat{\otimes}_{\mathcal{A}}\widehat{ \mathcal{D}}^{(m)}_{\mathcal{A}}\]
\[\tilde{=}\tilde{\text{Hom}}_{\mathcal{A}}(W(A),\omega_{\mathcal{A}})\widehat{ \otimes}_{\mathcal{A}}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\]
\[\tilde{=}\omega_{\mathcal{A}}\otimes_{\mathcal{A}}\tilde{\text{Hom}}_{\mathcal{A}}(W(A),\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}})=\omega_{\mathcal{A}}\otimes_{\mathcal{A}}\Phi^{!}\widehat{\mathcal{D}}^{(m)}_{\mathcal{A}}\]
where, in the first line, \(W\omega_{A}\widehat{\otimes}_{\mathcal{A}}\widehat{\mathcal{D}}_{\mathcal{A}}^{(m)}\) is the completion of \(W\omega_{A}\otimes_{\mathcal{A}}\widehat{\mathcal{D}}_{\mathcal{A}}^{(m)}\) along \(\{V^{i}(W\omega_{A})\otimes p^{j}\widehat{\mathcal{D}}_{\mathcal{A}}^{(m)}\}\); the second isomorphism is from \(W\omega_{A}\tilde{=}\tilde{\operatorname{Hom}}_{\mathcal{A}}(W(A),\omega_{\mathcal{A}})\), and the third and fourth are from the fact that \(\omega_{\mathcal{A}}\) is locally free over \(\mathcal{A}\).
It follows that
\[(W\omega_{A}\widehat{\otimes}_{W(A)}\widehat{\mathcal{D}}_{W(A),\operatorname{acc}}^{(m)})\otimes_{\widehat{\mathcal{D}}_{W(A)}^{(m)}}\Phi^{*}\widehat{\mathcal{D}}_{\mathcal{A}}^{(m)}\tilde{\to}W\omega_{A}\widehat{\otimes}_{W(A)}\Phi^{*}\widehat{\mathcal{D}}_{\mathcal{A}}^{(m)}\]
and therefore
\[(W\omega_{A}\widehat{\otimes}_{W(A)}\widehat{\mathcal{D}}_{W(A),\operatorname{acc}}^{(m)})\otimes_{\widehat{\mathcal{D}}_{W(A)}^{(m)}}^{L}\Phi^{*}\mathcal{M}^{\cdot}\tilde{\to}(W\omega_{A}\widehat{\otimes}_{W(A)}\Phi^{*}\widehat{\mathcal{D}}_{\mathcal{A}}^{(m)})\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathcal{A}}^{(m)}}^{L}\mathcal{M}^{\cdot}\]
\[\tilde{=}(\omega_{\mathcal{A}}\otimes_{\mathcal{A}}\Phi^{!}\widehat{\mathcal{D}}_{\mathcal{A}}^{(m)})\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathcal{A}}^{(m)}}^{L}\mathcal{M}^{\cdot}=\Phi^{!}(\omega_{\mathcal{A}}\otimes_{\mathcal{A}}\mathcal{M}^{\cdot})\]
so this functor agrees with the usual left-right interchange over \(\mathcal{A}\). Now, the functor \((W\omega_{X}\widehat{\otimes}_{\mathcal{O}_{W(X)}}\widehat{\mathcal{D}}_{W(X), \operatorname{acc}}^{(m)})\otimes_{\widehat{\mathcal{D}}_{W(X)}^{(m)}}\) admits an adjoint; namely
\[R\mathcal{H}om_{\widehat{\mathcal{D}}_{W(X)}^{(m),\operatorname{op}}}(W\omega_{X}\widehat{\otimes}_{\mathcal{O}_{W(X)}}\widehat{\mathcal{D}}_{W(X),\operatorname{acc}}^{(m)},-)\]
and an extremely similar argument shows that it also preserves the subcategories of accessible modules; as it must be the inverse locally, we see that it is so globally as well.
The proof for the functor \(\otimes_{\widehat{\mathcal{D}}_{W(A)}^{(m)}}^{L}\mathcal{H}om_{\mathcal{O}_{ W(X)}}(W\omega_{X},\widehat{\mathcal{D}}_{W(X),\operatorname{acc}}^{(m)})\) is essentially identical.
### Operations on modules: Pull-Back
Throughout this section let \(m\geq 0\). Let \(\varphi:\mathfrak{X}\to\mathfrak{Y}\) be a morphism of smooth formal schemes over \(W(k)\). Recall that Berthelot in [5], section 3.2, has shown that \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}\to\mathfrak{Y}}:=\varphi^{*}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})\) carries the structure of a left \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\)-module (by \(\varphi^{*}\) we mean the \(p\)-adically completed pullback). By definition \(\varphi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}\) carries the structure of a right \(\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})\)-module. This, in turn, allows one to define the functor \(\varphi^{*}\) via
\[L\varphi^{*}(\mathcal{M}):=\varphi^{*}\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(m)}\widehat{\otimes}_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(m)})}^{L}\varphi^{-1}(\mathcal{M})\]
as usual, the completion means the cohomological completion. One sets \(\varphi^{!}:=L\varphi^{*}[d_{X/Y}]\) (where \(d_{X/Y}=\dim(X)-\dim(Y)\)).
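For orientation, at level \(m=0\) this is the familiar chain-rule action: if \(x_{1},\ldots,x_{n}\) are local coordinates on \(\mathfrak{X}\) and \(y_{1},\ldots,y_{d}\) are local coordinates on \(\mathfrak{Y}\), then the left \(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}\)-module structure on \(\varphi^{*}\mathcal{M}=\mathcal{O}_{\mathfrak{X}}\widehat{\otimes}_{\varphi^{-1}(\mathcal{O}_{\mathfrak{Y}})}\varphi^{-1}(\mathcal{M})\) is determined by
\[\partial_{x_{i}}(f\otimes m)=\partial_{x_{i}}(f)\otimes m+f\sum_{j=1}^{d}\frac{\partial(\varphi^{\#}(y_{j}))}{\partial x_{i}}\otimes\partial_{y_{j}}(m)\]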
Now, if \(\varphi:X\to Y\), we wish to define the analogous pullback functor \(L(W\varphi)^{*}\) from accessible \(\widehat{\mathcal{D}}^{(m)}_{W(Y)}\)-modules to accessible \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-modules. Recall that, by the functoriality of the Witt vectors, \(\varphi\) gives rise to a morphism of ringed spaces \(W\varphi:(X,\mathcal{O}_{W(X)})\to(Y,\mathcal{O}_{W(Y)})\). Thus we have a map \(W\varphi^{\#}:W\varphi^{-1}(\mathcal{O}_{W(Y)})\to\mathcal{O}_{W(X)}\); there is also an induced map \(W\varphi^{\#}:W\varphi^{-1}(\mathcal{O}_{W(Y)}/p^{r})\to\mathcal{O}_{W(X)}/p^{r}\). We shall construct a suitable sheaf of \((\widehat{\mathcal{D}}^{(m)}_{W(X)},W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}))\) bimodules; before doing so, let us remark on a slight generalization of the notion of accessibility.
Starting in characteristic \(p\), we note that there is an exact, conservative functor
\[\mathcal{Q}\to\mathcal{B}_{X}^{(m)}\otimes_{\mathcal{D}_{X}^{(m)}}\mathcal{Q} \otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(m)})}\varphi^{-1}(\mathcal{B}_{Y}^{(m ),r})\]
from the category of \((\mathcal{D}_{X}^{(m)},\varphi^{-1}(\mathcal{D}_{Y}^{(m)}))\)-bimodules to the category of \((\mathcal{D}_{W(X)}^{(m)}/p,W\varphi^{-1}(\mathcal{D}_{W(Y)}^{(m)})/p)\)-bimodules, with right adjoint given by
\[\mathcal{P}\to\mathcal{B}_{X}^{(m),r}\otimes_{\mathcal{D}_{W(X)}^{(m)}/p}\mathcal{P}\otimes_{W\varphi^{-1}(\mathcal{D}_{W(Y)}^{(m)})/p}W\varphi^{-1}(\mathcal{B}_{Y}^{(m)})\]
We see (exactly as in 3.16) that there is a full embedding of categories
\[D((\mathcal{D}^{(m)}_{X},\varphi^{-1}(\mathcal{D}^{(m)}_{Y}))-\text{bimod})\to D ((\mathcal{D}^{(m)}_{W(X)}/p,W\varphi^{-1}(\mathcal{D}^{(m)}_{W(Y)})/p)-\text{ bimod})\]
whose image we call the accessible \((\mathcal{D}^{(m)}_{W(X)}/p,W\varphi^{-1}(\mathcal{D}^{(m)}_{W(Y)})/p)\)-bimodules.
**Definition 4.12**.: A complex \(\mathcal{P}^{\cdot}\in D_{cc}((\widehat{\mathcal{D}}^{(m)}_{W(X)},W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}))-\text{bimod})\) is said to be accessible if \(\mathcal{P}^{\cdot}\otimes^{L}_{W(k)}k\) is accessible as a complex of \((\mathcal{D}^{(m)}_{W(X)}/p,W\varphi^{-1}(\mathcal{D}^{(m)}_{W(Y)})/p)\)-bimodules.
Exactly as in Theorem 3.18, we have that \(\mathcal{P}^{\cdot}\) is accessible iff, locally, it can be written as
\[(\Phi^{*}_{1}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}})\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}\mathcal{Q}^{\cdot}\widehat{\otimes}^{L}_{\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})}\varphi^{-1}(\Phi^{!}_{2}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})\]
where \(\Phi_{1}\) and \(\Phi_{2}\) are coordinatized lifts of Frobenius. Furthermore, we have the natural functor
\[\mathcal{P}^{\cdot}\to\mathcal{P}^{\cdot}_{\text{acc}}:=\widehat{\mathcal{D}}^{(m)}_{W(X),\text{acc}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{P}^{\cdot}\widehat{\otimes}^{L}_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y),\text{acc}})\]
which takes any complex of bimodules to an accessible complex, and which is the identity on accessible complexes.
**Definition 4.13**.: For any morphism \(\varphi:X\to Y\), we have the sheaf of \((\widehat{\mathcal{D}}^{(m)}_{W(X)},W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}))\)-bimodules \(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y)}\) constructed in 2.33. Then we define
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}:=\widehat{\mathcal{D}}^{ (m)}_{W(X),\text{acc}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W( X)}}\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y)}\widehat{\otimes}^{L}_{W\varphi^{-1} (\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\widehat{\mathcal{D}}^{(m )}_{W(Y),\text{acc}})\]
a sheaf of accessible \((\widehat{\mathcal{D}}^{(m)}_{W(X)},W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)} _{W(Y)}))\)-bimodules on \(X\).
By the local description of \(\widehat{\mathcal{D}}^{(m)}_{W(X),\text{acc}}\) (given right above 4.5) we see that when \(\varphi\) is the identity map we have \(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}=\widehat{\mathcal{D}}^{(m)}_{W(X),\text{acc}}\).
Then we have the
**Definition 4.14**.: Let \(\mathcal{M}^{\cdot}\in D_{\text{acc}}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}-\text{mod})\). We define
\[L(W\varphi)^{*}(\mathcal{M}^{\cdot}):=\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}\widehat{\otimes}^{L}_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\mathcal{M}^{\cdot})\in D(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\]
Similarly we define, for any \(\mathcal{M}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(Y)}/p^{r}-\text{mod})\),
\[L(W\varphi)^{*}(\mathcal{M}^{\cdot}):=\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}/p^{r}\widehat{\otimes}^{L}_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}/p^{r})}W\varphi^{-1}(\mathcal{M}^{\cdot})\in D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\]
In order to get control over this definition, we need to compare with the usual pullback functor in the presence of local lifts. Suppose for a moment that \(X=\text{Spec}(A)\) and \(Y=\text{Spec}(B)\) are affine, each possessing local coordinates, as well as coordinatized Frobenius lifts; let \(\Phi_{1}:\mathcal{O}_{\mathfrak{X}}\to\mathcal{O}_{W(X)}\) and \(\Phi_{2}:\mathcal{O}_{\mathfrak{Y}}\to\mathcal{O}_{W(Y)}\) be the associated morphisms, with projections \(\pi_{1}\) and \(\pi_{2}\), respectively. Suppose we have a lift \(\varphi:\mathfrak{X}\to\mathfrak{Y}\).
**Proposition 4.15**.: _Let \(\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\to\mathfrak{Y}}\) denote the \(p\)-adic completion of the \((\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}},\varphi^{-1}(\widehat{\mathcal{D}}^{ (m)}_{\mathfrak{Y}}))\) bimodule locally generated by \(\pi_{1}\circ W\varphi^{\#}\circ\varphi^{-1}(\Phi_{2}):\varphi^{-1}(\mathcal{O} _{\mathfrak{Y}})\to\mathcal{O}_{\mathfrak{X}}\). We have_
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}\tilde{=}\Phi^{*}_{1}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\to\mathfrak{Y}}\widehat{\otimes}^{L}_{\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})}\varphi^{-1}(\Phi^{!}_{2}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})\]
i.e., the derived completed tensor product is concentrated in degree \(0\) (and is equal to the usual \(p\)-adic completion of the tensor product). In particular, if \(\varphi\circ\Phi_{1}=\Phi_{2}\circ\varphi\) then we obtain_
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}\tilde{=}\Phi_{1}^{*} \widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{ \mathcal{D}}^{(m)}_{\mathfrak{X}}}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X} \rightarrow\mathfrak{Y}}\widehat{\otimes}^{L}_{\varphi^{-1}(\widehat{ \mathcal{D}}^{(m)}_{\mathfrak{Y}})}\varphi^{-1}(\Phi_{2}^{!}\widehat{ \mathcal{D}}^{(m)}_{\mathfrak{Y}})\]
Proof.: The first statement is equivalent to
\[\Phi_{1}^{!}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\otimes_{\widehat{ \mathcal{D}}^{(m)}_{W(X)}}\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y)} \otimes_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}( \Phi_{2}^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}){\tilde{=}}\widehat{ \mathcal{D}}^{(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\rightarrow\mathfrak{Y}}\]
The composition of morphisms produces a map
\[\Phi_{1}^{!}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\otimes_{\widehat{ \mathcal{D}}^{(m)}_{W(X)}}\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y)} \otimes_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\Phi _{2}^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})\rightarrow\mathcal{H}om _{W(k)}(\varphi^{-1}(\mathcal{O}_{\mathfrak{Y}}),\mathcal{O}_{\mathfrak{X}}) \tag{4.4}\]
As the left hand side is the summand of \(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y)}\) consisting of maps whose image is contained in \(\pi_{1}(\mathcal{O}_{W(X)})\), and which vanish on the complement of \(\varphi^{-1}(\Phi_{2}(\mathcal{O}_{\mathfrak{Y}}))\) in \(\mathcal{O}_{W(Y)}\), this map is clearly injective. We shall show that the image is equal to \(\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\rightarrow\mathfrak{Y}}\).
By 2.31, \(\Phi_{1}^{!}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\otimes_{\widehat{ \mathcal{D}}^{(m)}_{W(X)}}\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y)} \otimes_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\Phi _{2}^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})\) is the sheaf whose sections over \(\mathfrak{X}\) are of the form
\[\sum_{I}\sum_{r=0}^{\infty}\sum_{J_{I}}\pi_{1}F^{-r}(\alpha_{J_{I}})\cdot W\varphi^{\#}(\{\partial\}_{J_{I}/p^{r}})\{\partial\}^{I}\pi_{2} \tag{4.5}\]
where the notation is as in 2.31.
Now, write
\[\{\partial\}_{J_{I}/p^{r}}\{\partial\}^{I}\pi_{2}=\sum_{(K,s)}p^{s}T^{K/p^{s}} \Phi_{2}(b_{K})\]
for \(b_{K}\in\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}(\mathfrak{Y})\), where \(s<r\) implies \(b_{K}\in p^{r-s}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}(\mathfrak{Y})\).
Then
\[\pi_{1}F^{-r}(\alpha_{J_{I}})\cdot W\varphi^{\#}(\{\partial\}_{J_{I}/p^{r}})\{ \partial\}^{I}\pi_{2}=\sum_{(K,s)}\pi_{1}\circ F^{-r}(\alpha_{J_{I}})\cdot W \varphi^{\#}\circ p^{s}T^{K/p^{s}}\cdot\Phi_{2}(b_{K})\]
As \(p^{r}T^{J/p^{r}}=V^{r}(T^{J})\) (indeed \(F^{r}(T^{J/p^{r}})=T^{J}\) and \(p^{r}=V^{r}F^{r}\) as operators on \(W(A)\)) and \(W\varphi^{\#}\) commutes with \(V\), we see that the above sum is equal to
\[\sum_{(K,s)}\pi_{1}F^{-r}(\alpha_{J_{I}})\cdot(V^{r}(W\varphi^{\#}(T^{J}))) \circ(W\varphi^{\#}\circ\Phi_{2})(b_{K})\]
Let \(X_{1},\ldots,X_{n}\) be local coordinates on \(\mathcal{A}\). Then we can write
\[(V^{r}(W\varphi^{\#}(T^{J})))\circ(W\varphi^{\#}\circ\Phi_{2})(b_{J})=\sum_{(L,m)}p^{m}X^{L/p^{m}}\cdot(W\varphi^{\#}\circ\Phi_{2})(b_{L})\]
and \(m<r\) implies \(b_{L}\in p^{r-m}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}(\mathfrak{Y})\). Now, we have
\[\sum_{(L,m)}\pi_{1}F^{-r}(\alpha_{J_{I}})\cdot p^{m}X^{L/p^{m}}\cdot(W\varphi^{\#}\circ\Phi_{2})(b_{L})=\sum_{(L,m)}c_{L}\pi_{1}(W\varphi^{\#}\circ\Phi_{2})(b_{L})\]
where \(c_{L}\in p^{M}\mathcal{A}\) with \(M=\max\{m,r\}\); we see that this sum is \(p\)-adically convergent, and therefore
\[\pi_{1}F^{-r}(\alpha_{J_{I}})\cdot W\varphi^{\#}(\{\partial\}_{J_{I}/p^{r}}) \{\partial\}^{I}\pi_{2}=\sum_{(L,m),(J,r)}c_{L}\cdot\pi_{1}\circ(W\varphi^{\#} \circ\Phi_{2})(b_{J})\]
which is contained in the \(p\)-adic completion of the \((\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}(\mathfrak{X}),\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}(\mathfrak{Y}))\) bimodule generated by \(\pi_{1}\circ W\varphi^{\#}\circ\Phi_{2}\); from the above conditions on \(\alpha_{J_{I}},b_{J}\) we also see that \(\pi_{1}F^{-r}(\alpha_{J_{I}})\cdot W\varphi^{\#}(\{\partial\}_{J_{I}/p^{r}})\{\partial\}^{I}\pi_{2}\) is contained in \(p^{R}\cdot\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\to\mathfrak{Y}}\), where \(R=\max\{r,v\}\), with \(v\) the least natural number such that \(\alpha_{J_{I}}\in V^{v}(W(A))\).
Therefore the image of the sum (4.5) is contained in \(\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\to\mathfrak{Y}}\) as well. Thus the image of the map (4.4) is contained in \(\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\to\mathfrak{Y}}\). It is surjective, as the image of (4.4) is already a \((\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}},\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}))\) bi-submodule containing \(\pi_{1}\circ W\varphi^{\#}\circ\varphi^{-1}(\Phi_{2})\); and so we have the desired result.
Finally, if \(\varphi\circ\Phi_{1}=\Phi_{2}\circ\varphi\) we have \(\pi_{1}\circ W\varphi^{\#}\circ\varphi^{-1}(\Phi_{2})=\varphi^{\#}\) and so the last sentence of the proposition follows.
Now, for some \(r\geq 1\) (including \(r=\infty\)) let us suppose we are given a lift of \(\varphi\) to \(\varphi:\mathfrak{X}_{r}\to\mathfrak{Y}_{r}\). We say that this map \(\varphi\) is _locally compatible with a Frobenius lift_ if, locally on \(\mathfrak{X}_{r}\) and \(\mathfrak{Y}_{r}\), we can find lifts \(\mathfrak{X}\) and \(\mathfrak{Y}\) of \(\mathfrak{X}_{r}\) and \(\mathfrak{Y}_{r}\) and a lift \(\varphi:\mathfrak{X}\to\mathfrak{Y}\), which commutes with some coordinatized lifts of Frobenius \(\Phi_{1}\) on \(\mathfrak{X}\) and \(\Phi_{2}\) on \(\mathfrak{Y}\). Then
**Corollary 4.16**.: _Let \(\varphi:\mathfrak{X}_{r}\to\mathfrak{Y}_{r}\) be locally compatible with a Frobenius lift (when \(m=0\) and \(p=2\), we suppose \(r=1\)). Let \(\mathcal{N}^{\cdot}\in D(\text{Mod}(\mathcal{D}_{\mathfrak{Y}_{r}}))\). Then there is an isomorphism_
\[LW\varphi^{*}(\mathcal{B}^{(m)}_{\mathfrak{Y}_{r}}\otimes^{L}_{\mathcal{D}^{(m)}_{\mathfrak{Y}_{r}}}\mathcal{N}^{\cdot})\widetilde{\to}\mathcal{B}^{(m)}_{\mathfrak{X}_{r}}\otimes^{L}_{\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}}L\varphi^{*}(\mathcal{N}^{\cdot})\]
_where on the right hand side the pullback is the pullback in the category of \(\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}\)-modules. The analogous statement holds when \(r=\infty\), i.e., for \(\varphi:\mathfrak{X}\to\mathfrak{Y}\)._
Proof.: We have the isomorphism
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}\tilde{=}\Phi_{1}^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}\to\mathfrak{Y}}\widehat{\otimes}^{L}_{\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})}\varphi^{-1}(\Phi_{2}^{!}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})\]
for \(\varphi\circ\Phi_{1}=\Phi_{2}\circ\varphi\) (c.f. 4.15); taking reduction mod \(p^{r}\) this proves the result locally on \(\mathfrak{X}_{r}\) and \(\mathfrak{Y}_{r}\). If \(\Phi_{1}^{\prime},\Phi_{2}^{\prime}\) are different local lifts, by 3.7 the isomorphism \(\Phi_{2}^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}\widetilde{\to}(\Phi_{2}^{\prime})^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}\) is compatible with reduction mod \(p^{r}\) (and similarly for \(\Phi_{1}\)), and the result follows; when \(p=2\) and \(n=1\) we appeal to 5.15 for the required gluing statement.
To make better use of this statement, we note the:
**Lemma 4.17**.: _For \(\mathcal{M}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(Y)}-\text{mod})\) we have_
\[L(W\varphi)^{*}(\mathcal{M}^{\cdot})\otimes^{L}_{W(k)}k\tilde{\to}L(W\varphi)^ {*}(\mathcal{M}^{\cdot}\otimes^{L}_{W(k)}k)\]
_where the functor on the right denotes the pullback in the category of \(\widehat{\mathcal{D}}^{(m)}_{W(Y)}/p\) modules. Similarly, if \(\mathcal{M}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(Y)}/p^{n}-\text{mod})\) for some \(n\geq 1\), then_
\[L(W\varphi)^{*}(\mathcal{M}^{\cdot})\otimes^{L}_{W_{n}(k)}k\tilde{\to}L(W \varphi)^{*}(\mathcal{M}^{\cdot}\otimes^{L}_{W_{n}(k)}k)\]
Proof.: As \(\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})\) and \(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}\) are flat over \(W(k)\) we have
\[(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}\widehat{\otimes}^{L}_{ \varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}\varphi^{-1}(\mathcal{M}^{\cdot}) )\otimes^{L}_{W(k)}k\]
\[\tilde{\to}(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}/p)\widehat{ \otimes}^{L}_{\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})/p}(\varphi^{-1}( \mathcal{M}^{\cdot})\widehat{\otimes}^{L}_{W(k)}k)\]
which implies the first result. The statement about \(\mathcal{M}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(Y)}/p^{n}-\text{mod})\) is similar.
Combining these two results yields immediately
**Proposition 4.18**.: _Let \(\varphi:X\to Y\) be locally compatible with a lift of Frobenius. Then \(L(W\varphi)^{*}\) takes \(D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}-\text{mod})\) to \(D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\). The same holds mod \(p^{r}\), i.e., for \(D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}/p^{r}-\text{mod})\)._
To generalize this to all morphisms, we need some information about compositions:
**Lemma 4.19**.: _Let \(\varphi:X\to Y\) and \(\psi:Y\to Z\). There is a natural map_
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),acc}\otimes_{W\varphi^{-1}( \widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_ {W(Y)\to W(Z),acc})\to\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Z),acc}\]
_of \((\widehat{\mathcal{D}}^{(m)}_{W(X)},W(\psi\circ\varphi)^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Z)}))\)-bimodules._
Proof.: The composition of morphisms yields a map
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y)}\otimes_{\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\to W(Z)})\to\mathcal{H}om_{W(k)}((\psi\circ\varphi)^{-1}\mathcal{O}_{W(Z)},\mathcal{O}_{W(X)})\]
and one sees directly (working locally and using the commutation relations of Lemma 2.24) that the image of this map is contained in \(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Z)}\). Now, by definition we have
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}=\widehat{\mathcal{D}}^{ (m)}_{W(X),\text{acc}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W( X)}}\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y)}\widehat{\otimes}^{L}_{W \varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\widehat{ \mathcal{D}}^{(m)}_{W(Y),\text{acc}})\]
and so the natural maps \(\widehat{\mathcal{D}}^{(m)}_{W(X),\text{acc}}\to\widehat{\mathcal{D}}^{(m)}_{ W(X)}\) (and \(\widehat{\mathcal{D}}^{(m)}_{W(Y),\text{acc}}\to\widehat{\mathcal{D}}^{(m)}_{W( Y)}\)) yield a natural map
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}\to\widehat{\mathcal{D}}^{ (m)}_{W(X)\to W(Y)}\]
of \((\widehat{\mathcal{D}}^{(m)}_{W(X)},W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_ {W(Y)}))\)-bimodules; and the analogous fact holds for \(\widehat{\mathcal{D}}^{(m)}_{W(Y)\to W(Z),\text{acc}}\). So, we have
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}\otimes_{W\varphi^{-1}( \widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_ {W(Y)\to W(Z),\text{acc}})\]
\[\to\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y)}\otimes_{W\varphi^{-1}(\widehat{ \mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\to W (Z)})\to\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Z)}\]
And, since \(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}\otimes_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\to W(Z),\text{acc}})\) is an accessible \((\widehat{\mathcal{D}}^{(m)}_{W(X)},W(\psi\circ\varphi)^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Z)}))\) bimodule, by applying the functor \(\mathcal{P}^{\cdot}\to\mathcal{P}^{\cdot}_{\text{acc}}\) for \((\widehat{\mathcal{D}}^{(m)}_{W(X)},W(\psi\circ\varphi)^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Z)}))\) bimodules, we obtain a natural map
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}\otimes_{W\varphi^{-1}( \widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_ {W(Y)\to W(Z),\text{acc}})\to\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Z),\text{ acc}}\]
as required.
Now, combining this with the previous result gives:
**Corollary 4.20**.: _1) With notation as in 4.15, we have an isomorphism of bimodules \(\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\to\mathfrak{Y}}\otimes^{L}_{W(k)}k\tilde{=}\mathcal{D}^{(m)}_{X\to Y}\)._
_2) Suppose \(\psi:Y\to Z\) is another map of smooth affine varieties with local coordinates. Then the map of the previous lemma induces an isomorphism_
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),acc}\mathop{\otimes}\limits^{L}_{W \varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\widehat{ \mathcal{D}}^{(m)}_{W(Y)\to W(Z),acc})\tilde{\to}\widehat{\mathcal{D}}^{(m)}_{ W(X)\to W(Z),acc}\]
_of \((\widehat{\mathcal{D}}^{(m)}_{W(X)},W(\psi\circ\varphi)^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Z)}))\)-bimodules._
_3) There is an isomorphism of functors \(L(W\varphi)^{*}\circ L(W\psi)^{*}\tilde{\to}LW(\psi\circ\varphi)^{*}\) from \(D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(Z)}-\text{mod})\) to \(D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\)._
Proof.: There is a map \(\mathcal{O}_{\mathfrak{X}}\otimes\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})\to\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\to\mathfrak{Y}}\) given by sending a tensor \(a\otimes\varphi^{-1}(P)\) to \(a\cdot\pi_{1}\circ W\varphi^{\#}\circ\varphi^{-1}(\Phi_{2}\cdot P)\); passing to the \(p\)-adic completion yields a map
\[\varphi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}\to\widehat{\mathcal{D}}^ {(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\to\mathfrak{Y}}\]
and we shall show that this map is an isomorphism. The surjectivity follows from (the proof of) 4.15, where we in fact showed that every term of the form (4.5) is a sum of terms of the form \(a\cdot\pi_{1}\circ W\varphi^{\#}\circ\varphi^{-1}(\Phi_{2}\cdot P)\). As for the injectivity, we may apply the functor of reduction mod \(p\) to obtain a map
\[\varphi^{*}\mathcal{D}^{(m)}_{Y}\to\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{2 }}_{\mathfrak{X}\to\mathfrak{Y}}/p\to\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),acc}/p\]
the latter map is injective as \(\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\to\mathfrak{Y}}\) is a summand of \(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),acc}\). On the other hand we have maps
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),acc}/p\to\widehat{\mathcal{D}}^{(m) }_{W(X)\to W(Y)}/p\to\varphi^{*}\mathcal{D}^{(m)}_{Y}\]
where the first map is the canonical one and the second is given by the quotient by \(I^{(1)}\) (c.f. 2.32). It is not hard to see that the composition of these maps is the identity on \(\varphi^{*}\mathcal{D}^{(m)}_{Y}\); therefore \(\varphi^{*}\mathcal{D}^{(m)}_{Y}\to\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{ 2}}_{\mathfrak{X}\to\mathfrak{Y}}/p\) is injective. As \(\varphi^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}\to\widehat{\mathcal{D}}^ {(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\to\mathfrak{Y}}\) is a surjection of torsion-free \(p\)-adically complete sheaves, by applying \(\mathop{\otimes}\limits^{L}_{W(k)}k\) we see that this map is an isomorphism. Furthermore, the composition
\[\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\to\mathfrak{Y}}/p \to\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),acc}/p\to\varphi^{*}\mathcal{D}^ {(m)}_{Y}\]
is easily seen to be a map of \((\mathcal{D}^{(m)}_{X},\varphi^{-1}(\mathcal{D}^{(m)}_{Y}))\)-bimodules, where \(\mathcal{D}^{(m)}_{X}\) acts via the identification \(\varphi^{*}\mathcal{D}^{(m)}_{Y}\tilde{=}\mathcal{D}^{(m)}_{X\to Y}\) (this follows readily in local coordinates). Thus 1) is proved.
As for 2), the map in question is obtained from the previous lemma by noting that
\[\mathcal{H}^{0}(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),acc}\mathop{\otimes }\limits^{L}_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}( \widehat{\mathcal{D}}^{(m)}_{W(Y)\to W(Z),acc}))\]
\[=\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),acc}\mathop{\otimes}\limits_{W \varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\widehat{ \mathcal{D}}^{(m)}_{W(Y)\to W(Z),acc})\]
Using the identifications
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),acc}\tilde{=}\Phi_{1}^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}\widehat{\mathcal{D}}^{(m),\Phi_{1},\Phi_{2}}_{\mathfrak{X}\to\mathfrak{Y}}\widehat{\otimes}^{L}_{\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})}\varphi^{-1}(\Phi_{2}^{!}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})\]
and
\[W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\to W(Z),acc})\tilde{=}W\varphi^{-1}(\Phi_{2}^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}}\widehat{\mathcal{D}}^{(m),\Phi_{2},\Phi_{3}}_{\mathfrak{Y}\to\mathfrak{Z}}\widehat{\otimes}^{L}_{\psi^{-1}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Z}})}\psi^{-1}(\Phi_{3}^{!}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Z}}))\]
yields an isomorphism
\[\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),acc}\mathop{\otimes}\limits^{L}_{W \varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\widehat{ \mathcal{D}}^{(m)}_{W(Y)\to W(Z),acc})\]
\[\tilde{\to}\Phi_{1}^{*}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}}^{L}\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(m),\Phi_{1},\Phi_{2}}\widehat{\otimes}_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(m)})}^{L}\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}\to\mathfrak{Z}}^{(m),\Phi_{2},\Phi_{3}}\widehat{\otimes}_{\psi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Z}}^{(m)})}^{L}\psi^{-1}(\Phi_{3}^{!}\widehat{\mathcal{D}}_{\mathfrak{Z}}^{(m)}))\]
\[\tilde{\to}\Phi_{1}^{*}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(m)}}^{L}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(m),\Phi_{1},\Phi_{2}}\widehat{\otimes}_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(m)})}^{L}\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}\to\mathfrak{Z}}^{(m),\Phi_{2},\Phi_{3}}))\widehat{\otimes}_{(\psi\circ\varphi)^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Z}}^{(m)})}^{L}(\psi\circ\varphi)^{-1}(\Phi_{3}^{!}\widehat{\mathcal{D}}_{\mathfrak{Z}}^{(m)})\]
So the map of the previous lemma yields a map
\[\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(m),\Phi_{1},\Phi_{2}} \widehat{\otimes}_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(m)})} ^{L}\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}\to\mathfrak{Z}}^{(m), \Phi_{2},\Phi_{3}})\to\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Z}}^{(m),\Phi_{1},\Phi_{3}}\]
which is seen to be an isomorphism by applying \(\otimes_{W(k)}^{L}k\) and part 1); so part 2) follows as well.
3) Let \(\mathcal{M}^{\cdot}\in D_{\rm acc}(\widehat{\mathcal{D}}_{W(Z)}^{(m)}-{\rm mod})\). Then by part 2) we have
\[LW(\psi\circ\varphi)^{*}\mathcal{M}^{\cdot}:=\widehat{\mathcal{D}}_{W(X)\to W(Z),{\rm acc}}^{(m)}\otimes_{W(\psi\circ\varphi)^{-1}\widehat{\mathcal{D}}_{W(Z)}^{(m)}}^{L}W(\psi\circ\varphi)^{-1}\mathcal{M}^{\cdot}\]
\[\tilde{\to}\widehat{\mathcal{D}}_{W(X)\to W(Y),{\rm acc}}^{(m)}\otimes_{W\varphi^{-1}(\widehat{\mathcal{D}}_{W(Y)}^{(m)})}^{L}W\varphi^{-1}(\widehat{\mathcal{D}}_{W(Y)\to W(Z),{\rm acc}}^{(m)})\otimes_{W(\psi\circ\varphi)^{-1}(\widehat{\mathcal{D}}_{W(Z)}^{(m)})}^{L}W\varphi^{-1}(W\psi^{-1}(\mathcal{M}^{\cdot}))\]
\[\tilde{\to}\widehat{\mathcal{D}}_{W(X)\to W(Y),{\rm acc}}^{(m)}\otimes_{W\varphi^{-1}(\widehat{\mathcal{D}}_{W(Y)}^{(m)})}^{L}W\varphi^{-1}(\widehat{\mathcal{D}}_{W(Y)\to W(Z),{\rm acc}}^{(m)}\otimes_{W\psi^{-1}(\widehat{\mathcal{D}}_{W(Z)}^{(m)})}^{L}W\psi^{-1}(\mathcal{M}^{\cdot}))\]
\[\tilde{\to}LW\varphi^{*}(LW\psi^{*}\mathcal{M}^{\cdot})\]
as required.
Thus we may conclude
**Corollary 4.21**.: _1) For any \(\varphi\), if \(\mathcal{M}^{\cdot}\in D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}-{\rm mod})\), we have \(L(W\varphi)^{*}(\mathcal{M}^{\cdot})\in D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-{\rm mod})\). The same holds for \(\mathcal{M}^{\cdot}\in D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}/p^{r}-{\rm mod})\)._
_2) The result of 4.16 holds for an arbitrary morphism \(\varphi\)._
Proof.: 1) Factor \(\varphi:X\to Y\) as \(\varphi_{1}:X\to Z\) followed by \(\varphi_{2}:Z\to Y\), where \(\varphi_{1}\) is a closed immersion and \(\varphi_{2}\) is smooth. By the previous result, it suffices to check that each of \(L(W\varphi_{1})^{*}\) and \(L(W\varphi_{2})^{*}\) preserves accessibility; as they are each locally compatible with a lift of Frobenius this follows from 4.18.
2) This is similar to 1): we break up \(\varphi=\varphi_{2}\circ\varphi_{1}\) and use 4.16.
To finish off this section, let's note the compatibility with the pullback for crystals. Namely,
**Proposition 4.22**.: _Let \(r\geq 1\), and let \(\epsilon\) be the functor of Theorem 3.29. Then for any smooth morphism \(\varphi:X\to Y\), and any \(\mathcal{M}^{\cdot}\in D_{qcoh}(\text{Crys}_{W_{r}(k)}(X))\), we have_
\[LW\varphi^{*}\circ\epsilon\tilde{\to}\epsilon\circ L\varphi^{*}_{\text{crys}}\]
_where \(L\varphi^{*}_{\text{crys}}\) is the pullback in the derived category of crystals._
Proof.: As \(\varphi\) is smooth, \(\varphi^{*}\) is exact, and so it suffices to prove this when \(\mathcal{M}\) is concentrated in a single degree. In that case, a local lift \(\varphi:\mathfrak{X}_{r}\to\mathfrak{Y}_{r}\) is compatible with a Frobenius lift, and so \(LW\varphi^{*}\) corresponds to the usual pullback under 4.16. This shows immediately that the pullback preserves the local nilpotence condition; furthermore, this is exactly the definition of the pullback in the category of crystals, and the proposition follows.
### Operations on modules: Pushforward
As discussed in the introduction, the key to defining a pushforward in (the usual) \(\mathcal{D}\)-module theory is the definition of a transfer bimodule. This is constructed as follows: for \(\varphi:X\to Y\) we already have a bimodule \(\mathcal{D}_{X\to Y}\); we can apply the left-right interchange simultaneously to left \(\mathcal{D}_{X}\)-modules and to right \(\varphi^{-1}(\mathcal{D}_{Y})\)-modules to obtain
\[\mathcal{D}_{Y\gets X}:=\omega_{X}\otimes_{\mathcal{O}_{X}}\mathcal{D}_{X\to Y}\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\varphi^{-1}(\omega_{Y}^{-1})\]
which is now a \((\varphi^{-1}(\mathcal{D}_{Y}),\mathcal{D}_{X})\) bimodule. In particular, if \(Y\) is a point then \(\mathcal{D}_{Y\gets X}\tilde{=}\omega_{X}\) with its canonical right \(\mathcal{D}_{X}\)-module structure.
If we now have a morphism \(\varphi:\mathfrak{X}\to\mathfrak{Y}\) of smooth formal schemes, and we fix some \(m\geq 0\), an identical procedure defines \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}\gets\mathfrak{X}}\), and then the pushforward is defined10 as
\[\int_{\varphi}\mathcal{M}^{\cdot}:=R\varphi_{*}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}\gets\mathfrak{X}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}\mathcal{M}^{\cdot})\]
Footnote 10: This is not quite Berthelot’s definition, as he works with sheaves over the reductions \(\mathfrak{X}_{r}\) and then takes an inverse limit; whereas we bypass this by using the cohomological completion. The two notions agree on a large subclass of objects, for instance on coherent \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\)-modules
We can utilize the same procedure to define \(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\).
**Definition 4.23**.: Let \(\varphi:X\to Y\). The transfer bimodule \(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\) is the sheaf of \((W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}),\widehat{\mathcal{D}}^{(m)}_{W(X)})\) bimodules defined as:
\[\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}:=(W\omega_{X}\widehat{\otimes}_{\mathcal{O}_{W(X)}}\widehat{\mathcal{D}}^{(m)}_{W(X),\text{acc}})\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}\widehat{\otimes}^{L}_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\tilde{\mathcal{H}om}_{\mathcal{O}_{W(Y)}}(W\omega_{Y},\widehat{\mathcal{D}}^{(m)}_{W(Y),\text{acc}}))\]
With this in hand, one makes the
**Definition 4.24**.: Let \(\mathcal{M}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\) (for \(r\geq 1\)). Then for any morphism \(\varphi:X\to Y\) we define
\[\int_{W\varphi}\mathcal{M}^{\cdot}:=R(W\varphi)_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}/p^{r}\otimes^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}}\mathcal{M}^{\cdot})_{\text{acc}}\]
Next, let \(\mathcal{M}^{\cdot}\in D_{cc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\). Then we define
\[\int_{W\varphi}\mathcal{M}^{\cdot}:=R(W\varphi)_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot})_{\text{acc}}\]
where, as above, \(\widehat{\otimes}^{L}\) denotes the derived completion.
In fact these two definitions agree on their overlap:
_Remark 4.25_.: Let \(\mathcal{M}^{\cdot}\in D_{\text{acc}}(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\), and regard \(\mathcal{M}^{\cdot}\) as an element of \(D_{\text{acc}}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\) via the obvious inclusion. Then we have an isomorphism
\[\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot}\tilde{=}\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}/p^{r}\otimes^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}}\mathcal{M}^{\cdot}\]
of complexes of \(W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})\)-modules. To see this, note that \(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\) is itself \(p\)-torsion free. So, to show the above, we let \(\mathcal{F}^{\cdot}\) be a flat resolution of \(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\)
in the category of right-\(\widehat{\mathcal{D}}^{(m)}_{W(X)}\)-modules. Then, as \(p^{r}\) annihilates each term of \(\mathcal{M}^{\cdot}\), we have
\[\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot}=\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\otimes^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot}\]
and this is computed by
\[\mathcal{F}^{\cdot}\otimes_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot}\tilde{=}\mathcal{F}^{\cdot}/p^{r}\otimes_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}}\mathcal{M}^{\cdot}\]
as \(\mathcal{F}^{\cdot}/p^{r}\) is a complex of flat \(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}\)-modules resolving \(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}/p^{r}\), this also computes the right hand side.
To show that this is an isomorphism it suffices to show that it is so after applying \(R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(Y)}/p}(\mathcal{B}^{(m)}_{Y},-)\). We have
\[R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(Y)}/p}(\mathcal{B}^{(m)}_{Y},R(W\varphi)_{*}(W\varphi^{-1}(\mathcal{B}^{(m)}_{Y})\otimes^{L}_{\varphi^{-1}(\mathcal{D}^{(m)}_{Y})}\mathcal{M}^{\cdot})_{\mathrm{acc}})\]
\[\tilde{\to}R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(Y)}/p}(\mathcal{B}^{(m)}_{Y},R(W\varphi)_{*}(W\varphi^{-1}(\mathcal{B}^{(m)}_{Y})\otimes^{L}_{\varphi^{-1}(\mathcal{D}^{(m)}_{Y})}\mathcal{M}^{\cdot}))\]
\[\tilde{\to}R\varphi_{*}R\mathcal{H}om_{\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}/p)}(W\varphi^{-1}(\mathcal{B}^{(m)}_{Y}),\varphi^{-1}(\mathcal{B}^{(m)}_{Y})\otimes^{L}_{\varphi^{-1}(\mathcal{D}^{(m)}_{Y})}\mathcal{M}^{\cdot})\]
\[\tilde{\to}R\varphi_{*}(\mathcal{M}^{\cdot})\]
where the first isomorphism is by the definition of accessibility, and the second is by the adjunction of \(W\varphi^{-1}\) and \(R\varphi_{*}\). It follows that the map (4.6) is an isomorphism, and 1) follows by setting \(\mathcal{M}^{\cdot}=\mathcal{D}^{(m)}_{Y\gets X}\otimes^{L}_{\mathcal{D}^{(m)}_{X}}\mathcal{N}^{\cdot}\). Now 2) follows from the description of the transfer bimodule given directly above.
Now let's discuss the base change functor \(\otimes^{L}_{W(k)}k\). The situation is very nice here:
**Lemma 4.29**.: _There is, for any \(\mathcal{M}^{\cdot}\in D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\), an isomorphism_
\[(\int_{W\varphi}\mathcal{M}^{\cdot})\otimes^{L}_{W(k)}k\tilde{=}\int_{W\varphi}(\mathcal{M}^{\cdot}\otimes^{L}_{W(k)}k)\]
_where on the right we take the pushforward in the category \(D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p-\text{mod})\)._
Proof.: Again by [Stacks], tag 0B54 there is a map
\[R(W\varphi)_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot})\otimes^{L}_{W(k)}k\to R(W\varphi)_{*}((\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot})\otimes^{L}_{W(k)}k)\]
which is an isomorphism because \(k\) is a perfect complex over \(W(k)\). However, we have
\[(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot})\otimes^{L}_{W(k)}k\]

\[\tilde{=}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}/p)\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}/p}(\mathcal{M}^{\cdot}\otimes^{L}_{W(k)}k)\]
as both \(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\) and \(\widehat{\mathcal{D}}^{(m)}_{W(X)}\) are flat over \(W(k)\). Further, as the functor \(()_{\mathrm{acc}}\) commutes with \(\otimes^{L}_{W(k)}k\), applying \(()_{\mathrm{acc}}\) to both sides yields the result.
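For the reader's convenience, let us spell out the perfectness used above (a standard remark, not special to this setting): \(k\) admits the two-term free resolution

\[0\to W(k)\xrightarrow{\cdot p}W(k)\to k\to 0\]

over \(W(k)\), so for any complex \(\mathcal{N}^{\cdot}\) of \(W(k)\)-modules the base change \(\mathcal{N}^{\cdot}\otimes^{L}_{W(k)}k\) is computed by the cone of multiplication by \(p\) on \(\mathcal{N}^{\cdot}\); in particular it commutes with any triangulated functor, such as \(R(W\varphi)_{*}\).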
Lemma 4.29 yields
**Corollary 4.30**.: _Suppose \(\mathcal{M}^{\cdot}\in D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\), and let \(\varphi:X\to Y\), \(\psi:Y\to Z\). There is a functorial morphism_

\[\int_{W\psi}\int_{W\varphi}\mathcal{M}^{\cdot}\to\int_{W(\psi\circ\varphi)}\mathcal{M}^{\cdot}\]

_If \(\mathcal{M}^{\cdot}\in D_{qcoh}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\), then this map is an isomorphism. The same holds for \(\mathcal{M}^{\cdot}\in D_{qcoh}(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\), for the pushforward of \(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}\)-modules._
Proof.: The existence of the map is a little exercise in base change. Namely, from 4.20 we obtain the isomorphism
\[W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Z)\gets W(Y),\text{acc}}) \otimes^{L}_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}\widehat{ \mathcal{D}}^{(m)}_{W(Y)\gets W(X),\text{acc}}\widetilde{\to}\widehat{ \mathcal{D}}^{(m)}_{W(Z)\gets W(X),\text{acc}}\]
So we have (by [Stacks], tag 0B54)
\[RW\psi_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Z)\gets W(Y),\text{acc}}\widehat{\otimes}_{\widehat{\mathcal{D}}^{(m)}_{W(Y)}}R(W\varphi)_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot}))\]

\[\to RW\psi_{*}\circ RW\varphi_{*}(W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Z)\gets W(Y),\text{acc}})\widehat{\otimes}_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot}))\]

\[\widetilde{\to}RW(\psi\circ\varphi)_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Z)\gets W(X),\text{acc}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot})\]
Further, we have
\[\int_{W\psi}\int_{W\varphi}\mathcal{M}^{\cdot}=RW\psi_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Z)\gets W(Y),\text{acc}}\widehat{\otimes}_{\widehat{\mathcal{D}}^{(m)}_{W(Y)}}\int_{W\varphi}\mathcal{M}^{\cdot})_{\text{acc}}\]

\[=RW\psi_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Z)\gets W(Y),\text{acc}}\widehat{\otimes}_{\widehat{\mathcal{D}}^{(m)}_{W(Y)}}R(W\varphi)_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot})_{\text{acc}})_{\text{acc}}\]

\[\to RW\psi_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Z)\gets W(Y),\text{acc}}\widehat{\otimes}_{\widehat{\mathcal{D}}^{(m)}_{W(Y)}}R(W\varphi)_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot}))_{\text{acc}}\]

via the natural map \(\mathcal{N}^{\cdot}_{\text{acc}}\to\mathcal{N}^{\cdot}\) in \(\widehat{\mathcal{D}}^{(m)}_{W(Y)}-\text{mod}\). Then, applying \(()_{\text{acc}}\) in \(\widehat{\mathcal{D}}^{(m)}_{W(Z)}\)-mod to the previous map, we obtain a map

\[\int_{W\psi}\int_{W\varphi}\mathcal{M}^{\cdot}\to RW(\psi\circ\varphi)_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Z)\gets W(X),\text{acc}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot})_{\text{acc}}=\int_{W(\psi\circ\varphi)}\mathcal{M}^{\cdot}\]
To show it is an isomorphism, apply \(\otimes^{L}_{W(k)}k\) and the previous lemma to reduce to the case of \(\mathcal{D}^{(m)}_{X}\)-modules in positive characteristic. There, we have

\[\int_{\psi}\int_{\varphi}\mathcal{M}^{\cdot}\widetilde{\to}\int_{\psi\circ\varphi}\mathcal{M}^{\cdot}\]

for any \(\mathcal{M}^{\cdot}\in D_{\text{qcoh}}(\mathcal{D}^{(m)}_{X}-\text{mod})\) (c.f. [17], lemma 7.7 for this statement), whence the result for \(\mathcal{M}^{\cdot}\in D_{\text{qcoh}}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\). To obtain the result for \(\mathcal{M}^{\cdot}\in D_{\text{qcoh}}(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}-\text{mod})\), apply 4.25.
To round things out, we deduce the following compatibility:
**Corollary 4.31**.: _1) Let \(\varphi:\mathfrak{X}_{r}\to\mathfrak{Y}_{r}\) where, if \(m=0\) and \(r>1\), we assume \(p\neq 2\) (we allow \(r=\infty\) here). Let \(\mathcal{N}^{\cdot}\in D(\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}-\text{mod})\). Then there is a natural isomorphism_

\[\int_{\varphi}(\mathcal{B}^{(m)}_{\mathfrak{X}_{r}}\otimes^{L}_{\mathcal{D}^{(m)}_{\mathfrak{X}_{r}}}\mathcal{N}^{\cdot})\tilde{\to}\mathcal{B}^{(m)}_{\mathfrak{Y}_{r}}\otimes^{L}_{\mathcal{D}^{(m)}_{\mathfrak{Y}_{r}}}\int_{\varphi}\mathcal{N}^{\cdot}\]
Proof.: This follows by breaking \(\varphi\) up into a smooth morphism and a closed embedding just as in 4.21.
Finally, we'd like to discuss some further specific properties of the pushforward when \(\varphi\) satisfies extra conditions. First, we specialize to the case where \(\varphi:X\to Y\) is a smooth morphism. In this case, there is an adjunction between the pullback and the pushforward; more precisely, we have:
**Corollary 4.32**.: _Let \(\varphi:X\to Y\) be smooth of relative dimension \(d\); let \(\mathcal{M}^{\cdot}\in D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\) and \(\mathcal{N}^{\cdot}\in D_{acc}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}-\text{mod})\). Then there is an isomorphism of functors_
\[RW\varphi_{*}R\underline{\mathcal{H}om}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(L (W\varphi)^{*}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})[d]\tilde{\to}R \underline{\mathcal{H}om}_{\widehat{\mathcal{D}}^{(m)}_{W(Y)}}(\mathcal{N}^{ \cdot},\int_{W\varphi}\mathcal{M}^{\cdot})\]
_In particular the functors \(LW\varphi^{*}[d]=W\varphi^{!}\) and \(\int_{\varphi}\) form an adjoint pair on accessible modules. The analogous statement holds for accessible \(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p^{r}\) modules for any \(r\geq 1\)._
We will deduce this theorem from the analogous fact for \(\widehat{\mathcal{D}}^{(m)}\)-modules. As in that case, the key point is to prove the
**Proposition 4.33**.: _For any smooth morphism \(\varphi:X\to Y\) of relative dimension \(d\) there is an isomorphism of \((\widehat{\mathcal{D}}^{(m)}_{W(Y)},\widehat{\mathcal{D}}^{(m)}_{W(X)})\) bimodules_
\[R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\widehat{\mathcal{D}}^{(m )}_{W(X)\to W(Y),\,acc},\widehat{\mathcal{D}}^{(m)}_{W(X),\,acc})_{Y-acc} \tilde{=}\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X),\,acc}[-d]\]
_where on the left hand side \(()_{Y-acc}\) refers to the functor of accessibilization applied in the category of left \(\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})\)-modules._
Before doing so, we recall that the analogous isomorphism
\[R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}\to\mathfrak{Y}},\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}})\tilde{\to}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}\leftarrow\mathfrak{X}}[-d]\]
is the basic point in proving this adjunction for \(\widehat{\mathcal{D}}^{(m)}\)-modules; when \(m=0\) this can be proved via the de Rham resolution, and when \(m>0\) it follows via Frobenius descent.
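To orient the reader, here is a sketch (standard, and only illustrative) of the \(m=0\) case when \(\mathfrak{Y}\) is a point, so that \(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}\to\mathfrak{Y}}=\mathcal{O}_{\mathfrak{X}}\): the Spencer complex

\[\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}}\bigwedge^{d}\mathcal{T}_{\mathfrak{X}}\to\cdots\to\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}}\mathcal{T}_{\mathfrak{X}}\to\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}\to\mathcal{O}_{\mathfrak{X}}\to 0\]

is a finite locally free resolution of \(\mathcal{O}_{\mathfrak{X}}\), and applying \(R\mathcal{H}om_{\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}}(?,\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}})\) to it produces the de Rham complex of \(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}\) (a complex of right modules), whose only nonzero cohomology sheaf is \(\omega_{\mathfrak{X}}\) in degree \(d\); this is the transfer bimodule in this case, accounting for the shift by \([-d]\).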
Proof.: Let us begin in the case where \(X\) and \(Y\) are affine, with smooth lifts \(\mathfrak{X}\) and \(\mathfrak{Y}\). Let \(\Phi_{1}\) and \(\Phi_{2}\) be lifts of Frobenius on \(\mathfrak{X}\) and \(\mathfrak{Y}\), respectively, chosen so that \(\Phi_{2}\circ\varphi=\varphi\circ\Phi_{1}\) (this is possible as \(\varphi\) is smooth). This means that, according to 4.26,
\[\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X),\text{acc}}\tilde{=}\varphi^{-1}(\Phi_{2}^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})\otimes_{\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}})}^{L}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}\leftarrow\mathfrak{X}}\widehat{\otimes}_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}^{L}\Phi_{1}^{!}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\]
As both sides are, by definition, accessible over \(\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})\), we may construct the isomorphism by applying the functor \(R\mathcal{H}om_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}(W\varphi^ {-1}(\Phi_{2}^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}),?)\) to both sides. On the left hand side, this yields
\[R\mathcal{H}om_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}(W\varphi^ {-1}(\Phi_{2}^{*}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}}),R\mathcal{H}om_{ \widehat{\mathcal{D}}^{(m)}_{W(X)}}(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}},\widehat{\mathcal{D}}^{(m)}_{W(X),\text{acc}}))\]
\[\tilde{\to}R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\widehat{ \mathcal{D}}^{(m)}_{W(X)\to W(Y),\text{acc}}\widehat{\otimes}_{W\varphi^{-1}( \widehat{\mathcal{D}}^{(m)}_{W(Y)})}^{L}W\varphi^{-1}(\Phi_{2}^{*}\widehat{ \mathcal{D}}^{(m)}_{\mathfrak{Y}}),\widehat{\mathcal{D}}^{(m)}_{W(X),\text{ acc}}))\]
\[\tilde{\to}R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\Phi_{1}^{*} \widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\tilde{\otimes}_{\widehat{\mathcal{D} }^{(m)}_{\mathfrak{X}}}^{L}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}\to \mathfrak{Y}},\widehat{\mathcal{D}}^{(m)}_{W(X),\text{acc}}))\]
\[\tilde{\to}R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}(\widehat{ \mathcal{D}}^{(m)}_{\mathfrak{X}\to\mathfrak{Y}},\widehat{\mathcal{D}}^{(m)}_{ \mathfrak{X}})\otimes_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}^{L}\Phi_{1}^ {1}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\]
\[\tilde{\to}\widehat{\mathcal{D}}^{(m)}_{\mathfrak{Y}\leftarrow\mathfrak{X}} \otimes_{\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}}^{L}\Phi_{1}^{1}\widehat{ \mathcal{D}}^{(m)}_{\mathfrak{X}}[-d]\]
where we used hom tensor adjunction for the first isomorphism, 4.15 for the second, the construction of \(\widehat{\mathcal{D}}^{(m)}_{W(X),\mathrm{acc}}\) for the third, the fact that \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}\to\mathfrak{Y}}\) is isomorphic to a perfect complex of \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\)-modules\({}^{11}\) for the fourth, and the isomorphism directly above for the fifth. Thus we obtain the isomorphism of the proposition when \(X\) and \(Y\) are affine by the description of \(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X),\mathrm{acc}}\) given just above.
Footnote 11: As \(\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}}\) is locally of finite homological dimension
For the general case, we note that \(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X),\mathrm{acc}}\) is an accessible \((\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}),\widehat{\mathcal{D}}^{(m)} _{W(X)})\) bimodule, and so we have
\[\mathcal{E}nd_{(\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}),\widehat{ \mathcal{D}}^{(m)}_{W(X)})}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X), \mathrm{acc}})\tilde{\to}\mathcal{E}nd_{(\varphi^{-1}(\widehat{\mathcal{D}}^{ (m)}_{\mathfrak{Y}}),\widehat{\mathcal{D}}^{(m)}_{\mathfrak{X}})}(\widehat{ \mathcal{D}}^{(m)}_{\mathfrak{Y}\leftarrow\mathfrak{X}})=W(k)_{\mathfrak{X}}\]
where \(W(k)_{\mathfrak{X}}\) refers to the locally constant sheaf with sections \(W(k)\). As this sheaf is flasque, we see that the locally defined isomorphisms constructed above must glue to a globally defined isomorphism (which is unique up to rescaling by an element of \(W(k)\)).
Given this, let's give the
Proof.: (of 4.32) (following [27], Theorem 4.40). We have
\[R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(LW\varphi^{*}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})[d]\]

\[=R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\mathrm{acc}}\otimes^{L}_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}W\varphi^{-1}(\mathcal{N}^{\cdot}),\mathcal{M}^{\cdot})[d]\]

\[\tilde{\to}R\mathcal{H}om_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}(W\varphi^{-1}(\mathcal{N}^{\cdot}),R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\mathrm{acc}},\mathcal{M}^{\cdot}))[d]\]
Now, there is the obvious natural map
\[R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\mathrm{acc}},\widehat{\mathcal{D}}^{(m)}_{W(X)})\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot}\to R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\mathrm{acc}},\mathcal{M}^{\cdot})\]
for any \(\mathcal{M}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\mathrm{mod})\). Via the right action of \(W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})\) on \(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\mathrm{acc}}\), this is a morphism of complexes of left \(W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})\)-modules. Thus, applying the functors \(R\mathcal{H}om_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}(W\varphi^{-1}(\mathcal{N}^{\cdot}),?)\) and \(RW\varphi_{*}\) we obtain a morphism
\[RW\varphi_{*}R\mathcal{H}om_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}(W\varphi^{-1}(\mathcal{N}^{\cdot}),R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\mathrm{acc}},\widehat{\mathcal{D}}^{(m)}_{W(X)})\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot})[d]\]

\[\to RW\varphi_{*}R\mathcal{H}om_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}(W\varphi^{-1}(\mathcal{N}^{\cdot}),R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\mathrm{acc}},\mathcal{M}^{\cdot}))[d]\]

\[\tilde{=}R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(LW\varphi^{*}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})[d]\]
of complexes of \(\widehat{\mathcal{D}}^{(m)}_{W(Y)}\)-modules. Now, as \(\mathcal{N}^{\cdot}\) is accessible over \(\widehat{\mathcal{D}}^{(m)}_{W(Y)}\), we have
\[R\mathcal{H}om_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}(W\varphi^{-1}(\mathcal{N}^{\cdot}),R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\mathrm{acc}},\widehat{\mathcal{D}}^{(m)}_{W(X)})\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot})[d]\]

\[=R\mathcal{H}om_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}(W\varphi^{-1}(\mathcal{N}^{\cdot}),R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}(\widehat{\mathcal{D}}^{(m)}_{W(X)\to W(Y),\mathrm{acc}},\widehat{\mathcal{D}}^{(m)}_{W(X)})_{\mathrm{acc}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot})[d]\]

\[\tilde{\to}R\mathcal{H}om_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}(W\varphi^{-1}(\mathcal{N}^{\cdot}),\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X),\mathrm{acc}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot})\]
by the previous proposition. Furthermore
\[RW\varphi_{*}R\mathcal{H}om_{W\varphi^{-1}(\widehat{\mathcal{D}}^{(m)}_{W(Y)})}(W\varphi^{-1}(\mathcal{N}^{\cdot}),\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X),\mathrm{acc}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot})\]

\[\begin{split}&\tilde{\to}R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(Y)}}(\mathcal{N}^{\cdot},RW\varphi_{*}(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X),\text{acc}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X)}}\mathcal{M}^{\cdot}))\\&\tilde{\to}R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(Y)}}(\mathcal{N}^{\cdot},\int_{W\varphi}\mathcal{M}^{\cdot})\end{split}\]
where we again use that \(\mathcal{N}^{\cdot}\) is accessible in the last line; this follows from
\[\int_{W\varphi}\mathcal{M}^{\cdot}=R\varphi_{*}(\widehat{\mathcal{D}}^{(m)}_{ W(Y)\gets W(X),\text{acc}}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(m)}_{W(X) }}\mathcal{M}^{\cdot})_{\text{acc}}\]
Summing up, we've obtained a functorial map
\[R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m)}_{W(Y)}}(\mathcal{N}^{\cdot},\int_ {W\varphi}\mathcal{M}^{\cdot})\to R\mathcal{H}om_{\widehat{\mathcal{D}}^{(m) }_{W(X)}}(LW\varphi^{*}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})[d]\]
To check it is an isomorphism, we may (by cohomological completeness) apply \(\otimes^{L}_{W(k)}k\), and then use Lemma 4.29 to reduce the statement to the (known) adjunction for \(\mathcal{D}^{(m)}\)-modules.
Combining this with the compatibility of pullback on crystals, we deduce:
**Corollary 4.34**.: _Let \(\mathcal{M}^{\cdot}\in D^{+}(\text{Qcoh}(\text{Crys}(X,W_{r}(k))))\), and let \(\epsilon(\mathcal{M}^{\cdot})\) denote the associated element of \(D^{+}_{\text{qcoh},\text{acc}}(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\text{mod})\)._
_Then there is an isomorphism_
\[\int_{\varphi}\epsilon(\mathcal{M}^{\cdot})[-d]\tilde{\to}\epsilon(R\varphi_{*,\text{crys}}(\mathcal{M}^{\cdot}))\]
_in \(D^{+}_{\text{qcoh}}(\widehat{\mathcal{D}}^{(0)}_{W(Y)}/p^{r}-\text{mod})\)._
Proof.: By definition, the functors
\[L\varphi_{\text{crys}}^{*}:D^{+}(\text{Qcoh}(\text{Crys}(Y,W_{r}(k))))\to D^{+}(\text{Qcoh}(\text{Crys}(X,W_{r}(k))))\]

and \(R\varphi_{*,\text{crys}}:D^{+}(\text{Qcoh}(\text{Crys}(X,W_{r}(k))))\to D^{+}(\text{Qcoh}(\text{Crys}(Y,W_{r}(k))))\) form an adjoint pair. As the functor \(\epsilon\) is compatible with pullback, we simply have to check that \(\int_{\varphi}\) preserves the image; i.e., that it sends quasicoherent complexes of nilpotent modules over \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\) to quasicoherent complexes of nilpotent modules over \(\widehat{\mathcal{D}}^{(0)}_{W(Y)}/p^{r}\). By the fact that nilpotency (for quasicoherent modules) can be characterized as being supported on a certain closed subset (of the spectrum of the center of the sheaf of differential operators), this can be checked after applying \(\otimes^{L}_{W(k)}k\). But then, applying Theorem 4.28, this follows from the analogous fact for \(\mathcal{D}^{(0)}_{X}\)-modules, where it is a well-known theorem of Katz (using the language of the Gauss-Manin connection, it is proved in [30], section 7, or, in the language of \(\mathcal{D}\)-modules, in [34], section 2.5).
Finally, we have the following important fact when \(\varphi\) is proper:
**Theorem 4.35**.: _Let \(\varphi:X\to Y\) be a proper morphism. Then \(\int_{\varphi}\) takes \(D^{b}_{\text{coh}}(\widehat{\mathcal{D}}^{(m)}_{W(X)}-\text{mod})\) to \(D^{b}_{\text{coh}}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}-\text{mod})\)._
Proof.: Applying [28], theorem 1.6.3, we see that a complex \(\mathcal{N}^{\cdot}\in D(\widehat{\mathcal{D}}^{(m)}_{W(Y)}-\mathrm{mod})\) is contained in \(D^{b}_{\mathrm{coh}}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}-\mathrm{mod})\) iff \(\mathcal{N}^{\cdot}\otimes^{L}_{W(k)}k\) is contained in \(D^{b}_{\mathrm{coh}}(\widehat{\mathcal{D}}^{(m)}_{W(Y)}/p-\mathrm{mod})\). So it suffices to show that
\[\int_{\varphi}:D(\widehat{\mathcal{D}}^{(m)}_{W(X)}/p-\mathrm{mod})\to D( \widehat{\mathcal{D}}^{(m)}_{W(Y)}/p-\mathrm{mod})\]
takes \(D^{b}_{\mathrm{coh}}\) to \(D^{b}_{\mathrm{coh}}\); this, in turn, follows via Theorem 4.28 from the analogous fact for \(\mathcal{D}^{(m)}\)-modules.
### The de Rham-Witt resolution
In this section we'll recall the (relative) de Rham-Witt resolution and explain the connection between the pushforward functor introduced above and the (relative) de Rham-Witt complex.
Throughout this section, fix a number \(r\geq 1\); we shall work mod \(p^{r}\). According to [25], and [32] for the relative case, there is attached to any smooth morphism \(\varphi:X\to Y\) the relative de Rham-Witt complex \(W\Omega_{X/Y}\). It is a dg algebra inside quasicoherent sheaves on \(W(X)\); it is complete with respect to a canonical filtration, which we shall denote \(G(W\Omega_{X/Y})\), and the quotient satisfies \(W\Omega_{X/Y}/G^{1}\tilde{\to}\Omega_{X/Y}\). In particular, the zeroth term of \(W\Omega_{X/Y}\) is \(\mathcal{O}_{W(X)}\), equipped with its usual \(V^{i}\)-filtration.
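As a reminder (these identities are standard, see [25] and [32], and we record them only for later orientation), the operators \(F\), \(V\), and \(d\) on \(W\Omega_{X/Y}\) satisfy

\[FV=p,\qquad FdV=d,\qquad dF=pFd,\qquad Vd=pdV.\]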
Suppose \(Y=\mathrm{Spec}(k)\), and that \(X\) is affine and admits local coordinates, and we fix a coordinatized lift of \(F\). Then there is an inclusion \(\Omega^{i}_{\mathfrak{X}_{r}}\to W\Omega^{i}_{X}/p^{r}\) which is compatible with the differential, and with the action of \(\mathcal{O}_{\mathfrak{X}_{r}}\) (via \(\Phi:\mathcal{O}_{\mathfrak{X}_{r}}\to\mathcal{O}_{W(X)}/p^{r}\)).
We begin by reviewing a basic result of Etesse, in a form which is useful for us:
**Theorem 4.36**.: _([20]) Let \(\mathcal{M}\) be a quasicoherent crystal on \(X\), over \(W_{r}(k)\), and let \(\tilde{\mathcal{M}}\) be the associated sheaf on \(W(X)_{p^{r}=0}\). This sheaf admits an integrable de Rham-Witt connection_
\[\nabla:\tilde{\mathcal{M}}\to\tilde{\mathcal{M}}\widehat{\otimes}_{\mathcal{O }_{W(X)}/p^{r}}W\Omega^{1}_{X}/p^{r}\]
_which is continuous with respect to the natural topologies on both sides._
_Suppose \(X\) is affine with local coordinates. Let \(\mathfrak{X}\) be a smooth lift of \(X\) with coordinatized lift of Frobenius \(F\). Then \(\mathcal{M}\) defines a unique sheaf with integrable nilpotent connection \(\mathcal{M}^{\prime}\) on \(\mathfrak{X}_{r}\), and we have_
\[\tilde{\mathcal{M}}\tilde{=}\widehat{\Phi}^{*}\mathcal{M}^{\prime}\]
_with the connection given by_
\[\nabla(p^{m}T^{I/p^{m}}\cdot m)=d(p^{m}T^{I/p^{m}})m+p^{m}T^{I/p^{m}}dm\]
_for \(m\in\mathcal{M}^{\prime}\); here, we are regarding \(dm\in\mathcal{M}^{\prime}\otimes\Omega^{1}_{\mathfrak{X}_{r}}\subset\mathcal{M}^{\prime}\otimes_{\mathcal{O}_{\mathfrak{X}_{r}}}W\Omega^{1}_{X}/p^{r}\subset\widehat{\Phi}^{*}\mathcal{M}^{\prime}\widehat{\otimes}_{\mathcal{O}_{W(X)}/p^{r}}W\Omega^{1}_{X}/p^{r}\). The same results also hold for pro-objects in the category \(\text{Qcoh}(\text{Crys}(X,W_{r}(k)))\)._
In other words, the theorem tells us that the explicitly defined de Rham-Witt connection on \(\widehat{\Phi}^{*}\mathcal{M}^{\prime}\) is in fact independent of the choice of \(\Phi\). Let's apply this to \(\mathcal{F}=(\widehat{\mathcal{D}}^{(0)}_{W(X),\mathrm{crys}}/p^{r})_{\mathrm{c-acc}}\), the completion (along the left action of \(V^{i}(\mathcal{O}_{W(X)}/p^{r})\)) of \(\widehat{\mathcal{D}}_{W(X),\mathrm{crys,acc}}/p^{r}\). This is an inverse limit of quasicoherent crystals on \(X\) over \(W_{r}(k)\), and so the theorem applies to it. From this we conclude:
**Corollary 4.37**.: _Let \((\widehat{\mathcal{D}}_{W(X)}/p^{r})_{c-acc}\) be the completion (along the left action of \(V^{i}(\mathcal{O}_{W(X)}/p^{r})\)) of \(\widehat{\mathcal{D}}_{W(X),acc}/p^{r}\). There is a unique continuous, integrable de Rham-Witt connection \(\nabla\) on \((\widehat{\mathcal{D}}_{W(X)}/p^{r})_{c-acc}\) satisfying the following: suppose \(U\subset X\) is open affine with local coordinates. Let \(\mathfrak{U}\) be a smooth lift of \(U\) with coordinatized lift of Frobenius \(\Phi\). Then we have the isomorphism_
\[(\widehat{\mathcal{D}}_{W(U)}^{(0)}/p^{r})_{c-acc}\tilde{=}\widehat{\Phi}^{*}\Phi^{!}\mathcal{D}_{\mathfrak{U}_{r}}^{(0)}\]
_and therefore a continuous integrable de Rham-Witt connection on \(\widehat{\Phi}^{*}\Phi^{!}\mathcal{D}_{\mathfrak{U}_{r}}^{(0)}\) which is induced from the natural connection on \(\Phi^{!}\mathcal{D}_{\mathfrak{U}_{r}}^{(0)}\) (it is a left \(\mathcal{D}_{\mathfrak{U}_{r}}^{(0)}\)-module by definition). Then the connection \(\nabla\) agrees with this induced connection under the isomorphism._
Proof.: By the previous theorem this is true after completing along the central ideal defining the nilpotent support condition. As the natural completion map
\[(\widehat{\mathcal{D}}_{W(X)}^{(0)}/p^{r})_{c-\mathrm{acc}}\to(\widehat{ \mathcal{D}}_{W(X),\mathrm{crys}}^{(0)}/p^{r})_{\mathrm{c-acc}}\]
is injective, we see that the de Rham-Witt connection on \((\widehat{\mathcal{D}}_{W(X)}/p^{r})_{c-\mathrm{acc}}\) which is locally defined by the isomorphism \((\widehat{\mathcal{D}}_{W(U)}^{(0)}/p^{r})_{c-\mathrm{acc}}\tilde{=}\widehat{\Phi}^{*}\Phi^{!}\mathcal{D}_{\mathfrak{U}_{r}}^{(0)}\) is in fact independent of the choice of \(\Phi\). Therefore this connection glues to a globally defined connection on \((\widehat{\mathcal{D}}_{W(X)}/p^{r})_{c-\mathrm{acc}}\) as required.
From this follows
**Corollary 4.38**.: _Let \(\mathcal{M}\in\widehat{\mathcal{D}}_{W(X)}^{(0)}/p^{r}-\text{mod}_{acc}\). Then there is a unique, continuous, integrable de Rham-Witt connection on \(\widehat{\mathcal{M}}\) defined via_
\[\widehat{\mathcal{M}}{\widetilde{\to}}(\widehat{\mathcal{D}}_{W(X)}^{(0)}/p^{ r})_{c-acc}\widehat{\otimes}_{\widehat{\mathcal{D}}_{W(X)}/p^{r}}\widehat{ \mathcal{M}}\]
\[\stackrel{{\nabla}}{{\to}}((\widehat{\mathcal{D}}_{W(X)}^{(0)}/p ^{r})_{c-acc}\widehat{\otimes}_{\mathcal{O}_{W(X)}}W\Omega_{X}^{1}/p^{r}) \widehat{\otimes}_{\widehat{\mathcal{D}}_{W(X)}/p^{r}}\widehat{\mathcal{M}}{ \widetilde{\to}}\widehat{\mathcal{M}}\widehat{\otimes}_{\mathcal{O}_{W(X)}}W \Omega_{X}^{1}/p^{r}\]
_where the second arrow is induced by the de Rham-Witt connection of the previous corollary, and the right action of \(\widehat{\mathcal{D}}_{W(X)}^{(0)}/p^{r}\) on \((\widehat{\mathcal{D}}_{W(X)}^{(0)}/p^{r})_{c-acc}\widehat{\otimes}_{\mathcal{ O}_{W(X)}}W\Omega_{X}^{1}/p^{r}\) is via the right action of \(\widehat{\mathcal{D}}_{W(X)}/p^{r}\) on \((\widehat{\mathcal{D}}_{W(X)}/p^{r})_{c-acc}\). This connection agrees with the "obvious" connection on \(\widehat{\mathcal{M}}=\widehat{\Phi}^{*}(\mathcal{N})\) for any affine open with lift of Frobenius \(\Phi\)._
In order to show that the (relative) de Rham-Witt complex computes the pushforward, we need to define the (relative) de Rham-Witt resolution. Let \(\varphi:X\to Y\) be a smooth morphism of relative dimension \(d\). Via the natural quotient map \(W\Omega_{X}^{1}\to W\Omega_{X/Y}^{1}\), the de Rham-Witt connection on \((\widehat{\mathcal{D}}_{W(X)}/p^{r})_{c-\mathrm{acc}}\) defines an integrable connection
\[\nabla:(\widehat{\mathcal{D}}_{W(X)}^{(0)}/p^{r})_{c-\mathrm{acc}}\to(\widehat {\mathcal{D}}_{W(X)}^{(0)}/p^{r})_{c-\mathrm{acc}}\widehat{\otimes}_{\mathcal{ O}_{W(X)}}W\Omega_{X/Y}^{1}/p^{r}\]
and so we have the associated relative de Rham-Witt complex
\[(\widehat{\mathcal{D}}_{W(X)}^{(0)}/p^{r})_{c-\mathrm{acc}}\widehat{\otimes}_{ \mathcal{O}_{W(X)}}W\Omega_{X/Y}/p^{r}\]
concentrated in degrees \(\{0,\dots,d\}\). We also define the object \((\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X)}/p^{r})_{\mathrm{c-acc}}\) to be the completion of \((\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),\mathrm{acc}}/p^{r})\) along the filtration induced by \(\{W\varphi^{-1}(V^{i}(\mathcal{O}_{W(Y)}/p^{r}))\}\).
We note that since \(\varphi\) is smooth, the right \(\mathcal{D}^{(0)}_{X}\)-module \(\mathcal{D}^{(0)}_{Y\gets X}\) is coherent. This implies (via the description of 4.26) that \((\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X)}/p^{r})_{\text{c-acc}}\) is already complete along the filtration induced from \(\{V^{i}(\mathcal{O}_{W(X)}/p^{r})\}\). When \(Y\) is a point, we have
\[(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X)}/p^{r})_{\text{c-acc}}=W \omega_{X}/p^{r}\]
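This matches the classical picture, which we recall only for comparison: for a smooth morphism \(\varphi:X\to Y\) one has \(\mathcal{D}^{(0)}_{Y\gets X}=\omega_{X/Y}\otimes_{\mathcal{O}_{X}}\varphi^{*}\mathcal{D}^{(0)}_{Y}\) as a \((\varphi^{-1}(\mathcal{D}^{(0)}_{Y}),\mathcal{D}^{(0)}_{X})\)-bimodule, and when \(Y\) is a point this reduces to

\[\mathcal{D}^{(0)}_{Y\gets X}=\omega_{X},\]

the canonical sheaf with its natural right \(\mathcal{D}^{(0)}_{X}\)-module structure; the displayed identification is its Witt-vector analogue.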
Then we have
**Theorem 4.39**.: _There is a quasi-isomorphism_
\[(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r})_{c-acc}\widehat{\otimes}_{ \mathcal{O}_{W(X)/p^{r}}}W\Omega_{X/Y}/p^{r}[-d]\tilde{\to}(\widehat{\mathcal{ D}}^{(0)}_{W(Y)\gets W(X)}/p^{r})_{c-acc}\]
Proof.: First, we'll construct a map
\[(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r})_{c-\text{acc}}\widehat{\otimes}_{ \mathcal{O}_{W(X)/p^{r}}}W\omega_{X/Y}/p^{r}\to(\widehat{\mathcal{D}}^{(0)}_{ W(Y)\gets W(X)}/p^{r})_{\text{c-acc}}\]
and show that the resulting augmented complex is exact. This map is constructed as follows: via the composition of morphisms there is a map
\[\mathcal{E}nd_{W(k)}(W\omega_{X})\times\mathcal{H}om_{\mathcal{O}_{W(X)}}( \varphi^{*}(W\omega_{Y}),W\omega_{X})\to\mathcal{H}om_{W(k)}(\varphi^{*}(W \omega_{Y}),W\omega_{X})\]
which, via the embeddings \(W\omega_{X/Y}\to\mathcal{H}om_{W(k)}(\varphi^{-1}(W\omega_{Y}),W\omega_{X})\) and \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\to\mathcal{E}nd_{W(k)}(W\omega_{X})\) yields a map
\[\widehat{\mathcal{D}}^{(0)}_{W(X)}\otimes_{\mathcal{O}_{W(X)/p^{r}}}W\omega_{ X/Y}\to\mathcal{H}om_{W(k)}(\varphi^{*}(W\omega_{Y}),W\omega_{X})\]
and, after completion and accessibilization, one sees by looking in local coordinates that the image of this map is contained in \(\lim_{r}\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),\text{c-acc}}/p^{r}\) (when \(Y\) is a point this map is just the action map coming from the right action of \(\widehat{\mathcal{D}}^{(0)}_{W(X)}\) on \(W\omega_{X}\)).
Now, to show the required exactness it suffices to work after applying \(\otimes^{L}_{W_{r}(k)}k\). Working locally, we have an isomorphism
\[(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r})_{c-\text{acc}}\widehat{\otimes}_{ \mathcal{O}_{W(X)/p^{r}}}W\Omega^{i}_{X/Y}/p^{r}\tilde{=}\Phi^{!}(\widehat{ \mathcal{D}}^{(0)}_{\mathfrak{X}}/p^{r})\widehat{\otimes}_{\mathcal{O}_{ \mathfrak{X}}/p^{r}}W\Omega^{i}_{X/Y}/p^{r}\]
\[:=\lim_{\leftarrow m}\Big(\big(\Phi^{!}(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}/p^{r})/F^{m}\big)\otimes_{\mathcal{O}_{\mathfrak{X}}/p^{r}}W\Omega^{i}_{X/Y}/p^{r}\Big)\]
and, since \((\Phi^{!}(\widehat{\mathcal{D}}^{(0)}_{\mathfrak{X}}/p^{r}))/F^{m}\) is locally free over \(\mathcal{O}_{\mathfrak{X}}/p^{r}\), we see that this sheaf is flat over \(W_{r}(k)\). Therefore it suffices to analyze the complex
\[(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p)_{c-\text{acc}}\widehat{\otimes}_{ \mathcal{O}_{W(X)/p}}W\Omega_{X/Y}/p\]
Again working locally, we have the isomorphism
\[(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p)_{c-\text{acc}}\widehat{\otimes}_{ \mathcal{O}_{W(X)/p}}W\Omega_{X/Y}/p\]
\[\tilde{\to}R\lim_{\leftarrow m}\big(\Phi^{!}_{m}(\mathcal{D}^{(0)}_{X})\otimes_{\mathcal{O}_{X}}W\Omega_{X/Y}/p\big)\]
(where \(\Phi^{!}_{m}(\mathcal{D}^{(0)}_{X})=\Phi^{!}(\mathcal{D}^{(0)}_{X})/V^{m}( \mathcal{O}_{W(X)}/p)\)). In fact we shall show that
\[\mathcal{H}^{i}\big(\Phi^{!}_{m}(\mathcal{D}^{(0)}_{X})\otimes_{\mathcal{O}_{X}}W\Omega^{\cdot}_{X/Y}/p\big)=0\]
for \(i<d\) and
\[\mathcal{H}^{d}\big(\Phi^{!}_{m}(\mathcal{D}^{(0)}_{X})\otimes_{\mathcal{O}_{X}}W\Omega^{\cdot}_{X/Y}/p\big)\tilde{\to}\Phi^{!}_{m}(W\varphi^{-1}(\mathcal{O}_{W(Y)}/p)\widehat{\otimes}_{\varphi^{-1}(\mathcal{O}_{Y})}\mathcal{D}^{(0)}_{Y\gets X})\]
where, in the right hand side, \(\Phi^{!}_{m}\) refers to \(\otimes_{\mathcal{D}^{(0)}_{X}}\Phi^{!}_{m}(\mathcal{D}^{(0)}_{X})\) via the right \(\mathcal{D}^{(0)}_{X}\)-module structure on \(\mathcal{D}^{(0)}_{Y\gets X}\), and the completion is with respect to the \(V\)-adic completion of \(\mathcal{O}_{W(Y)}\). As each tower \(\{\Phi^{!}_{m}(\mathcal{D}^{(0)}_{X})\otimes_{\mathcal{O}_{X}}W\Omega^{\cdot}_{X/Y}/p\}_{m}\) is a tower of quasicoherent sheaves satisfying the Mittag-Leffler condition, we may take the inverse limit over \(m\) (c.f. [28], lemma 1.1.6), so the result then follows directly from 4.26.
To prove the required equalities, let \((\mathcal{M},\nabla)\) be any integrable connection on \(X\). Then, as proved above, there is an induced de Rham-Witt connection on \(\widehat{\Phi}^{*}\mathcal{M}\), and therefore a de Rham-Witt complex
\[\widehat{\Phi}^{*}\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{W(X)/p}}W\Omega^ {\cdot}_{X/Y}/p\tilde{\to}\mathcal{M}\otimes_{\mathcal{O}_{X}}W\Omega^{\cdot }_{X/Y}/p\]
We note that the quotient by \(\mathcal{M}\otimes_{\mathcal{O}_{X}}V(W\Omega^{\cdot}_{X/Y}/p)\) induces a map
\[\mathcal{M}\otimes_{\mathcal{O}_{X}}W\Omega^{\cdot}_{X/Y}/p\to\mathcal{M} \otimes_{\mathcal{O}_{X}}(\Omega^{\cdot}_{X/Y}\widehat{\otimes}_{\varphi^{-1 }(\mathcal{O}_{Y})}W\varphi^{-1}(\mathcal{O}_{W(Y)}/p))\]
If \(\mathcal{M}\) is a vector bundle with nilpotent connection, this map induces an isomorphism
\[\mathcal{H}^{i}(\mathcal{M}\otimes_{\mathcal{O}_{X}}W\Omega^{\cdot}_{X/Y}/p) \tilde{\to}\mathcal{H}^{i}(\mathcal{M}\otimes_{\mathcal{O}_{X}}\Omega^{\cdot }_{X/Y})\widehat{\otimes}_{\varphi^{-1}(\mathcal{O}_{Y})}W\varphi^{-1}( \mathcal{O}_{W(Y)}/p)\]
(via Cartier descent, this follows formally from the case \(\mathcal{O}_{X}\), where it is a basic computation, c.f. [32], section 3.2); more precisely it is induced from
\[\mathcal{H}^{i}(\mathcal{M}\otimes_{\mathcal{O}_{X}}W_{j}\Omega^{\cdot}_{X/Y} /p)\tilde{\to}\mathcal{H}^{i}(\mathcal{M}\otimes_{\mathcal{O}_{X}}\Omega^{ \cdot}_{X/Y})\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}W\varphi^{-1}(\mathcal{O} _{W_{j}(Y)}/p)\]
by taking the inverse limit over \(j\).
Now, the quotient of \(\mathcal{D}^{(0)}_{X}\) along any power of the central ideal \(\mathcal{I}_{1}\) defining the nilpotence condition is such a vector bundle with nilpotent connection. Thus we obtain for each \(m,j,s\)
\[\mathcal{H}^{i}(\Phi^{!}_{m}(\mathcal{D}^{(0)}_{X}/\mathcal{I}^{s})\otimes_{ \mathcal{O}_{X}}W_{j}\Omega^{\cdot}_{X/Y}/p)\tilde{\to}\mathcal{H}^{i}(\Phi^{! }_{m}(\mathcal{D}^{(0)}_{X}/\mathcal{I}^{s})\otimes_{\mathcal{O}_{X}}\Omega^ {\cdot}_{X/Y})\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}W\varphi^{-1}(\mathcal{O }_{W_{j}(Y)}/p)\]
and, as everything in question is a finite right \(\mathcal{D}^{(0)}_{X}\)-module we may take the inverse limit over \(s\) and obtain
\[\mathcal{H}^{i}(\Phi^{!}_{m}(\mathcal{D}^{(0)}_{X,\mathrm{crys}})\otimes_{ \mathcal{O}_{X}}W_{j}\Omega^{\cdot}_{X/Y}/p)\tilde{\to}\mathcal{H}^{i}(\Phi^{! }_{m}(\mathcal{D}^{(0)}_{X,\mathrm{crys}})\otimes_{\mathcal{O}_{X}}\Omega^{ \cdot}_{X/Y})\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}W\varphi^{-1}(\mathcal{O} _{W_{j}(Y)}/p)\]
this vanishes for \(i<d\) and is isomorphic to \(\Phi^{!}_{m}(W\varphi^{-1}(\mathcal{O}_{W_{j}(Y)}/p)\otimes_{\varphi^{-1}( \mathcal{O}_{Y})}\mathcal{D}^{(0)}_{Y\gets X,\mathrm{crys}})\) when \(i=d\). This shows that both the kernel and the cokernel of
\[\mathcal{H}^{i}(\Phi^{!}_{m}\mathcal{D}^{(0)}_{X}\otimes_{\mathcal{O}_{X}}W_{j }\Omega^{\cdot}_{X/Y}/p)\tilde{\to}W\varphi^{-1}(\mathcal{O}_{W_{j}(Y)}/p) \otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\mathcal{H}^{i}(\Phi^{!}_{m}\mathcal{D} ^{(0)}_{X}\otimes_{\mathcal{O}_{X}}\Omega^{\cdot}_{X/Y})\]
are supported, as modules over \(\mathcal{Z}(\mathcal{D}^{(0)}_{X})\cong\mathcal{O}_{T^{*}X^{(1)}}\), away from the zero section.
To show that the kernel and cokernel are actually \(0\), we argue as follows: passing to the algebraic closure of \(k\), we consider a closed point \(x\in T^{*}X^{(1)}\). As \(X\) is affine and has local coordinates, we have that \(x\) is contained in the subscheme defined by the ideal \((\partial_{1}-a_{1}^{p},\partial_{2}-a_{2}^{p},\dots,\partial_{n}-a_{n}^{p})\) for some \(a_{i}\in k\). The algebra of global sections \(\Gamma(\mathcal{D}^{(0)}_{X})\) possesses an automorphism \(\chi\) which preserves \(\Gamma(\mathcal{O}_{X})\) and sends \(\partial_{i}\) to \(\partial_{i}-a_{i}\) (this is clear from the defining relations on \(\mathcal{D}^{(0)}_{X}\)). This automorphism preserves \(\mathcal{Z}(\mathcal{D}^{(0)}_{X})\), and the associated action on \(T^{*}X^{(1)}\) interchanges the zero section \(X^{(1)}\) with the subscheme defined by \((\partial_{1}-a_{1}^{p},\partial_{2}-a_{2}^{p},\dots,\partial_{n}-a_{n}^{p})\).
Now, we may repeat the above argument for \((\mathcal{D}^{(0)}_{X})^{\chi}\), the \(\mathcal{D}^{(0)}_{X}\)-module whose left and right module structure are twisted by the action of \(\chi\). This shows that the kernel and cokernel of
\[\mathcal{H}^{i}(\Phi^{!}_{m}\mathcal{D}^{(0)}_{X}\otimes_{\mathcal{O}_{X}}W_{j }\Omega^{\cdot}_{X/Y}/p)\tilde{\to}W\varphi^{-1}(\mathcal{O}_{W_{j}(Y)}/p) \otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\mathcal{H}^{i}(\Phi^{!}_{m}\mathcal{D} ^{(0)}_{X}\otimes_{\mathcal{O}_{X}}\Omega^{\cdot}_{X/Y})\]
are supported away from the subscheme defined by the ideal \((\partial_{1}-a_{1}^{p},\partial_{2}-a_{2}^{p},\dots,\partial_{n}-a_{n}^{p})\). As this is true for all \((a_{1},\dots,a_{n})\) in \(k\), we see that in fact the support of the kernel and cokernel are empty, as required.
Now we want to compare the (relative) de Rham-Witt cohomology with the relative pushforward constructed above. To do so, we need the following basic fact:
**Theorem 4.40**.: _Fix \(r\geq 1\) and let \(\mathcal{M}^{\cdot}\) be a bounded accessible complex over \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\); and let \(\widehat{\mathcal{M}}^{\cdot}\) be its derived completion (as discussed above 3.25). Consider the functor_
\[\widehat{\int}_{W\varphi}\mathcal{M}^{\cdot}:=R(W\varphi)_{*}(\widehat{ \mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\text{acc}}/p^{r}\widehat{\otimes}^ {L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}}^{\cdot})\]
_where on the right we define_
\[\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\text{acc}}/p^{r}\widehat{ \otimes}^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}} ^{\cdot}\]
\[:=\text{holim}_{i}(W\varphi^{-1}((\mathcal{O}_{W(Y)}/p^{r})/V^{i})\otimes^{L}_{W\varphi^{-1}(\mathcal{O}_{W(Y)}/p^{r})}\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\text{acc}}/p^{r}\otimes^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}}^{\cdot})\]
_Then there is a canonical isomorphism_
\[\text{holim}_{i}((\mathcal{O}_{W(Y)}/p^{r})/V^{i})\otimes^{L}_{\mathcal{O}_{W(Y)}/p^{r}}\int_{W\varphi}\mathcal{M}^{\cdot}\tilde{\to}\widehat{\int}_{W\varphi}\mathcal{M}^{\cdot}\]
Proof.: There is a canonical map
\[\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),\text{acc}}/p^{r}\otimes^{L} _{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}\mathcal{M}^{\cdot}\to\widehat{ \mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\text{acc}}/p^{r}\widehat{\otimes}^ {L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}}^{\cdot}\]
which induces a map
\[R(W\varphi)_{*}(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),\text{acc}}/p ^{r}\otimes^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}\mathcal{M}^{\cdot}) \to\widehat{\int}_{W\varphi}\mathcal{M}^{\cdot}\]
Work locally on \(Y\); choose a coordinatized lift of Frobenius and an associated \(\Phi\). We will show that the complex \(\widehat{\int}_{W\varphi}\mathcal{M}^{\cdot}\) is a bounded complex which is quasi-isomorphic to a complex of the form \(\widehat{\Phi}^{*}\mathcal{K}^{\cdot}\). This shows that the map above factors through \(\int_{W\varphi}\mathcal{M}^{\cdot}\); we then show that in fact \(\Phi^{*}\mathcal{K}^{\cdot}\tilde{\to}\int_{W\varphi}\mathcal{M}^{\cdot}\), which immediately implies the result.
Begin by assuming that \(X\) is also affine. As \(\varphi\) is smooth it is locally compatible with a lift of Frobenius, which we also call \(\Phi\) and so by 4.26 we have
\[\widehat{\mathcal{D}}^{(0)}_{W(Y)\leftarrow W(X),\text{acc}}/p^{r}\tilde{=}\varphi^{-1}(\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}})\widehat{\otimes}^{L}_{\varphi^{-1}(\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}})}\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}\leftarrow\mathfrak{X}_{r}}\otimes^{L}_{\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}}\Phi^{!}\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\]
where the \(\widehat{\otimes}\) on the left indicates completion with respect to \(W\varphi^{-1}(V^{i}(\mathcal{O}_{W(Y)}/p^{r}))\). Writing \(\widehat{\mathcal{M}}^{\cdot}=\widehat{\Phi}^{*}\mathcal{N}^{\cdot}\) we see that
\[\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\text{acc}}/p^{r}\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}}^{\cdot}\tilde{=}\varphi^{-1}(\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}})\widehat{\otimes}^{L}_{\varphi^{-1}(\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}})}\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}\leftarrow\mathfrak{X}_{r}}\otimes^{L}_{\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}}\mathcal{N}^{\cdot}\]
Now, \(\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}\leftarrow\mathfrak{X}_{r}}\otimes^{L}_{\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}}\mathcal{N}^{\cdot}\) is a bounded complex of \(\varphi^{-1}(\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}})\)-modules; if we represent it by a bounded complex \(\mathcal{F}^{i}\), then the \(i\)th term of the above is given by

\[\varphi^{-1}(\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}})\widehat{\otimes}_{\varphi^{-1}(\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}})}\mathcal{F}^{i}\]
which is simply an infinite product of copies of \(\mathcal{F}^{i}\). So we see easily that
\[R\varphi_{*}(\varphi^{-1}(\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}})\widehat{\otimes}_{\varphi^{-1}(\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}})}\mathcal{F}^{i})\tilde{\to}\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}}\widehat{\otimes}_{\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}}}R\varphi_{*}(\mathcal{F}^{i})\]
and the analogous result follows easily for a bounded complex (by induction on the cohomological length), so that we have
\[R\varphi_{*}(\varphi^{-1}(\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}})\widehat{\otimes}^{L}_{\varphi^{-1}(\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}})}\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}\leftarrow\mathfrak{X}_{r}}\otimes^{L}_{\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}}\mathcal{N}^{\cdot})\tilde{\to}\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}}\widehat{\otimes}^{L}_{\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}}}R\varphi_{*}(\mathcal{D}^{(0)}_{\mathfrak{Y}_{r}\leftarrow\mathfrak{X}_{r}}\otimes^{L}_{\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}}\mathcal{N}^{\cdot})\]
This shows both that \(\widehat{\int}_{W\varphi}\mathcal{M}^{\cdot}\) is quasi-isomorphic to a complex of the form \(\widehat{\Phi}^{*}\mathcal{K}^{\cdot}\), and that \(\Phi^{*}\mathcal{K}^{\cdot}\tilde{\to}\int_{W\varphi}\mathcal{M}^{\cdot}\), which is what we wanted.
To get the result for a general \(X\), simply cover it with affines; and use the fact that if \(X=U\cup V\) then there is a distinguished triangle
\[\mathcal{M}^{\cdot}\to(j_{U})_{*}\mathcal{M}^{\cdot}|_{U}\oplus(j_{V})_{*}\mathcal{M}^{\cdot}|_{V}\to(j_{U\cap V})_{*}\mathcal{M}^{\cdot}|_{U\cap V}\]
where \(j_{U},j_{V}\),\(j_{U\cap V}\) denote the inclusions from those open subsets, respectively; this allows one to do induction on the number of open affines and deduce the result.
Putting it all together, we conclude:
**Corollary 4.41**.: _Let \(\varphi:X\to Y\) be smooth, and let \(\mathcal{M}\in\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\text{mod}_{acc,qcoh}\). Let \(\widehat{\mathcal{M}}\widehat{\otimes}_{\mathcal{O}_{W(X)}/p^{r}}W\Omega_{X/Y}/p^{r}\) be the associated de Rham-Witt complex. There is an isomorphism of sheaves on \(W(Y)_{p^{r}=0}\)_

\[\widehat{\int}_{W\varphi}\mathcal{M}[d]\tilde{\to}RW\varphi_{*}(\widehat{\mathcal{M}}\widehat{\otimes}_{\mathcal{O}_{W(X)}/p^{r}}W\Omega_{X/Y}/p^{r})\]
_In particular, if \(\mathcal{M}\) is nilpotent and quasicoherent, and \(\widehat{\mathcal{M}}=\widehat{\epsilon}(\mathcal{N})\) for a quasicoherent crystal \(\mathcal{N}\), then for each \(i\geq 0\) we have_

\[\widehat{\epsilon}(R^{i}\varphi_{*,\mathrm{crys}}\mathcal{N})\tilde{\to}R^{i}W\varphi_{*}(\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{W(X)}/p^{r}}W\Omega_{X/Y}/p^{r})\]
Proof.: The second sentence follows from the first by 4.34. To prove the first, we shall invoke the previous two results. First, by Theorem 4.39 there is a quasi-isomorphism
\[\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\text{acc}}/p^{r}\otimes^{L} _{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}}\tilde{\to}(( \widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r})_{c-\text{acc}}\widehat{\otimes}_{ \mathcal{O}_{W(X)/p^{r}}}W\Omega_{X/Y}/p^{r})\otimes^{L}_{\widehat{\mathcal{D }}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}}\]
Now, the lemma below ensures that, for each \(i\), \((\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r})_{c-\text{acc}}\widehat{\otimes}_{ \mathcal{O}_{W(X)/p^{r}}}W\Omega^{i}_{X/Y}/p^{r}\) is flat over \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}\) (in fact, locally, it is an inverse limit of a surjective system of projective modules of the form \(\Phi^{*}\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\)). So on the right hand side we actually have the complex whose terms are
\[(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r})_{c-\text{acc}}\widehat{\otimes}_{ \mathcal{O}_{W(X)/p^{r}}}W\Omega^{i}_{X/Y}/p^{r}\otimes^{L}_{\widehat{\mathcal{D }}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}}\]
which we can complete (with respect to the natural filtration on the tensor product) and thus get a map to the complex whose terms are
\[(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r})_{c-\mathrm{acc}}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}}W\Omega^{i}_{X/Y}/p^{r}\otimes^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}}\tilde{\to}W\Omega^{i}_{X/Y}/p^{r}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}}\widehat{\mathcal{M}}\]
(the last isomorphism is because \(\mathcal{M}\) is accessible). Summing up, we've obtained a map
\[\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\mathrm{acc}}/p^{r}\otimes^ {L}_{\widehat{\mathcal{B}}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}}\to W \Omega_{X/Y}/p^{r}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}}\widehat{ \mathcal{M}}\]
To apply the previous result, we have to take the derived completion with respect to \(\{W\varphi^{-1}V^{i}(\mathcal{O}_{W(Y)}/p^{r})\}\) (i.e., apply \(\mathrm{holim}_{i}\) of \(W\varphi^{-1}((\mathcal{O}_{W(Y)}/p^{r})/V^{i})\otimes^{L}_{W\varphi^{-1}(\mathcal{O}_{W(Y)}/p^{r})}(?)\)). However, it is not difficult to see that \(W\Omega_{X/Y}/p^{r}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}}\widehat{\mathcal{M}}\) is already derived complete with respect to \(\{W\varphi^{-1}V^{i}(\mathcal{O}_{W(Y)}/p^{r})\}\) (for instance, one may work locally and relate the \(\{V^{i}(\mathcal{O}_{W(Y)}/p^{r})\}\) to the filtration \(\{F^{i}(\mathcal{O}_{W(Y)}/p^{r})\}\) as in the proof of 3.25). Thus we actually obtain a map
\[\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\mathrm{acc}}/p^{r}\widehat{ \otimes}^{L}_{\widehat{\mathcal{B}}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}} \to W\Omega_{X/Y}/p^{r}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}} \widehat{\mathcal{M}} \tag{4.7}\]
The result follows if we can show that it is a quasi-isomorphism. Working locally, we can assume \(\mathcal{M}=\Phi^{*}\mathcal{N}\). We begin in the case \(r=1\). In that case, we remark that if \(\mathcal{M}\) is any complex of accessible quasicoherent modules, by applying the above construction to each \(\mathcal{M}^{i}\) and taking the total complex, we obtain a morphism
\[\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\mathrm{acc}}/p\widehat{ \otimes}^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p}\widehat{\mathcal{M}} \to W\Omega_{X/Y}/p\widehat{\otimes}^{L}_{\mathcal{O}_{W(X)/p}}\widehat{ \mathcal{M}}. \tag{4.8}\]
(this works as \(\mathcal{M}\) is accessible and \(W\Omega_{X/Y}/p\) is flat over \(\mathcal{O}_{X}\) by the lemma below) and so for each \(j\) we obtain an induced map
\[\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\mathrm{acc}}/p\widehat{ \otimes}^{L}_{\widehat{\mathcal{B}}^{(0)}_{W(X)}/p}\widehat{\mathcal{M}} \to W_{j}\Omega_{X/Y}/p\widehat{\otimes}^{L}_{\mathcal{O}_{W_{j}(X)/p}} \mathcal{M}_{j}\]
(where \(\mathcal{M}_{j}=\mathcal{M}\otimes^{L}_{\mathcal{O}_{W(X)}/p}(\mathcal{O}_{W(X)}/p/V^{j})\)). Consider the set \(\mathcal{S}\) of complexes \(\mathcal{M}^{\cdot}\) for which this map is a quasi-isomorphism for all \(j\). By the previous theorem we have
\[\mathcal{H}^{i}(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\mathrm{acc}}/p\widehat{\otimes}^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p}\widehat{\mathcal{M}})\tilde{\to}\mathcal{H}^{i}(\Omega_{X/Y}\otimes\mathcal{N},\nabla)\]
So that, by Theorem 4.39, \(\mathcal{M}=\Phi^{*}\mathcal{D}^{(0)}_{X}\in\mathcal{S}\). It is also clear that \(\mathcal{S}\) is closed under cones, arbitrary sums, and summands. But since it contains \(\Phi^{*}\mathcal{D}^{(0)}_{X}\), it must therefore\({}^{12}\) contain any quasi-coherent accessible complex; taking the complex to be in a single degree proves the result when \(r=1\). It then follows, by induction on the cohomological length, that (4.8) is an isomorphism for bounded complexes.
Footnote 12: By the fact that \(\Phi^{*}\mathcal{D}^{(0)}_{X}\), along with all of its shifts, form a set of compact generators for \(D_{\mathrm{acc},\mathrm{qcoh}}(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p-\mathrm{mod})\)
Now let \(r\geq 1\). To prove that (4.7) is an isomorphism, we regard both sides as complexes of \(W(k)\)-modules and apply \(\otimes^{L}_{W(k)}k\) to both sides. By the argument of 4.25 (where the role of \(\widehat{\mathcal{D}}^{(m)}_{W(Y)\gets W(X)}\) is played by \(\lim_{r}\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\mathrm{acc}}/p^{r}\)), we have
\[(\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c-\mathrm{acc}}/p^{r}\widehat{ \otimes}^{L}_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}}\widehat{\mathcal{M}}) \otimes^{L}_{W(k)}k\tilde{\to}\widehat{\mathcal{D}}^{(0)}_{W(Y)\gets W(X),c- \mathrm{acc}}/p\widehat{\otimes}^{L}_{\widehat{\mathcal{B}}^{(0)}_{W(X)}/p}( \widehat{\mathcal{M}}\otimes^{L}_{W(k)}k)\]
and similarly for \(W\Omega^{i}_{X/Y}/p^{r}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}}\widehat{ \mathcal{M}}\). As the complex \((\widehat{\mathcal{M}}\otimes^{L}_{W(k)}k)\) is bounded, the result follows from (4.8) for bounded complexes.
In the proof, we needed the
**Lemma 4.42**.: _Suppose \(A\) and \(B\) are smooth \(k\)-algebras so that \(B\to A\) is smooth. Let \(A\) possess local coordinates, and let \(\Phi:\mathcal{A}\to W(A)\) be the associated map. For each \(i\geq 0\), the module \(W\Omega^{i}_{A/B}\) is (faithfully) flat over \(\mathcal{A}\). Further, for each \(j\geq 0\) the module \(W_{j}\Omega^{i}_{A/B}/p\) is finite flat over \(A\)._
Proof.: It is enough to show that \(W\Omega^{i}_{A/B}/p\) is faithfully flat over \(A\), where \(A\) acts via the embedding \(\Phi:A\to W(A)/p\); and therefore it is enough to show the second statement of the lemma. As \(W\Omega^{i}_{A/B}/p\) is local for the etale topology, we can assume that \(A=B[T_{1},\dots T_{n}]\); and, by extending the ground field, that \(k\) is algebraically closed.
The module \(W\Omega^{i}_{A/B}/p\) is, by definition, the inverse limit of \(W_{j}\Omega^{i}_{A/B}/p\), the quotient of \(W\Omega^{i}_{A/B}/p\) by the module \(V^{j}(W\Omega^{i}_{A/B}/p):=V^{j}(W\Omega^{i}_{A/B})/p\). Each \(\mathrm{gr}_{j}(W\Omega^{i}_{A/B}/p)=V^{j}(W\Omega^{i}_{A/B}/p)/V^{j+1}(W\Omega^{i}_{A/B}/p)\) is a module over
\[(W(A)/p)/V^{j+1}(W(A)/p)=W_{j+1}(A)/p\]
In general this action does not factor through the quotient \(W_{j+1}(A)/p/V(W_{j+1}(A)/p)=A\). We therefore refine this filtration by defining
\[\mathrm{gr}_{j,t}(W\Omega^{i}_{A/B}/p)=V^{t}(W(A)/p)\cdot\mathrm{gr}_{j}(W\Omega^{i}_{A/B}/p)\]
for \(0\leq t\leq j+1\). Then the action of \(W_{j+1}(A)/p\) on each \(\mathrm{gr}_{j,t}(W\Omega^{i}_{A/B}/p)/\mathrm{gr}_{j,t+1}(W\Omega^{i}_{A/B}/p)\) factors through \(A\); and this action of \(A\) agrees with the action of \(A\) on \(\mathrm{gr}_{j,t}(W\Omega^{i}_{A/B}/p)/\mathrm{gr}_{j,t+1}(W\Omega^{i}_{A/B}/p)\) defined via \(\Phi_{1}\). Thus \((W\Omega^{i}_{A/B}/p)/V^{j}(W\Omega^{i}_{A/B}/p)\) is filtered by finitely many \(A\)-modules \(\mathrm{gr}_{j,t}(W\Omega^{i}_{A/B}/p)/\mathrm{gr}_{j,t+1}(W\Omega^{i}_{A/B}/p)\) and so it suffices to show each one is projective over \(A\).
To do that, note that any automorphism \(\sigma\) of \(A\) which preserves \(B\) yields an isomorphism of \(A\)-modules
\[\sigma^{*}\mathrm{gr}_{j,t}(W\Omega^{i}_{A/B}/p)/\mathrm{gr}_{j,t+1}(W\Omega^ {i}_{A/B}/p)\bar{\to}\mathrm{gr}_{j,t}(W\Omega^{i}_{A/B}/p)/\mathrm{gr}_{j,t+ 1}(W\Omega^{i}_{A/B}/p)\]
Indeed, let \(\sigma:W_{j+1}(A)/p\to W_{j+1}(A)/p\) denote the induced isomorphism. Then we have \(\sigma^{*}\mathrm{gr}_{j}(W\Omega^{i}_{A/B}/p)\bar{\to}\mathrm{gr}_{j}(W\Omega^{i}_{A/B}/p)\) by the functoriality of the de Rham-Witt construction. The result then follows for \(\mathrm{gr}_{j,t}(W\Omega^{i}_{A/B}/p)\) from the fact that the automorphism \(\sigma\) preserves \(V^{t}(W(A)/p)\). As the action of automorphisms of \(A=B[T_{1},\dots T_{n}]\) is transitive on closed points in each fibre over \(\mathrm{Spec}(B)\), we see that the finite \(A\)-module \(\mathrm{gr}_{j,t}(W\Omega^{i}_{A/B}/p)/\mathrm{gr}_{j,t+1}(W\Omega^{i}_{A/B}/p)\) is necessarily projective (the geometric fibres all have the same rank) and so the result follows.
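To put Corollary 4.41 in perspective (an illustration only, invoking classical facts): taking \(\mathcal{M}\) to be the structure sheaf with its trivial connection and \(Y=\mathrm{Spec}(k)\), the corollary expresses crystalline pushforward in terms of de Rham-Witt cohomology, in the spirit of the classical comparison of Illusie and Etesse

\[H^{i}_{\mathrm{crys}}(X/W_{r}(k))\tilde{=}H^{i}(X,W_{r}\Omega^{\cdot}_{X})\]

(note that the classical statement is phrased with \(W_{r}\Omega^{\cdot}_{X}\) rather than with the reduction \(W\Omega^{\cdot}_{X}/p^{r}\) appearing here).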
## 5. The algebra \(\widehat{\mathcal{U}}\), and applications
In this chapter we give an alternate construction of accessibility on the scheme \(W(X)/p\) via an algebra \(\widehat{\mathcal{U}}\), which is a kind of enveloping algebra for derivations of \(\mathcal{O}_{W(X)}/p\). This allows us to prove 3.16 in all positive characteristics, in particular avoiding Theorem 3.6. As mentioned above, the key to the proof is the fact that \(W(A)/p\to A\) is a square zero extension; the argument is really a rephrasing of Grothendieck's fundamental result which says that a flat connection extends uniquely over a square zero extension (c.f. [7], the introduction to chapter 2 for
a nice discussion of this). Once this is done, we turn to a deeper study of the relationship between de Rham-Witt connections and accessible \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p\)-modules. Working mod \(p\), we will use \(\widehat{\mathcal{U}}\) to show that there is (essentially) an equivalence between the two. Then we will lift this mod \(p^{r}\) and in particular prove Theorem 1.15.
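The square-zero property just invoked is an elementary Witt vector computation: for \(x,y\in W(A)\) we have

\[V(x)\cdot V(y)=V(x\cdot FV(y))=V(pxy)=pV(xy),\]

so the ideal \(V(W(A))\) squares to zero in \(W(A)/p\).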
### The algebra \(\widehat{\mathcal{U}}\)
Until further notice, \(A\) is a smooth \(k\)-algebra which possesses local coordinates. We begin by constructing an algebra of differential operators over \(W(A)/p\), which is closely related to \(\widehat{\mathcal{D}}^{(0)}_{W(A)}/p\), and which we will use to analyze the pull-back from \(\mathcal{D}^{(0)}_{A}\)-modules. Let us set some notation:
**Definition 5.1**.: For each \(i\geq 1\) let \(K_{i}=V^{i}(W(A))\); and let \(\mathcal{I}_{i}=V^{i}(W(A)/p)\). Define \(\mathcal{T}^{\prime}_{W(A)/p}\) to be the module of derivations \(\partial\) of \(W(A)/p\) satisfying \(\partial(\mathcal{I}_{i})\subset\mathcal{I}_{i}\) for all \(i\).

Clearly \(\mathcal{T}^{\prime}_{W(A)/p}\) is a Lie algebra under the natural bracket of derivations. However, we have more:
**Lemma 5.2**.: _Let \(\mathcal{T}^{\prime}_{W(A)/p}(1)\) be the sub-sheaf of Lie-algebras of \(\mathcal{T}^{\prime}_{W(A)/p}\) consisting of derivations whose image lies in \(\mathcal{I}_{1}\)._

_1) There is a natural map \(\mathcal{T}^{\prime}_{W(A)/p}\to\mathcal{T}_{A}\). The kernel of this map is \(\mathcal{T}^{\prime}_{W(A)/p}(1)\)._

_2) For \(\partial_{1},\partial_{2}\in\mathcal{T}^{\prime}_{W(A)/p}(1)\), the composition \(\partial_{1}\circ\partial_{2}\in\mathcal{T}^{\prime}_{W(A)/p}(1)\) as well. Thus \(\mathcal{T}^{\prime}_{W(A)/p}(1)\) carries the structure of a sheaf of (non-unital) algebras._
Proof.: 1) Since any \(\partial\in\mathcal{T}^{\prime}_{W(A)/p}\) takes \(\mathcal{I}_{1}\) to itself, it induces a derivation of \((W(A)/p)/\mathcal{I}_{1}\). But since \((W(A)/p)/\mathcal{I}_{1}=A\) we obtain the result.

2) Since \(\mathcal{I}_{1}\) is a square zero ideal, \(\partial_{1}(x)\cdot\partial_{2}(y)=0\) for all \(x,y\in W(A)/p\) and \(\partial_{1},\partial_{2}\in\mathcal{T}^{\prime}_{W(A)/p}(1)\). Therefore, we have

\[\partial_{1}\circ\partial_{2}(xy)=\partial_{1}\partial_{2}(x)\cdot y+x\cdot\partial_{1}\partial_{2}(y)\]
as claimed.
Now we recall the definition of a Lie-Rinehart enveloping algebra. Let \(S\) be an algebra over a commutative ring \(R\). Denote by \(\mathcal{T}_{S}\) the module of \(R\)-derivations on \(S\), and suppose that \(\mathfrak{a}\) is also an \(S\)-module, equipped with a morphism \(\rho:\mathfrak{a}\to\mathcal{T}_{S}\) which is both \(S\)-linear and a morphism of Lie algebras, and such that

\[[\alpha,s\beta]=s[\alpha,\beta]+\rho(\alpha)(s)\beta\]

for all \(s\) in \(S\) and \(\alpha,\beta\in\mathfrak{a}\). In this situation we have the sheaf of universal enveloping algebras \(\mathcal{U}(\mathfrak{a},\rho)\) (or simply \(\mathcal{U}(\mathfrak{a})\) if \(\rho\) is understood) as constructed in [35], c.f. also [BB] section 1.4. Namely, \(\mathcal{U}(\mathfrak{a},\rho)\) is the \(R\)-algebra generated by \(S\) and \(\mathfrak{a}\) satisfying the relations:

\[\alpha\cdot s-s\cdot\alpha=\rho(\alpha)(s)\]

for \(s\in S\) and \(\alpha\in\mathfrak{a}\); here, the left hand side denotes the multiplication in \(\mathcal{U}(\mathfrak{a},\rho)\) and the right hand side denotes the action of \(\rho(\alpha)\) on \(s\); next we demand

\[\alpha\cdot\beta-\beta\cdot\alpha=[\alpha,\beta]\]

for \(\alpha,\beta\in\mathfrak{a}\), where the bracket on the right denotes the Lie-algebra bracket, and finally

\[s\cdot\alpha=s\alpha\]

for \(s\in S\) and \(\alpha\in\mathfrak{a}\).
To get control of this object, one typically makes the assumption that \(\mathfrak{a}\) is a projective \(S\)-module (c.f., e.g. [1] for examples of this), which allows one to give an explicit description of \(\mathcal{U}(\mathfrak{a},\rho)\). However, this assumption does not hold in the situations of interest in this paper; instead, we make the following:
**Definition 5.3**.: The \(W(A)/p\)-module \(\mathcal{T}^{\prime}_{W(A)/p}\) is a sub-Lie-algebra of the Lie algebra of all derivations \(\mathcal{T}_{W(A)/p}\); it is therefore a Lie algebroid via the inclusion map \(\rho:\mathcal{T}^{\prime}_{W(A)/p}\to\mathcal{T}_{W(A)/p}\). Define the algebra \(\mathcal{U}^{\prime}_{W(A)/p}\) to be the quotient of \(\mathcal{U}(\mathcal{T}^{\prime}_{W(A)/p})\) by the two-sided ideal generated by
\[\partial_{1}\cdot\partial_{2}-\partial_{1}\circ\partial_{2}\]
for all \(\partial_{1},\partial_{2}\in\mathcal{T}^{\prime}_{W(A)/p}(1)\) (here we are using that \(\partial_{1}\circ\partial_{2}\) is a derivation by Lemma 5.2). This algebra is filtered by the two-sided ideals \(\{\mathcal{K}^{(i)}\}\), where \(\mathcal{K}^{(i)}\) is generated by \(V^{i}(W(A)/p)\) and \(\{\partial\in\mathcal{T}^{\prime}_{W(A)/p}\,|\,\partial(W(A)/p)\subset\mathcal{I}_{i}\}\).
In order to discuss local coordinates on this algebra, we choose a lift of Frobenius \(F:\mathcal{A}\to\mathcal{A}\), along with the corresponding \(\Phi:\mathcal{A}\to W(A)\). The reduction mod \(p\) of \(\Phi\) is a splitting of the natural reduction map \(W(A)/p\to A\). Until further notice, we fix such a map and we regard \(A\subset W(A)/p\); in fact this puts \(A\subset W_{r+1}(A)/p\) for each \(r\), and we have
**Lemma 5.4**.: _The inclusion \(A\subset W_{r+1}(A)/p\) makes \(W_{r+1}(A)/p\) a free \(A\)-module. Furthermore, there is an isomorphism of \(A\)-modules_
\[W_{r+1}(A)/p\tilde{=}A\oplus\bigoplus_{i=1}^{r}K_{i}/K_{i+1}\]
Proof.: In fact, one may check easily that a basis of \(W_{r+1}(A)/p\) over \(A\) is given by
\[1\cup\{V^{i}(T^{I})\}_{1\leq i\leq r}\]
where, for a fixed \(i\), the multi-index \(I\) ranges over \(\{(i_{1},\ldots,i_{n})\}\) such that each \(i_{j}\) is contained in \(\{1,\ldots,p^{i}\}\) and at least one \(i_{j}\) is not divisible by \(p\). This directly implies both statements.
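As a sanity check (purely an illustration), take \(A=k[T]\) (so \(n=1\)) and \(r=1\): the only excluded index is \(i_{1}=p\), so the lemma asserts that

\[W_{2}(A)/p=A\cdot 1\oplus\bigoplus_{b=1}^{p-1}A\cdot V(T^{b}),\]

where the \(A\)-module structure on the \(V\)-terms is governed by the projection formula \(x\cdot V(y)=V(F(x)y)\).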
Using the isomorphism of the previous lemma we have
**Lemma 5.5**.: _There is an isomorphism of Lie algebras_
\[\mathcal{T}^{\prime}_{W(A)/p}\tilde{=}\text{Der}_{A}(A,W(A)/p)\oplus\mathrm{End}^{\mathrm{fil}}_{A}(\mathcal{I}_{1})\]
_where on the right we have filtration-preserving \(A\)-linear maps on \(\mathcal{I}_{1}\). Equivalently,_
\[\mathcal{T}^{\prime}_{W(A)/p}\tilde{=}\text{Der}_{A}(A,W(A)/p)\oplus\mathcal{T}^{\prime}_{1}\]
_where \(\mathcal{T}^{\prime}_{1}\) consists of derivations in \(\mathcal{T}^{\prime}_{W(A)/p}\) which vanish on \(A\)._
Proof.: Let \(\partial\in\mathcal{T}^{\prime}_{W(A)/p}\). Then the restriction of \(\partial\) to \(A\) is a derivation from \(A\) into \(W(A)/p\). Further, if \(\delta:A\to W(A)/p\) is any derivation, we may extend it to an element of \(\mathcal{T}^{\prime}_{W(A)/p}\) by setting \(\delta\) to be zero on each term of the form \(V^{i}(T^{I})\) where, as above, each index \(i_{j}\) in \(I\) is contained in \(\{1,\ldots,p^{i}\}\) and at least one \(i_{j}\) is not divisible by \(p\). Thus we have
\[\mathcal{T}^{\prime}_{W(A)/p}\tilde{=}\text{Der}_{A}(A,W(A)/p)\oplus\mathcal{T}^{\prime}_{1}\]
where \(\mathcal{T}^{\prime}_{1}\) consists of derivations in \(\mathcal{T}^{\prime}_{W(A)/p}\) which vanish on \(A\) which proves the second claim of the lemma.
To show that this implies the first claim, let \(\phi\in\mathcal{T}^{\prime}_{1}\). Then for any \(a\in A\) and \(b\in\mathcal{I}_{1}\) we have
\[\phi(ab)=a\phi(b)+\phi(a)b=a\phi(b)\]
so that \(\phi:\mathcal{I}_{1}\to\mathcal{I}_{1}\) is an \(A\)-linear map. Conversely, since \(\mathcal{I}_{1}\) is a square-zero ideal, any \(A\)-linear map \(\phi:\mathcal{I}_{1}\to\mathcal{I}_{1}\), extended to all of \(W(A)/p\) by setting \(\phi(A)=0\), is a derivation of \(W(A)/p\). Since derivations in \(\mathcal{T}^{\prime}_{W(A)/p}\) are required to preserve the filtration given by the \(\{\mathcal{I}_{j}\}\), one deduces the first claim of the lemma.
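Spelled out, the converse direction is the following one-line check (using only \(\phi(A)=0\) and \(\mathcal{I}_{1}^{2}=0\)): for \(a,a^{\prime}\in A\) and \(b,b^{\prime}\in\mathcal{I}_{1}\),
\[\phi\big((a+b)(a^{\prime}+b^{\prime})\big)=a\,\phi(b^{\prime})+a^{\prime}\,\phi(b)=(a+b)\,\phi(a^{\prime}+b^{\prime})+(a^{\prime}+b^{\prime})\,\phi(a+b),\]
since the terms \(\phi(aa^{\prime})\), \(\phi(bb^{\prime})\), \(b\,\phi(b^{\prime})\) and \(b^{\prime}\,\phi(b)\) all vanish.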
From this we deduce
**Corollary 5.6**.: _Let \(\{\partial_{1},\ldots,\partial_{n}\}\) be a set of coordinate derivations on \(A\). Then every element of \(\mathcal{U}^{\prime}_{W(A)/p}\) may be written as_
\[\sum_{J}b_{J}\partial^{J}+\phi\]
_where \(b_{J}\in W(A)/p\), \(\phi\) is contained in the left ideal generated by \(\mathcal{T}^{\prime}_{1}\), and the sum is finite. The analogous statement holds in \(\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\); namely, any element thereof may be written as_
\[\sum_{J}b_{J}\partial^{J}+\phi\]
_where \(b_{J}\in W(A)/p\), \(\phi\) is contained in the completed left ideal generated by \(\mathcal{T}^{\prime}_{1}\), and \(b_{J}\to 0\) in \(W(A)/p\) as \(|J|\to\infty\)._
Proof.: Let \(x\in\mathcal{U}^{\prime}_{W(A)/p}\). By definition \(x\) can be written as a sum of terms of the form \(\alpha_{1}\cdots\alpha_{m}\) where each \(\alpha_{i}\) is either an element of \(W(A)/p\), one of the \(\partial_{i}\), or \(\phi\in\mathcal{T}^{\prime}_{1}\) (here we use the fact that any element of \(\operatorname{Der}_{A}(A,W(A)/p)\) is a sum of elements of the form \(a_{i}\partial_{i}\) with \(a_{i}\in W(A)/p\)). Commuting the \(\alpha_{i}\) past one another and using Lemma 5.5, and then noting that \(\mathcal{T}^{\prime}_{1}\) is closed under products, we deduce the existence of an expression of the above form. The case of \(\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\) follows by taking completion.
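For instance (an illustrative instance of the commutation step): using the relation \(\partial\cdot b-b\cdot\partial=\partial(b)\) for \(b\in W(A)/p\) and \(\partial\in\mathcal{T}^{\prime}_{W(A)/p}\), a word such as \(\partial_{1}\,b\,\partial_{2}\) is rewritten as
\[\partial_{1}\,b\,\partial_{2}=b\,\partial_{1}\partial_{2}+\partial_{1}(b)\,\partial_{2},\]
and iterating this moves all coefficients in \(W(A)/p\) to the left of the monomials \(\partial^{J}\).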
We now construct the fundamental bimodule \(W(A)/p\otimes_{A}\mathcal{D}^{(0)}_{A}\):
**Proposition 5.7**.: _There is a left action of \(\mathcal{U}^{\prime}_{W(A)/p}\) on \(W(A)/p\otimes_{A}\mathcal{D}^{(0)}_{A}\) which makes \(W(A)/p\otimes_{A}\mathcal{D}^{(0)}_{A}\) into a \((\mathcal{U}^{\prime}_{W(A)/p},\mathcal{D}^{(0)}_{A})\) bimodule._
_Let \(\Phi^{*}\mathcal{D}^{(0)}_{A}\) be the completion of \(W(A)/p\otimes_{A}\mathcal{D}^{(0)}_{A}\) along the filtration \(\mathcal{I}_{j}\otimes_{A}\mathcal{D}^{(0)}_{A}\). There is a left action of \(\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\) on \(\Phi^{*}\mathcal{D}^{(0)}_{A}\) making it into a \((\widehat{\mathcal{U}}^{\prime}_{W(A)/p},\mathcal{D}^{(0)}_{A})\) bimodule._
Proof.: In Lemma 5.2 above, we constructed a surjection \(\mathcal{T}^{\prime}_{W(A)/p}\to\mathcal{T}_{A}\). By the functoriality of the Lie-Rinehart construction, we obtain a map \(\mathcal{U}(\mathcal{T}^{\prime}_{W(A)/p})\to\mathcal{U}(\mathcal{T}_{A})= \mathcal{D}^{(0)}_{A}\). As the ideal \(\mathcal{T}^{\prime}_{W(A)/p}(1)\) is contained in the kernel of this map, it necessarily factors through \(\mathcal{U}^{\prime}_{W(A)/p}\); so we obtain \(Q:\mathcal{U}^{\prime}_{W(A)/p}\to\mathcal{D}^{(0)}_{A}\).
Now consider a derivation \(\partial\in\mathcal{T}^{\prime}_{W(A)/p}\). If we restrict \(\partial\) to \(A\) we obtain an element of \(\operatorname{Der}_{A}(A,W(A)/p)\); so we may write
\[\partial|_{A}=Q(\partial)+\sum_{i=1}^{n}\epsilon_{i}\partial_{i}\]
where \(\epsilon_{i}\in\mathcal{I}_{1}\) and \(Q(\partial)\in\operatorname{Der}_{A}(A,A)\) is the application of the algebra map \(Q\) to \(\partial\). (Here \(\partial_{i}\) stands for the extension of the canonical coordinate derivation to \(W(A)/p\) which vanishes on terms of the form \(p^{r}T^{I/p^{r}}\) as above).
Then for \(a\in W(A)/p\) and \(P\in\mathcal{D}^{(0)}_{A}\) we set
\[\partial\cdot(a\otimes P)=\partial(a)\otimes P+a\otimes Q(\partial)P+\sum_{i= 1}^{n}a\epsilon_{i}\otimes\partial_{i}P\]
We claim that this defines an action of \(\mathcal{U}^{\prime}_{W(A)/p}\) on \(W(A)/p\otimes_{A}\mathcal{D}^{(0)}_{A}\). To see this, let \(b\in A\), and \(a\) and \(P\) as above. Then we have
\[\partial(ab\otimes P)=\partial(ab)\otimes P+ab\otimes Q(\partial)P+\sum_{i=1}^{n}ab\epsilon_{i}\otimes\partial_{i}P\]
\[=\partial(a)b\otimes P+a\partial(b)\otimes P+ab\otimes Q(\partial)P+\sum_{i=1}^{n}ab\epsilon_{i}\otimes\partial_{i}P\]
\[=\partial(a)\otimes bP+a\otimes Q(\partial)(b)P+\sum_{i=1}^{n}a\epsilon_{i} \otimes\partial_{i}(b)P+a\otimes bQ(\partial)P+\sum_{i=1}^{n}a\epsilon_{i} \otimes b\partial_{i}P\]
where we used \(\partial(b)=Q(\partial)(b)+\sum_{i=1}^{n}\epsilon_{i}\partial_{i}(b)\). Further, using the equalities \(Q(\partial)(b)P+bQ(\partial)P=Q(\partial)bP\) and \(\partial_{i}(b)P+b\partial_{i}P=\partial_{i}bP\), we see that the previous line is equal to
\[\partial(a)\otimes bP+a\otimes Q(\partial)bP+\sum_{i=1}^{n}a\epsilon_{i} \otimes\partial_{i}bP=\partial(a\otimes bP)\]
Therefore each derivation defines an endomorphism of \(W(A)/p\otimes_{A}\mathcal{D}^{(0)}_{A}\); one sees directly that this defines a left action of \(\mathcal{U}^{\prime}_{W(A)/p}\) on \(W(A)/p\otimes_{A}\mathcal{D}^{(0)}_{A}\). Furthermore, let \(b\in A\) and let \(\partial_{i}\) be a coordinate derivation on \(A\). Then
\[\partial\cdot(a\otimes Pb)=\partial(a)\otimes Pb+a\otimes Q(\partial)Pb+\sum_ {i=1}^{n}a\epsilon_{i}\otimes\partial_{i}Pb\]
\[=\partial\cdot(a\otimes P)b\]
and similarly \(\partial\cdot(a\otimes P\partial_{i})=\partial\cdot(a\otimes P)\partial_{i}\). So the left action of \(\mathcal{U}^{\prime}_{W(A)/p}\) commutes with the natural right action of \(\mathcal{D}^{(0)}_{A}\), as required. The second sentence follows from the first by simply completing both objects along their natural filtrations.
Then we have
**Corollary 5.8**.: _There is an isomorphism_
\[\Phi^{*}\mathcal{D}^{(0)}_{A}\tilde{=}\widehat{\mathcal{U}}^{\prime}_{W(A)/p}/\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\cdot\mathcal{T}^{\prime}_{1}\]
_where \(\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\cdot\mathcal{T}^{\prime}_{1}\) denotes the completion of the left ideal generated by \(\mathcal{T}^{\prime}_{1}\)._
Proof.: The map is given by sending an element \(x\in\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\) to \(x\cdot(1\otimes 1)\). Writing
\[x=\sum_{J}b_{J}\partial^{J}+\phi\]
as in 5.6, we see
\[x\cdot(1\otimes 1)=(\sum_{J}b_{J}\partial^{J}+\phi)\cdot(1\otimes 1)\]
\[=\sum_{J}b_{J}\otimes\partial^{J}\]
(the last equality follows from the definition of the action in 5.7). As the last sum is zero iff all \(b_{J}\) are \(0\), the result follows directly.
Now suppose we are in the presence of two distinct coordinatized lifts of Frobenius \(\Phi_{1},\Phi_{2}\) on \(\mathcal{A}\), with two associated embeddings which we now label \(\iota_{1},\iota_{2}:A\to W(A)/p\). Then we have
**Corollary 5.9**.: _There is a canonical isomorphism \(\Phi_{1}^{*}\mathcal{D}_{A}^{(0)}\tilde{\to}\Phi_{2}^{*}\mathcal{D}_{A}^{(0)}\) of bimodules. If given a third lift \(\Phi_{3}\), then the cocycle condition on isomorphisms is satisfied._
Proof.: Slightly modifying the above notation, we let \(\mathcal{T}_{1,\iota_{1}}^{\prime}\) denote the set of derivations in \(\mathcal{T}^{\prime}\) which vanish on \(\iota_{1}(A)\), and similarly for \(\mathcal{T}_{1,\iota_{2}}^{\prime}\). We need to provide a canonical isomorphism
\[\widehat{\mathcal{U}}_{W(A)/p}^{\prime}/\widehat{\mathcal{U}}_{W(A)/p}^{ \prime}\cdot\mathcal{T}_{1,\iota_{1}}^{\prime}\tilde{\to}\widehat{\mathcal{U} }_{W(A)/p}^{\prime}/\widehat{\mathcal{U}}_{W(A)/p}^{\prime}\cdot\mathcal{T}_{ 1,\iota_{2}}^{\prime} \tag{5.1}\]
Now, the fact that the kernel of \(W(A)/p\to A\) is a square zero ideal implies that \(\iota_{12}=\iota_{1}-\iota_{2}:A\to W(A)/p\) is a derivation. We can therefore consider \(\iota_{12}\) as a derivation \(\iota_{2}(A)\to W(A)/p\), and then we extend this to a derivation \(\tilde{\iota}_{12}:W(A)/p\to W(A)/p\) as in the proof of 5.7, so that \(\tilde{\iota}_{12}(p^{r}T^{I/p^{r}})=0\), where \(T\) refers to the coordinates for \(\Phi_{2}\). Then we can consider \(\tilde{\iota}_{12}\) as an element of \(\mathcal{T}^{\prime}\) and therefore as an element of \(\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\). The element \(1+\tilde{\iota}_{12}\) is a unit in \(\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\), with inverse \(1-\tilde{\iota}_{12}\) (this follows from the relation \(\partial_{1}\cdot\partial_{2}=\partial_{1}\circ\partial_{2}\) for all \(\partial_{1},\partial_{2}\in\mathcal{T}^{\prime}_{W(A)/p}(1)\), which gives \(\tilde{\iota}_{12}^{2}=\tilde{\iota}_{12}\circ\tilde{\iota}_{12}=0\)). Therefore there is an isomorphism
\[\widehat{\mathcal{U}}_{W(A)/p}^{\prime}\xrightarrow{\cdot(1+\tilde{\iota}_{ 12})}\widehat{\mathcal{U}}_{W(A)/p}^{\prime}\]
of left \(\widehat{\mathcal{U}}_{W(A)/p}^{\prime}\)-modules. It clearly takes \(\widehat{\mathcal{U}}_{W(A)/p}^{\prime}\cdot\mathcal{T}_{1,\iota_{1}}^{\prime}\) to \(\widehat{\mathcal{U}}_{W(A)/p}^{\prime}\cdot\mathcal{T}_{1,\iota_{2}}^{\prime}\) (and the inverse map \(\cdot(1-\tilde{\iota}_{12})\) takes \(\widehat{\mathcal{U}}_{W(A)/p}^{\prime}\cdot\mathcal{T}_{1,\iota_{2}}^{\prime}\) to \(\widehat{\mathcal{U}}_{W(A)/p}^{\prime}\cdot\mathcal{T}_{1,\iota_{1}}^{\prime}\)). Therefore we deduce an isomorphism (5.1), and we need to check that this map intertwines the associated right \(\mathcal{D}_{A}^{(0)}\)-module structures; the cocycle condition is clear from the definition of the isomorphism.
To see this, we shall show
\[x\cdot\iota_{1}(a)(1+\tilde{\iota}_{12})=x\cdot(1+\tilde{\iota}_{12})\iota_{2 }(a)\]
for all \(x\in\widehat{\mathcal{U}}_{W(A)/p}^{\prime}\) and \(a\in A\) and
\[x\cdot\delta_{1}(1+\tilde{\iota}_{12})=x\cdot(1+\tilde{\iota}_{12})\delta_{2}\]
for all \(x\in\widehat{\mathcal{U}}_{W(A)/p}^{\prime}\); here the notation is as follows: let \(\delta\) be a derivation of \(A\). Then \(\delta_{1}\) and \(\delta_{2}\) refer to elements of \(\mathcal{T}_{W(A)/p}^{\prime}\) such that \(\delta_{1}\) preserves \(\iota_{1}(A)\) with \(\delta_{1}|_{\iota_{1}(A)}=\delta\), \(\delta_{2}\) preserves \(\iota_{2}(A)\) with \(\delta_{2}|_{\iota_{2}(A)}=\delta\), and \(\delta_{1}|_{\mathcal{I}_{1}}=\delta_{2}|_{\mathcal{I}_{1}}\) (it is easy to see that it is always possible to choose such \(\delta_{1},\delta_{2}\) for a given \(\delta\)). As \(\mathcal{D}_{A}^{(0)}\) is generated by \(A\) and derivations, this implies the result by the previous corollary.
Now let us verify these equations; for the first, we have
\[x\cdot(1+\tilde{\iota}_{12})\iota_{2}(a)=x\iota_{2}(a)+x\tilde{\iota}_{12}\cdot\iota_{2}(a)\]
and, as \(\tilde{\iota}_{12}\cdot\iota_{2}(a)=\iota_{2}(a)\cdot\tilde{\iota}_{12}+\tilde{\iota}_{12}(\iota_{2}(a))=\iota_{2}(a)\cdot\tilde{\iota}_{12}+\iota_{1}(a)-\iota_{2}(a)\) we see that the above term equals
\[x\iota_{2}(a)\cdot\tilde{\iota}_{12}+x\cdot\iota_{1}(a)\]
Further, \(\iota_{2}(a)\cdot\tilde{\iota}_{12}=\iota_{1}(a)\cdot\tilde{\iota}_{12}\), as the difference \(\iota_{1}(a)-\iota_{2}(a)\in\mathcal{I}_{1}\) and therefore annihilates \(\tilde{\iota}_{12}\) (as \(\tilde{\iota}_{12}\) takes values in \(\mathcal{I}_{1}\) and this ideal has square zero). Thus this term is equal to
\[x\iota_{1}(a)\cdot\tilde{\iota}_{12}+x\cdot\iota_{1}(a)\]
as required.
For the second equation, we set \(\epsilon=[\tilde{\iota}_{12},\delta_{2}]\in\mathcal{T}^{\prime}_{W(A)/p}\) and note that this derivation takes values in \(\mathcal{I}_{1}\) and vanishes on \(\mathcal{I}_{1}\); indeed, one easily sees that in fact \(\epsilon=\delta_{1}-\delta_{2}\). Therefore
\[x\cdot(1+\tilde{\iota}_{12})\delta_{2}=x\delta_{1}+x\delta_{2}\tilde{\iota}_{12}\]
and since \((\delta_{1}-\delta_{2})\cdot\tilde{\iota}_{12}=0\) (as both of these derivations take values in \(\mathcal{I}_{1}\)) we see that this equals \(x\delta_{1}+x\delta_{1}\tilde{\iota}_{12}\) as required.
Now we wish to globalize the whole situation. In order to do that, we need one more basic fact about the structure of \(\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\). Let us assume again that we are in the presence of local coordinates on \(A\). Choose a set of elements \(\{\phi_{i}\}_{i\in\mathbb{N}}\) which form a topological basis for \(\mathcal{T}^{\prime}_{1}\) as an \(A\)-module, and such that \(\phi_{i}\to 0\) as \(i\to\infty\). Then we have
**Corollary 5.10**.: _Let \(\{\partial_{1},\dots,\partial_{n}\}\) be a set of coordinate derivations on \(A\). Then every element of \(\mathcal{U}^{\prime}_{W(A)/p}\) may be written uniquely as_
\[\sum_{J}(b_{J}\partial^{J}+\sum_{i_{J}=1}^{\infty}a_{i_{J}}\phi_{i_{J}} \partial^{J})\]
_where \(J\) ranges over multi-indices in \(\mathbb{N}^{n}\), \(i_{J}\in\mathbb{N}\), \(a_{i_{J}},b_{J}\in W(A)/p\), and all but finitely many \(b_{J}\) are \(0\). Similarly, every element of \(\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\) can be written uniquely as_
\[\sum_{J}(b_{J}\partial^{J}+\sum_{i_{J}=1}^{\infty}a_{i_{J}}\phi_{i_{J}} \partial^{J})\]
_with the notation as above, but we allow infinitely many \(b_{J}\), where \(b_{J}\to 0\) as \(|J|\to\infty\)._
Proof.: The existence is proved exactly as in 5.6. For the uniqueness, we use the action of \(\mathcal{U}^{\prime}_{W(A)/p}\) on \(W(A)/p\otimes_{A}\mathcal{D}^{(0)}_{A}\) (respectively, the action of \(\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\) on \(\Phi^{*}\mathcal{D}^{(0)}_{A}\)); we'll just do the case of \(\mathcal{U}^{\prime}_{W(A)/p}\), the case of the completion being entirely analogous. Suppose that
\[\sum_{J}(b_{J}\partial^{J}+\sum_{i_{J}=1}^{\infty}a_{i_{J}}\phi_{i_{J}} \partial^{J})=0\]
for some \(\{a_{i_{J}},b_{J}\}\) as above. Then
\[0=\sum_{J}(b_{J}\partial^{J}+\sum_{i_{J}=1}^{\infty}a_{i_{J}}\phi_{i_{J}} \partial^{J})(1\otimes 1)=\sum_{J}b_{J}\otimes\partial^{J}\]
so that each \(b_{J}=0\). Now, let \(p^{r}T^{I/p^{r}}\in\mathcal{I}_{1}\). Then each \(\partial_{i}(p^{r}T^{I/p^{r}})=0\), so that
\[0=(\sum_{i_{J}=1}^{\infty}a_{i_{J}}\phi_{i_{J}}\partial^{J})(p^{r}T^{I/p^{r}} \otimes 1)=\sum_{i_{J}=1}^{\infty}a_{i_{J}}\phi_{i_{J}}(p^{r}T^{I/p^{r}})\otimes \partial^{J}\]
As this is true for all basis elements \(p^{r}T^{I/p^{r}}\), we see that \(\sum_{i_{J}=1}^{\infty}a_{i_{J}}\phi_{i_{J}}=0\) inside \(\mathcal{T}^{\prime}_{1}\); as the \(\phi_{i}\) form a topological basis, it follows that \(a_{i_{J}}=0=b_{J}\) for all \(J\), as required.
This implies
**Corollary 5.11**.: _Let \(A\) be a smooth \(k\)-algebra admitting local coordinates. The assignment \(B\to\widehat{\mathcal{U}}^{\prime}_{W(B)/p}\) is a sheaf on the etale site of \(A\)._
Proof.: The fact that this assignment is a presheaf follows immediately from the functoriality of the Lie-Rinehart construction. To see that it is actually a sheaf, we note that the previous corollary yields an isomorphism
\[(B\otimes_{A}\widehat{\mathcal{U}}^{\prime}_{W(A)/p})^{\wedge}\,\tilde{\to}\,\widehat{\mathcal{U}}^{\prime}_{W(B)/p}\]
where the completion on the left is with respect to \(B\otimes_{A}\mathcal{K}^{(j)}\) (where \(\mathcal{K}^{(j)}\) is the two sided ideal in \(\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\) of Definition 5.3).
So we may now define for any smooth \(X\) over \(k\), \(\widehat{\mathcal{U}}^{\prime}_{W(X)_{p=0}}\) as the Zariski sheaf associated to the functor \(U\to\widehat{\mathcal{U}}^{\prime}_{W(A)/p}\) where \(U=\text{Spec}(A)\) is an open subset of \(X\) admitting local coordinates (these form a base for the topology). From this definition and 5.9, we obtain
**Corollary 5.12**.: _There is a well-defined \((\widehat{\mathcal{U}}^{\prime}_{W(X)_{p=0}},\mathcal{D}^{(0)}_{X})\)-bimodule \(\mathcal{B}^{(0)}_{X}\), which for every open affine \(U=\text{Spec}(A)\) (admitting local coordinates), and every lift of Frobenius \(\Phi\) to \(\mathcal{A}\), is isomorphic to \(\Phi^{*}\mathcal{D}^{(0)}_{A}\)._
Finally, we need to relate all this to Witt-differential operators:
**Theorem 5.13**.: _There is a surjective morphism of sheaves of algebras_
\[\widehat{\mathcal{D}}^{(0)}_{W(X)}/p\to\widehat{\mathcal{U}}^{\prime}_{W(X)_{ p=0}}\]
_This map is continuous with respect to the inverse limit topologies on these algebras._
To prove this, we first need the following fact about the action of the Witt-differential operators on \(W(X)_{1}\); we let \(X=\text{Spec}(A)\) as above.
**Proposition 5.14**.: _Let \(\varphi\in\mathcal{E}W^{(0)}_{A}\). Then the induced map \(\overline{\varphi}:W(A)/p\to W(A)/p\) is a derivation of the algebra \(W(A)/p\); furthermore, \(\overline{\varphi}\) preserves the filtration \(\{\mathcal{I}_{i}\}\) of \(W(A)/p\)._
Proof.: For each \(r\geq 0\), we have the natural surjection \(W(A)/p\to W_{r+1}(A)/p\), and there is an isomorphism \(W(A)/p\tilde{=}\lim_{r}W_{r+1}(A)/p\). By construction \(\overline{\varphi}\) is an inverse limit of operators on \(W_{r+1}(A)/p\), and hence preserves each \(K_{r+1}\), and it suffices to show that the restriction of \(\overline{\varphi}\) to \(W_{r+1}(A)/p\) is a derivation for each \(r\geq 0\).
Let \(D=(D_{0},\ldots,D_{i})\) be a Hasse-Schmidt derivation on \(A\), with \(i\leq p^{r}\), and let \(\tilde{D}_{i}\) be the canonical lift to \(W_{r+1}(A)\); then we must show that for any \(a\in W_{r+1}(A)\), \(F^{\text{val}_{p}(i)-r}(a)\cdot\tilde{D}_{i}\) acts as a derivation on \(W_{r+1}(A)/p\). By definition, we have
\[F^{\text{val}_{p}(i)-r}(a)\tilde{D}_{i}(\alpha\beta)=F^{\text{val}_{p}(i)-r}( a)\sum_{j+l=i}\tilde{D}_{j}(\alpha)\tilde{D}_{l}(\beta)\]
and we must show that \(F^{\operatorname{val}_{p}(i)-r}(a)\tilde{D}_{j}(\alpha)\tilde{D}_{l}(\beta)\in pW_{r+1}(A)\) whenever \(0<j,l<i\). To that end, we may, by 2.11, write \(\tilde{D}_{j}(\alpha)=V^{r-\operatorname{val}_{p}(j)}(x)\) and \(\tilde{D}_{l}(\beta)=V^{r-\operatorname{val}_{p}(l)}(y)\). Suppose \(\operatorname{val}_{p}(j)\geq\operatorname{val}_{p}(l)\) (the other case having an identical proof). Then
\[V^{r-\operatorname{val}_{p}(j)}(x)V^{r-\operatorname{val}_{p}(l)}(y)=p^{r- \operatorname{val}_{p}(j)}V^{r-\operatorname{val}_{p}(l)}(F^{\operatorname{ val}_{p}(j)-\operatorname{val}_{p}(l)}(x)\cdot y)\]
so that
\[F^{\operatorname{val}_{p}(i)-r}(a)\cdot V^{r-\operatorname{val}_{p}(j)}(x)V^{r -\operatorname{val}_{p}(l)}(y)\]
\[=p^{r-\operatorname{val}_{p}(j)}F^{\operatorname{val}_{p}(l)-r}(F^{ \operatorname{val}_{p}(i)-\operatorname{val}_{p}(l)}(a))V^{r-\operatorname{ val}_{p}(l)}(F^{\operatorname{val}_{p}(j)-\operatorname{val}_{p}(l)}(x)\cdot y)\]
where we have used \(\operatorname{val}_{p}(i)\geq\min\{\operatorname{val}_{p}(j),\operatorname{ val}_{p}(l)\}=\operatorname{val}_{p}(l)\). Thus we conclude that
\[F^{\operatorname{val}_{p}(i)-r}(a)\tilde{D}_{j}(\alpha)\tilde{D}_{l}(\beta)=p^{ r-\operatorname{val}_{p}(j)}F^{\operatorname{val}_{p}(l)-r}(A)V^{r-\operatorname{ val}_{p}(l)}(B)=p^{r-\operatorname{val}_{p}(j)}V^{r-\operatorname{val}_{p}(l)}(AB)\]
where \(A=F^{\operatorname{val}_{p}(i)-\operatorname{val}_{p}(l)}(a)\) and \(B=F^{\operatorname{val}_{p}(j)-\operatorname{val}_{p}(l)}(x)\cdot y\); so, since \(\operatorname{val}_{p}(j)<r\) we deduce that \(F^{\operatorname{val}_{p}(i)-r}(a)\tilde{D}_{j}(\alpha)\tilde{D}_{l}(\beta)\in pW_{r+1}(A)\) as required.
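For a concrete instance (a sketch with the smallest admissible parameters): take \(p=2\), \(r=2\) and \(i=3\), \(j=2\), \(l=1\), so that \(\operatorname{val}_{2}(j)=1\) and \(\operatorname{val}_{2}(l)=0\). Writing \(\tilde{D}_{2}(\alpha)=V(x)\) and \(\tilde{D}_{1}(\beta)=V^{2}(y)\) as in the proof, the displayed identity becomes
\[V(x)\,V^{2}(y)=2\,V^{2}(F(x)\,y)\in 2\,W_{3}(A),\]
and since \(\operatorname{val}_{2}(j)=1<r=2\), this cross term indeed vanishes in \(W_{3}(A)/2\).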
Now we can give the
Proof.: (of Theorem 5.13) Since the image of \(\mathcal{E}W_{X}^{(0)}\) in \(\widehat{\mathcal{D}}_{W(X)}^{(0)}/p\) topologically generates this sheaf of algebras over \(\mathcal{O}_{W(X)_{p=0}}\), we see that there is _at most one_ morphism \(\widehat{\mathcal{D}}_{W(X)}^{(0)}/p\to\widehat{\mathcal{U}}_{W(X)_{p=0}}^{\prime}\) which is the identity on \(\mathcal{O}_{W(X)_{p=0}}\) and which sends the image of a section \(\varphi\in\mathcal{E}W_{X}^{(0)}/p\) to the derivation \(\overline{\varphi}\). It therefore suffices to prove that such a morphism exists locally, i.e. for \(X=\operatorname{Spec}(A)\) possessing local coordinates.
In that case, invoking Theorem 2.17, we see that it suffices to compute the commutator of any pair of operators of the form \(F^{\min\{0,-r\}}(\alpha_{J_{I}})\{\partial\}_{J_{I}/p^{r}}\) and show that it maps to the commutator in \(\widehat{\mathcal{U}}_{W(X)_{p=0}}^{\prime}\), but this is straightforward.
Combining this result with 5.12, we obtain
**Corollary 5.15**.: _There is a well-defined \((\widehat{\mathcal{D}}_{W(X)}^{(0)}/p,\mathcal{D}_{X}^{(0)})\)-bimodule \(\mathcal{B}_{X}^{(0)}\), which for every open affine \(U=\text{Spec}(A)\) (admitting local coordinates), and every lift of Frobenius \(\Phi\) to \(\mathcal{A}\), is isomorphic to \(\Phi^{*}\mathcal{D}_{\mathcal{A}}^{(0)}/p\)._
Proof.: The only thing to do is to check that the action of \(\widehat{\mathcal{D}}_{W(A)}^{(0)}/p\) on \(\Phi^{*}\mathcal{D}_{A}^{(0)}\) via the surjection \(\widehat{\mathcal{D}}_{W(A)}^{(0)}/p\to\widehat{\mathcal{U}}_{W(A)/p}^{\prime}\) agrees with the action of \(\widehat{\mathcal{D}}_{W(A)}^{(0)}/p\) on \(\Phi^{*}\mathcal{D}_{\mathcal{A}}^{(0)}/p\) obtained from 3.2. But this is a straightforward check in local coordinates.
### De Rham-Witt connections mod \(p^{r}\)
In this subsection we study the de Rham-Witt connections over \(W(X)_{p^{r}=0}\). To state the main result, we need some notation. We define the category \(\text{MIC}_{W(X)_{p^{r}=0}}\) to be the category of quasicoherent sheaves \(\mathcal{M}\) on \(W(X)_{p^{r}=0}\), equipped with an integrable de Rham-Witt connection
\[\nabla:\mathcal{M}\to\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}}W \Omega_{X}^{1}/p^{r}\]
where the \(\widehat{\otimes}\) stands for completion with respect to the filtration
\[\{V^{i}(\mathcal{O}_{W(X)}/p^{r})\cdot\mathcal{M}\otimes G^{j}(W\Omega_{X}^{1}/ p^{r})\}_{i+j\geq l}\]
where \(G^{j}\) is the standard filtration on the de Rham-Witt complex, defined by \(V^{i}+dV^{i-1}\). We also demand that \(\nabla\) is continuous, in the sense that it is the inverse limit of the connections
\[\nabla_{j}:\mathcal{M}/V^{j}(\mathcal{O}_{W(X)}/p^{r})\to(\mathcal{M}\otimes_{ \mathcal{O}_{W(X)/p^{r}}}W\Omega^{1}_{X}/p^{r})/V^{j}(\mathcal{O}_{W(X)}/p^{r})\]
The morphisms in this category are the continuous maps of sheaves which respect the connections.
In 4.38 we constructed a functor \(\mathcal{C}\) from \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\operatorname{mod}_{\operatorname{acc},\operatorname{qcoh}}\) to integrable de Rham-Witt connections on \(W(X)_{p^{r}=0}\). Passing to the completions, we obtain a functor \(\widehat{\mathcal{C}}\) from \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\operatorname{mod}_{\operatorname{c}-\operatorname{qcoh}}\) to integrable de Rham-Witt connections on \(W(X)_{p^{r}=0}\); this functor clearly takes values in \(\operatorname{MIC}_{W(X)_{p^{r}=0}}\). Then the main result is:
**Theorem 5.16**.: _The functor \(\widehat{\mathcal{C}}\) from \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p^{r}-\operatorname{mod}_{\operatorname{ c}-\operatorname{qcoh}}\) to \(\operatorname{MIC}_{W(X)_{p^{r}=0}}\) is an equivalence of categories._
The result, and the proof, are inspired by [11], where the case of a vector bundle with connection is handled. We should mention that Bloch also takes the inverse limit over \(r\) to obtain the following equivalence for locally free crystals on \(X\):
**Theorem 5.17**.: _(Bloch) There is an equivalence of categories between locally free crystals over \(W(k)\) (on \(X\)) and vector bundles on \(W(X)\) equipped with integrable de Rham-Witt connections, which are locally nilpotent._
One can take the inverse limit over \(r\) in Theorem 5.16, but I don't know any reasonable characterization of the image.
As in Bloch's paper, we start in the case \(r=1\). Let us assume this until further notice. We shall also suppose, to start with, that \(X=\operatorname{Spec}(A)\) for a smooth \(A\) which admits local coordinates; as usual we fix also a lift \(\Phi:\mathcal{A}\to\mathcal{A}\) of Frobenius. Therefore any object of \(\widehat{\mathcal{D}}^{(0)}_{W(X)}/p-\operatorname{mod}_{\operatorname{qcoh}}\) may be expressed uniquely as \(\widehat{\Phi}^{*}\mathcal{M}\) for a quasicoherent \(\mathcal{D}^{(0)}_{X}\)-module \(\mathcal{M}\). As above, we let \(\mathcal{I}=\mathcal{I}_{1}:=\ker(W(A)/p\to A)\).
Then the full faithfulness of the functor is straightforward to check: it follows directly from the fact that \((\widehat{\Phi}^{*}\mathcal{M})/\mathcal{I}=\mathcal{M}\) (as flat connections). To show the essential surjectivity, we start by recalling the following lemma of Bloch ([11], lemma 3.2):
**Lemma 5.18**.: _Suppose \(A\) is a smooth \(k\)-algebra which admits local coordinates. Let \(d:W(A)/p\to(W\Omega^{1}_{A}/p)/\mathcal{I}\) denote the map induced by the differential. Then there is an isomorphism of \(A\)-modules_
\[(W\Omega^{1}_{A}/p)/\mathcal{I}{\cong}d(\mathcal{I})\oplus\Omega^{1}_{A}\]
_and \(d:\mathcal{I}\to d(\mathcal{I})\) is an isomorphism of \(A\)-modules. Further, the induced filtration on \(d(\mathcal{I})\) (coming from \(V^{i}(W\Omega^{1}_{A}/p)\)) coincides with the \(V\)-adic filtration on \(\mathcal{I}\). The analogue_
\[(W\Omega^{1}_{X}/p)/\mathcal{I}{\cong}d(\mathcal{I})\oplus\Omega^{1}_{X}\]
_for the sheaf \((W\Omega^{1}_{X}/p)/\mathcal{I}\) also holds._
This implies
**Proposition 5.19**.: _Let \(\tilde{Hom}_{W(A)/p}((W\Omega^{1}_{A}/p)/\mathcal{I},W(A)/p)\) denote the set of \(W(A)/p\)-module homomorphisms which preserve the \(V\)-adic filtrations on both sides. Then_
\[\tilde{Hom}_{W(A)/p}((W\Omega^{1}_{A}/p)/\mathcal{I},W(A)/p){\cong}\mathcal{ T}^{\prime}_{W(A)/p}(1)\]
_Similarly let \(\tilde{\mathcal{H}om}_{\mathcal{O}_{W(X)/p}}((W\Omega^{1}_{X}/p)/\mathcal{I}, \mathcal{O}_{W(X)/p})\) denote the sheaf of \(\mathcal{O}_{W(X)/p}\)-module homomorphisms which preserve the \(V\)-adic filtrations on both sides. Then_
\[\tilde{\mathcal{H}om}_{\mathcal{O}_{W(X)/p}}((W\Omega^{1}_{X}/p)/\mathcal{I}, \mathcal{O}_{W(X)/p})\tilde{=}\mathcal{T}^{\prime}_{W(X)/p}(1)\]
Proof.: By precomposing with \(d:W(A)/p\to(W\Omega^{1}_{A}/p)/\mathcal{I}\) we obtain a map
\[\tilde{\mathrm{Hom}}_{W(A)/p}((W\Omega^{1}_{A}/p)/\mathcal{I},W(A)/p)\to\mathrm{Der}(W(A)/p)\]
and the fact that each element of the left hand side preserves the relevant filtrations implies that the image lives in \(\mathcal{T}^{\prime}_{W(A)/p}\). As \(\mathcal{I}\) annihilates \((W\Omega^{1}_{A}/p)/\mathcal{I}\), the image of any map \(\phi\in\mathrm{Hom}_{W(A)/p}((W\Omega^{1}_{A}/p)/\mathcal{I},W(A)/p)\) must be contained in \(\mathcal{I}\). Using the above lemma, we see that
\[\tilde{\mathrm{Hom}}_{W(A)/p}((W\Omega^{1}_{A}/p)/\mathcal{I},W(A)/p)\tilde{=}\mathrm{Hom}_{A}(\Omega^{1}_{A},\mathcal{I})\oplus\tilde{\mathrm{Hom}}_{A}(\mathcal{I},\mathcal{I})\]
where \(\tilde{\mathrm{Hom}}_{A}(\mathcal{I},\mathcal{I})\) consists of \(A\)-linear endomorphisms of \(\mathcal{I}\) which preserve the \(V\)-adic filtration. Thus the result follows from Lemma 5.5. The sheaf variant follows directly.
If \(\mathcal{M}\) is any sheaf of \(\mathcal{O}_{W(X)}/p\) modules with an integrable connection, denote by \(\tilde{\nabla}\) the induced map
\[\mathcal{M}\xrightarrow{\nabla\otimes 1}\mathcal{M}\otimes_{\mathcal{O}_{W(X)/p }}W\Omega^{1}_{X}/p\to\mathcal{M}\otimes_{\mathcal{O}_{W(X)/p}}(W\Omega^{1}_{ X}/p)/\mathcal{I}\]
Therefore
**Corollary 5.20**.: _Let \(\mathcal{M}\in\mathit{MIC}_{W(X)_{p=0}}\). Then \(\mathcal{M}\) inherits the structure of a module over \(\widehat{\mathcal{U}}^{\prime}_{W(X)_{p=0}}(1)\), the sub-sheaf of algebras of \(\widehat{\mathcal{U}}^{\prime}_{W(X)_{p=0}}\) topologically generated by \(\mathcal{T}^{\prime}_{W(X)/p}(1)\)._
Proof.: By the previous result there is an evaluation map
\[(W\Omega^{1}_{X}/p)/\mathcal{I}\otimes_{\mathcal{O}_{W(X)/p}}\mathcal{T}^{ \prime}_{W(X)/p}(1)\xrightarrow{\mathrm{ev}}\mathcal{O}_{W(X)/p}\]
So, we obtain a map
\[\mathcal{M}\otimes_{\mathcal{O}_{W(X)/p}}\mathcal{T}^{\prime}_{W(X)/p}(1) \xrightarrow{\tilde{\nabla}\otimes 1}\mathcal{M}\otimes_{\mathcal{O}_{W(X)/p}}(W \Omega^{1}_{X}/p)/\mathcal{I}\otimes_{\mathcal{O}_{W(X)/p}}\mathcal{T}^{ \prime}_{W(X)/p}(1)\]
\[\xrightarrow{1\otimes\mathrm{ev}}\mathcal{M}\]
This defines a continuous action of \(\mathcal{T}^{\prime}_{W(X)/p}(1)\subset\widehat{\mathcal{U}}^{\prime}_{W(X)_{p=0}}(1)\) on \(\mathcal{M}\). From the flatness of the connection, one sees that this extends to an associated action of \(\widehat{\mathcal{U}}^{\prime}_{W(X)_{p=0}}(1)\) on \(\mathcal{M}\) as required.
Now let us give the
Proof.: (of Theorem 5.16 in the case \(r=1\)) Inside \(\widehat{\mathcal{U}}^{\prime}_{W(X)_{p=0}}(1)\) we have the derivation \(\delta\) which is zero on \(A\) and the identity on \(\mathcal{I}\); and therefore we also have the element \(\pi=1-\delta\in\widehat{\mathcal{U}}^{\prime}_{W(X)_{p=0}}\). This is an idempotent (as \(\delta\circ\delta=\delta\)) in the global sections of \(\widehat{\mathcal{U}}^{\prime}_{W(X)_{p=0}}\), and, as \(\pi\) annihilates \(\mathcal{I}\), it follows that
\[\mathcal{M}^{\pi}:=\{m\in\mathcal{M}|\pi m=m\}\tilde{\to}\mathcal{M}/\mathcal{I}\]
where the map is the restriction of the natural projection map. As \(\delta\) takes values in \(\mathcal{I}\), we obtain
\[\mathcal{M}=\mathcal{M}^{\pi}\oplus\mathcal{I}\mathcal{M}^{\pi}\]
and, by the definition of \(\pi\), the restriction of the de Rham-Witt connection to \(\mathcal{M}^{\pi}\) is compatible with the splitting \((W\Omega^{1}_{A}/p)/\mathcal{I}\tilde{=}d(\mathcal{I})\oplus\Omega^{1}_{A}\), and so we obtain that the restriction of \(\nabla\) to \(\mathcal{M}^{\pi}\) yields a connection on \(\mathcal{M}^{\pi}\) which is isomorphic to the natural connection on \(\mathcal{M}/\mathcal{I}\). Using the completeness of \(\mathcal{M}\), it follows that there is a natural map
\[\widehat{\Phi}^{*}(\mathcal{M}/\mathcal{I})\tilde{\to}\widehat{\Phi}^{*}( \mathcal{M}^{\pi})\to\mathcal{M}\]
of de Rham-Witt connections. The surjectivity follows from the above splitting \(\mathcal{M}=\mathcal{M}^{\pi}\oplus\mathcal{I}\mathcal{M}^{\pi}\). For the injectivity, one uses the induced map
\[\mathcal{M}\to\mathcal{M}\otimes_{\mathcal{O}_{W(X)/p}}W\Omega^{1}_{X}/p\to \mathcal{M}\otimes_{\mathcal{O}_{W(X)/p}}(W\Omega^{1}_{X}/p)/\mathcal{I}\]
Taking the restriction to \(\mathcal{I}\cdot\mathcal{M}\) we obtain by Bloch's lemma the map
\[\mathcal{I}\cdot\mathcal{M}\to d(\mathcal{I})\otimes_{A}\mathcal{M}^{\pi}\]
which is necessarily injective as \(\mathcal{I}\to d(\mathcal{I})\) is an isomorphism; from the compatibility with the connection this forces the injectivity of \(\widehat{\Phi}^{*}(\mathcal{M}^{\pi})\to\mathcal{M}\) as required.
This proves the theorem in the local case. To prove it for an arbitrary \(X\), we note that the full faithfulness in the local case implies that
\[\mathcal{H}om_{\widehat{\mathcal{D}}^{(0)}_{W(X)}/p}(\mathcal{N}_{1}, \mathcal{N}_{2})\to\mathcal{H}om_{\nabla}(\widehat{\mathcal{C}}\mathcal{N}_{1 },\widehat{\mathcal{C}}\mathcal{N}_{2})\]
is an isomorphism of sheaves. Applying global sections shows that \(\widehat{\mathcal{C}}\) is fully faithful. But this implies, using the cocycle definition of sheaves, that the essential surjectivity can be checked locally, which is what we have just done.
Now we have to explain what to do in the general (i.e., \(r>1\)) case. Again we start by supposing that \(X=\operatorname{Spec}(A)\) and that a coordinatized lift of Frobenius is fixed. In this case we have the morphism \(\mathcal{A}_{r}\to W(A)/p^{r}\) and the splitting
\[W(A)/p^{r}=\mathcal{A}_{r}\oplus F^{1}(W(A)/p^{r})\]
as \(\mathcal{A}_{r}\)-modules (c.f. Lemma 3.24); the same holds at the level of sheaves. This means that for a sheaf of \(\mathcal{O}_{W(X)}/p^{r}\)-modules \(\mathcal{M}\) we can consider the \(\mathcal{O}_{\mathfrak{X}_{r}}\)-submodule \(F^{1}(\mathcal{O}_{W(X)}/p^{r})\cdot\mathcal{M}\). To save space, we denote the resulting quotient \(\mathcal{O}_{\mathfrak{X}_{r}}\)-module by \(\mathcal{M}/F^{1}\). If \(\mathcal{M}=\widehat{\Phi}^{*}(\mathcal{N})\) then we have
\[\mathcal{M}/F^{1}\tilde{\to}\mathcal{N}\]
Further, if \(\nabla:\mathcal{M}\to\mathcal{M}\otimes_{\mathcal{O}_{W(X)/p^{r}}}W\Omega^{1 }_{X}/p^{r}\) is the induced flat connection, we can consider the induced map
\[\tilde{\nabla}:\mathcal{M}\to(\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{W(X )/p^{r}}}W\Omega^{1}_{X}/p^{r})/F^{1}\]
Now, using Nakayama's lemma, it follows from Lemma 5.18 that we have an isomorphism
\[W\Omega^{1}_{X}/p^{r}/F^{1}\tilde{=}d(F^{1}(\mathcal{O}_{W(X)}/p^{r}))\oplus \Omega^{1}_{\mathfrak{X}_{r}}\]
of \(\mathcal{O}_{\mathfrak{X}_{r}}\)-modules. We have
**Lemma 5.21**.: _For any \(\mathcal{M}\in\text{MIC}_{W(X)_{p^{r}=0}}\), we have an isomorphism_
\[(\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}}W\Omega^{1}_{X}/p^{r}) /F^{1}\]
\[\tilde{\to}(\mathcal{M}/F^{1})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}_{r }}}(W\Omega^{1}_{X}/p^{r}/F^{1})\]
\[\tilde{\to}\mathcal{M}/F^{1}\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}_{r }}}d(F^{1}(\mathcal{O}_{W(X)}/p^{r}))\oplus(\mathcal{M}/F^{1}\otimes_{\mathcal{ O}_{\mathfrak{X}_{r}}}\Omega^{1}_{\mathfrak{X}_{r}})\]
_Here, the symbol \(\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}_{r}}}\) denotes taking the tensor product followed by the completion with respect to the \(V^{i}\)-filtration on \(W\Omega^{1}_{X}/p^{r}/F^{1}\), respectively \(d(F^{1}(\mathcal{O}_{W(X)}/p^{r}))\)._
Proof.: The last isomorphism follows from the above discussion. For the first, we start by noting that there is a natural map
\[\mathcal{M}\times W\Omega^{1}_{X}/p^{r}\to(\mathcal{M}/F^{1})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}_{r}}}(W\Omega^{1}_{X}/p^{r}/F^{1})\]
sending a pair of local sections \((m,\phi)\) to \(\overline{m}\otimes\overline{\phi}\). By using the decomposition \(\mathcal{O}_{W(X)}/p^{r}=\mathcal{O}_{\mathfrak{X}_{r}}\oplus F^{1}(\mathcal{O}_{W(X)}/p^{r})\) we see that this map is bilinear over \(\mathcal{O}_{W(X)}/p^{r}\). Thus we obtain a map
\[(\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}}W\Omega^{1}_{X}/p^{r})\to(\mathcal{M}/F^{1})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}_{r}}}(W\Omega^{1}_{X}/p^{r}/F^{1})\]
which clearly factors through \((\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}}W\Omega^{1}_{X}/p^{r})/F^{1}\). For the map in the other direction, note that there is a canonical map
\[\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}_{r}}}W\Omega^{1}_{X}/p^{r}\to\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}}W\Omega^{1}_{X}/p^{r}\]
\[\to(\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}}W\Omega^{1}_{X}/p^{r})/F^{1}\]
which factors through \((\mathcal{M}/F^{1})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}_{r}}}(W\Omega^{1}_{X}/p^{r}/F^{1})\). It is straightforward to see that these maps are inverse to one another.
This implies
**Corollary 5.22**.: _Let \(\mathcal{M}\in\text{MIC}_{W(X)_{p^{r}=0}}\). Then the surjection \(\mathcal{M}\to\mathcal{M}/F^{1}\) admits a unique \(W_{r}(k)\)-linear splitting, \(\iota\), satisfying the following: for \(\overline{m}\in\mathcal{M}/F^{1}\), \(\iota(\overline{m})\) satisfies_
\[\tilde{\nabla}(\iota(\overline{m}))\in\mathcal{M}/F^{1}\otimes_{\mathcal{O}_ {\mathfrak{X}_{r}}}\Omega^{1}_{\mathfrak{X}_{r}}\]
_where the latter is regarded as a summand of \((\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{W(X)/p^{r}}}W\Omega^{1}_{X}/p^{r })/F^{1}\) by the previous lemma._
Proof.: Let \(U\subset X\) be open affine. Then \(\Gamma(U,\mathcal{M}/F^{1})=\Gamma(U,\mathcal{M})/\Gamma(U,F^{1}(\mathcal{O}_{W(X)}/p^{r})\cdot\mathcal{M})\); this follows from the fact that \(\mathcal{M}\) is quasicoherent on \(W(X)_{p^{r}=0}\).
By the previous lemma, for any \(m\in\Gamma(U,\mathcal{M})\) we have
\[\tilde{\nabla}(m)=\sum_{(I,r)}m_{I}d(p^{r}T^{I/p^{r}})+\sum_{i=1}^{n}m_{i}dT_{i}\]
So replacing \(m\) with \(m-\sum_{(I,r)}m_{I}(p^{r}T^{I/p^{r}})\) gives the existence of a lift of \(\overline{m}\) with the required property. If there are two such lifts, called \(m_{1}\) and \(m_{2}\), then \(m_{1}-m_{2}\in F^{1}(\mathcal{O}_{W(U)}/p^{r})\cdot\mathcal{M}\). Therefore
\[\tilde{\nabla}(m_{1}-m_{2})\in\mathcal{M}/F^{1}\widehat{\otimes}_{\mathcal{O} _{\mathfrak{U}_{r}}}d(F^{1}(\mathcal{O}_{W(U)}/p^{r}))\]
As both \(\tilde{\nabla}(m_{1})\) and \(\tilde{\nabla}(m_{2})\) take values in \(\mathcal{M}/F^{1}\otimes_{\mathcal{O}_{\mathfrak{U}_{r}}}\Omega^{1}_{\mathfrak{ U}_{r}}\), we see that \(\tilde{\nabla}(m_{1}-m_{2})=0\). But then, writing
\[m_{1}-m_{2}=\sum_{(I,r)}p^{r}T^{I/p^{r}}m_{I}\]
we see that
\[\tilde{\nabla}(m_{1}-m_{2})=\sum_{(I,r)}d(p^{r}T^{I/p^{r}})m_{I}=0\]
which forces \(m_{I}=0\) for all \(I\), so that \(m_{1}=m_{2}\) as required.
Note that the value of \(\tilde{\nabla}\) on \(\iota(\overline{m})\) is completely determined by the flat connection on \(\mathcal{M}/F^{1}\). Now we can give the
Proof.: (of Theorem 5.16 in the general case) As above it suffices to consider the case \(X=\operatorname{Spec}(A)\). Then we may consider
\[\mathcal{H}om_{\tilde{\nabla}}(\widehat{\Phi}^{*}\mathcal{D}^{(0)}_{\mathfrak{ X}_{r}},\mathcal{M})=\mathcal{N}\]
where \(\mathcal{H}om_{\tilde{\nabla}}\) refers to those continuous morphisms which respect the induced map \(\tilde{\nabla}\) on both sides. As \(\widehat{\Phi}^{*}\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}/F^{1}=\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\) with its canonical connection, we see that a local section of \(\mathcal{N}\) is determined by the value of the map at \(1\), and by the remark directly above we see that we may send \(1\) to any local section in the image of the map \(\iota\) constructed in the previous corollary. Thus \(\mathcal{N}\) is a summand of \(\mathcal{M}\), and in fact we see
\[\mathcal{N}\widetilde{\to}\mathcal{M}/F^{1}\]
as flat connections (the connection on \(\mathcal{N}\) coming from the right action of \(\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\) on \(\widehat{\Phi}^{*}\mathcal{D}^{(0)}_{\mathfrak{X}_{r}}\)). Therefore there is an induced morphism of \(\tilde{\nabla}\)-modules
\[\widehat{\Phi}^{*}\mathcal{N}\to\mathcal{M}\]
and, as \(\mathcal{N}\) is a summand of \(\mathcal{M}\), when we apply \(\otimes_{W_{r}(k)}^{L}k\) we obtain the analogous maps
\[\widehat{\Phi}^{*}\mathcal{H}^{i}(\mathcal{N}\otimes_{W_{r}(k)}^{L}k)\to \mathcal{H}^{i}(\mathcal{M}\otimes_{W_{r}(k)}^{L}k)\]
for \(i\in\{0,-1,-2,\dots\}\). But these maps are isomorphisms by the \(r=1\) case of the theorem. Thus the original map \(\widehat{\Phi}^{*}\mathcal{N}\to\mathcal{M}\) is an isomorphism as well.
To finish the proof, we need to show that this map is an isomorphism of de Rham-Witt connections (not just modules over \(\tilde{\nabla}\)). To do so, recall that there is a splitting
\[W\Omega^{1}_{X}/p^{r}=\Omega^{1}_{\mathfrak{X}_{r}}\oplus G^{1}(W\Omega^{1}_{ X}/p^{r})\]
and, by what we have just proved, for \(n\in\mathcal{N}\) we have
\[\nabla(n)=\nabla_{0}(n)+\eta\]
where \(\nabla_{0}(n)\in\mathcal{N}\otimes\Omega^{1}_{\mathfrak{X}_{r}}\) and \(\eta\in F^{1}(\mathcal{O}_{W(X)}/p^{r})\cdot(\mathcal{M}\widehat{\otimes}W\Omega^{1}_{X}/p^{r})\). Now, by the same token, \(\nabla(\nabla_{0}(n))\in\mathcal{N}\otimes\Omega^{2}_{\mathfrak{X}_{r}}+F^{1}(\mathcal{O}_{W(X)}/p^{r})\cdot(\mathcal{M}\widehat{\otimes}W\Omega^{2}_{X}/p^{r})\). Since \(\nabla\) is flat, we see that \(\nabla(\eta)=0\) inside
\[(\mathcal{M}\widehat{\otimes}W\Omega^{2}_{X}/p^{r})/(\mathcal{N}\otimes\Omega^{2}_{\mathfrak{X}_{r}}+F^{1}(\mathcal{O}_{W(X)}/p^{r})\cdot(\mathcal{M}\widehat{\otimes}W\Omega^{2}_{X}/p^{r}))\]
However, the induced map
\[F^{1}(\mathcal{O}_{W(X)}/p^{r})\cdot(\mathcal{M}\widehat{\otimes}W\Omega^{1}_{X}/p^{r})\stackrel{\overline{\nabla}}{\to}(\mathcal{M}\widehat{\otimes}W\Omega^{2}_{X}/p^{r})/(\mathcal{N}\otimes\Omega^{2}_{\mathfrak{X}_{r}}+F^{1}(\mathcal{O}_{W(X)}/p^{r})\cdot(\mathcal{M}\widehat{\otimes}W\Omega^{2}_{X}/p^{r}))\]
is injective; this follows from [11], lemma 2.3 (c.f. also the proof of proposition 4.1). Therefore, as \(\overline{\nabla}(\eta)=0\), we see that \(\eta=0\). So \(\nabla(n)=\nabla_{0}(n)\) which forces \(\widehat{\Phi}^{*}\mathcal{N}\to\mathcal{M}\) to be a map of de Rham-Witt connections as needed. |
2310.19532 | Scalar field, nucleon structure and relativistic chiral theory for
nuclear matter | The work that Peter Schuck and I carried out during the nineties in
collaboration with the Lyon and Darmstadt theory groups is summarized. I
retrace how our theoretical developments combined with experimental results
concerning the in-medium modification of the pion-pion interaction allowed a
clarification of the chiral status of the sigma meson introduced in
relativistic theories of nuclear matter. This enabled us to build a
relativistic chiral theory, now called the chiral confining model of nuclear
matter, which includes the effect of the nucleon substructure, namely the
response of the nucleon to the nuclear scalar field generating an efficient and
natural contribution to the saturation mechanism. Using parameters from a
QCD-connected version of the chiral confining model, I describe the relative
roles of the chiral scalar field and two-pion (or two-rho) exchange for the
in-medium $NN$ attractive interaction and the associated sources of three-body
interactions needed for the saturation mechanism. | Guy Chanfray | 2023-10-30T13:36:28Z | http://arxiv.org/abs/2310.19532v1 | # Scalar field, nucleon structure and relativistic chiral theory for nuclear matter
###### Abstract
The work that Peter Schuck and I carried out during the nineties in collaboration with the Lyon and Darmstadt theory groups is summarized. I retrace how our theoretical developments combined with experimental results concerning the in-medium modification of the pion-pion interaction allowed a clarification of the chiral status of the sigma meson introduced in relativistic theories of nuclear matter. This enabled us to build a relativistic chiral theory, now called the chiral confining model of nuclear matter, which includes the effect of the nucleon substructure, namely the response of the nucleon to the nuclear scalar field generating an efficient and natural contribution to the saturation mechanism. Using parameters from a QCD-connected version of the chiral confining model, I describe the relative roles of the chiral scalar field and two-pion (or two-rho) exchange for the in-medium \(NN\) attractive interaction and the associated sources of three-body interactions needed for the saturation mechanism.
Keywords: Chiral Lagrangians, confinement, nuclear matter
Footnote †: journal: Eur. Phys. J. A
## 1 Introduction
In the early nineties, Peter Schuck and I published, with other collaborators from the Lyon and Darmstadt groups, a set of papers concerning the in-medium modification of the \(\pi\pi\) interaction and of the associated resonances, namely the rho meson and that very elusive object known as the sigma meson [1; 2; 3; 4; 5; 6; 7; 8; 9]. In both cases a quite spectacular accumulation of strength was obtained in the associated spectral functions at low energy, just above the two-pion threshold. Concerning the rho meson, we predicted a rather dramatic change in its spectral function when the density is increased [4; 5]. A few years later, in collaboration with R. Rapp and J. Wambach, I incorporated this effect in a dynamical approach to explain the dilepton production data of the CERES collaboration in relativistic heavy ion collisions at CERN/SPS [10; 11]. This scenario, once completed by the direct coupling of the rho to some N*h excitations [12; 13], made it possible to interpret the broadening of the dilepton spectrum observed by the NA60 collaboration [14]. We also showed that the broadening we had predicted could be interpreted as a manifestation of chiral symmetry restoration through a mixing of vector and axial correlators [15; 16; 13; 17; 18].
Although these works on the in-medium rho meson had a notable impact, I will rather discuss in what follows the properties of the sigma meson and, consequently, the question of the role of the scalar field in nuclear physics. The in-medium scalar-isoscalar modes in nuclei have been studied experimentally through two-pion production in reactions induced either by pions [19; 20] or photons [21] on various nuclei. All these collaborations have observed a systematic A-dependent downward shift of the strength when the pion pair is produced in the sigma meson channel. These results attracted considerable attention in the nuclear physics community and raised three related questions:
1. Do these spectacular medium effects affect the exchange of the "sigma meson" in nuclear matter?
2. What is the relationship with the "nuclear physics sigma meson" of relativistic mean field theory?
3. What is the real nature of this "nuclear physics sigma meson", regarding the constraints from chiral symmetry?
To answer the second and third questions let us consider the relativistic mean-field approaches initiated by Walecka and collaborators [22; 23] where the nucleons move in an attractive scalar and a repulsive vector background field. It provides an economical saturation mechanism, and a well-known success is the correct magnitude of the spin-orbit potential, to which the large vector and scalar fields contribute in an additive way. Now the question of the very nature of these background fields has to be elucidated. It is highly desirable to clarify their relationship with the QCD condensates, in particular the chiral quark condensate, and more generally with the low energy realization of chiral symmetry, which is spontaneously broken in the QCD vacuum and is expected to be progressively restored when the density increases. If the origin of the repulsive vector field can be safely identified with omega vector-meson exchange, the real nature of the attractive Lorentz scalar field has been a controversial subject, since there is no sharp scalar resonance with a mass of about \(500-700\) MeV which would lead to a simple interaction based on scalar particle exchange.
To bridge the gap between relativistic theories of the Walecka type and approaches insisting on chiral symmetry, one has to map the "nuclear physics sigma meson" of the Walecka model (let us call it \(\sigma_{W}\) from now on) onto a chiral quantity. For instance, one may be tempted to identify \(\sigma_{W}\) with \(\sigma\), the chiral partner of the pion. This is however forbidden by chiral constraints. The point was first addressed by Birse [24]: it would lead to the presence of terms of order \(M_{\pi}\) in the \(NN\) interaction, which is not allowed. The other possibility is to formulate an effective theory parametrized in terms of the field associated with the fluctuations of the chiral condensate in a \(SU(2)\) matrix form, \(W=\sigma+i\vec{\tau}\cdot\vec{\pi}\), by going from cartesian to polar coordinates, i.e., going from a linear to a non-linear representation: \(W=\sigma+i\vec{\tau}\cdot\vec{\pi}\equiv S\,U\equiv(s+F_{\pi})\,U\equiv(\sigma_{W}+F_{\pi})\,e^{i\,\vec{\tau}\cdot\vec{\phi}(x)/F_{\pi}}\). It was instead proposed and justified in Ref. [25] to identify \(\sigma_{W}\), not with \(\sigma\), but with the chiral invariant field \(s=S-F_{\pi}\) associated with the radial fluctuation of the chiral condensate, \(S\), around the "chiral radius" \(F_{\pi}\), identified with the pion decay constant. The sigma and the pion, associated with the amplitude (\(s\equiv\sigma_{W}\)) and phase (\(\vec{\phi}\)) fluctuations of this condensate, are considered in our approach to be effective degrees of freedom. Their dynamics are governed by an effective chiral potential, \(V_{\chi}\,(\sigma,\vec{\pi})\), having the typical Mexican hat shape associated with the broken (chiral) symmetry of the QCD vacuum. This proposal, which gives a plausible answer to the long standing problem of the chiral status of Walecka theories, also has the merit of respecting all the desired chiral constraints. In particular the correspondence \(s\equiv\sigma_{W}\) generates a coupling of the scalar field to the derivatives of the pion field. Hence the radial mode decouples from low-energy pions (as the pion is a quasi-Goldstone boson), whose dynamics is governed by chiral perturbation theory. It follows that the answer to the first question is negative. The strong medium effect seen in the \(2\pi\) production experiments is associated with the dressing of the \(\sigma\) propagator by in-medium modified \(2\pi\) states, the single pion line being replaced by a pion branch, a collective mixture of pion, particle-hole and \(\Delta\)-hole states. However, this in-medium modified sigma exchange between nucleons has to be completed by other exchanges, with resulting delicate compensations. Their origin is the well-known suppression, in the case of pseudo-scalar coupling, of pair contributions by \(\sigma\) exchange in the \(\pi N\) amplitude. This translates into the elimination of the sigma dressing in the \(NN\) interaction. We have explicitly checked that this cancellation holds to all orders in the dressing of the sigma. The net result amounts to the exchange of the \(s\) field, which is essentially free of medium modification since it decouples from the pion field \(\vec{\phi}\), which itself has a pseudovector coupling to the nucleon. It is this chiral invariant object which is identified with the "nuclear physics sigma meson". A detailed discussion of this somewhat subtle topic is given in [25; 26], but it can be concisely summarized by figure 3 of [26].
Due to the aforementioned compensations, the total "correlated \(2\pi\)-pion chiral partner \(\sigma\)" exchange reduces to \(s\equiv\sigma_{W}\) exchange if we neglect the tiny contribution from the \(\pi\pi\) interaction, which vanishes in the chiral limit.
## 2 Chiral symmetry and nucleon structure
In the early 2000s, we realized that the above construction faces two major problems related to nuclear stability and the underlying nucleon structure.
Concerning the first point, there is a well-identified problem regarding nuclear saturation with the usual chiral effective theories [27; 28; 29; 18]: independently of the particular chiral model, in the nuclear medium the value of the scalar field \(S\) (\(\equiv S_{medium}\)) is different from the one in vacuum (\(\equiv S_{vacuum}\), the minimum of the vacuum effective potential, which has a "Mexican hat" shape). At \(S_{medium}\) the chiral potential has a smaller curvature: \(V^{\prime\prime}(S_{medium})<V^{\prime\prime}(S_{vacuum})\). This single effect lowers the sigma mass and destroys the stability, which is a problem for the applicability of such effective theories in the nuclear context. The effect can be associated with an \(s^{3}\) tadpole diagram generating attractive three-body forces which destroy saturation even if the repulsive Z graph from the Walecka mechanism is present.
The second point is associated with the chiral properties of the nucleon, namely the pion-nucleon sigma term and the chiral susceptibility of the nucleon. According to the lattice data analysis of the Adelaide group [30; 31; 32; 33], the nucleon mass can be expanded in terms of the pion mass squared, \(M_{\pi}^{2}\), as \(M_{N}(M_{\pi}^{2})=a_{0}+a_{2}\,M_{\pi}^{2}\,+\,a_{4}\,M_{\pi}^{4}\,+\,...\,+\,\Sigma_{\pi}(M_{\pi}^{2},\,\Lambda)\), where the pionic self-energy, \(\Sigma_{\pi}(M_{\pi}^{2},\Lambda)\), containing the non-analytical contribution, is explicitly separated out. The latter is calculated with just one adjustable cutoff parameter \(\Lambda\) entering the \(\pi NN,\pi N\Delta\) form factors regularizing the pion loops. While the \(a_{2}\) parameter is related to the non-pionic piece of the \(\pi N\) sigma term, \(a_{4}\) is related to the nucleon QCD scalar susceptibility. The important point is that \(a_{4}\simeq-0.5\,GeV^{-3}\) is essentially compatible with zero, in the sense that it is much smaller in magnitude than in a chiral effective model, \((a_{4})_{L\sigma M}=-F_{\pi}\,g_{S}/2M_{\sigma}^{4}\simeq-3.5\,GeV^{-3}\), where the nucleon is seen as a juxtaposition of three constituent quarks getting their mass from the chiral condensate [34] (\(g_{S}\) is the scalar coupling constant of the nucleon and \(M_{\sigma}\) is the sigma mass in the linear sigma model).
The common origin of these two failures can be attributed to the absence of confinement. In reality the composite nucleon responds to the nuclear environment by readjusting its confined quark structure, as pointed out in the pioneering paper of P. Guichon [35] at the origin of the Quark Meson Coupling (QMC) model [36]. The resulting polarization of the nucleon can be accounted for by the phenomenological introduction of a positive scalar nucleon response in the nucleon mass evolution. The physical motivation for introducing this nucleonic response is the observation that nucleons experience huge fields at finite density, e.g., the scalar field is of the order of a few hundred MeV at saturation density. Nucleons, being in reality composite objects, react against the nuclear environment (i.e., the background nuclear scalar field) through a (self-consistent) modification of the quark wave functions.
## 3 The chiral confining model
These considerations led to the development of a phenomenological model [37; 38; 39; 40; 41], which we now call the chiral confining model, where we complemented the relativistic chiral approach in such a way that the effect of the nucleon response is able to counterbalance the attractive chiral tadpole diagram and yield good saturation properties, especially the correct curvature coefficient, namely the incompressibility empirical parameter \(\kappa_{sat}\). It is described by a Lagrangian, with standard Yukawa couplings of the nucleon to various mesonic fields, \({\cal L}=\bar{\Psi}\,i\gamma^{\mu}\partial_{\mu}\Psi\,+\,{\cal L}_{s}\,+\,{\cal L}_{\omega}\,+\,{\cal L}_{\rho}\,+\,{\cal L}_{\pi}\), with:
\[{\cal L}_{s}=-M_{N}^{*}(s)\,\bar{\Psi}\Psi\,-\,V(s)\,+\,\frac{1}{2}\,\partial^{\mu}s\,\partial_{\mu}s\]
\[{\cal L}_{\omega}=-g_{V}\,\omega_{\mu}\,\bar{\Psi}\gamma^{\mu}\Psi\,+\,\frac{1}{2}\,m_{V}^{2}\,\omega^{\mu}\omega_{\mu}\,-\,\frac{1}{4}\,F^{\mu\nu}F_{\mu\nu}\]
\[{\cal L}_{\rho}=-g_{\rho}\,\rho_{a\mu}\,\bar{\Psi}\gamma^{\mu}\tau_{a}\Psi\,-\,g_{\rho}\,\frac{\kappa_{\rho}}{2\,M_{N}}\,\partial_{\nu}\rho_{a\mu}\,\bar{\Psi}\sigma^{\mu\nu}\tau_{a}\Psi\,+\,\frac{1}{2}\,m_{\rho}^{2}\,\rho_{a\mu}\rho_{a}^{\mu}\,-\,\frac{1}{4}\,G_{a}^{\mu\nu}G_{a\mu\nu}\]
\[{\cal L}_{\pi}=\frac{g_{A}}{2\,F_{\pi}}\,\partial_{\mu}\phi_{a\pi}\,\bar{\Psi}\gamma^{\mu}\gamma^{5}\tau_{a}\Psi\,-\,\frac{1}{2}\,m_{\pi}^{2}\,\phi_{a\pi}^{2}\,+\,\frac{1}{2}\,\partial^{\mu}\phi_{a\pi}\,\partial_{\mu}\phi_{a\pi}. \tag{1}\]
It involves the scalar field \(s\) (the "nuclear physics sigma meson" \(\sigma_{W}\)), the pion field \(\phi_{a\pi}\), and the vector fields associated with the omega meson channel (\(\omega^{\mu}\)) and with the rho meson channel (\(\rho_{a}^{\mu}\)). Each meson-nucleon vertex is regularized by a monopole form factor (with cutoffs \(\Lambda_{S,\pi,V,\rho}\)), mainly originating from the compositeness of the nucleon, hence generating a bare OBE Bonn-like \(NN\) interaction. This potential, completed by a fictitious \(\sigma^{\prime}\) meson exchange simulating the effect of two-pion exchange with \(\Delta\)'s in the intermediate state, is represented in fig. 1.
This is a rather standard Lagrangian, but with no ad hoc density dependent coupling constants. Instead, the three- (multi-)body forces are generated by two specific crucial ingredients beyond the simplest approach: the introduction of an in-medium modified nucleon mass, which is supposed to embed its quark substructure, and the presence of a chiral effective potential associated with the chirally broken vacuum. The effective Dirac nucleon mass \(M_{N}^{*}(s)\) deviates from the bare nucleon mass in the presence of the nuclear scalar field \(s\):
\[M_{N}^{*}(s)=M_{N}+g_{S}\,s+\frac{1}{2}\kappa_{NS}\,s^{2}+{\cal O}(s^{3}). \tag{2}\]
Figure 1: The full bare NN potential (full line) and the various contributions. The values of the parameters are those given in the text of section 4.
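To make Eq. (2) concrete, the following short numerical sketch evaluates the truncated effective mass. All inputs are illustrative assumptions rather than fitted values of the model: the coupling is set to the pure L\(\sigma\)M value \(g_{S}=M_{N}/F_{\pi}\) discussed below, \(C=1.4\) anticipates the Bayesian result quoted at the end of this section, and \(\bar{s}=-30\) MeV is a hypothetical mean scalar field chosen so that \(g_{S}\bar{s}\) is of the order of the few hundred MeV mentioned in section 2.

```python
# Sketch: in-medium Dirac nucleon mass M_N*(s) of Eq. (2), truncated at O(s^2).
# All numbers are illustrative assumptions, not the paper's fitted parameters.
M_N  = 939.0   # vacuum nucleon mass (MeV)
F_PI = 92.4    # pion decay constant (MeV)
C    = 1.4     # dimensionless nucleon response parameter (assumed)

g_S      = M_N / F_PI                # scalar coupling, pure L-sigma-M value
kappa_NS = 2.0 * M_N * C / F_PI**2   # scalar susceptibility, inverted Eq. (3)

def effective_mass(s):
    """M_N*(s) = M_N + g_S s + (1/2) kappa_NS s^2, Eq. (2) truncated."""
    return M_N + g_S * s + 0.5 * kappa_NS * s**2

sbar = -30.0  # hypothetical mean scalar field at saturation (MeV)
print(f"g_S = {g_S:.2f}, kappa_NS = {kappa_NS:.4f} MeV^-1")
print(f"g_S * sbar = {g_S * sbar:.0f} MeV (scalar attraction scale)")
print(f"M_N*(sbar) = {effective_mass(sbar):.1f} MeV")
```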
In [37; 26; 38; 39; 40; 42], we took the pure linear sigma model value (\(g_{S}=M_{N}/F_{\pi}\)) for the scalar coupling constant, but in the recent work [41] \(g_{S}\) was allowed to deviate from the L\(\sigma\)M value and was fixed by a Bayesian analysis. This quantity actually corresponds to the first order response of the nucleon to an external scalar field and can be obtained in an underlying microscopic model of the nucleon. The nucleon scalar susceptibility \(\kappa_{\rm NS}\) is another response parameter which reflects the polarization of the nucleon, i.e., the self-consistent readjustment of the quark wave function in the presence of the scalar field. Very generally the scalar coupling constant, \(g_{S}\), and the nucleon response parameter, \(\kappa_{NS}\), depend on the subquark structure and the confinement mechanism as well as the effect of spontaneous chiral symmetry breaking. In our previous works [37; 38; 39; 40; 41; 42] we introduced a dimensionless parameter,
\[C\equiv\frac{\kappa_{\rm NS}\,F_{\pi}^{2}}{2M_{N}}, \tag{3}\]
which is expected to be of the order \(C\sim 0.5\) as in the MIT bag used in the QMC framework. Such a positive response parameter can be generated if confinement dominates spontaneous chiral symmetry breaking in the nucleon mass origin, as discussed in Ref. [43] within particular models.
In the majority of our previous works [26; 37; 38; 39; 40; 41; 42], the chiral effective potential had the simplest linear sigma model (L\(\sigma\)M) form:
\[V_{\chi,L\sigma M}(s)=\frac{1}{2}\,M_{\sigma}^{2}\,s^{2}+\frac{1}{2}\frac{M_{ \sigma}^{2}-M_{\pi}^{2}}{F_{\pi}}\,s^{3}+\frac{1}{8}\,\frac{M_{\sigma}^{2}-M_ {\pi}^{2}}{F_{\pi}^{2}}\,s^{4}. \tag{4}\]
Combining the effects of the tadpole diagram (i.e., associated with the \(s^{3}\) term in the chiral potential) and of the response parameters, it is possible to show that the scalar sector generates a three-nucleon contribution to the energy per nucleon:
\[E^{(3b-s)} \simeq \frac{g_{S}^{2}}{2\,M_{\sigma}^{4}}\,\left(\kappa_{\rm NS}-\frac{ g_{S}}{F_{\pi}}\right)\,\rho_{s}^{2} \tag{5}\] \[= \frac{g_{S}^{3}}{2\,M_{\sigma}^{4}\,F_{\pi}}\,\left(2\,\frac{M_{N }}{g_{S}\,F_{\pi}}\,C\,-\,1\right)\,\rho_{s}^{2}.\]
This is the result already quoted in Eq. (44) of Ref. [34], but without the factor \(M_{N}/g_{S}F_{\pi}\) which appears when the scalar coupling constant is allowed to deviate from its pure L\(\sigma\)M value as in Ref. [41]. If the confinement (\(C\)) dominates over the chiral attraction (tadpole diagram), it provides a very natural saturation mechanism; however, at variance with the QMC model, which ignores the attractive tadpole diagram present in the chiral approach, this requires a \(C\) parameter close to or even larger than one [37; 38; 39; 40; 41; 42; 44].
The explicit connection between lattice QCD parameters \(a_{2},a_{4}\) and the response parameter \(g_{S},C\), in the L\(\sigma\)M case has been given in [38; 39]:
\[a_{2}=\frac{F_{\pi}\,g_{S}}{M_{\sigma}^{2}},\qquad a_{4}=\frac{F_{\pi}\,g_{S}}{2M_{\sigma}^{4}}\,\left(2\,\frac{M_{N}}{g_{S}\,F_{\pi}}\,C\,-\,3\right). \tag{6}\]
Notice that in the expression of \(a_{4}\) the factor \(M_{N}/F_{\pi}g_{S}\) in front of \(C\), present in our recent paper [41], was absent in [38; 39] since the nucleon mass was fixed to be \(M_{N}=F_{\pi}g_{S}\). Depending on the details of the chiral extrapolation, the extracted values of the parameters lie in the range between \(a_{2}\simeq 1.5\,GeV^{-1},\,\,a_{4}\simeq-0.5\,GeV^{-3}\)[31] and \(a_{2}\simeq 1.0\,GeV^{-1},\,\,a_{4}\simeq-0.25\,GeV^{-3}\)[30; 33]. The quantity \(a_{2}M_{\pi}^{2}\sim 20-30\,MeV\) represents the non pionic piece of the sigma commutator directly associated with the scalar field, \(s\) (see the detailed discussion of this quantity in Ref. [38]). One very robust conclusion is that the lattice result for \(a_{4}\) is much smaller in magnitude than the one obtained in the simplest linear sigma model ignoring the nucleonic response (\(C=0\)), for which \(a_{4}\simeq-3.5\,GeV^{-3}\). Hence lattice data require a strong compensation from the effects governing the three-body repulsive force needed for the saturation mechanism: compare Eqs. (6) and (5). Consequently both lattice data constraints and nuclear matter phenomenology require a quite large value of the dimensionless response parameter, \(C\), at least larger than one. Moreover, in a recent work based on a Bayesian analysis with lattice data as an input [41], we found that the response parameter is strongly constrained to a value \(C\sim 1.4\), very close to the value where the scalar susceptibility changes sign: \(C=1.5\).
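To make the constraint quantitative, the following minimal sketch evaluates Eq. (6) numerically; the values \(M_{\sigma}=800\,MeV\) and the pure L\(\sigma\)M coupling \(g_{S}=M_{N}/F_{\pi}\) are illustrative assumptions, not fitted parameters:

```python
# Illustrative inputs (assumed, not fitted): pure LsigmaM coupling and a
# representative sigma mass.
M_N, F_pi = 0.939, 0.0924          # GeV
M_sigma = 0.800                    # GeV
g_S = M_N / F_pi                   # pure linear-sigma-model value

def a4(C):
    """Lattice parameter a4 from Eq. (6), as a function of the response C."""
    return (F_pi * g_S) / (2 * M_sigma**4) * (2 * (M_N / (g_S * F_pi)) * C - 3)

print(f"a4(C=0) = {a4(0.0):+.2f} GeV^-3")     # ~ -3.4, far from lattice values
for a4_lat in (-0.5, -0.25):                   # lattice range quoted above
    # inversion below uses g_S = M_N/F_pi, so M_N/(g_S F_pi) = 1
    C_req = 0.5 * (3 + 2 * a4_lat * M_sigma**4 / (F_pi * g_S))
    print(f"a4 = {a4_lat:+.2f} GeV^-3 requires C = {C_req:.2f}")
```

With these inputs one finds \(a_{4}(C=0)\simeq-3.4\,GeV^{-3}\) and \(C\simeq 1.3-1.4\) over the lattice window, in line with the constraint quoted above.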
This model has been applied in the past to the equation of state of nuclear matter [37; 38; 39; 40; 41; 44] and neutron stars [39; 42; 44], as well as to the study of chiral properties of nuclear matter [37; 26; 34; 38], at different levels of approximation in the treatment of the many-body problem (RMF, Relativistic Hartree Fock or RHF, pion loop correlation energy). Among other results, one important lesson is the importance of a coherent treatment of the Fock term, including the rearrangement terms [39; 44], which has to be consistently incorporated at the level of the self-energies and of the total energy. Although this problem is of minor importance for what concerns the binding energy around nuclear matter density, the use of a Hartree-Fock basis in place of the Hartree basis for the nucleon Dirac wave function may play a very important role at high density in limiting the maximum mass of hyperonic neutron stars, as pointed out in Ref. [42].
The problem one has to face is that it seems impossible to find a realistic confining model for the nucleon able to generate a value of \(C\) larger than one, as required by lattice data and the saturation properties of nuclear matter. As pointed out in our recent paper [45], one possible reason for this discrepancy between models and phenomenological values of \(C\) lies in the use of the linear sigma model (L\(\sigma\)M), which is probably too naive. Hence one should certainly use an enriched chiral effective potential from a model able to give a correct description of the low energy realization of chiral symmetry in the hadronic world. A good, easily tractable candidate is the Nambu-Jona-Lasinio (NJL) model defined by the Lagrangian given in Eq. (9) of [45] or Eq. (59) of [46]: it depends on four parameters: the coupling constants \(G_{1}\) (scalar), \(G_{2}\) (vector), the current quark mass \(m\) and a (non covariant) cutoff parameter \(\Lambda\). Three of these parameters (\(G_{1}\), \(m\), and \(\Lambda\)) are adjusted to reproduce the pion mass, the pion decay constant and the quark condensate, ignoring pion-axial mixing. We refer the reader to [43; 45] for more details. As established in [45], the net effect of the use of the NJL model is to replace the L\(\sigma\)M chiral potential by its NJL equivalent, which is very well approximated by its expansion to third order in \(s\):
\[V_{\chi,\text{NJL}}(s)=\frac{1}{2}\,M_{\sigma}^{2}\,s^{2}+\frac{1}{2}\,\frac{M _{\sigma}^{2}-M_{\pi}^{2}}{F_{\pi}}\,s^{3}\left(1-C_{\chi,\text{NJL}}\right)+ \dots\,. \tag{7}\]
The effective sigma mass \(M_{\sigma}\sim 2M_{0}\) (\(M_{0}\) being the vacuum value of the constituent quark mass) as well as the pion mass \(M_{\pi}\) and \(F_{\pi}\) are calculated within the model. This constitutes a new version of the model, called the NJL chiral confining model, as proposed in [45] and in another recent paper [46]. The main difference with the original phenomenological version using the L\(\sigma\)M lies in the presence of the \(C_{\chi,\text{NJL}}\) parameter whose expression in terms of a NJL loop integral is given in [45]. For typical values of the NJL parameters its value is in the range \(C_{\chi,\text{NJL}}\simeq 0.4-0.5\). The effect of the parameter \(C_{\chi,\text{NJL}}\) is thus to reduce the attractive tadpole diagram and make the chiral potential more repulsive (see the short numerical sketch after Eq. (8)). The net visible effect of this enriched chiral potential is to modify the three-body force contribution to the energy per nucleon generated by the scalar sector, initially given by Eq. (5), according to:
\[E^{(3b-s)}=\frac{g_{S}^{3}}{2\,M_{\sigma}^{4}\,F_{\pi}}\,\left(2\,\left[\frac{ M_{N}}{g_{S}\,F_{\pi}}\,C+\,\frac{1}{2}\,C_{\chi}\right]\,-\,1\right)\,\rho_{s}^{2}. \tag{8}\]
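The extra repulsion brought by \(C_{\chi,\text{NJL}}\) can be seen directly by evaluating Eqs. (4) and (7) side by side. The sketch below uses the NJL-model values quoted later in the text (\(M_{\sigma}=716.4\,MeV\), \(F_{\pi}=91.9\,MeV\), \(C_{\chi}=0.488\)); the chosen grid of \(s\) values is purely illustrative:

```python
# NJL-model values quoted later in the text:
M_sigma, M_pi, F_pi = 0.7164, 0.140, 0.0919   # GeV
C_chi = 0.488

def V_LsigmaM(s):
    """Eq. (4): quartic linear-sigma-model chiral potential."""
    c3 = 0.5 * (M_sigma**2 - M_pi**2) / F_pi
    c4 = 0.125 * (M_sigma**2 - M_pi**2) / F_pi**2
    return 0.5 * M_sigma**2 * s**2 + c3 * s**3 + c4 * s**4

def V_NJL(s):
    """Eq. (7): cubic NJL potential, tadpole reduced by (1 - C_chi)."""
    c3 = 0.5 * (M_sigma**2 - M_pi**2) / F_pi * (1 - C_chi)
    return 0.5 * M_sigma**2 * s**2 + c3 * s**3

for s in (-0.02, -0.04, -0.06):               # typical in-medium s values (GeV)
    print(f"s = {s:+.2f} GeV : V_LsigmaM = {V_LsigmaM(s)*1e3:6.3f}, "
          f"V_NJL = {V_NJL(s)*1e3:6.3f}  (10^-3 GeV^4)")
```

For negative \(s\) of the typical in-medium size, \(V_{\chi,\text{NJL}}\) stays visibly above \(V_{\chi,L\sigma M}\), i.e., the NJL potential is indeed more repulsive.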
The lattice QCD parameter \(a_{4}\) is modified as well, according to (compare with Eq. (6)):
\[a_{4}=\frac{F_{\pi}\,g_{S}}{2M_{\sigma}^{4}}\,\left(2\,\left[\frac{M_{N}}{g_{ S}\,F_{\pi}}\,C+\,\frac{3}{2}\,C_{\chi}\right]\,-\,3\right). \tag{9}\]
The important conclusion, discussed in detail in Ref. [45], is that for a given three-nucleon force allowing saturation, one needs a lower value of the dimensionless parameter \(C\) to obtain a small value of the lattice parameter \(a_{4}\) compatible with lattice data. Said slightly differently, lattice data (a small value of \(a_{4}\)) become compatible with model values of \(C\) while preserving the three-body repulsive force needed for the saturation mechanism.
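Solving Eq. (9) for \(C\) makes this relaxation explicit. The sketch below reuses the illustrative L\(\sigma\)M-like inputs of the previous sketch and a typical \(C_{\chi}=0.45\); all numbers are assumptions for orientation only:

```python
# Illustrative inputs as before; now include the NJL correction C_chi.
M_N, F_pi, M_sigma = 0.939, 0.0924, 0.800   # GeV (assumed)
g_S = M_N / F_pi
C_chi = 0.45                                 # typical NJL value (0.4-0.5)

def C_required(a4_target, C_chi):
    """Invert Eq. (9): the C needed to reproduce a given lattice a4."""
    x = 0.5 * (3 + 2 * a4_target * M_sigma**4 / (F_pi * g_S))
    return (g_S * F_pi / M_N) * (x - 1.5 * C_chi)

for a4_lat in (-0.5, -0.25):
    print(f"a4 = {a4_lat:+.2f} GeV^-3 : C = {C_required(a4_lat, 0.0):.2f} "
          f"(C_chi = 0)  ->  C = {C_required(a4_lat, C_chi):.2f}")
```

The required \(C\) drops from about \(1.3-1.4\) to about \(0.6-0.7\), much closer to what confining nucleon models can actually deliver.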
A plausible physical picture underlying our approach can be summarized as follows: nucleons are viewed as Y-shaped strings with constituent quarks at the ends, moving in an in-medium modified broken vacuum. Hence, what is usually called the "nuclear medium" can be seen as a "shifted vacuum" with a lower value of the constituent quark mass, which coincides with the in-medium expectation value, \(M=\bar{\mathcal{S}}(\rho)\), of the chiral invariant scalar field, \(\mathcal{S}=(M_{0}/F_{\pi})\,(s+F_{\pi})\), associated with the radial fluctuation mode of the chiral condensate. This idea has been implemented in our recent paper [46] in a framework inspired from the field correlator method (FCM) developed by Y. Simonov and collaborators [47; 48; 49; 50; 51]. By performing a gluon averaging in the Euclidean QCD partition function, one can generate an effective non perturbative interaction between quarks mediated by a gluon correlator parametrized in a convenient gaussian form [48; 49]:
\[D(x)=\frac{\sigma}{2\pi T_{g}^{2}}\,e^{-x^{2}/4T_{g}^{2}}\,\,\,\,\text{ with}\,\,\,\,\,T_{g}=\sqrt{\frac{9\sigma}{\pi^{3}\mathcal{G}_{2}}}. \tag{10}\]
It depends on two QCD quantities measured in lattice QCD [52], namely the string tension \(\sigma=0.18\) GeV\({}^{2}\) and the gluon correlation length, \(T_{g}=0.25\div 0.3\,fm\), itself related to the gluon condensate \(\mathcal{G}_{2}\). In the presence of a string junction (or a static heavy quark) placed at some point \(\mathbf{r_{0}}\) in the QCD vacuum, multiple gluon correlations will generate a string with width \(T_{g}\) which develops between the point \(\mathbf{r_{0}}\) and the two light quarks located at \(\mathbf{x}\) and \(\mathbf{y}\), with \(\mathbf{r}=\mathbf{x}-\mathbf{y}\) and \(\mathbf{R}=(\mathbf{x}+\mathbf{y})/\mathbf{2}\,-\,\mathbf{r_{0}}\), the string length variable. As explained in detail in section 3.3 of Ref. [46], modulo some ansatz prescription, this approach allows one to generate simultaneously, at a semi-quantitative level, a confining interaction built from the gluon correlator with long distance (\(R\gg T_{g}\)) behaviour \(V_{C}(R)=\sigma\,R\), together with an equivalent NJL model with scalar interaction strength \(G_{1}=120\pi\sigma T_{g}^{4}/(4N_{c}N_{F})\sim 10\) GeV\({}^{-2}\) and
cutoff \(\Lambda\sim 1/T_{g}\sim 600\) MeV. If we take accepted values of the string tension, \(\sigma=0.18\,{\rm GeV}^{2}\), and gluon condensate, \({\cal G}_{2}=0.025\,{\rm GeV}^{4}\), which implies a gluon correlation length, \(T_{g}=\sqrt{9\sigma/\pi^{3}{\cal G}_{2}}=0.286\,{\rm fm}\), one obtains \(G_{1}=12.514\,{\rm GeV}^{-2}\). We thus fix \(\Lambda=0.604\,GeV\) and a current quark mass, \(m=5.8\,MeV\), to obtain \(M_{0}=356.7\,MeV\), \(F_{\pi}=91.9\,MeV\), \(M_{\pi}=140\,MeV\), \(\langle\bar{q}q\rangle=-\,(241.1\,MeV)^{3}\). It is remarkable that, taking accepted values of the two basic QCD parameters, namely the string tension and the gluon condensate, one recovers the NJL parameters yielding the good NJL phenomenology, or at least their order of magnitude. We also find that the NJL parameter entering the cubic term of the chiral effective potential is significant, \(C_{\chi}=0.488\), and the effective sigma mass parameter is \(M_{\sigma}=716.4\,MeV\).
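This chain of arithmetic is easy to retrace; the short sketch below evaluates \(T_{g}\) and \(G_{1}\) from the two QCD inputs (with \(N_{c}=3\) and, as an assumption, \(N_{F}=2\)), reproducing the quoted numbers to within rounding:

```python
import numpy as np

hbar_c = 0.1973                     # GeV fm
sigma, G2 = 0.18, 0.025             # string tension (GeV^2), gluon condensate (GeV^4)
Nc, Nf = 3, 2                       # Nf = 2 assumed (u, d quarks)

T_g = np.sqrt(9 * sigma / (np.pi**3 * G2))          # GeV^-1
G1 = 120 * np.pi * sigma * T_g**4 / (4 * Nc * Nf)   # GeV^-2

print(f"T_g = {T_g * hbar_c:.3f} fm")          # ~0.285 fm, quoted as 0.286 fm
print(f"G1  = {G1:.2f} GeV^-2")                # ~12.4, quoted as 12.514 GeV^-2
print(f"1/T_g = {1e3 / T_g:.0f} MeV")          # ~690 MeV, same order as Lambda
```

The small residual differences with the quoted values come from rounding of the inputs.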
It is also possible to establish a bound state equation [46] which is equivalent to a Dirac equation for a constituent quark with mass \(M=\bar{\cal S}(\rho)\) moving in a BCS-like state representing the (modified) broken QCD vacuum and submitted to the confining potential with origin at the string junction point (see also [53]). The nucleon wave function is just the product of the three quark orbitals properly projected onto the color singlet state with \(I=J=1/2\). After center of mass correction, the quark core contribution to the bare nucleon mass is found to be \(M_{N}^{(core)}=3.62\,\sqrt{\sigma}\). The response parameters are finally obtained as:
\[g_{S} = \frac{M_{0}}{F_{\pi}}\left(\frac{\partial M_{N}^{(core)}({\cal S} )}{\partial{\cal S}}\right)_{{\cal S}=M_{0}}=6.52, \tag{11}\] \[C = \frac{M_{0}^{2}}{2M_{N}}\left(\frac{\partial^{2}M_{N}^{(core)}({ \cal S})}{\partial{\cal S}^{2}}\right)_{{\cal S}=M_{0}}=0.32. \tag{12}\]
The parameters entering the three-body force (Eq.(8)) and the chiral susceptibility (Eq. (9)) are:
\[\tilde{C}_{3}=\frac{M_{N}}{g_{S}F_{\pi}}\,C+\frac{1}{2}\,C_{\chi}=0.75\]
and
\[\tilde{C}_{L}=\frac{M_{N}}{g_{S}F_{\pi}}\,C+\frac{3}{2}\,C_{\chi}=1.23.\]
For the comparison with lattice data our model calculation yields: \(a_{2}=1.175\,GeV^{-1}\), \(a_{4}=-0.615\,GeV^{-3}\). The value of \(a_{2}\) is compatible with lattice data and yields a non pionic contribution to the sigma commutator \(\sigma^{(s)}=a_{2}\,M_{\pi}^{2}=23.5\,MeV\). The value of \(a_{4}\), although a bit large in magnitude, is essentially compatible with lattice data in the sense that it is much smaller than the value obtained in the simplest linear sigma model ignoring the nucleonic response (i.e., \(C=0\)) and the NJL correction (i.e., \(C_{\chi}=0\)), for which \(a_{4}\simeq-3.5\,GeV^{-3}\). Hence the model generates the strong compensation required by lattice data from the effects governing the three-body repulsive force needed for the saturation mechanism: compare Eqs. (9) and (8). It is also possible to calculate the core rms radius and the vertex form factors in the various (scalar, axial and vector) Yukawa channels. The cutoff values for the equivalent monopole form factors regularizing the corresponding Yukawa couplings to the nucleon are all close to \(\Lambda=1\,GeV\).
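These quoted outputs follow from Eqs. (3) and (6)-(9) with the model values of \(g_{S}\), \(C\) and \(C_{\chi}\); the check below assumes the physical nucleon mass and \(F_{\pi}=92.4\,MeV\):

```python
# Cross-check of the quoted model outputs (physical M_N assumed):
M_N, F_pi, M_pi = 0.939, 0.0924, 0.140      # GeV
g_S, M_sigma = 6.52, 0.7164                 # model outputs
C, C_chi = 0.32, 0.488

r = M_N / (g_S * F_pi)
C3_tilde = r * C + 0.5 * C_chi                           # -> 0.74 (quoted 0.75)
CL_tilde = r * C + 1.5 * C_chi                           # -> 1.23

a2 = F_pi * g_S / M_sigma**2                             # -> 1.174 GeV^-1
a4 = F_pi * g_S / (2 * M_sigma**4) * (2 * CL_tilde - 3)  # -> -0.616 GeV^-3
sigma_s = a2 * M_pi**2 * 1e3                             # -> ~23 MeV

print(f"C3 = {C3_tilde:.2f}, CL = {CL_tilde:.2f}, a2 = {a2:.3f} GeV^-1, "
      f"a4 = {a4:.3f} GeV^-3, sigma_s = {sigma_s:.1f} MeV")
```

The outputs agree with the quoted \(\tilde{C}_{3}\), \(\tilde{C}_{L}\), \(a_{2}\) and \(a_{4}\) to within rounding.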
## 4 The relative roles of the scalar field and two-pion exchange in the nuclear matter equation of state
Already in the early nineties, Peter Schuck, Wolfgang Nörenberg and I started to think about the effect of the in-medium modified two-pion (and two-rho) exchange on the nuclear equation of state. In Ref. [54] we used a model hamiltonian containing pion and rho exchanges completed by a quartic interaction in the \(NN\) spin-isospin channel characterized by a Landau-Migdal \(g^{\prime}_{NN}\) parameter, which was also extended to the \(N\Delta\) and \(\Delta\Delta\) interactions. This effect of the pion and rho loops in the presence of short-range correlations on top of the mean field was calculated using the text-book charging formula, generalized to non-static (pion and rho exchange) interactions. Such a calculation contains not only the effect of RPA-like long-range correlations but also includes short-range correlations through the aforementioned screening of the pion and rho exchanges. Using Green's function techniques [54] and with the notations of [38] (except that the screened interactions \(V\) are replaced by \(G\)), the result of the ring summation is
\[E^{ring} = E_{L}^{ring}+E_{T}^{ring}\] \[E_{L}^{ring} = \frac{3}{2\rho}\int\frac{id\omega d{\bf q}}{(2\pi)^{4}}\big{[}-\ln\big{(}1-G_{LNN}\Pi_{N}^{0} \tag{13}\] \[-G_{L\Delta\Delta}\Pi_{\Delta}^{0}-(G_{LN\Delta}^{2}-G_{LNN}\,G_{L\Delta\Delta})\Pi_{N}^{0}\Pi_{\Delta}^{0}\big{)}\] \[-G_{LNN}\Pi_{N}^{0}-G_{L\Delta\Delta}\Pi_{\Delta}^{0}\big{]}\] \[E_{T}^{ring} = \frac{3}{\rho}\int\frac{id\omega d{\bf q}}{(2\pi)^{4}}\big{[}-\ln\big{(}1-G_{TNN}\Pi_{N}^{0}\] (14) \[-G_{T\Delta\Delta}\Pi_{\Delta}^{0}-(G_{TN\Delta}^{2}-G_{TNN}\,G_{T\Delta\Delta})\Pi_{N}^{0}\Pi_{\Delta}^{0}\big{)}\] \[-G_{TNN}\Pi_{N}^{0}-G_{T\Delta\Delta}\Pi_{\Delta}^{0}\big{]},\]
where the third lines of the above two expressions (\(E_{L}^{ring}\), \(E_{T}^{ring}\)) for the energy per nucleon correspond to the subtraction of the mean field Fock terms. In short, \(G_{L}\sim\pi+g^{\prime}\) and \(G_{T}\sim\rho+g^{\prime}\) represent the effective spin-isospin longitudinal and transverse interactions in the various \(NN\), \(N\Delta\) and \(\Delta\Delta\) channels. In a recent paper [46], we used essentially the same approach as in [38]
but with input parameters appearing in the chiral confining model lagrangian (Eq. 1).
Before going further, let us summarize the origin of the various input parameters entering the nuclear matter calculation of [46] and the new one presented below. For clarity we distinguish those coming from the QCD-connected model from those coming from hadron phenomenology.
- For what concerns the parameters entering the bare \(NN\) interaction, the scalar and pionic sectors are entirely given or strongly constrained by the QCD-connected model: \(g_{S}=6.52\), \(M_{\sigma}=716.4\,MeV\), \(\Lambda_{S}=1\,GeV\), \(g_{A}=1.26\), \(F_{\pi}=92.4\,MeV\), \(m_{\pi}=140\,MeV\), \(\Lambda_{\pi}=1\,GeV\). Notice that the cutoff parameters \(\Lambda_{S}\), \(\Lambda_{\pi}\), as well as the cutoff in the vector channel, \(\Lambda_{V}=1\,GeV\), are only approximately compatible with the QCD-connected model.
- The vector-(Lorentz) tensor \(NN\) sector is constrained by well established hadron phenomenology: \(g_{V}=7.5\), \(m_{V}=783\,MeV\), \(g_{\rho}=g_{V}/3\), \(m_{\rho}=770\,MeV\), \(\kappa_{\rho}=6\). We stress that a large value of \(\kappa_{\rho}\) (the strong rho scenario [55] used in the Bonn potential [56; 57]) is required to decrease the (Wigner) tensor force, to get a not too large D state probability in the deuteron. In the present paper we will use slightly different parameters, namely \(g_{V}=8\) and \(\kappa_{\rho}=6.125\).
- The two parameters entering the three-nucleon force, \(C=0.32\) and \(C_{\chi}=0.488\), are given by the QCD-connected model.
- Finally the nuclear matter calculation requires a pair correlation function. We used a simple Jastrow ansatz \(f(r)=1-j_{0}(q_{c}r)\), with \(q_{c}=670\,MeV\) in [46] or \(q_{c}=610\,MeV\) in the present paper (see the short sketch below). The quantity \(q_{c}\), which corresponds to the inverse of the correlation hole size, is obtained with an adapted G matrix calculation (including the fictitious \(\sigma^{\prime}\), see section 2.3 of Ref. [46]) preserving the UV regularization of the loop integrals entering the correlation energy. The effective longitudinal and transverse spin-isospin interactions (up to a factor \(g_{A}^{2}/4F_{\pi}^{2}\) usually included in the definition of the polarization propagator) become \(G_{L}(q)=V_{\pi}(q)+g^{\prime}(q)+2h^{\prime}(q)\) and \(G_{T}(q)=V_{\rho}(q)+g^{\prime}(q)-h^{\prime}(q)\) (see the explicit expressions of \(g^{\prime}(q)\) and \(h^{\prime}(q)\) in Eq. (49) of Ref. [46]). This represents an efficient way of incorporating the effect of short-range correlations. Moreover, to get a sufficiently large value of the Landau-Migdal parameter (\(g^{\prime}=0.59\) in [46], \(g^{\prime}=0.64\) in the present paper), a rather large value of the cutoff, \(\Lambda_{\rho}=2\,GeV\), for the tensor coupling of the rho meson is needed. This is the only constraint or requirement from nuclear matter phenomenology [58].

The starting point is an RMF calculation as in [41], completed by the inclusion, on top of the Hartree scheme, of the pion and rho loops, namely the Fock energy and the correlation energy, including the contribution of the \(\Delta\) resonance. One main difference with [38] is the utilization of the NJL potential (Eq. (7)) in place of the L\(\sigma\)M potential (Eq. (4)), with the value \(C_{\chi}=0.488\) obtained in the underlying QCD-connected NJL model. Another difference is the replacement of the three purely phenomenological \(g^{\prime}=g^{\prime}(q=0)\) Landau-Migdal parameters by a unique one, \(g^{\prime}_{NN}=g^{\prime}_{N\Delta}=g^{\prime}_{\Delta\Delta}=0.59\), generated with the Jastrow ansatz. As discussed in detail in [46], we obtained without further fine tuning a decent description of the saturation properties of nuclear matter.
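For orientation, the Jastrow correlation function quoted above is straightforward to evaluate; with \(q_{c}=610\,MeV\) the correlation hole closes over roughly \(1\,fm\):

```python
import numpy as np

hbar_c = 197.3                      # MeV fm
q_c = 610.0                         # MeV, inverse correlation-hole size

def f_jastrow(r_fm):
    """Jastrow ansatz f(r) = 1 - j0(q_c r), with j0(x) = sin(x)/x."""
    x = q_c * r_fm / hbar_c
    return 1.0 - float(np.sinc(x / np.pi))   # np.sinc(t) = sin(pi t)/(pi t)

for r in (0.0, 0.3, 0.6, 1.0, 2.0):
    print(f"r = {r:.1f} fm : f(r) = {f_jastrow(r):.3f}")
```

The output shows the correlation hole, \(f(0)=0\), healing to \(f(r)\simeq 1\) beyond about \(1\,fm\).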
### Correlation energy and Fermi sea depopulation
This approach is however questionable for various reasons. The first one is the use of a universal \(g^{\prime}\) parameter, i.e., \(g^{\prime}_{NN}=g^{\prime}_{N\Delta}=g^{\prime}_{\Delta\Delta}\). There is indeed a consensus that \(g^{\prime}_{NN}\) is larger than \(g^{\prime}_{N\Delta}\) and \(g^{\prime}_{\Delta\Delta}\), and frequently quoted values are \((g^{\prime}_{NN},g^{\prime}_{N\Delta},g^{\prime}_{\Delta\Delta})=(0.6{-}0.7,0.3,0.5)\)[58]. As discussed below, this deviation from universality strongly reduces the contribution of diagrams with at least one \(\Delta\) in the intermediate state (see fig. 2). In addition, this type of iterative \(\Delta\) box diagram, not reducible to the iteration of single meson (pion or rho) exchange in the \(NN\) sector, is often simulated by a fictitious \(\sigma^{\prime}\) meson exchange. In the \(NN\) potential, its contribution, named \(2\pi\) in fig. 1, is obtained using the parameters \(g_{2\pi}=4.8\equiv g_{\sigma^{\prime}}\), \(m_{2\pi}=550\,\mathrm{MeV}\equiv m_{\sigma^{\prime}}\), \(\Lambda_{2\pi}=1\,\mathrm{GeV}\equiv\Lambda_{\sigma^{\prime}}\). A second weak point is that the contribution to the binding energy of the \(\Delta N\) and \(\Delta\Delta\) diagrams depicted in fig. 2 is only marginally compatible with the \(\sigma^{\prime}\) contribution. Finally, there is the old problem of the double counting in the calculation of the correlation energy on top of the lowest order Brueckner calculation. The two-bubble diagrams of fig. 2,
Figure 2: Diagrams contributing to the correlation energy or to the depopulation of the Fermi sea.
corresponding to the iterated uncorrelated two-pion (and two-rho) exchange, are totally or partially taken into account in the calculation based on a Brueckner G matrix [59]. Hence they should be subtracted from the ring diagram summation. Moreover, the explicit calculation of the diagrams with one or two \(\Delta\)'s in the intermediate state should exactly coincide with the contribution of the \(\sigma^{\prime}\) entering the bare \(NN\) interaction, at least at low density, i.e., in the absence of Pauli-blocking effects. To clarify this question without resorting to a diagrammatic analysis, we reconsider and extend the (non relativistic) G matrix approach described in section 2 of [46].
For this purpose let us consider a nuclear system described by a Hamiltonian where the bare \(NN\) interaction (i.e., \(s,\omega,\pi,\rho\) exchanges) reduces to a two-body potential, written with standard notation as
\[H = \hat{T}+\hat{V}=\sum_{k_{1},k_{2}}<k_{1}\,|\,T\,|\,k_{2}>a_{k_{1}} ^{\dagger}a_{k_{2}} \tag{14}\] \[+ \frac{1}{4}\sum_{k_{1}k_{2}k_{3}k_{4}}<k_{1}\,k_{2}\,|\,\bar{V}\, |\,k_{3}\,k_{4}>a_{k_{1}}^{\dagger}a_{k_{2}}^{\dagger}a_{k_{4}}a_{k_{3}},\]
where \(a\) (\(a^{\dagger}\)) is the annihilation (creation) operator relative to the bare nucleons. According to Goldstone [60], the correlated ground state of the nucleus, \(|0>\), can be obtained from the state of uncorrelated bare nucleons, \(|0>_{uncorr}\), by a unitary transformation as follows
\[|0> = U(a)\,|0>_{uncorr}\equiv\exp(S(a))\,|0>_{uncorr}\] \[S(a) = \frac{1}{4}\left(s_{p_{1}p_{2}h_{1}h_{2}}a_{p_{1}}^{\dagger}a_{p _{2}}^{\dagger}a_{h_{1}}a_{h_{2}}\,-\,hc\right), \tag{15}\]
with \(S(a)\) being an antihermitian operator truncated at 2p-2h excitations (with the notation \(s_{p_{1}p_{2}h_{1}h_{2}}^{*}=s_{h_{1}h_{2}p_{1}p_{2}}\)), which incorporates the effect of short-range correlations. In particular this generates the Fermi sea depopulation, and the occupation numbers for holes and particles are [61]:
\[n_{h} \equiv 1-\Delta n^{h}=1\,-\,\frac{1}{2}\sum_{h_{2}p_{1}p_{2}}|s_{p_{1}p_{2}hh_{2}}|^{2} \tag{16}\] \[n_{p} \equiv \Delta n^{p}=\frac{1}{2}\sum_{p_{2}h_{1}h_{2}}|s_{pp_{2}h_{1}h_{2}}|^{2}\,. \tag{17}\]
In addition, notice that, as explained in detail in [61], the \(\Delta\) resonance states can be incorporated as extra particle states in all parts of this approach. To second order in the \(s_{p_{1}p_{2}h_{1}h_{2}}\) parameters the ground state expectation values of the kinetic energy operator and of the interaction operator are given by
\[\langle\hat{T}\rangle = \sum_{h}n_{h}t_{h}+\sum_{p}n_{p}t_{p}=\langle\hat{T}\rangle_{FG} \tag{18}\] \[+ \frac{1}{4}\sum_{h_{1}h_{2}p_{1}p_{2}}|s_{p_{1}p_{2}h_{1}h_{2}}|^{2}\left(t_{p_{1}}+t_{p_{2}}-t_{h_{1}}-t_{h_{2}}\right)\] \[\langle\hat{V}\rangle = \frac{1}{2}\sum_{h_{1}h_{2}}<h_{1}h_{2}\,|\bar{V}|h_{1}h_{2}> \tag{19}\] \[+ \frac{1}{4}\sum_{h_{1}h_{2}p_{1}p_{2}}<h_{1}h_{2}\,|\bar{V}|p_{1}p_{2}>s_{p_{1}p_{2}h_{1}h_{2}}\] \[+ \frac{1}{4}\sum_{h_{1}h_{2}p_{1}p_{2}}s_{h_{1}h_{2}p_{1}p_{2}}<p_{1}p_{2}|\bar{V}|h_{1}h_{2}>\] \[+ \frac{1}{4}\sum_{h_{1}h_{2}p_{1}p_{2}}|s_{p_{1}p_{2}h_{1}h_{2}}|^{2}\left(v_{p_{1}}+v_{p_{2}}-v_{h_{1}}-v_{h_{2}}\right)\] \[+ \frac{1}{8}\sum_{h_{1}h_{2}p_{1}p_{2}p_{1}^{\prime}p_{2}^{\prime}}s_{h_{1}h_{2}p_{1}p_{2}}<p_{1}p_{2}|\bar{V}|p_{1}^{\prime}p_{2}^{\prime}>s_{p_{1}^{\prime}p_{2}^{\prime}h_{1}h_{2}}\] \[+ \frac{1}{8}\sum_{h_{1}h_{2}h_{1}^{\prime}h_{2}^{\prime}p_{1}p_{2}}s_{p_{1}p_{2}h_{1}h_{2}}<h_{1}h_{2}|\bar{V}|h_{1}^{\prime}h_{2}^{\prime}>s_{h_{1}^{\prime}h_{2}^{\prime}p_{1}p_{2}}\]
with \(t_{k}=k^{2}/2M_{N}\) for a nucleon state and \(t_{\Delta k}=M_{\Delta}-M_{N}+k^{2}/2M_{\Delta}\) for a \(\Delta\) state. The quantities \(v_{k}=\sum_{h}<kh|\bar{V}|kh>\) correspond to the HF one-particle potential for the state \(k\). We remark that we can group together the second line of the kinetic energy and the fourth line of the potential to reconstitute the quantity
\[\frac{1}{4}\sum_{h_{1}h_{2}p_{1}p_{2}}\left|s_{p_{1}p_{2}h_{1}h_{2}}\right|^{2 }\left(\epsilon_{p_{1}}+\epsilon_{p_{2}}-\epsilon_{h_{1}}-\epsilon_{h_{2}} \right),\]
where \(\epsilon_{k}=t_{k}+v_{k}\) is the single particle energy. In the following we will assume that the one-particle potential is momentum independent, which is true at the Hartree level, so that the \(2p-2h\) energy, \(\epsilon_{p_{1}}+\epsilon_{p_{2}}-\epsilon_{h_{1}}-\epsilon_{h_{2}}\), can be replaced by \(t_{p_{1}}+t_{p_{2}}-t_{h_{1}}-t_{h_{2}}\). The coefficients \(s_{p_{1}p_{2}h_{1}h_{2}}\) can be obtained variationally, i.e., by minimizing the ground state energy, yielding:
\[\left(\epsilon_{h_{1}}+\epsilon_{h_{2}}-\epsilon_{p_{1}}-\epsilon _{p_{2}}\right)s_{p_{1}p_{2}h_{1}h_{2}}=<p_{1}p_{2}|\bar{V}|h_{1}h_{2}>\] \[+ \frac{1}{2}\sum_{p_{1}^{\prime},p_{2}^{\prime}}<p_{1}p_{2}\,| \bar{V}|\,p_{1}^{\prime}p_{2}^{\prime}>s_{p_{1}^{\prime}p_{2}^{\prime}h_{1}h_{2}}\] \[+ \frac{1}{2}\sum_{h_{1}^{\prime},h_{2}^{\prime}}s_{p_{1}p_{2}h_{1}^ {\prime}h_{2}^{\prime}}<h_{1}^{\prime}\,h_{2}^{\prime}\,|\,\bar{V}\,|\,h_{1}\,h_{2 }>\,. \tag{20}\]
This equation has the form of a Feynman-Galitskii equation. We however neglect the last term, involving hole states, which is justified by phase space limitations (see also the discussion around Eqs. 3.8 and 3.9 of the review paper of P. Schuck et al [62]). In that case the solution of the previous equation can be found as follows:
\[s_{p_{1}p_{2}h_{1}h_{2}}=\frac{<p_{1}\,p_{2}\,|\,\bar{G}(E=\epsilon_{h_{1}}+ \epsilon_{h_{2}})\,|\,h_{1}\,h_{2}>}{\epsilon_{h_{1}}+\epsilon_{h_{2}}- \epsilon_{p_{1}}-\epsilon_{p_{2}}}, \tag{21}\]
where \(G(E)\), which can be identified with the Brueckner G matrix, is a two-body operator satisfying a Bethe-Goldstone equation,
\[G(E)=V\,+\,V\,\frac{Q}{E-h_{0}}\,G(E)\Longleftrightarrow\] \[<k_{1}\,k_{2}\,|\,G(E)\,|\,k_{1}^{\prime}\,k_{2}^{\prime}>=<k_{1} \,k_{2}\,|\,V\,|\,k_{1}^{\prime}\,k_{2}^{\prime}>\] \[+\sum_{p_{1},p_{2}>k_{F}}\frac{<k_{1}k_{2}|V|p_{1}p_{2}><p_{1}p_{ 2}|G(E)|k_{1}^{\prime}k_{2}^{\prime}>}{E-\epsilon_{p_{1}}-\epsilon_{p_{2}}}, \tag{22}\]
and \(Q\) is the Pauli-blocking operator projecting on particle states above the Fermi sea as well as \(\Delta\) states. It follows that the ground state energy per nucleon can be written as:
\[E_{0}=E^{BHF}+E^{2p-2h}+E^{depop}. \tag{23}\]
\(E^{BHF}\) is the mean-field energy associated with the G matrix considered as an effective interaction,
\[E^{BHF}=\frac{1}{A}\sum_{h<k_{F}}<h|T|\,h>\] \[+\,\frac{1}{2A}\sum_{h_{1}h_{2}<k_{F}}<h_{1}h_{2}\,|\bar{G}(E= \epsilon_{h_{1}}+\epsilon_{h_{2}})|h_{1}h_{2}>, \tag{24}\]
with
\[<h_{1}h_{2}\,|\bar{G}(E=\epsilon_{h_{1}}+\epsilon_{h_{2}})|h_{1}h _{2}>=<h_{1}h_{2}\,|\bar{V}|h_{1}h_{2}>\] \[+\frac{1}{2}\sum_{p_{1},p_{2}>k_{F}}\frac{<h_{1}h_{2}|\bar{V}|p_ {1}p_{2}><p_{1}p_{2}|\bar{G}(E)|h_{1}h_{2}>}{\epsilon_{h_{1}}+\epsilon_{h_{2}} -\epsilon_{p_{1}}-\epsilon_{p_{2}}}, \tag{25}\]
where the summation over particle states may also include \(\Delta\) states without restriction of momentum.
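The structure of Eqs. (22), (24) and (25) is easy to reproduce in a toy setting. The sketch below solves an s-wave Bethe-Goldstone equation on a momentum grid for a purely attractive Gaussian potential, with a crude angle-averaged Pauli blocker \(\theta(q-k_{F})\); every input (strength, range, starting energy) is invented for illustration and none corresponds to the actual calculation:

```python
import numpy as np

# Toy s-wave Bethe-Goldstone equation (hbar = c = 1, momenta in fm^-1):
#   G(k',k;E) = V(k',k) + (2/pi) * int dq q^2 V(k',q) Q(q)/(E - q^2/M) G(q,k;E)
M_N = 4.76            # nucleon mass ~939 MeV in fm^-1
k_F = 1.33            # Fermi momentum of nuclear matter
lam, beta = -2.0, 1.4 # made-up strength (fm) and range (fm^-1) of V

q, w = np.polynomial.legendre.leggauss(60)     # Gauss-Legendre nodes on [-1, 1]
q = 5.0 * (q + 1.0) / 2.0                      # map to [0, 5] fm^-1
w = 5.0 * w / 2.0

V = lam * np.exp(-(q[:, None]**2 + q[None, :]**2) / beta**2)
E = -0.2                                        # two-hole starting energy (fm^-1)
Q = (q > k_F).astype(float)                     # crude Pauli-blocking operator
K = np.diag((2.0 / np.pi) * w * q**2 * Q / (E - q**2 / M_N))

G = np.linalg.solve(np.eye(len(q)) - V @ K, V)  # solve G = V + V K G

i = np.argmin(abs(q - 0.5))                     # a low relative momentum
print(f"V(k,k) = {V[i, i]:+.3f} fm,  G(k,k) = {G[i, i]:+.3f} fm")
```

Because this toy potential has no repulsive core, the Pauli-blocked ladder sum slightly enhances the attraction; with a realistic short-range core, the same machinery instead tames the repulsion, which is what the text encodes much more economically through the single Jastrow function \(f_{c}(r)\).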
The second term in the expression of the ground state energy (23) represents the effect of the G matrix interaction to second order in perturbation theory on top of the mean-field energy:
\[E^{2p-2h}=\frac{1}{4A}\sum_{h_{1},h_{2}<k_{F}}\sum_{p_{1},p_{2}>k_{F}}\frac{\left|<p_{1}p_{2}|\bar{G}(E)|h_{1}h_{2}>\right|^{2}}{\epsilon_{h_{1}}+\epsilon_{h_{2}}-\epsilon_{p_{1}}-\epsilon_{p_{2}}}. \tag{26}\]
To make an explicit connection with the ring diagrams energy, let us come back to the correlation energy to the two-loop (two-bubble or two-hole-line) order:
\[E^{ring-2h} =E^{ring-2h}_{L}\,+\,E^{ring-2h}_{T}\] \[E^{ring-2h}_{L} =\frac{3}{2\rho}\int\frac{id\omega d\mathbf{q}}{(2\pi)^{4}}\,\big{[}G^{2}_{LNN}\Pi^{02}_{N}\,+\,G^{2}_{L\Delta\Delta}\Pi^{02}_{\Delta}\] \[+2\,G^{2}_{LN\Delta}\Pi^{0}_{N}\Pi^{0}_{\Delta}\big{]}\] \[E^{ring-2h}_{T} =\frac{3}{\rho}\,\int\frac{id\omega d\mathbf{q}}{(2\pi)^{4}}\,\big{[}G^{2}_{TNN}\Pi^{02}_{N}\,+\,G^{2}_{T\Delta\Delta}\Pi^{02}_{\Delta}\] \[+2\,G^{2}_{TN\Delta}\Pi^{0}_{N}\Pi^{0}_{\Delta}\big{]}. \tag{27}\]
If we remove the energy dependence of the spin-isospin interaction, i.e., in a static approximation, one can show that this two-loop correlation energy takes the simple form:
\[E^{ring-2h} \simeq E^{corr-st}\qquad\text{(static limit)}\] \[E^{corr-st} =\frac{1}{2A}\,\sum_{h_{1},h_{2},p_{1},p_{2}}\frac{\left|<p_{1}\,p _{2}\,\right|G_{\sigma\tau}\left|\,h_{1}\,h_{2}>\right|^{2}}{\epsilon_{h_{1}}+ \epsilon_{h_{2}}-\epsilon_{p_{1}}-\epsilon_{p_{2}}}, \tag{28}\]
where \(G_{\sigma\tau}\) is the static spin-isospin interaction. We see that \(E^{corr-st}\) coincides with the 2p-2h energy, \(E^{2p-2h}\), but omitting the Pauli exchange term which does not appear in the ring summation.
Finally the last term appearing in the expression of the BHF ground state energy (23) is:
\[E^{depop} =\frac{1}{A}\left[\sum_{p}n_{p}\epsilon_{p}-\sum_{h}(1-n_{h})\epsilon_{h}\right]\] \[\equiv\frac{1}{4A}\sum_{h_{1},h_{2}<k_{F}}\sum_{p_{1},p_{2}>k_{F}}\frac{\left|<p_{1}p_{2}|\bar{G}(E)|h_{1}h_{2}>\right|^{2}}{\epsilon_{p_{1}}+\epsilon_{p_{2}}-\epsilon_{h_{1}}-\epsilon_{h_{2}}}\] \[=-E^{2p-2h}\simeq T^{depop}=\frac{1}{A}\langle T\rangle_{depop}. \tag{29}\]
Hence we see that the leading order (two-bubble or two-hole-line) correlation energy, \(E^{ring-2h}\simeq E^{corr-st}\), identified with the 2p-2h energy, \(E^{2p-2h}\), is just compensated by the depopulation energy, \(E^{depop}\), itself very close to the excess of kinetic energy per nucleon due to short range correlations. Said differently one has:
\[E^{ring-2h}\simeq E^{corr-st}\simeq-\frac{1}{A}\langle T\rangle_{depop}.\]
This excess of kinetic energy per nucleon, not larger than \(20\,MeV\), brings a very strong constraint on this correlation energy and especially on the magnitude of the loop energy involving \(\Delta\)'s in the intermediate state. As emphasized in [63], this question of the excess of kinetic energy, associated with the tail of the genuine hole spectral function, is related to the impact of short-range correlations on the binding energy of nuclear matter according to the well-known non relativistic energy sum rule,
\[B =\frac{2}{\rho}\int dE\int\frac{d\mathbf{k}}{(2\pi)^{3}}\left(\frac{k^{2}}{2M}\,+\,E\right)S^{h}(E,\mathbf{k})\] \[\equiv\frac{1}{2}\left(\frac{1}{A}\langle T\rangle\,+\,\epsilon\right), \tag{30}\]
where the second form is known as the Koltun sum rule. \(S^{h}(E,\mathbf{k})\) is the hole spectral function whose explicit form in our approach is given by Eq. (24) of [61]. \(\epsilon\) is the separation energy, a negative quantity which is the opposite of the mean energy needed to remove a nucleon from the nucleus. Let us consider the
case \(\rho=0.8\,\rho_{0}\), i.e., \(k_{F}=245\,MeV\), appropriate for the carbon nucleus. The results are extremely sensitive to the parameters used for the \(N\Delta\rho\) couplings. With the choice discussed below, yielding \((g^{\prime}_{NN},g^{\prime}_{N\Delta},g^{\prime}_{\Delta\Delta})\)\(=(0.64,0.30,0.48)\), we find \(E^{ring-2h}=-20\,MeV\) and \(E^{corr-st}=-16\,MeV\), the difference between the non static and the static calculations coming from the longitudinal channel. This is expected because the static pion exchange is certainly a poor approximation due to the smallness of the pion mass. However the effect of Pauli exchange is also expected to decrease this correlation energy and we can take \(\langle T/A\rangle_{depop}=-E^{corr-st}=16\,MeV\), with a contribution of about \(5\,MeV\) coming from \(\Delta\)'s present in the ground state. It follows that \(\langle T/A\rangle\simeq 20+16=36\,MeV\) and \(-\epsilon=-2B+\langle T/A\rangle\simeq 16+36\simeq 50\,MeV\). This is within the estimate obtained with variational calculations [64], which is needed to explain the EMC effect [65] (see also the discussions given in [61]). It is also interesting to have an estimate of the mean depopulation of the Fermi sea, also calculated with the neglect of Pauli exchange,
\[\Delta n=\frac{1}{2A}\,\sum_{h_{1},h_{2},p_{1},p_{2}}\left|\frac{<p_{1}\,p_{2 }\,|\,G\,|\,h_{1}\,h_{2}>}{\epsilon_{h_{1}}+\epsilon_{h_{2}}-\epsilon_{p_{1}} -\epsilon_{p_{2}}}\right|^{2}\simeq 0.175, \tag{31}\]
which is also in agreement with [64].
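The arithmetic of this estimate is compactly summarized below; the binding \(B=-8\,MeV\) per nucleon at \(0.8\,\rho_{0}\) is the value implied by the text's \(-2B\simeq 16\,MeV\):

```python
# Cross-check of the Koltun sum-rule estimate at rho = 0.8 rho_0:
k_F, M_N = 245.0, 939.0                 # MeV
T_FG = 3 * k_F**2 / (10 * M_N)          # Fermi-gas kinetic energy per nucleon
T_depop = 16.0                          # = -E^{corr-st}, from the text (MeV)
T_tot = T_FG + T_depop                  # ~ 36 MeV
B = -8.0                                # MeV per nucleon (implied by -2B = 16)
eps = 2 * B - T_tot                     # Koltun: B = (<T>/A + eps)/2
print(f"T_FG = {T_FG:.1f} MeV, <T>/A = {T_tot:.1f} MeV, -eps = {-eps:.0f} MeV")
```

One recovers \(\langle T/A\rangle\simeq 35\,MeV\) and a separation energy \(-\epsilon\simeq 51\,MeV\), matching the rounded numbers quoted above.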
### the "\(\sigma^{\prime}\) meson" and the \(\Delta\) box diagrams
Let us come back to the two-nucleon problem. As already explained the G matrix has been generated from a Bonn-like potential, \(V\), containing genuine \(s\), \(\omega\), \(\rho\) and \(\pi\) exchanges appearing in the model lagrangian (Eq. (1)) and complemented by a \(\sigma^{\prime}\) exchange simulating the iterative isobar box diagrams as in [66]. We write this effective OBE quasi potential as \(V^{(QP)}=V+V_{\sigma^{\prime}}\). Hence the Bethe-Goldstone equation projected onto the nucleon sector reads with schematic notations
\[G_{NN} = V^{(QP)}_{NN}\,+\,V^{(QP)}_{NN}\,\left(\frac{Q}{e}\right)_{NN}\, G_{NN} \tag{32}\] \[\equiv V^{(QP)}_{NN}\,\Omega,\]
with \(\Omega=1+\left(\frac{Q}{e}\right)_{NN}\,G_{NN}\). In our recent paper [46] we have shown that the Møller operator \(\Omega\) can be represented by a unique Jastrow function, \(f_{c}(r)=1-j_{0}(q_{c}r)\), which plays the role of a unique state independent correlation function as in the LOCV variational approach (see a detailed discussion of this point in Ref. [67]). Since \(f_{c}(r=0)=0\), an immediate consequence is that the effective interaction automatically vanishes at high momentum, providing a natural regularization of the loop integrals entering for instance the two-pion (or two-rho) exchange diagrams. As discussed in [68], this property can be seen as a consequence of the Beg-Agassi-Gal theorem which states that the virtual mesons propagate freely inside the correlation hole. Of course such a prescription yields a G matrix which cannot be an exact solution of the Bethe-Goldstone equation but only an approximate one. In [46] we proposed a method to find the value of \(q_{c}\) preserving the property \(f_{c}(r=0)=0\) after one G matrix iteration. We now require that the G matrix obtained with the fictitious \(\sigma^{\prime}\) is equivalent, at least at low density, with the one obtained after the explicit incorporation of \(\Delta\)'s in the intermediate state. For this purpose we write the Bethe-Goldstone equation (32) in two equivalent forms, one involving the quasi potential \(V^{(QP)}=V+V_{\sigma^{\prime}}\), without \(\Delta\) states, and the other one involving the genuine \(OBE\) potential \(V\) with explicit incorporation of the \(\Delta\) states (fig. 6b):
\[G_{NN} = V_{NN}+V_{NN}\,\left(\frac{Q}{e}\right)_{NN}\,G_{NN} \tag{33}\] \[+\,V_{\sigma^{\prime}}+V_{\sigma^{\prime}}\left(\frac{Q}{e} \right)_{NN}\,G_{NN}\] \[= V_{NN}\Omega\,+\,V_{\sigma^{\prime}}\Omega\equiv\,G^{(OBE)}_{NN }\,+\,G_{\sigma^{\prime}},\]
\[G_{NN} = V_{NN}+V_{NN}\,\left(\frac{Q}{e}\right)_{NN}\,G_{NN} \tag{34}\] \[+V_{N\Delta}\,\left(\frac{Q}{e}\right)_{N\Delta}\,G_{N\Delta}\] \[+V_{\Delta\Delta}\,\left(\frac{Q}{e}\right)_{\Delta\Delta}\,G_{ \Delta\Delta}\] \[= V_{NN}+V_{NN}\,\left(\frac{Q}{e}\right)_{NN}\,G_{NN}\,+\,G^{( \Delta\,box)}_{NN}\] \[\equiv G^{(OBE)}_{NN}\,+\,G^{(\Delta\,box)}_{NN}.\]
Hence we arrive at the conclusion that the effective interaction \(G_{\sigma^{\prime}}=V_{\sigma^{\prime}}\Omega\) (i.e., \(G_{\sigma^{\prime}}(r)=V_{\sigma^{\prime}}(r)(1-j_{0}(q_{c}r))\)) should be equivalent to the one generated by the iterative \(\Delta\) diagrams corresponding to the second and third lines of Eq. (34) and depicted in fig. 6b. This equivalence should be exact in the low density limit in order to generate an identical scattering T matrix. We now adjust the \(N\Delta\) and the \(\Delta\Delta\) interactions in such a way that the binding energy calculated with the auxiliary "\(\sigma^{\prime}\) meson" is the same as the one issued from the iterative \(\Delta\) box diagrams. In other words we require in the
low density limit:
\[E_{H}^{\sigma^{\prime}} = \frac{1}{2A}\,\sum_{h_{1},h_{2}}<h_{1}\,h_{2}\,|\,G_{\sigma^{\prime} }\,|\,h_{1}\,h_{2}> \tag{35}\] \[= \frac{1}{2A}\,\sum_{h_{1},h_{2}}<h_{1}\,h_{2}\,|\,G_{NN}^{(\Delta \,box)}\,|\,h_{1}\,h_{2}>\] \[= \frac{1}{2A}\,\sum_{h_{1},h_{2};p_{1},p_{2}}<\,h_{1}\,h_{2}|\,V_{ \sigma\tau}\,|(p_{1}\,p_{2})_{\Delta}>\] \[\frac{<(p_{1}\,p_{2})_{\Delta}|\,G_{\sigma\tau}\,|\,h_{1}\,h_{2}>} {\epsilon_{h_{1}}+\epsilon_{h_{2}}-\epsilon_{p_{1}}-\epsilon_{p_{2}}}=E^{ \Delta\,box}.\]
We now adapt these non relativistic results to our relativistic lagrangian approach. For this purpose we introduce an auxiliary lagrangian, \(\mathcal{L}_{\sigma^{\prime}}=-g_{\sigma^{\prime}}\bar{N}\sigma^{\prime}N+\partial^{\mu}\sigma^{\prime}\partial_{\mu}\sigma^{\prime}/2-m_{\sigma^{\prime}}^{2}\,\sigma^{\prime 2}/2\), with \(g_{\sigma^{\prime}}=4.8\) and \(m_{\sigma^{\prime}}=550\,MeV\), completed by a monopole \(\sigma^{\prime}NN\) vertex form factor with cutoff \(\Lambda_{\sigma^{\prime}}=1\,GeV\). The corresponding Hartree energy in the presence of short range correlations reads:
\[E_{H}^{\sigma^{\prime}}= - \frac{g_{\sigma^{\prime}}^{2}}{2\,m_{\sigma^{\prime}}^{2}}\frac{ \rho_{S}^{2}}{\rho}\left(1-\Lambda_{\sigma^{\prime}}^{2}(q_{c}^{2})\frac{m_{ \sigma^{\prime}}^{2}}{m_{\sigma^{\prime}}^{2}+q_{c}^{2}}\right) \tag{36}\] \[+ \frac{g_{\sigma^{\prime}}^{2}}{m_{\sigma^{\prime}}^{2}}\frac{ \rho_{S}^{2}-\rho^{2}}{\rho}.\]
The iterative \(\Delta\) box diagram is calculated using non static \(\pi\) and \(\rho\) exchanges:
\[E_{H}^{\Delta\,box} = E_{H}^{L\Delta\,box}+E_{H}^{T\Delta\,box}\] \[E_{H}^{L\Delta\,box} = \frac{3}{2\rho}\int\frac{id\omega d\mathbf{q}}{(2\pi)^{4}}\left[2\,V_{\pi N\Delta}\,G_{LN\Delta}\,\Pi_{N}^{0}\Pi_{\Delta}^{0}\right.\] \[\left.+V_{\pi\Delta\Delta}\,G_{L\Delta\Delta}\Pi_{\Delta}^{02}\right]\] \[E_{H}^{T\Delta\,box} = \frac{3}{\rho}\,\int\frac{id\omega d\mathbf{q}}{(2\pi)^{4}}\left[2\,V_{\rho N\Delta}\,G_{TN\Delta}\,\Pi_{N}^{0}\Pi_{\Delta}^{0}\right. \tag{37}\] \[\left.+V_{\rho\Delta\Delta}\,G_{T\Delta\Delta}\Pi_{\Delta}^{02}\right].\]
The effective spin-isospin interactions in the \(N\Delta\) and \(\Delta\Delta\) channels are obtained from the bare interactions with the same Møller operator, i.e., \(G_{N\Delta}=V_{N\Delta}\Omega\) and \(G_{\Delta\Delta}=V_{\Delta\Delta}\Omega\). In order to satisfy the constraint \(E_{H}^{\sigma^{\prime}}=E_{H}^{\Delta\,box}\) together with the phenomenological requirement of having different \(g^{\prime}\) parameters, we take different \(\rho\) meson form factors in the \(N\Delta\) and \(\Delta\Delta\) channels. If we choose \(\Lambda_{\rho N\Delta}=600\,MeV\) and \(\Lambda_{\rho\Delta\Delta}=1041\,MeV\), we obtain \((g^{\prime}_{NN},g^{\prime}_{N\Delta},g^{\prime}_{\Delta\Delta})=(0.64,0.30,0.48)\), which is very close to what is suggested in [58]. In fig. 3, we show the comparison between the \(\Delta\) box energy (depicted in fig. 6b) and the Hartree \(\sigma^{\prime}\) energy (depicted in fig. 6a). We see that with increasing density these \(\Delta\) diagrams are less and less well represented by this fictitious \(\sigma^{\prime}\), which lacks the Pauli blocking effect present in the p-h bubble of fig. 6b1. This is the origin of the genuine repulsive three-body force depicted in fig. 7c, corresponding in a different language to the earlier celebrated forces proposed in Refs. [69; 70; 71; 72] and also contained at NNLO in chiral EFT [73]. We also see in fig. 3 that the longitudinal spin-isospin contribution (iteration of the pion exchange) is about twice as large as the transverse one (iteration of the rho exchange), hence justifying the usual terminology of "two-pion exchange" simulated by the scalar-isoscalar \(\sigma^{\prime}\) exchange.
### The nuclear matter equation of state
In our previous paper [46], the G matrix was actually generated starting from a bare potential keeping, in the spin-isospin component, only its central part,
\[V_{C}(\mathbf{q})=\frac{1}{3}\big{(}V_{\pi}(q)+2V_{\rho}(q)\big{)}\sigma_{1} \cdot\sigma_{2},\]
ignoring the "weak" tensor force:
\[V_{W}(\mathbf{q})=\frac{1}{3}\big{(}V_{\pi}(q)-V_{\rho}(q)\big{)}\left(3\sigma_ {1}\cdot\hat{\mathbf{q}}\,\sigma_{2}\cdot\hat{\mathbf{q}}\,-\,\sigma_{1} \cdot\sigma_{2}\right).\]
Hence if we write the quasi potential, including the \(\sigma^{\prime}\), as \(V^{(QP)}=V_{S}+V_{W}\), the Bethe-Goldstone equation which was actually solved in [46] reads
\[G_{S}=V_{S}\,+\,V_{S}\,\left(\frac{Q}{e}\right)_{NN}\,G_{S}, \tag{38}\]
with solution \(G_{S}=V_{S}\,\Omega_{S}\). \(\Omega_{S}\) corresponds to the aforementioned Møller operator represented by a unique Jastrow function, \(f_{c}(r)=1-j_{0}(q_{c}r)\) (i.e., \(G_{S}(r)=V_{S}(r)\,f_{c}(r)\)), whereas the "exact" Bethe-Goldstone equation should be:
\[G=V_{S}+V_{W}+(V_{S}+V_{W})\,\left(\frac{Q}{e}\right)\,G. \tag{39}\]
Figure 3: Contribution to the energy per nucleon versus \(\rho/\rho_{0}\), with \(\rho_{0}=0.16\,fm^{-3}\), of the \(\Delta\) box diagrams compared with the exchange of the fictitious sharp \(\sigma^{\prime}\) meson. Full line: total iterative \(\Delta\) box. Dotted line: longitudinal \(\Delta\) box. Dotted dashed line: transverse \(\Delta\) box. Double dotted-dashed line: Hartree \(\sigma^{\prime}\) exchange.
where we now omit the suffix \(NN\) in the Pauli operator. This equation can be rewritten in the successive forms as:
\[G =V_{S}+V_{S}\left(\frac{Q}{e}\right)\,G_{S}\,+\,V_{W}+V_{W}\left( \frac{Q}{e}\right)\,G_{S}\] \[+(V_{S}+V_{W})\left(\frac{Q}{e}\right)(G-G_{S})\] \[=G_{S}+V_{W}\Omega_{S}\,+\,(V_{S}+V_{W})\left(\frac{Q}{e}\right) (G-G_{S}) \tag{40}\] \[\simeq G_{S}+G_{W}+(V_{S}+V_{W})\left(\frac{Q}{e}\right)G_{W}\] (41) \[=V^{(QP)}\Omega_{S}+(V_{S}+V_{W})\left(\frac{Q}{e}\right)G_{W}. \tag{42}\]
To obtain the last form we have approximated the difference \(G-G_{S}\) by \(G_{W}=V_{W}\Omega_{S}\), ignoring terms such as:
\[V^{(QP)}\left(\frac{Q}{e}\right)V^{(QP)}\left(\frac{Q}{e}\right)G_{W}+...\,.\]
\(G_{W}\) is the effective interaction in the tensor channel in presence of short-range correlations obtained from the bare tensor interaction according to
\[\frac{1}{3}\big{(}V_{\pi}(q)-V_{\rho}(q)\big{)}\quad\rightarrow\quad\frac{1} {3}\big{(}V_{\pi}(q)-V_{\rho}(q)+3h^{\prime}(q)\big{)},\]
\(h^{\prime}(q)\), as well as \(g^{\prime}(q)\), being given by Eq. (49) of [46]. The first term of Eq. (42) represents the full G matrix obtained from the scalar-isoscalar \(s=\sigma_{W}\) exchange, the \(\omega\) exchange, the Lorentz-vector piece of \(\rho\) exchange and the full spin-isospin \(\pi+\rho\) exchange including the tensor force, using the Jastrow ansatz. In the second term of Eq. (42), only the \(V_{W}(Q/e)G_{W}\) term survives after the spin-isospin trace. It follows that the G matrix in the NN sector, incorporating the iteration of the tensor potential, reads:
\[G=G_{S}+G_{W}+V_{W}\left(\frac{Q}{e}\right)G_{W}. \tag{43}\]
In the calculation of the binding energy the BHF contribution of \(G_{W}\),
\[E^{BHF-tensor}=\frac{1}{2A}\sum_{h_{1},h_{2}}<h_{1}h_{2}|\bar{G}_{W}|h_{1}h_{2 }>\equiv 0, \tag{44}\]
identically vanishes for either the direct term or the exchange term. Hence the spin-isospin tensor potential contributes to the binding energy only through the second term of Eq. (42). Relaxing the static limit, this tensor loop contribution reads:
\[E^{tensor}=\frac{1}{\rho}\int\frac{id\omega d\mathbf{q}}{(2\pi)^{4}}\,(V_{ \pi}-V_{\rho})(G_{LNN}-G_{TNN})\Pi_{N}^{02}. \tag{45}\]
For completeness we recall that this type of calculation, such as Eqs. (13),(27),(45), fully incorporates the recoil correction and the energy dependence of the spin-isospin interaction, which means that the \(q^{2}\) appearing in the pion and rho static propagators is systematically replaced by \(q^{2}-\omega^{2}\), and the integral \(\int id\omega d\mathbf{q}\) becomes \(-\int dzd\mathbf{q}\) after a Wick rotation (see [38; 40]), hence transforming \(q^{2}-\omega^{2}\) into \(q^{2}+z^{2}\).
We are now in a position to calculate the nuclear matter equation of state. The total energy per nucleon can be decomposed according to:
\[E_{0}=E_{0}^{RHF}+E^{\pi\rho-Fock}+E^{2\pi-loop}. \tag{46}\]
** \(E_{0}^{RHF}\) corresponds to the mean-field RBHF result with \(s\), \(\omega\) and (the Lorentz vector piece of) \(\rho\) exchanges. In this calculation, as in [46], the nucleon scalar self-energy also includes the effect of the \(\sigma^{\prime}\), improving the agreement with the Hugenholtz-Van Hove theorem and yielding an effective Dirac mass, \(M_{N}^{*}(\rho_{0})=764\,MeV\). It contains the three-body forces originating from the nucleon response to the scalar field (fig. 7a) and from the chiral tadpole diagram (fig. 7b), whose net effect is to generate a repulsive three-body contribution to the energy per nucleon given by Eq. (8). It also includes the Fock terms but with an averaged value of the exchanged momentum (see Eqs. (117,118) of [46]). We also have to mention that this Fock term is included perturbatively in the sense that we remain in the Hartree basis, since we neglect the Fock contribution to the scalar self-energy of the nucleon. Although this problem is of minor importance for the binding energy around the nuclear matter density, the use of a fully self-consistent Hartree-Fock basis in place of the Hartree basis for the nucleon Dirac wave function may play a very important role at high density in limiting the maximum mass of a hyperonic neutron star, as pointed out in Ref. [42]. The passage to this Hartree-Fock basis can be implemented without formal difficulty but at the expense of increasing numerical complexity [39; 44]. We also introduce the effect of short-range correlations by subtracting from the Hartree and Fock contributions the same quantities but adding \(q_{c}^{2}\) to the exchanged momentum squared in the meson propagators and form factors.
** \(E^{\pi\rho-Fock}\) is the Fock term associated with the \(\pi+\rho\) spin-isospin interaction in the presence of short-range correlations. Notice that only the central part of the spin-isospin interaction gives a non vanishing contribution; in particular it involves only \(g^{\prime}(q)\) but not \(h^{\prime}(q)\).
** The two-pion (or two-rho) loop contribution is the sum of three pieces:
\[E^{2\pi-loop}=E^{corr-3h}\,+\,E^{\Delta\,box}\,+\,E^{tensor}. \tag{47}\]
* \(E^{corr-3h}=E^{ring}-E^{ring-2h}\) is the genuine correlation energy starting at the three-hole-line order which is obtained by subtracting the two-bubble (two-hole-line) 2p-2h energy, \(E^{ring-2h}\) (27), from the full ring summation, (13). As demonstrated above, the reason is the depopulation of the Fermi sea due to short-range correlations, inducing an excess of kinetic energy which just compensates this 2p-2h energy, \(E^{ring-2h}\).
* \(E^{\Delta\,box}\) is the contribution of the iterative \(\Delta\) box diagrams depicted in fig. 6b. Since one has by construction at the Hartree level \(E^{\Delta\,box}_{H}\simeq E^{\sigma^{\prime}}_{H}\), we incorporate the Pauli exchange term according to
\[E^{\Delta\,box}=E^{\Delta\,box}_{H}\left(1-\frac{E^{\sigma^{ \prime}}_{F}}{E^{\sigma^{\prime}}_{H}}\right), \tag{48}\]
with \(E^{\Delta\,box}_{H}\) given by (37) and \(E^{\sigma^{\prime}}_{H}\) given by (36). The Fock \(\sigma^{\prime}\) energy is computed as
\[E^{\sigma^{\prime}}_{F}=\frac{g^{2}_{\sigma^{\prime}}}{16}\,\frac{\rho^{2}\,+\,\rho_{S}^{2}}{\rho}\left(\frac{\Lambda^{2}_{\sigma^{\prime}}(\bar{q}^{2})}{m^{2}_{\sigma^{\prime}}+\bar{q}^{2}}-\frac{\Lambda^{2}_{\sigma^{\prime}}(\bar{q}^{2}+q^{2}_{c})}{m^{2}_{\sigma^{\prime}}+\bar{q}^{2}+q^{2}_{c}}\right), \tag{49}\]
with \(\bar{q}^{2}=6k_{F}^{2}/5\).
* \(E^{tensor}\), given by Eq. (45), is the contribution of the iterated \(\pi-\rho\) tensor force in the presence of short-range correlations.
Even if we are aware that many of the approximations and prescriptions of this multi-step approach deserve a more detailed study, we do not refrain from showing the results of the calculation without a further fine-tuning of the parameters. The saturation curve is displayed in Fig. 4. The binding energy at the saturation point, \(E_{sat}/A=-17.73\,MeV\), and the saturation density, \(\rho=1.19\,\rho_{0}=0.19\,fm^{-3}\), are too large, as is the incompressibility modulus, \(K_{sat}=338\,MeV\). Even though the saturation point is not perfect, it is rather satisfactory to arrive at decent results for the properties of nuclear matter in an approach based on a microscopic model that mainly depends on QCD inputs, namely, the string tension and the gluon correlation length (or the gluon condensate), and on parameters (\(g_{V}/m_{V},\,\kappa_{\rho}\)) known from well-established hadronic phenomenology.
It is not the purpose of this article to adjust the parameters derived from the QCD-connected model, but we simply notice that if we increase the \(C\) parameter from \(C=0.32\) to \(C=0.42\), we can improve the results for the saturation point: \(E_{sat}/A=-15.45\,MeV\), \(\rho=1.07\,\rho_{0}=0.17\,\mbox{fm}^{-3}\), \(K_{sat}=290\,MeV\) (Fig. 5), whereas the lattice parameter \(a_{4}=-0.28\,GeV^{-3}\) is compatible with lattice results. A robust conclusion is that the pion and rho loops are necessary to obtain sufficient binding, a conclusion already reached in Ref. [38], even if this contribution is much smaller than what is obtained from iterated pion exchange (planar diagrams) in in-medium chiral perturbation theory [74] where the short-range physics is accounted for by a unique cutoff regularizing the pion loops.
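To read the two saturation points of figs. 4 and 5 at a glance, one can use the standard parabolic expansion of the energy per nucleon around saturation, \(E(\rho)\simeq E_{sat}+(K_{sat}/18)\,x^{2}\) with \(x=\rho/\rho_{sat}-1\); this expansion is only meaningful close to the saturation point and is quoted here as a convenience, not as part of the actual calculation:

```python
def E_of_rho(rho_over_rho0, E_sat, rho_sat_over_rho0, K_sat):
    """Standard expansion around saturation: E = E_sat + K_sat/18 * x^2 (MeV)."""
    x = rho_over_rho0 / rho_sat_over_rho0 - 1.0
    return E_sat + K_sat / 18.0 * x**2

for label, pars in (("C=0.32", (-17.73, 1.19, 338.0)),
                    ("C=0.42", (-15.45, 1.07, 290.0))):
    vals = ", ".join(f"{E_of_rho(r, *pars):6.2f}" for r in (0.8, 1.0, 1.2))
    print(f"{label}: E(0.8, 1.0, 1.2 rho_0) = {vals} MeV")
```

The flatter and less bound curve of the \(C=0.42\) set is immediately visible.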
### The two "sigma mesons" and their associated three-body forces
I now come back to the questions raised in our earlier works of the nineties about the status of the sigma meson in nuclear physics. The conclusion is that the scalar-isoscalar attraction has two quite different origins associated in some sense with two "sigma mesons". The first one is associated with the radial fluctuation of the chiral condensate in the chirally broken vacuum (fig. 6a). In the field theoretical lagrangian approach, this mode corresponds to the scalar-isoscalar field, \(s=\sigma_{W}\), and can be identified with the "nuclear physics sigma meson" of the relativistic mean field theories, but with a
Figure 4: Binding energy of nuclear matter versus \(\rho/\rho_{0}\). Full line: total energy. Dotted line: mean field RHF result. Dot-dashed line: spin-isospin Fock term. Double dotted-dashed line: \(2\pi\)-loop.
Figure 5: The same as fig. 4 but with \(C=0.42\) in place of \(C=0.32\). In addition we also show the result of the calculation when the \(\Delta\) box diagrams are replaced by the \(\sigma^{\prime}\) exchange (dashed line).
well defined chiral status. The second one, named \(\sigma^{\prime}\) as sometimes done in Bonn-like OBE approaches, is a fictitious object simulating two-pion (or two-rho) exchange not reducible to the iteration of the single pion (or single rho) exchange with only nucleons in the intermediate states. It is represented by the \(\Delta\) box diagrams of fig. 6b. It turns out that these two objects contribute with similar strength to the \(NN\) attraction in the nuclear medium.
A remarkable point is that each of these two "sigma mesons" can be associated with a specific three-body force, which is undoubtedly the main origin of the saturation mechanism. In effect, the combination of the response (i.e., the self-consistent modification of the confined quark wave function) of the nucleon to the nuclear scalar field (fig. 7a) and of the cubic \(s^{3}\) tadpole term in the chiral potential (fig. 7b) generates a contribution to the binding energy per nucleon, \(E^{(3b-s)}\), given by Eq. (8). We recall that this effect is governed by two QCD-connected parameters, \(C\) and \(C_{\chi}\). As already mentioned above, the diagram of fig. 7c accounts for the Pauli-blocking in the \(p-h\) polarization bubble of fig. 6, which would be lacking if the \(\Delta\) box diagrams were replaced by a fictitious \(\sigma^{\prime}\) meson. It can also be seen as a correction of the spin-isospin Fock term due to the dressing of the screened \(\pi+\rho\) exchange by the \(\Delta-h\) virtual excitations (see fig. 7c, lower panel). The corresponding contribution to the binding energy can be obtained as the difference:
\[E^{(3b-\sigma^{\prime})}=E^{\Delta\,box}_{H}-E^{\sigma^{\prime}}_{H}. \tag{50}\]
We see in fig. 8 that \(E^{(3b-s)}\) and \(E^{(3b-\sigma^{\prime})}\) are very similar in magnitude and both have a density dependence close to \(\rho^{2}\), as it should be for a pure three-body force.
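The \(\rho^{2}\) behaviour of the scalar piece follows directly from Eq. (8); a quick evaluation with the model parameters, assuming \(\rho_{s}\simeq\rho\) (a few-percent overestimate at \(\rho_{0}\)), illustrates both the magnitude and the scaling:

```python
# Scalar three-body term of Eq. (8) with the model parameters (rho_s ~ rho assumed):
hbar_c = 0.1973                        # GeV fm
g_S, M_sigma, F_pi = 6.52, 0.7164, 0.0924
C3_tilde = 0.75                        # (M_N/g_S F_pi) C + C_chi/2, from the text

def E3b_s(rho_fm3):
    rho = rho_fm3 * hbar_c**3          # convert fm^-3 -> GeV^3
    pref = g_S**3 / (2 * M_sigma**4 * F_pi)
    return pref * (2 * C3_tilde - 1) * rho**2 * 1e3   # MeV

for r in (0.5, 1.0, 1.5, 2.0):
    print(f"rho = {r:.1f} rho_0 : E^(3b-s) = {E3b_s(r * 0.16):5.1f} MeV")
```

One finds a few MeV of repulsion at \(\rho_{0}\), quadrupling at \(2\rho_{0}\), i.e., exactly the \(\rho^{2}\) rise seen in fig. 8.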
## 5 Summary and conclusion
The set of works carried out with Peter Schuck in the nineties and dealing with the modification of the pion-pion interaction in the nuclear medium provided the main stimulus of an approach now known as the chiral confining model. It has progressively emerged as
Figure 8: Contribution to the binding energy per nucleon coming from the three-body diagrams of the lower panel of fig. 7, versus \(\rho/\rho_{0}\). Full line: three body force (nucleon response + tadpole) associated with the chiral scalar field \(s\) (fig. 7a,b); dotted line: the same but rescaled by \(\rho_{0}^{2}/\rho^{2}\); dot-dashed line: correction to the spin-isospin Fock-term by \(\Delta-h\) bubble dressing (fig. 7c); double dotted-dashed line: the same but rescaled by \(\rho_{0}^{2}/\rho^{2}\).
Figure 6: Upper panel: diagrams contributing to the scalar-isoscalar interaction in nuclear matter. (a) chiral scalar field, \(\sigma_{W}=s\), exchange; (b) \(\Delta\) box \(2\pi\) exchange, often simulated by a fictitious \(\sigma^{\prime}\). Lower panel: the equivalent many-body diagrams.
Figure 7: Upper panel: Three-body forces (3-hole-line) associated with scalar-isoscalar exchange in nuclear matter. (a) response of the nucleon to the scalar field \(s\), generating a repulsive three-body force; (b) tadpole diagram from the \(s^{3}\) term in the chiral effective potential, generating an attractive three-body force; (c) in-medium modification of the \(\pi\rho\) Fock term from Pauli-blocking in diagram (b1) of fig. 6; this diagram, generating a repulsive three-body force, can also be seen as an in-medium modification of the \(\sigma^{\prime}\) meson due to Pauli-blocking. Lower panel: the equivalent many-body diagrams.
based on three important physical pillars. First, the chiral invariant scalar field \(s\), associated with the radial fluctuation of the chiral condensate, is identified with the "nuclear physics sigma meson" of relativistic theories [25; 26]. Second, the effect of the quark substructure of the nucleon is reflected by its polarizability in the presence of the nuclear scalar field, generating a repulsive three-nucleon force providing an efficient saturation mechanism [37; 38; 39; 41]. Third, the associated response parameters, namely, the nucleon scalar coupling constant, \(g_{S}\), and scalar susceptibility, \(\kappa_{NS}\), can be related to two chiral properties of the nucleon given by lattice QCD simulations, imposing severe constraints on their values [38; 39; 41]. It was originally built using a linear sigma model (L\(\sigma\)M) but subsequently enriched by replacing the L\(\sigma\)M chiral potential by its NJL equivalent [43; 45], making lattice QCD constraints more compatible with the chiral properties obtained in confining models of the nucleon. A further step has been recently accomplished [46] using an effective Hamiltonian inspired by QCD that allows the simultaneous introduction, within some ansatz prescriptions, of a confining model for the nucleon allowing the calculation of the response parameters while generating an equivalent NJL model. This QCD-connected chiral confining model, with inputs linked to genuine QCD quantities (string tension \(\sigma\), gluon condensate \(\mathcal{G}_{2}\)) and well-established hadronic phenomenology (\(\kappa_{\rho}\sim 6\)), is able to generate, via an adapted G matrix approach, a pair correlation function that ensures the natural UV regularization of the pionic correlation energy. It also gives, without further fine-tuning, reasonable results for the chiral properties of the nucleon (\(a_{2}\) and \(a_{4}\) parameters extracted from lattice data) and saturation properties of nuclear matter, once the effect of short-range correlations (G matrix) is properly implemented. Using this QCD-connected model as an input, we have proposed in this paper an improved version of the contribution of \(2\pi\) loops to the binding energy, with a clarification of the real origin of the scalar-isoscalar attraction between nucleons in the nuclear medium. It is illustrated by the "two-sigma-mesons" picture, one being related to the chiral invariant \(s=\sigma_{W}\), and the other one to the \(\Delta\) box diagrams. Moreover, the associated three-body forces presumably constitute the dominant saturation mechanism. Hence this approach provides a link between the chiral properties of the nucleon and the saturation mechanism and/or a link between the fundamental theory of the strong interaction and nuclear matter properties, although the results presented in this paper should be enriched in various aspects of this multi-step approach for application to high density nuclear matter and the neutron star equation of state. Given the remaining theoretical uncertainties concerning both the many-body treatment and the nucleon modeling, the strategy presently adopted by the Lyon group is to use a Bayesian method with QCD-connected parameters and the lattice \((a_{2},a_{4})\) parameters, implemented with their associated uncertainties, as prior input variables, as described in two recent papers [41; 44].
## Acknowledgments
I acknowledge my long-term collaborators, M. Ericson, M. Martini, D. Davesne, and H. Hansen, as well as more recent ones, J. Margueron and M. Chamsedinne, who were involved at various stages of the theoretical developments described in this paper.
|
2303.01245 | An Incremental Gray-box Physical Adversarial Attack on Neural Network
Training | Neural networks have demonstrated remarkable success in learning and solving
complex tasks in a variety of fields. Nevertheless, the rise of those networks
in modern computing has been accompanied by concerns regarding their
vulnerability to adversarial attacks. In this work, we propose a novel
gradient-free, gray box, incremental attack that targets the training process
of neural networks. The proposed attack, which implicitly poisons the
intermediate data structures that retain the training instances between
training epochs acquires its high-risk property from attacking data structures
that are typically unobserved by professionals. Hence, the attack goes
unnoticed despite the damage it can cause. Moreover, the attack can be executed
without the attackers' knowledge of the neural network structure or training
data making it more dangerous. The attack was tested under a sensitive
application of secure cognitive cities, namely, biometric authentication. The
conducted experiments showed that the proposed attack is effective and
stealthy. Finally, the attack effectiveness property was concluded from the
fact that it was able to flip the sign of the loss gradient in the conducted
experiments to become positive, which indicated noisy and unstable training.
Moreover, the attack was able to decrease the inference probability in the
poisoned networks compared to their unpoisoned counterparts by 15.37%, 14.68%,
and 24.88% for the Densenet, VGG, and Xception, respectively. Finally, the
attack retained its stealthiness despite its high effectiveness. This was
demonstrated by the fact that the attack did not cause a notable increase in
the training time, in addition, the Fscore values only dropped by an average of
1.2%, 1.9%, and 1.5% for the poisoned Densenet, VGG, and Xception,
respectively. | Rabiah Al-qudah, Moayad Aloqaily, Bassem Ouni, Mohsen Guizani, Thierry Lestable | 2023-02-20T09:48:11Z | http://arxiv.org/abs/2303.01245v1 | # An Incremental Gray-box Physical Adversarial Attack on Neural Network Training
###### Abstract
Neural networks have demonstrated remarkable success in learning and solving complex tasks in a variety of fields. Nevertheless, the rise of those networks in modern computing has been accompanied by concerns regarding their vulnerability to adversarial attacks. In this work, we propose a novel gradient-free, gray box, incremental attack that targets the training process of neural networks. The proposed attack, which implicitly poisons the intermediate data structures that retain the training instances between training epochs, acquires its high-risk property from attacking data structures that are typically unobserved by professionals. Hence, the attack goes unnoticed despite the damage it can cause. Moreover, the attack can be executed without the attackers' knowledge of the neural network structure or training data, making it more dangerous. The attack was tested under a sensitive application of secure cognitive cities, namely, biometric authentication. The conducted experiments showed that the proposed attack is effective and stealthy. The attack's effectiveness was concluded from the fact that it was able to flip the sign of the loss gradient in the conducted experiments to positive, which indicates noisy and unstable training. Moreover, the attack was able to decrease the inference probability in the poisoned networks compared to their unpoisoned counterparts by 15.37%, 14.68%, and 24.88% for the Densenet, VGG, and Xception, respectively. Finally, the attack retained its stealthiness despite its high effectiveness. This was demonstrated by the fact that the attack did not cause a notable increase in the training time; in addition, the Fscore values only dropped by an average of 1.2%, 1.9%, and 1.5% for the poisoned Densenet, VGG, and Xception, respectively.
Adversarial Attacks, Data Poisoning, Neural Networks, Iris Recognition.
## I Introduction
Cognitive cities [1] are proactive, hyper-connected, and citizen-driven cities that are designed to minimize resource consumption, in order to achieve sustainability. In addition, the vast advancement in Artificial Intelligence (AI) and Internet of Things (IoT) technologies has enhanced the evolution of research that integrates both technologies to deliver and automate services for cognitive cities' residents. In fact, the great development that emerged from the integration of those technologies has brought unforeseen exposures to cybersecurity, in addition to novel attacks that need to be addressed in order to deliver secure automation to cognitive cities.
Securing access to different services and facilities, such as connected buildings and data centers, and managing the flow of foot traffic are crucial requirements when adopting the cognitive city paradigm. Those requirements can be implemented using biometric authentication such as fingerprint recognition and iris recognition. Despite the benefits of biometric authentication, privacy concerns and security attacks pose serious challenges to this technology after deployment. Attacks that target biometric recognition systems typically include presenting human characteristics or artifacts directly to a biometric system to interfere with or bias its standard operation. Such attacks can result in granting access to unauthorized individuals into secured premises, allowing tailgating, or triggering denial of service by rejecting the biometrics of authorized individuals. For instance, in 2017, the Chaos Computer Club executed a successful attack on the Samsung Galaxy S8 iris scanner using a simple photograph and a contact lens [2].
On a different note, neural networks have gained wide popularity in the past decade due to their supremacy in terms of accuracy and minimal need for human intervention. Moreover, those networks are data-hungry and are very sensitive to patterns they are exposed to during the training phase. On the other hand, neural networks are vulnerable and can be biased even with the introduction of simple adversarial attacks. For example, altering a single pixel in the data fed to an image classifier can disrupt the learning experience and result in a biased model [3].
Adversarial attacks are considered white box when the attacker has full access to both the neural network and the data; gray-box attacks assume access to only one of the two, and black-box attacks assume access to neither. Those attacks can be categorized into digital attacks and physical attacks. Digital attacks engineer pixel values of input images, whereas physical attacks insert pixel patches that represent real-world objects into the input image instance. An attack is "targeted" when it aims to fault the predictions of a certain class, and "non-targeted" when it aims to fault the model in general.
Furthermore, attacks that target faulting the inference phase have been extensively studied in the literature. On the contrary, only a handful of papers have focused on faulting the training phase and the intermediate values related to its computations. In 2022, Breier _et al._ introduced the first attack that directly targets the training phase by perturbing the ReLU values during training [4]. In fact, the lack of research attention on attacks
that target the training phase puts many applications that rely on neural networks in jeopardy. In this work, we propose and test a novel attack that focuses on faulting the training process of neural networks in the domain of biometric authentication through iris recognition. The contributions of this work can be summarized as follows:
1. We introduce a novel gradient-free, data poisoning attack that incrementally and directly targets the training set during the training process of a neural network with minimal knowledge by the attacker. To the best of our knowledge, this is the first attack that executes between training epochs and targets the intermediate data structures of the training phase.
2. We conduct extensive experimental verification on the proposed attack to test its effectiveness and stealthiness. We define four evaluation criteria to quantify the effect of the attack, namely, the average of the loss change, the average inference probability, the training time difference, and the performance degradation measure.
3. We test the proposed attack on an important aspect of a cognitive city, namely, iris recognition. To the best of our knowledge, this is the first attempt to test the effect of an adversarial attack that occurs during training on the domain of iris recognition.
The rest of this paper is organized as follows: the most recent literature on the domain of physical attacks and iris recognition is presented in Section II. The proposed methods are outlined in Section III. The results are described and discussed in Section IV. Finally, Section V concludes and summarizes the main highlights and observations of this work.
## II Related Work
### _Attacks on Neural Networks_
Patch attacks are physical attacks that replace a subset of pixels in an image with pixels from adversarial patches to bias a model [5]. While many studies have proposed attacks that target faulting the inference phase [6, 7], only a handful of papers have focused on faulting the training phase and the intermediate values related to its computations [4]. For example, Zhao _et al._[6] applied the alternating direction method of multipliers at inference time to solve the optimization problem of the targeted fault sneaking attack. The results showed that the attack was successful and stealthy; moreover, the success rate was approximately 100% when the number of targeted images was less than 10, and it decreased as the number of fooled images increased. Furthermore, the work in [7] studied the effects of bitwise perturbations at inference time on 19 deep networks. The vulnerable parameters of the experimented networks were identified using heuristic functions. The results showed that most deep architectures have at least one parameter that causes an accuracy loss of over 90% when a bit-flip is executed on their bitwise representation.
In addition, the Fast Gradient Sign Method (FGSM) has been widely used in the literature as an attacking strategy [8]. This method includes adding noise whose direction is the same as the gradient of the cost function with respect to the data using a trained model. The work in [4], proposed the first attack that targets the training phase by changing the values of the ReLu function to bias the neural network. The novel attack was proven to be effective and stealthy.
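For concreteness, the FGSM step described above can be written in a few lines of PyTorch; the sketch below is illustrative rather than the exact setup used in the cited works, and it assumes inputs normalized to [0, 1] and a differentiable `model` and `loss_fn` supplied by the caller.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    """Shift a batch x in the direction of the loss gradient's sign (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        # Noise direction matches the gradient of the cost w.r.t. the data.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep pixels in the valid range
    return x_adv.detach()
```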
### _Attacks on Iris Recognition Systems_
The crucial role iris recognition has played in securing premises, in addition to the threatening effects of breaching such authentication systems, have made iris biometric authentication systems an active target for adversarial attacks. A novel morph attack on iris recognition systems was tackled in [9]. Sharma _et al._ generated morphed iris images using the Indian Institute of Technology Delhi (IITD) Iris Database and West Virginia University (WVU) multi-modal datasets. The morph attack achieved a success rate higher than 90% on two state-of-the-art iris recognition methods, which indicates the vulnerability of iris recognition systems.
In order to protect against the increasing attacks, researchers have also focused on studying countermeasures and detection mechanisms for iris recognition attacks. For example, Thukral _et al._[10] proposed an iris spoofing detection system that utilized Gabor filters and Histogram of Gradient (HOG) bins to extract features. Next, a Support Vector Machine (SVM) was used to detect if the extracted features represented fake or real iris. The proposed system was able to detect spoofing attacks with an accuracy of 98%. Finally, Tapia _et al._[11] tackled testing the liveness of the scanned iris to protect the system from being fooled by printed images or artificial eyes. The proposed work utilized a MobileNetV2 network, which was trained from scratch. Moreover, the authors increased the number of filters and weighted each class based on the number of its instances. The proposed method was able to accurately classify irises with competitive Bona Fide Classification Error Rates (BPCER) of less than 4% in all experiments.
## III Physical Gray-box Adversarial Attacks
A labeled training set of size \(s\) can be represented as \(DS=\{(x^{i},\,y^{i})\}_{i=1}^{s}\), where \(y^{i}\in\mathcal{Y}\) and \(\mathcal{Y}\) is the set of all possible output classes for an image classification problem. When training a deep classifier, we aim to optimize a discriminant function \(\mathcal{F}\) that maps each instance, \(x^{i}\), to the class associated with the highest class probability, as can be seen in Equation 1. This optimization takes place by passing \(DS\) to a deep classifier for a number of training rounds. The number of training rounds will be referred to as \(Epochs\) throughout the rest of this paper. The aforementioned setting of training \(\mathcal{F}\) without any attacks will be referred to as the **base model** throughout this work.
\[\mathcal{F}(x^{i})=\underset{y\in\mathcal{Y}}{\mathrm{argmax}}\;P(y\mid x^{i}) \tag{1}\]
### _Attack Definition_
In our proposed attack, an attacker aims to corrupt the training process by perturbing the training instances incrementally between training epochs in order to optimize a
corrupted discriminant function \(\mathcal{F}^{\prime}\) that produces faulty probability distributions over the possible output classes. The attack is executed implicitly in multiple rounds. In each poisoning round, a poisoning procedure that selects \(X\subseteq DS\) of size \(|X|=\alpha*s\) is executed, where \(\alpha\in(0\%,100\%]\) is the poisoning percentage coefficient. The attacker's goal is to replace \(X=\{(x^{i},y^{i})\}_{i=1}^{|X|}\) with a poisoned set \(X^{\prime}=\{(g(x^{i}),\,y^{i})\}_{i=1}^{|X|}\), where \(g(.)\) is the poisoning function that modifies \(x^{i}\) at the pixel level. The poisoning function replaces the pixels that fall within a selected area, namely \(Patch_{Area}\), with faulty pixels, \(x^{\prime}\), in order to corrupt the image representation and result in a faulty training process. The poisoning function can be seen in Equation 2, where \(W\) and \(H\) are the width and height of the training image instance \(x^{i}\).
\[g(x)_{u,v}=\begin{cases}x^{\prime}_{u,v}&\text{if }(u,v)\in Patch_{Area}\\ x_{u,v}&\text{otherwise}\end{cases},\qquad u\in[0,W),\;v\in[0,H) \tag{2}\]
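As an illustration of Equation 2, a minimal NumPy sketch of the poisoning function is given below; the helper name and arguments are our own, and the randomized choice of the patch location is assumed to be made by the caller.

```python
import numpy as np

def poison_instance(x, patch, top, left):
    """Apply g(.) from Eq. 2: overwrite the pixels inside Patch_Area.

    x:         (H, W, C) training image
    patch:     (h, w, C) faulty pixels x' (e.g., an image of a human eye)
    top, left: upper-left corner of Patch_Area (randomized elsewhere)
    """
    poisoned = x.copy()
    h, w = patch.shape[:2]
    poisoned[top:top + h, left:left + w] = patch  # pixels inside Patch_Area
    return poisoned                               # remaining pixels unchanged
```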
The attack targets the intermediate data structures, where the training instances are saved. In addition, it is executed incrementally between training epochs, such that a different \(X\) is selected every poisoning round in order to accumulate the poisoned training instances and increase the effectiveness of the attack.
The attack frequency coefficient, \(\beta\in[1,Epochs]\), determines the number of poisoning rounds. When \(\beta\) is set to 1, the attack is executed after each training epoch, causing an increased risk of damage. On the contrary, if the value is chosen to be \(Epochs\), then the poisoning process will only happen once, after the first training epoch.
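The interplay of \(\alpha\) and \(\beta\) can be summarized by the sketch below. It is a simplified sketch, not the exact injection code of this work: it assumes the intermediate data structure is a mutable in-memory list of (instance, label) pairs, and that `train_one_epoch` and the poisoning function `g` are supplied by the surrounding code; the scheduling condition is chosen to reproduce the two limiting cases of \(\beta\) described above.

```python
import random

def train_with_poisoning(dataset, g, train_one_epoch, alpha, beta, epochs):
    """Interleave poisoning rounds with training epochs.

    With beta = 1 a round runs after every epoch; with beta = epochs the
    condition below holds only at epoch 0, i.e., once after the first epoch.
    """
    s = len(dataset)
    for epoch in range(epochs):
        train_one_epoch(dataset)
        if epoch % beta == 0:                            # poisoning round
            X = random.sample(range(s), int(alpha * s))  # without replacement
            for i in X:
                x_i, y_i = dataset[i]
                dataset[i] = (g(x_i), y_i)               # label is unchanged
```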
### _Poisoning Strategy_
Function \(g(.)\) in Equation 2 replaces pixels in a training instance within the defined poisoning area, \(Patch_{Area}\). This poisoning procedure can be implemented in multiple ways. In this work, we opted to implement \(g(.)\) to execute local perturbations and global perturbations [12]. It is worth mentioning that only one type of perturbation was considered in each of the conducted experiments in this work.
In the local perturbations setting, a small area of the training instance, called the physical patch, is selected and replaced with pixels from another image. In this work, the physical patch was chosen to be close to the training set domain, hence it was an image of a human eye. It is worth mentioning that the size of the \(Patch_{Area}\) and its location are randomized, and optimizing them is out of the scope of this work [5].
On the other hand, in the global perturbations setting, all the instances in \(X\) are replaced with another randomly selected image from the training set. This way, the classifier will be exposed to a highly redundant training set, which corrupts the training process by increasing the risk of overfitting. Both poisoning strategies are not easy to blacklist, since the local setting only alters a small area of each instance, and the global perturbation setting uses an image from within the training instances in a manner that imitates image augmentation, which is a benign, widely used technique in training neural networks.
### _Attack Characteristics_
The attack specifications can be summarized as:
1. **The attack is non-targeted:** the attack definition in III-A shows that no restrictions apply on the choice of \(y^{i}\) in \(X\). Moreover, the value of \(y^{i}\) remains unchanged after poisoning takes place in \(X^{\prime}\).
2. **The attack does not affect packets delay:** the attack only targets the training phase, whereas the inference phase is executed in the usual manner. Hence, the attack is stealthy in the sense that it does not affect the packet delay when the deep classifier is deployed on the cloud.
3. **The attack samples without replacement:** to guarantee faster and stealthier execution, \(X\) is sampled every poisoning round without replacement; that is, an instance can only be included once in \(X\) at a certain poisoning round, although an instance can be included in multiple poisoning rounds. This implies that the network will be exposed to a different training set after every poisoning round, which results in higher training instability.
4. **The attack is incremental for increased effectiveness:** the poisoned instances in \(X^{\prime}\) accumulate in the training set after each poisoning round and throughout the training phase, which in turn intensifies the effect of poisoning even at a low value of \(\alpha\).
5. **The attack is gradient-free [13] and is gray box:** the attack is gray box since we assume that the attacker only has access to the intermediate data structures of the training process without the need to access the physical path of the training instances or the neural network architecture. In other words, the attack is agnostic to the neural network architecture. The attack is also gradient-free since it perturbs the training data between epochs without the need to access the gradients of the attacked neural network.
6. **The attack targets intermediate data structures:** typically, developers' efforts are more focused on preparing and preprocessing the training set before training. On the other hand, what happens during training and the values of the intermediate data structures that keep the training instances are overlooked, especially since the training is usually conducted on powerful servers with limited physical access. Hence, this attack, which poisons the data implicitly between training epochs, acquires its high-risk property from attacking data structures that are typically not monitored by professionals, and hence the attack goes unnoticed despite the damage it causes.
### _Evaluation Metrics_
In each experiment, the neural networks will be evaluated and compared in terms of the following evaluation measures:
1. Attack effectiveness measures: an attack is called effective if it achieves its intended goals. In our proposed attack, the goal is to expose the deep classifier to an
unstable training process, which in turn will result in faulty probability distributions produced by the network at the inference stage.
1. Average of Loss Change \((ALC)\): the loss function is typically expected to decrease as the training process progresses. This is due to backpropagation, which reflects what the network learned during each training epoch. The \(ALC\) measures the average change in the loss value over the training epochs, and the sign of this evaluation metric is a leading element, as it reflects whether the loss was decreasing or increasing throughout training. Executing the attack is expected to cause instability in the training process due to the noisy poisoned data and, hence, increase the \(ALC\) value. The \(ALC\) can be defined as follows, where \(\ell\) is the loss and \(Epochs\) is the number of training epochs: \[ALC=\frac{\sum_{i=1}^{Epochs}(\ell_{i}-\ell_{i-1})}{Epochs-1}\] (3)
2. Average Inference Probability (AIP): the softmax function is typically used in the last layer of deep classifiers to normalize the output to a probability distribution over the possible output classes. Each test instance is classified as the class with the highest probability. In this evaluation criterion, we assess the effect of the attack on the probabilities produced by the model at the inference stage, as higher probabilities typically imply more confidence about the selected class. As a result, a decreased average probability reflects the effectiveness of the attack on the final output of the model. \(AIP\) can be calculated using Equation 4, where \(t^{i}\) is a test instance: \[AIP=Average\left(\max_{y\in\mathcal{Y}}P(y\mid t^{i})\right)\] (4)
2. Attack stealthiness measures: an attack is called stealthy if the evaluation metrics of the corrupted classifier \(\mathcal{F}^{\prime}\) are close to the metrics of the base model \(\mathcal{F}\)[4].
1. Training Time Difference \((TTD)\): training a neural network can be a lengthy process, especially when the training instances are large. Hence, it is crucial to ensure that executing the attack will not add an observable amount of time to the training phase, in order to keep the attack unnoticed. The \(TTD\) measure can be defined as follows: \[TTD=TrainingTime^{\prime}-TrainingTime_{base}\] (5) where \(TrainingTime_{base}\) is the time taken to train the base model, and \(TrainingTime^{\prime}\) is the training time when the neural network is trained with poisoned data.
2. Performance Degradation Measure (PDM): in order to confirm the attack stealthiness, the metrics of the poisoned classifier need to be reasonably close to the metrics of the base classifier. In this evaluation criterion, the difference between the macro Fscore of the base model and each poisoned model is calculated, as described in Equation 6, where \(Fscore^{\prime}\) is the Fscore of a poisoned model: \[PDM=Fscore_{base}-Fscore^{\prime}\] (6)
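For reference, all four metrics reduce to a few lines of NumPy; the sketch below assumes the caller records the per-epoch loss values, the softmax outputs on the test set, the wall-clock training times, and the macro Fscores.

```python
import numpy as np

def alc(losses):
    """Eq. 3: mean change of the loss across epochs (the sign is informative)."""
    return float(np.mean(np.diff(np.asarray(losses, dtype=float))))

def aip(probs):
    """Eq. 4: mean winning-class probability; probs is (num_test, num_classes)."""
    return float(np.max(probs, axis=1).mean())

def ttd(train_time_poisoned, train_time_base):
    """Eq. 5: extra training time introduced by the attack."""
    return train_time_poisoned - train_time_base

def pdm(fscore_base, fscore_poisoned):
    """Eq. 6: macro Fscore drop of a poisoned model vs. the base model."""
    return fscore_base - fscore_poisoned
```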
### _Datasets_
The proposed attack perturbs images and hence can target any computer vision application. Nevertheless, we opted to apply it to an iris recognition dataset, due to the significance of this domain. The CASIA Iris Subject Ageing dataset [14] was considered in our experiments. This dataset was collected by the National Laboratory of Pattern Recognition (NLPR) in China in April 2009 and April 2013. In this work, the subset of CASIA Iris Subject Ageing which was collected in 2009 using the H100 sensor was chosen due to its high diversity and good size. The subset comprises 37912 instances of the left and right eyes of 48 individuals. The dataset instances pose some challenging scenarios, such as glasses and partially closed eyes; moreover, some instances have very low brightness. The cross-validation method was used to train and evaluate the neural networks, and 100 images from each subject were randomly selected for the test dataset.
### _Technical and Experimental Setup_
Three state-of-the-art deep classifiers, namely, Densenet, VGG, and Xception, were considered for this work. Moreover, the number of epochs was set to 10, the cross entropy loss function was used, and the networks were trained with a learning rate of 0.01 on the Google Colab Pro platform, which utilizes NVIDIA GPUs. It is worth mentioning that the code of this work is available on GitHub [15].
Each of the 3 considered deep classifiers was experimented with \(\alpha\) values of 5%, 10%, 15%, and 20%, as 20% is typically the maximum poisoning percentage considered in the literature [16]. In the experiment descriptions and results, the local perturbations poisoning strategy will be referred to as \(P\), and the global perturbations strategy will be referred to as \(R\).
## IV Results and Discussion
In this section, the results of evaluating the proposed attack will be presented in detail. Figures 1, 2, and 3 depict the results of the evaluation metrics described in III-D. In all figures, the result of the base model is depicted as the origin point (\(\alpha\) = 0%).
On a different note, increasing the attack frequency (i.e., a lower \(\beta\)) resulted in increased effectiveness in all experiments. In the experiments where \(\beta\)'s value was set to 1, the \(ALC\) kept increasing as the value of \(\alpha\) increased, and the value was positive in all experiments where \(\alpha\geq 10\%\). On the other hand, when \(\beta=Epochs\), the \(ALC\) results were increasing but negative in all experiments, which means that the loss values were still decreasing but at a lower rate compared to the base model and the experiments of higher frequency.
The \(AIP\) results are depicted in Figure 2, where it can be seen that increasing the value of \(\alpha\) resulted in decreasing the \(AIP\) in all experiments. However, this decrease varied across the experiments; for example, the decrease was slight, even when \(\alpha\) increased, in the experiments where \(\beta\)=\(Epochs\). On the other hand, increasing \(\alpha\) with a higher frequency (\(\beta=1\)) resulted in a more noticeable drop in the \(AIP\) values. For example, it can be seen in Figure 2(c) that the \(AIP\) value dropped by 24.88% when \(\alpha=20\%\) and \(\beta=1\) in the random poisoning experiment, \(R\). Whereas, the \(AIP\) value only dropped by 5% when we only changed the value of \(\beta\) to be equal to the number of \(Epochs\). Furthermore, the highest drop in the \(AIP\) in the poisoned networks compared to their unpoisoned counterparts at inference time was 15.37%, 14.68%, and 24.88% for the Densenet, VGG, and Xception, respectively. Overall, we can conclude that the attack was effective in all conducted experiments. Moreover, the attack effectiveness has a positive correlation with the poisoning percentage \(\alpha\) and the poisoning frequency (i.e., a lower \(\beta\)).
### _Analysis of Attack Stealthiness_
It is crucial to keep the proposed attack undetected. The attack can be easily noticed if it takes long to execute; thus, to ensure the attack stealthiness, the \(TTD\) measure is monitored
Fig. 1: Experimental results of the Average of Loss Change (ALC) values
Fig. 3: Experimental results of the Performance Degradation Measure (PDM) values
Fig. 2: Experimental results of the Average Inference Probability (AIP) values
in all experiments. Among all conducted experiments, the maximum \(TTD\) value was 63 seconds. Hence, the attack did not add a noticeable period of time to the training time of the base model. Moreover, to monitor the stealthiness of the attack, the \(PDM\) values were recorded, as can be seen in Figure 3. The maximum \(PDM\) value was recorded for the VGG network with \(\alpha=20\%\) and \(\beta=1\) in the random poisoning experiment, \(R\). Overall, the average \(PDM\) values were 1.2%, 1.9%, and 1.5% for the Densenet, VGG, and Xception, respectively. Hence, it can be concluded that the attack demonstrated a stealthy behavior.
### _Analysis of Poisoning Strategy_
As explained in Section III-B, the attack was tested under the local perturbations setting (\(P\)) and the global perturbations setting (\(R\)). The influence of the perturbation type was highly associated with the value of \(\beta\). It can be seen in Figures 1, 2 and 3 that in the experiments of low frequency, where \(\beta=Epochs\), both perturbation types achieved comparable results. On the other hand, when the poisoning rounds were executed after every epoch, where \(\beta\)=1, the attack showed the highest effectiveness in the global perturbations setting, \(R\).
Finally, the results showed that the proposed attack is effective and stealthy. Its effectiveness increases when the attack is intensified by increasing the value of \(\alpha\), increasing the number of affected pixels, as in the case of global perturbations, and decreasing \(\beta\) for a higher execution frequency. Moreover, the proposed attack inherits its riskiness from attacking unobserved data structures that usually reside on powerful servers with limited physical access. The attack is also incremental and accumulates poisoned data gradually to intensify its effectiveness across the training epochs. In addition, the attack requires no knowledge of the neural network structure, as all experiments in this work were conducted using the same injection code.
## V Conclusion and Future work
Neural networks are vulnerable to adversarial attacks. Moreover, the digital transformation adopted worldwide implies continuous acquisition and analytics of big streams of data, which has brought novel digital threats and unforeseen exposures to cybersecurity. In this work, we propose a novel gradient-free, gray box, incremental attack that targets the intermediate data structures of the training phase of neural networks. The attack has 3 main parameters: the attack percentage coefficient, the attack frequency coefficient, and the poisoning strategy. In all conducted experiments, it was noted that the attack's effectiveness had a positive correlation with the aforementioned parameters while its stealthiness was retained.
Moreover, the attack resulted in unstable training, as it made the loss values increase, which in turn indicates poor learning and generalization. In addition, the attack was able to decrease the probability of the output class (\(AIP\)) in the poisoned networks compared to their unpoisoned counterparts at inference time by 15.37%, 14.68%, and 24.88% for the Densenet, VGG, and Xception, respectively. Despite its effectiveness, the attack remained stealthy, as it only dropped the Fscore values by 1.2%, 1.9%, and 1.5% for the poisoned Densenet, VGG, and Xception, respectively.
In future work, further sensitivity analyses will be conducted on existing and new parameters, such as the type of communication protocol, and the location and size of the patch area. Moreover, the attack will be compared to other iris recognition attacks.
## Acknowledgements
This research was supported by the Technology Innovation Institute (TII), Abu Dhabi, UAE, under the CyberAI project (grant number: TII/DSRC/2022/3036).
|
2301.10531 | 3D Tooth Mesh Segmentation with Simplified Mesh Cell Representation | Manual tooth segmentation of 3D tooth meshes is tedious and there are
variations among dentists. Several deep learning based methods have been
proposed to perform
automatic tooth mesh segmentation. Many of the proposed tooth mesh segmentation
algorithms summarize the mesh cell as - the cell center or barycenter, the
normal at barycenter, the cell vertices and the normals at the cell vertices.
Summarizing of the mesh cell/triangle in this manner imposes an implicit
structural constraint and makes it difficult to work with multiple resolutions
which is done in many point cloud based deep learning algorithms. We propose a
novel segmentation method which utilizes only the barycenter and the normal at
the barycenter information of the mesh cell and yet achieves competitive
performance. We are the first to demonstrate that it is possible to relax the
implicit structural constraint and yet achieve superior segmentation
performance | Ananya Jana, Hrebesh Molly Subhash, Dimitris N. Metaxas | 2023-01-25T11:43:56Z | http://arxiv.org/abs/2301.10531v1 | # 3D Tooth Mesh Segmentation With Simplified Mesh Cell Representation
###### Abstract
Manual tooth segmentation of 3D tooth meshes is tedious and there are variations among dentists. Several deep learning based methods have been proposed to perform automatic tooth mesh segmentation. Many of the proposed tooth mesh segmentation algorithms summarize the mesh cell as the cell center or barycenter, the normal at the barycenter, the cell vertices, and the normals at the cell vertices. Summarizing the mesh cell/triangle in this manner imposes an implicit structural constraint and makes it difficult to work with multiple resolutions, which is done in many point cloud based deep learning algorithms. We propose a novel segmentation method which utilizes only the barycenter and the normal at the barycenter information of the mesh cell and yet achieves competitive performance. We are the first to demonstrate that it is possible to relax the implicit structural constraint and yet achieve superior segmentation performance.1
Ananya Jana\({}^{\star\dagger}\), Hrebesh Molly Subhash\({}^{\dagger}\), Dimitris Metaxas\({}^{\star}\)\({}^{\star}\) Department of Computer Science, Rutgers University
\({}^{\dagger}\)Colgate Palmolive Company, Piscataway
Footnote 1: [https://github.com/ananyajama/tooth_mesh_seg](https://github.com/ananyajama/tooth_mesh_seg)
Intraoral scan segmentation, 3D tooth mesh segmentation, deep learning, tooth mesh, tooth point cloud
## 1 Introduction
With the advancement of technology, computer-aided orthodontic treatment is being widely embraced. Intraoral scanners are being widely adopted in place of intraoral/dental cameras due to their ability to reconstruct the 3D surface. A vital task in computer-aided orthodontic treatment is automated and accurate segmentation of teeth from intraoral scans. The intraoral scanners produce 3D surface reconstructions of the teeth either in the form of a point cloud or in a mesh format, or both. A highly accurate automated tooth mesh segmentation can help in downstream tasks such as recognising and classifying different dental/oral conditions like gingivitis, caries, and white lesions. There are multiple challenges involved in tooth mesh segmentation, such as crowded teeth, misaligned teeth, and missing teeth. The size and shape of teeth can also vary widely across subjects. The second and third molars may evade capture due to being in the deep intraoral regions, or they might not be fully formed. Different teeth and gum conditions like recession, enamel loss, etc., can also alter the appearance of the teeth significantly. Multiple automatic tooth mesh segmentation algorithms have been proposed [1, 2, 3, 4, 5, 6]. These tooth mesh segmentation algorithms can achieve high accuracy; some of these methods can even achieve high accuracy when trained on a single 3D tooth mesh [7]. In this paper, we note that a dominating trend in these highly accurate deep learning based tooth segmentation methods is to summarize or represent the mesh cell in a specific way which attaches the mesh cell vertices to the barycenter of the mesh cell as features. This summarizing makes it hard to use multiple resolutions of the tooth mesh in the segmentation methods. Utilizing multiple resolutions of the data is common in point cloud processing algorithms such as BAAFNet [8]. Sampling from the tooth mesh is also difficult with conventional mesh cell summarizing, as it leads to loss of surface information and causes disconnectedness, as shown in Fig. 2. It can also be noted that the existing summarizing implicitly poses a structural constraint, as shown in Fig. 1. This structural constraint on the data is artificial. The reason is that the mesh representation consists of mesh cells which are artificially created to represent the entire object surface, and the mesh cells could have been alternatively laid out as well. In other words, it is possible to have multiple mesh cell layouts for the same 3D dental surface, as the mesh cells are a way to approximate the surface. Given this constrained representation, we explore, in this paper, whether we can utilize a simplified mesh cell representation by relaxing the structural constraint, yet achieve high segmentation performance. Our key
contributions are: (a) proposing a novel tooth mesh segmentation method that utilizes a simplified mesh cell representation; our model achieves competitive performance; (b) we are the first to demonstrate that the simplified mesh cell representation can be equally or even more effective if coupled with a suitable deep learning network; (c) the simplified mesh cell representation obtained by relaxing the implicit structural constraint can pave the way for the utilization of multi-resolution tooth meshes in future segmentation algorithms.
## 2 Methods
Our proposed method has three steps (1) Data preprocessing, (2) Data augmentation, and (3) Segmentation network to segment the jaw into the seven tooth labels and the background/gingiva label.
### Data Pre-processing
We utilize 589 subjects from the public dataset [11]. These subjects do not have the wisdom teeth and hence have a teeth count \(\leq\) 14. We utilize the lower jaw scans. Each raw lower jaw scan has labels for every point. In this work, we are interested in tooth mesh segmentation, hence we interpolate the pointwise labels to mesh triangle labels using the k-nearest-neighbor algorithm. The raw lower jaw scan contains more than 100000 mesh cells. The mesh cells are downsampled to 16000 cells using quadric downsampling. Each mesh cell can be characterized by four points - the three vertices of the mesh triangle and the barycenter of the mesh triangle. With these four points, a 24 dimensional vector is constructed, comprising the 12 coordinate values and the 12 normal values at the four points, as per the convention followed in [2, 3].
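A minimal NumPy sketch of this 24-dimensional cell descriptor is shown below; how the normal at the barycenter is obtained is our assumption (here, the normalized average of the vertex normals), as this detail is not spelled out above.

```python
import numpy as np

def cell_feature(v0, v1, v2, n0, n1, n2):
    """Build the 24-D descriptor: coordinates and normals at the three
    triangle vertices plus the barycenter (4 points x 3 values x 2)."""
    verts = np.stack([v0, v1, v2])            # (3, 3) vertex coordinates
    bary = verts.mean(axis=0)                 # barycenter of the triangle
    normals = np.stack([n0, n1, n2])          # (3, 3) vertex normals
    bary_n = normals.mean(axis=0)
    bary_n = bary_n / np.linalg.norm(bary_n)  # unit normal at the barycenter
    coords = np.concatenate([verts.ravel(), bary])     # 12 coordinate values
    norms = np.concatenate([normals.ravel(), bary_n])  # 12 normal values
    return np.concatenate([coords, norms])    # shape (24,)
```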
### Data Augmentation
We perform three types of data augmentation to improve the model's generalization ability - 1) random rotation, 2) random translation, and 3) random rescaling. We perform 40 augmentations for each data point, thereby effectively creating 40 new samples for each lower jaw scan.
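One augmentation step might look like the following sketch; the rotation axis and the translation and scale ranges are illustrative choices rather than the exact values used in this work, and any normal features should be rotated but neither translated nor rescaled.

```python
import numpy as np

def augment(points, max_shift=10.0, scale_range=(0.8, 1.2)):
    """One random augmentation of (N, 3) cell coordinates:
    rotate about z, rescale, then translate."""
    theta = np.random.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    scale = np.random.uniform(*scale_range)
    shift = np.random.uniform(-max_shift, max_shift, size=(1, 3))
    return (points @ rot.T) * scale + shift
```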
### Segmentation Network
Our proposed method is shown in Fig. 3. Our method consists of two parallel branches - a geometry processing branch and a curve processing branch. The two branches output two different global features, which are then concatenated. Finally, two lightweight 1D convolutions process the concatenated global features to give the segmentation scores.
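This two-branch composition can be summarized by the PyTorch skeleton below; the branch modules and their output dimensions are placeholders standing in for the PointMLP and CurveNet backbones described next, and channel-first (B, C, N) tensors are assumed.

```python
import torch
import torch.nn as nn

class DualBranchSeg(nn.Module):
    """Concatenate per-cell global features from both branches and map
    them to the 8 output labels (7 tooth labels + gingiva)."""
    def __init__(self, geo_branch, curve_branch, geo_dim, curve_dim, n_cls=8):
        super().__init__()
        self.geo = geo_branch      # PointMLP-style: consumes xyz + normals
        self.curve = curve_branch  # CurveNet-style: consumes xyz only
        self.head = nn.Sequential( # two lightweight 1D convolutions
            nn.Conv1d(geo_dim + curve_dim, 128, 1), nn.ReLU(),
            nn.Conv1d(128, n_cls, 1))

    def forward(self, xyz, normals):  # (B, 3, N) each
        f_geo = self.geo(torch.cat([xyz, normals], dim=1))    # (B, geo_dim, N)
        f_curve = self.curve(xyz)                             # (B, curve_dim, N)
        return self.head(torch.cat([f_geo, f_curve], dim=1))  # (B, 8, N)
```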
The current mesh cell summarizing technique utilized by the state-of-the-art methods introduces an implicit structural constraint by attaching the mesh cell vertices to the barycenter. We aim to take away this implicit constraint in our proposed method by summarizing the mesh cell with only the barycenter and the normal at the barycenter. The relaxation of the structural constraint and the absence of the mesh vertices could potentially hamper the ability of the segmentation method to learn the representation of the mesh cell or, broadly, the representation of the surface containing the barycenter. To counter that effect, we introduce the geometry processing branch in our tooth segmentation network. This geometry processing branch is a PointMLP[9] network and consists of a Geometric Affine Module (GAM) and a number of residual point (ResP) blocks. The geometric affine module of PointMLP[9] is of interest to us as this module helps in creating a normalized representation of the surface/neighborhood even in the case of sparse and diverse geometric structures. Once the vertices of the mesh cells are no longer attached to the barycenter in the form of features, the barycenters along with the normals at those barycenters become sparse. The PointMLP head helps in learning a representation from this comparatively sparse data and creating the global feature. In addition to the geometry processing branch, we also introduce a curve processing branch in our network. We utilize CurveNet[10] for this branch. The curve processing head is tasked with understanding and evaluating curve features from the barycenters (not the normals) of the mesh
Figure 3: Overall Architecture of our proposed network. Each mesh cell is summarized using the barycenter and the normal at the barycenter. The data is processed via a geometry processing branch and a curve processing branch
Figure 2: (a) sample of a mesh. (b) the mesh after some triangles have been sampled randomly by sampling their barycenter. Such sampling will result in upsetting the mesh topology and loss of connectedness.
cells. The intuition behind this step is that the different types of teeth differ considerably in shape and size, e.g., the molar teeth and the incisor teeth have different appearances. Hence, the curves induced on the barycenter coordinates (not the normals) can convey meaningful information and thereby increase the representation learning capability of our tooth mesh segmentation network. Similar to CurveNet[10], the curve processing branch consists of Local Point Feature Aggregation (LPFA), Curve Intervention Convolution (CIC) foll
Figure 4: The qualitative comparison of tooth labeling via different methods. Due to space constraint, we could not show all the eleven methods. (zoom in for better view in color)
structural constraint, our geometry processing branch can relax this constraint more effectively with the residual connections and affine geometric module. At the same time the curve processing branch can enrich the features by adding the information regarding the curves formed using the barycenters. The curve processing branch also benefits by utilizing only the barycenter because the addition of the mesh vertices information could have confused the network. The relaxation in the structural constraint is a key advantage in our method.
#### 3.3.2 Ablation Study
We performed ablation studies to illustrate the effectiveness of the proposed method. The results are shown in Table 3. Ablation1 is the geometry processing branch, which is similar to PointMLP[9] but operates on the 24 dimensional vector as the feature of the mesh cell. Ablation2 is similar to Ablation1, but Ablation2 only utilizes the barycenter and the normal at the barycenter. As we can see from Ablation1 and Ablation2, the relaxation of the structural constraint already has a positive effect on the geometry processing network. Ablation3 is similar to Ablation1, but Ablation3 utilizes only the barycenter and not the normal at the barycenter. This reaffirms the understanding that the normal information can encode the surface information better than the coordinates alone. Ablation4 is the curve processing branch, similar to CurveNet[10]. We can see that each component of our carefully designed segmentation network improves the performance of our method.
## 4 Conclusion
In this work, we proposed a method to segment teeth from tooth mesh data using a simplified mesh cell representation. We demonstrate that although the state-of-the-art tooth segmentation methods utilize the mesh vertices as a feature of the mesh cell, this type of representation might be redundant at the commonly used resolution of the tooth mesh utilized by these state-of-the-art tooth segmentation algorithms. Rather, this representation imposes an implicit structural constraint on the data which may hamper the learning and also prevent using multiple resolutions of the tooth mesh data. Our proposed method, based on this intuition, outperforms the existing methods, thus compelling us to question whether extra data always implies additional learning, as generally believed, or whether it can be self-limiting in certain scenarios.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Method & BG & T1 & T2 & T3 & T4 & T5 & T6 & T7 \\ \hline PointNet [CVPR’17][12] & 0.9374 & 0.7836 & 0.9100 & 0.8853 & 0.9151 & 0.8937 & 0.8994 & 0.9236 \\ \hline PointNet++ [NeurIPS’17][13] & 0.9145 & 0.7706 & 0.8931 & 0.8663 & 0.8739 & 0.8276 & 0.7724 & 0.8275 \\ \hline DGCNN [ATG’19][14] & 0.9588 & 0.8377 & 0.9340 & 0.9269 & 0.9457 & 0.9319 & 0.9295 & 0.9370 \\ \hline MeshSegNet[TMI’20][15] & 0.9120 & 0.7026 & 0.7899 & 0.7653 & 0.8505 & 0.8211 & 0.6744 & 0.7845 \\ \hline MeshSegNet+GCO[TMI’20][15] & 0.9470 & 0.8408 & 0.8948 & 0.8925 & 0.916 & 0.8690 & 0.7681 & 0.8969 \\ \hline TSGCNet [CVPR’21][3] & 0.9528 & 0.6323 & 0.9055 & 0.9067 & 0.9352 & 0.9278 & 0.9065 & 0.9160 \\ \hline GAC [PRL’21][2] & 0.8995 & 0.6330 & 0.8099 & 0.7495 & 0.8189 & 0.8365 & 0.8130 & 0.8356 \\ \hline BAAFNet [CVPR’21][8] & 0.5016 & 0.4559 & 0.6676 & 0.6293 & 0.6634 & 0.6457 & 0.5767 & 0.6724 \\ \hline pointMLP [ICLR’22][9] & 0.9655 & 0.8552 & 0.9490 & 0.9405 & **0.9596** & 0.9490 & 0.9351 & 0.9436 \\ \hline PCT [CVM’21][16] & 0.7791 & 0.2974 & 0.5147 & 0.4496 & 0.3207 & 0.3654 & 0.4497 & 0.5788 \\ \hline MBESegNet [ISBI’22][5] & 0.8089 & 0.4107 & 0.6989 & 0.6852 & 0.7295 & 0.6512 & 0.5464 & 0.5255 \\ \hline CurveNet [ICCV’21][10] & 0.9540 & 0.7735 & 0.9132 & 0.9076 & 0.9291 & 0.9129 & 0.9085 & 0.9293 \\ \hline Ours & **0.9657** & **0.8654** & **0.9516** & **0.9462** & 0.9595 & **0.9495** & **0.9395** & **0.9488** \\ \hline \end{tabular}
\end{table}
Table 1: The tooth segmentation results from different methods in terms of the labelwise Dice Score.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Method & Input & OA & DSC & SEN & PPV \\ \hline PointNet[12] & 4p, 4a & 0.9167 & 0.8935 & 0.9033 & 0.9020 \\ \hline PointNet++[13] & 4p, 4a & 0.8820 & 0.8432 & 0.8546 & 0.8553 \\ \hline DGCNN[14] & 4p, 4a & 0.9435 & 0.9251 & 0.9334 & 0.9330 \\ \hline MeshSegNet[15] & 4p, 1n & 0.8914 & 0.8631 & 0.8787 & 0.8693 \\ \hline MeshSegNet+GCO[15] & 4p, 1n & 0.9319 & 0.9085 & 0.9295 & 0.9013 \\ \hline TSGCNet[3] & 4p, 4a & 0.9265 & 0.8853 & 0.9148 & 0.8928 \\ \hline BAAFNet[8] & 4p, 4a & 0.8451 & 0.7994 & 0.8080 & 0.8346 \\ \hline BAFNet[8] & 4p, 4a & 0.9510 & 0.6015 & 0.7458 & 0.5846 \\ \hline pointMLP[9] & 4p, 4a & 0.9537 & 0.9372 & 0.9468 & 0.9416 \\ \hline DFT[16] & 1p & 0.6192 & 0.4694 & 0.4994 & 0.5760 \\ \hline MBESegNet[5] & 4p, 1n & 0.7062 & 0.6320 & 0.7002 & 0.6344 \\ \hline CurveNet[10] & 1p & 0.9298 & 0.9127 & 0.9220 & 0.9136 \\ \hline Ours & 1p, 1n & **0.9553** & **0.9454** & **0.9505** & **0.9457** \\ \hline \end{tabular}
\end{table}
Table 2: The tooth segmentation results from different methods in terms of the Overall Accuracy and the Dice Score. The input column specifies how many points (p) and how many normals (n) are used in the algorithm
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Method & b & b-n & v & v-n & OA & DSC & SEN & PPV \\ \hline Ablation1 & ✓ & ✓ & ✓ & ✓ & 0.9537 & 0.9372 & 0.9468 & 0.9416 \\ \hline Ablation2 & ✓ & ✓ & ✗ & ✗ & 0.9552 & 0.9405 & 0.9496 & 0.9435 \\ \hline Ablation3 & ✓ & ✗ & ✗ & ✗ & 0.9364 & 0.9157 & 0.9266 & 0.9213 \\ \hline Ablation4 & ✓ & ✗ & ✗ & ✗ & 0.9298 & 0.9127 & 0.9220 & 0.9136 \\ \hline Ours & ✓ & ✓ & ✗ & ✗ & **0.9553** & **0.9454** & **0.9505** & **0.9457** \\ \hline \end{tabular}
\end{table}
Table 3: Results of the ablation study in terms of the Overall Accuracy and the Dice Score. b, b-n, v, v-n denote the barycenter, the normal at the barycenter, the vertices, and the normals at the vertices, respectively.
## 5 Compliance with Ethical Standards
This research study was conducted retrospectively using human subject data made available in open access by [11]. Ethical approval was not required as confirmed by the license attached with the open access data.
## 6 Acknowledgments
The work has been funded by the Colgate-Palmolive Company.
|
2310.06412 | Encoder-Decoder-Based Intra-Frame Block Partitioning Decision | The recursive intra-frame block partitioning decision process, a crucial
component of the next-generation video coding standards, exerts significant
influence over the encoding time. In this paper, we propose an encoder-decoder
neural network (NN) to accelerate this process. Specifically, a CNN is utilized
to compress the pixel data of the largest coding unit (LCU) into a fixed-length
vector. Subsequently, a Transformer decoder is employed to transcribe the
fixed-length vector into a variable-length vector, which represents the block
partitioning outcomes of the encoding LCU. The vector transcription process
adheres to the constraints imposed by the block partitioning algorithm. By
fully parallelizing the NN prediction in the intra-mode decision, substantial
time savings can be attained during the decision phase. The experimental
results obtained from high-definition (HD) sequences coding demonstrate that
this framework achieves a remarkable 87.84\% reduction in encoding time, with a
relatively small loss (8.09\%) of coding performance compared to AVS3 HPM4.0. | Yucheng Jiang, Han Peng, Yan Song, Jie Yu, Peng Zhang, Songping Mai | 2023-10-10T08:29:34Z | http://arxiv.org/abs/2310.06412v1 | # Encoder-Decoder-Based Intra-Frame Block Partitioning Decision
###### Abstract
The recursive intra-frame block partitioning decision process, a crucial component of the next-generation video coding standards, exerts significant influence over the encoding time. In this paper, we propose an encoder-decoder neural network (NN) to accelerate this process. Specifically, a CNN is utilized to compress the pixel data of the largest coding unit (LCU) into a fixed-length vector. Subsequently, a Transformer decoder is employed to transcribe the fixed-length vector into a variable-length vector, which represents the block partitioning outcomes of the encoding LCU. The vector transcription process adheres to the constraints imposed by the block partitioning algorithm. By fully parallelizing the NN prediction in the intra-mode decision, substantial time savings can be attained during the decision phase. The experimental results obtained from high-definition (HD) sequences coding demonstrate that this framework achieves a remarkable 87.84% reduction in encoding time, with a relatively small loss (8.09%) of coding performance compared to AVS3 HPM4.0.
intra prediction, block partition, CNN encoder, Transformer decoder
## I Introduction
An efficient video coding standard is of great significance for the development of the high-definition video industry. To this end, the next-generation video coding standards, Versatile Video Coding (VVC) and Audio Video coding Standard 3 (AVS3), were released in July and November 2020, respectively. Compared to the widely used HEVC standard, both VVC and AVS3 achieve around 30% performance improvement [1], but at the cost of increasing the encoding complexity by an order of magnitude [2].
The intra-frame prediction in video coding is usually used for key frame compression. It utilizes spatial correlation between adjacent pixels to predict the current pixel value, thereby reducing the number of bits required to represent the current pixel value and thus reducing the amount of video data. Considering that the intra-frame block partitioning decision algorithm accounts for over 97% of the intra-frame encoding time [3], there has been a lot of research on reducing the time required for this decision making.
Traditional acceleration algorithms for intra-frame block partitioning usually use hand-crafted feature extraction for analysis, and skip some of the partitioning modes of the current coding unit (CU) based on the analysis results [4, 5]. Compared with hand-crafted feature extraction, neural network-based acceleration algorithms can usually achieve higher coding efficiency. There are two primary categories of neural network acceleration algorithms.
One category combines neural networks with traditional intra-frame prediction algorithms. These algorithms leverage the information acquired from the preceding neural network to reduce the number of traversals required by traditional intra-frame prediction algorithms. More specifically, a pooling-variable CNN was proposed to predict the continuation of the current CU's partition in advance [6]. Tech _et al._ proposed in their various works to use CNNs to predict different parameters related to the current LCU, which were utilized to constrain the block partitioning process [7, 8]. Xu _et al._ used two lightweight CNNs as classifiers to distinguish whether the CU is to be partitioned and the direction in which the partition should take place [9]. Moreover, one line of research employs CNNs to predict the partitioning probabilities of basic edges within an LCU. The predicted probability vector is then exploited in subsequent block partitioning decisions to expedite the judgement process [10, 11].
The other category decouples neural networks from traditional intra-frame prediction algorithms, and allows the neural network to directly output the block partitioning result of the current LCU. The traditional intra-frame prediction algorithm only needs to make mode decisions without block partitioning decisions. For instance, Li _et al._ proposed a MSECNN, as described in [12], for predicting partitioning results in a greedy way. However, the proposed network architecture is excessively complex, leading to a significant increase in computational load. When selecting the Top-1 result, the method proposed in [11] can replace the block partitioning decision process, but it demonstrates a significant decrease in performance.
In this paper, we propose a novel approach utilizing a compact network structure to accurately predict block partitioning results. By fully parallelizing the process of predicting block partition, significant reductions in encoding time can be achieved. The proposed network has been trained and validated within the AVS3 coding framework, but can be easily adapted to the VVC framework with minor modifications.
## II Task Formulation
For video coding standards such as VVC and AVS3, once the partition of the parent node has been determined in the CU block partitioning decision process, the decision order for its child nodes is fixed. Therefore, the block partitioning result of the LCU can be represented as a structured variable-length integer vector, with each integer element in the vector ranging
from 0 to 5, corresponding to the six partitioning modes of Non-Split (NS), QuadTree (QT), Horizontal Binary-Tree (BT-H), Vertical Binary-Tree (BT-V), Horizontal Ternary-Tree (TT-H) and Vertical Ternary-Tree (TT-V) in the VVC standard, or NS, QT, BT-H, BT-V, Horizontal Extended Quad-Tree (EQT-H) and Vertical Extended Quad-Tree (EQT-V) in the AVS3 standard.
As described in [11], the partitioning result of a 64x64-sized LCU can also be represented as a fixed-length vector consisting of 480 binary elements of 1 or 0, where 1 represents that the corresponding 4-pixel length edge (basic edge) belongs to the boundary of the partitioning block, and 0 represents the opposite. Fig. 1 provides a visual representation of these two different forms of representation discussed above, using two LCUs as examples. It should be noted that, in the actual inference process, the elements in the fixed-length vector are floating-point numbers ranging from 0 to 1, representing the probability of the basic edge belonging to the block partitioning boundary.
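The count of 480 follows directly from the 4-pixel grid: a 64x64 LCU contains 16x16 basic cells, and only the 15 interior horizontal and 15 interior vertical grid lines, each carrying 16 basic edges, can lie on a partition boundary. A small sketch of this enumeration, with our own indexing convention:

```python
def basic_edges(lcu_size=64, unit=4):
    """Enumerate basic 4-pixel edges that can lie on a partition boundary:
    15 interior horizontal lines x 16 edges + 15 interior vertical
    lines x 16 edges = 480."""
    n = lcu_size // unit  # 16 basic cells per row/column
    horizontal = [("h", line, k) for line in range(1, n) for k in range(n)]
    vertical = [("v", line, k) for line in range(1, n) for k in range(n)]
    return horizontal + vertical

assert len(basic_edges()) == 480
```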
Our objective is to use a neural network to predict the variable-length vector for each LCU. This vector can then be utilized by the video encoder to replace the recursive process of block partitioning decisions. The retention of only the mode decision algorithm leads to a substantial reduction in encoding time. Based on previous studies [11], the pixels of an LCU can be encoded into a fixed-length vector using a neural network similar to ResNet. With this encoding, the subsequent prediction task can be reformulated as a sequence-to-sequence (seq2seq) task. In this task, the input sequence comprises flattened block partitioning information, while the output sequence consists of structured block partitioning information. In other words, we need to extract structured information from the fixed-length vector.
## III Proposed Method
Fig. 2 illustrates the neural network based on an encoder-decoder architecture proposed in this study. The CNN shown in subfigure (a) encodes the pixel data into a floating-point vector of length 480. Then the Transformer decoder in (b) performs the transformation from the fixed-length vector to the variable-length vector. The purpose of the Block Partitioning Constraint (BPC) module in (c) is to ensure that the variable-length vector generated by the neural network conforms to the AVS3 coding constraints and can be correctly decoded.
Compared with the fixed threshold method in [10] and the decision tree (DT) method in [11], the network structure proposed in this study has the following advantages,
* The self-attention mechanism in the Transformer decoder facilitates the integration of previous block partitioning decisions, especially the partitioning information of the parent nodes. This correlation between different decision layers is usually overlooked by other methods.
* The decoder can automatically extract information by enhancing the attention of the current CU element in the encoder output feature. This replaces the manual construction of the DT model input. Furthermore, this structure achieves framework uniformity, eliminating the need to train different DT models for different CU sizes.
* The variable-length vector output from the model can be directly used for LCU coding, completely replacing the recursive process of block partitioning decision making.
### _CNN Encoder_
In the traditional AVS3 encoding algorithm, the optimal intra prediction mode for the current CU is computed by utilizing the reconstructed pixels from the top, left, and top-left neighboring CUs as references [13]. Therefore, we fold the reference pixels corresponding to each 64x64-sized LCU and fill them into the upper two rows and left two columns of a 66x66-sized block, while the remaining part is filled with the luminance pixels of the current LCU. In addition to the pixel block, we also input the Quantization Parameter (QP) into the neural network, as the QP value significantly affects block partitioning results.
The architecture of the CNN encoder is similar to the standard ResNet18 [14], consisting of an input convolutional layer, four residual blocks, and an output fully connected layer. The differences lie in the sizes of the convolutional kernels and the dimension of the fully connected layer. Additionally, before entering the output fully connected layer, the QP value is concatenated with the flattened feature vector, enabling the model's output probability vector to respond to changes in the QP value.
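For concreteness, the PyTorch sketch below mirrors the encoder just described: a 66x66 luma input (including the two reference rows/columns), ResNet18-style stages, and the QP value concatenated to the flattened features before the final fully connected layer. It is a minimal sketch under assumed channel widths and kernel sizes; it does not reproduce the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models.resnet import BasicBlock

class LCUEncoder(nn.Module):
    """ResNet18-like encoder: 66x66 luma block plus QP -> 480-dim
    per-edge boundary probabilities. Layer widths are illustrative."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=0),  # 66 -> 64
            nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        self.stages = nn.Sequential(
            self._stage(64, 64, stride=1),
            self._stage(64, 128, stride=2),
            self._stage(128, 256, stride=2),
            self._stage(256, 512, stride=2))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512 + 1, 480)  # +1 for the concatenated QP

    @staticmethod
    def _stage(cin, cout, stride):
        down = None
        if stride != 1 or cin != cout:
            down = nn.Sequential(nn.Conv2d(cin, cout, 1, stride, bias=False),
                                 nn.BatchNorm2d(cout))
        return nn.Sequential(BasicBlock(cin, cout, stride, down),
                             BasicBlock(cout, cout))

    def forward(self, luma, qp):
        # luma: (B, 1, 66, 66); qp: (B, 1), e.g. scaled to [0, 1]
        feat = self.pool(self.stages(self.stem(luma))).flatten(1)
        edge_logits = self.fc(torch.cat([feat, qp], dim=1))
        return torch.sigmoid(edge_logits)  # per-edge boundary probability

enc = LCUEncoder()
probs = enc(torch.randn(2, 1, 66, 66), torch.full((2, 1), 32 / 63))
print(probs.shape)  # torch.Size([2, 480])
```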
### _Transformer Decoder_
We employ the standard Transformer Decoder [15] for the seq2seq task, benefiting from its multi-head self-attention mechanism and multi-head encoder-decoder attention mechanism. These mechanisms enable us to effectively extract structured information from the fixed-length vector, enhancing the model's ability to capture and utilize contextual dependencies.
Fig. 1: Variable-length vector and Fixed-length vector representation of CTU partitioning results.
To prevent limited expressive capacity of the model caused by a small feature vector dimension, it is necessary to reshape the fixed-length vector. In consideration of the correlation typically found among basic edges located in the same row or column during block partition, and with each row or column comprising 16 basic edges, we reshape the fixed-length vector of length 480 into a (30,16) shape, thereby increasing the feature vector dimension from 1 to 16.
As shown in Fig. 2(b), the decoder consists of 4 layers of Transformer decoder modules, with a hidden layer dimension of 16 and 4 attention heads. Its output is computed by (1).
\[l_{t}=Decoder(p_{0},p_{1},...,p_{479},l_{0},l_{1},...,l_{t-1})\in R^{N} \tag{1}\]
where \(p_{i}\) represents the i-th element of the fixed-length vector, \(l_{j}\) represents the output of the decoder at the j-th step, and \(N\) is the vocabulary size. For this task, since there are only 6 block partitioning modes in AVS3 and VVC, \(N\) is set to 6. Specifically, the reshaped fixed-length vector is used as the key-value pairs and the concatenated historical output of the decoder is used as the query. The output vector of the decoder represents the probabilities of the six block partitioning modes for the current CU.
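A minimal PyTorch sketch of the decoding step in (1) follows: the 480-dim vector is reshaped to a (30, 16) memory, previously decided modes form the query sequence, and a 4-layer decoder with hidden dimension 16 and 4 heads (as stated above) emits logits over the 6 modes. The start token, positional embedding, and feed-forward width are our assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class PartitionDecoder(nn.Module):
    """Sketch of (1): the reshaped fixed-length vector is the key/value
    memory; previously decided modes form the query sequence."""
    N_MODES = 6  # NS, QT, BT-H, BT-V, EQT-H, EQT-V in AVS3

    def __init__(self, d_model=16, n_heads=4, n_layers=4, max_len=128):
        super().__init__()
        self.tok = nn.Embedding(self.N_MODES + 1, d_model)  # +1: start token
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads,
                                           dim_feedforward=64,
                                           batch_first=True)
        self.dec = nn.TransformerDecoder(layer, n_layers)
        self.head = nn.Linear(d_model, self.N_MODES)

    def forward(self, edge_probs, prev_modes):
        memory = edge_probs.view(-1, 30, 16)      # 480 -> 30 tokens of dim 16
        t = prev_modes.size(1)
        query = self.tok(prev_modes) + self.pos(torch.arange(t))
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        out = self.dec(query, memory, tgt_mask=causal)
        return self.head(out)                     # (B, t, 6) logits per step

dec = PartitionDecoder()
start = torch.full((2, 1), 6, dtype=torch.long)  # start token for step l_0
print(dec(torch.rand(2, 480), start).shape)      # torch.Size([2, 1, 6])
```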
### _Block Partitioning Constraint_
Multiple constraints exist for the block partitioning decisions of CUs in the AVS3 video coding standard. Failure to meet these constraints can result in an undecodable bitstream. Specifically, these constraints include: the aspect ratio of child CUs after partitioning cannot exceed 8, the total partitioning depth cannot exceed 6, non-QuadTree (QT) partitioned nodes cannot have child nodes that are QT partitioned, and so on. We design the BPC module, which selects a block partitioning result satisfying the encoder constraints from the probability vector output by the decoder: it chooses the mode with the maximum probability among those meeting the encoder constraints, rather than simply the overall maximum. Since we adopt the teacher-forcing training method [15], the BPC module is not used during training. In this way, during both training and testing, the model receives queries that adhere to the encoding constraints, which ensures that the data is drawn from the same distribution.
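The core of the BPC selection rule can be sketched as a masked argmax (our illustration; deriving the set of legal modes from the current CU's size, depth, and parent mode is encoder-specific and omitted here):

```python
import torch

def bpc_select(mode_probs: torch.Tensor, allowed: list) -> int:
    """Pick the most probable partitioning mode among those permitted by
    the encoder constraints (aspect ratio, depth, QT-after-non-QT, ...).
    `allowed` is a list of 6 booleans, one per mode, built by the encoder."""
    mask = torch.tensor(allowed)
    masked = mode_probs.masked_fill(~mask, float("-inf"))
    return int(masked.argmax())

# e.g. at maximum depth only Non-Split remains legal, whatever the network
# prefers; mode index 0 is NS per the 0..5 convention above:
probs = torch.tensor([0.05, 0.60, 0.10, 0.10, 0.10, 0.05])
print(bpc_select(probs, [True, False, False, False, False, False]))  # -> 0
```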
### _Neural Network Training_
We resized high-definition images from two publicly available datasets [16, 17] to HD resolution and concatenated them into YUV video format, yielding a video of 19200 frames in total. We then encoded the video using the AVS3 reference software and extracted the data. Each data sample consists of the stitched 66x66 luminance pixel block, the QP value, the fixed-length vector label, and the variable-length vector label. Notably, over 85% of the fixed-length vectors in the dataset have fewer than half of their elements equal to 1. To tackle this data imbalance and improve the accuracy of complex LCU block partition predictions, we balanced the dataset by adjusting the numbers of LCUs with deep and shallow partitions during training.
In contrast to [18], where a pre-trained CNN is required to provide input data for the Transformer decoder, our decoder can utilize fixed-length vector labels as input. This enables
Fig. 2: Neural network architecture. (a) CNN encoder. (b) Transformer decoder. (c) Block Partitioning Constrain module
us to use a two-step training approach that includes both independent training and joint training. Independent training serves as a form of pre-training, supplying the initialization parameters for both the CNN encoder and the Transformer decoder. This methodology reduces the time required for joint training and subsequently enhances the overall predictive accuracy of the model.
## IV Experimental Results
### _Neural Network Prediction_
Fig. 3 presents the prediction results of the trained network, which indicate that the network is more accurate at predicting the partitioning of LCUs with simple textures and less accurate on LCUs with more complex textures. Thus, improving the model's predictive capability on complex textures is crucial to further enhancing network performance. On average, the validation accuracy of the network exceeds 80%, which is higher than the Top-1 accuracy of 67.24% reported in [11] using DT models.
### _Video Coding Performance_
We deployed the trained neural network and the AVS3 reference software on a heterogeneous platform to evaluate coding performance and efficiency. The neural network was executed on an NVIDIA RTX3090 GPU, while the AVS3 reference software was executed on an Intel Xeon Gold 6148 CPU. It should be noted that, to reduce training cost and time, our training, testing, and performance evaluation all focus on HD video sequences. In the future, LCU data from videos of different resolutions will be added to the dataset to enhance the applicability of the network.
As shown in Table I, we jointly tested the neural network with reference software versions High-Performance Model (HPM) 14.0 and HPM4.0, and compared the results with the coding results obtained using the reference software alone. BDBR stands for Bjontegaard delta bit rate [19], while TS represents the encoding time saving, calculated by (2),
\[TS=\frac{T_{HPM}-\min(T_{NN},T_{HPM}^{{}^{\prime}})}{T_{HPM}} \tag{2}\]
where \(T_{HPM}\) represents the encoding time of the reference software, \(T_{HPM}^{{}^{\prime}}\) represents the encoding time of the reference software with block partitioning decisions removed, and \(T_{NN}\) represents the inference time of the neural network for the block partitioning results. Owing to the parallelized design, the encoding time can theoretically be reduced by up to 97%, the share of time consumed by block partitioning decisions. However, limited by our GPU performance, we save 93.97% and 87.84% of the encoding time for the two versions of the reference software, respectively.
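As a small worked illustration of (2), with hypothetical timings (the numbers below are not measurements from Table I):

```python
def time_saving(t_hpm: float, t_nn: float, t_hpm_pruned: float) -> float:
    """TS of (2): t_hpm is the full reference-encoder time, t_hpm_pruned the
    encoder time with partitioning decisions removed, t_nn the network
    inference time. Illustrative helper, not from the paper's code."""
    return (t_hpm - min(t_nn, t_hpm_pruned)) / t_hpm

print(f"{time_saving(100.0, 6.0, 3.0):.2%}")  # hypothetical timings -> 97.00%
```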
In terms of coding performance, as shown in Table II, the BDBR metric degrades by 8.09% compared to HPM4.0, which outperforms the 8.97% performance decrease reported in [20] against the same reference standard. Compared to methods designed for VVC, our approach removes more complexity, but its coding performance is inferior and requires further improvement.
## V Conclusion
This paper proposes an encoder-decoder structured neural network for predicting the intra-frame block partitioning results of LCUs. First, the CNN encoder extracts local features from the LCU pixels. Then, a Transformer decoder is incorporated to predict the block partitioning results. Compared to other algorithms such as DT, the proposed structure achieves superior prediction accuracy. Furthermore, our BPC module enables the output of the Transformer decoder to be directly utilized by video coding frameworks. Therefore,
Fig. 3: Comparison of block partitioning labels (left) and neural network prediction results (right) for LCU with different textured complexity levels
by adopting a parallelized design, this heterogeneous framework can save a significant amount of encoding time. Owing to the similarity of intra-frame block partitioning decision algorithms across standards, the model can be easily adapted to other video coding frameworks, such as VVC, by retraining while keeping the same network structure.
|
2310.10270 | $h$-function, Hilbert-Kunz density function and Frobenius-Poincaré
function | Given ideals $I,J$ of a noetherian local ring $(R, \mathfrak m)$ such that
$I+J$ is $\mathfrak m$-primary and a finitely generated $R$-module $M$, we
associate an invariant of $(M,R,I,J)$ called the $h$-function. Our results on
$h$-functions allow extensions of the theories of Frobenius-Poincar\'e
functions and Hilbert-Kunz density functions from the known graded case to the
local case, answering a question of V.Trivedi. When $J$ is $\mathfrak
m$-primary, we describe the support of the corresponding density function in
terms of other invariants of $(R, I,J)$. We show that the support captures the
$F$-threshold: $c^J(I)$, under mild assumptions, extending results of V.
Trivedi and Watanabe. The $h$-function encodes Hilbert-Samuel, Hilbert-Kunz
multiplicity and $F$-threshold of the ideal pair involved. Using this feature
of $h$-functions, we provide an equivalent formulation of a conjecture of
Huneke, Musta\c{t}\u{a}, Takagi, Watanabe; recover a result of Smirnov and
Betancourt; prove that a result of Hanes comparing multiplicities, is
equivalent to an a priori weaker containment condition on ideals. We also point
out that a conjecture of Smirnov-Betancourt as stated is false and suggest a
correction which we relate to the conjecture of Huneke et al.
We develop the theory of $h$-functions in a more general setting which yields
a density function for $F$-signature. A key to many results on $h$-functions is
a `convexity technique' that we introduce, which in particular proves
differentiability of Hilbert-Kunz density functions almost everywhere on
$(0,\infty)$, thus contributing to another question of Trivedi. | Cheng Meng, Alapan Mukhopadhyay | 2023-10-16T10:53:12Z | http://arxiv.org/abs/2310.10270v2 | # \(h\)-Function, Hilbert-Kunz density function and Frobenius-Poincare function
###### Abstract.
Given ideals \(I,J\) of a noetherian local ring \((R,\mathfrak{m})\) such that \(I+J\) is \(\mathfrak{m}\)-primary and a finitely generated module \(M\), we associate an invariant of \((M,R,I,J)\) called the \(h\)-function. Our results on \(h\)-functions allow extensions of the theories of Frobenius-Poincare functions and Hilbert-Kunz density functions from the known graded case to the local case, answering a question of Trivedi. When \(J\) is \(\mathfrak{m}\)-primary, we describe the support of the corresponding density function in terms of other invariants of \((R,I,J)\). We show that the support captures the \(F\)-threshold \(c^{J}(I)\), under mild assumptions, extending results of Trivedi and Watanabe. The \(h\)-function treats Hilbert-Samuel multiplicity, Hilbert-Kunz multiplicity and \(F\)-threshold on an equal footing. We develop the theory of \(h\)-functions in a more general setting which yields a density function for \(F\)-signature. A key to many results on \(h\)-functions is a 'convexity technique' that we introduce, which in particular proves differentiability of the Hilbert-Kunz density function almost everywhere, thus contributing to another question of Trivedi.
## 1. Introduction
Hilbert-Kunz multiplicity and \(F\)-signature are numerical invariants appearing in prime characteristic commutative algebra and algebraic geometry. These quantify the severity of singularities at a point of a variety and also relate to other invariants, such as the cardinality of the local fundamental group of the punctured spectrum of a strongly \(F\)-regular local ring; see [1], [17] and Section 2. The theory of Hilbert-Kunz multiplicity in the graded case has witnessed two new generalizations in recent years: the Hilbert-Kunz density function and the Frobenius-Poincare function. Fix a standard graded ring \(S\) of prime characteristic and a homogeneous ideal \(\mathfrak{a}\) of finite co-length. When the Krull dimension \(\dim(S)\) is at least two, Trivedi has proven the existence of a compactly supported real-valued continuous function \(g_{S,\mathfrak{a}}\) of a real variable, called the Hilbert-Kunz density function, whose integral is the Hilbert-Kunz multiplicity \(e_{HK}(\mathfrak{a},S)\); see Section 2 for details. For the pair \((S,\mathfrak{a})\), where \(\dim(S)\) is not necessarily at least two, the associated Frobenius-Poincare function is an entire function in one complex variable, whose value at the origin is the Hilbert-Kunz multiplicity \(e_{HK}(\mathfrak{a},S)\); see Section 2. These two functions not only encode more subtle invariants of \((S,\mathfrak{a})\) than the Hilbert-Kunz multiplicity but also allow application of geometric tools, such as sheaf cohomology on \(\operatorname{Proj}(S)\), and tools from homological algebra. Successful applications of Hilbert-Kunz density functions have resolved Watanabe and Yoshida's conjecture on the values of Hilbert-Kunz multiplicities of quadric hypersurfaces and established rationality of Hilbert-Kunz multiplicities and \(F\)-thresholds of two-dimensional normal rings, among other results; see [14], [15], [16].
Building extensions of these two theories to the setting of a noetherian local ring is a natural question; see Trivedi's question [14, Question 1.3]. In this article, we extend the theories of Hilbert-Kunz density function and Frobenius-Poincare function to the local setting. Our extensions are facilitated by a systematic study of a new function, which we call the \(h\)-function.
Extending the theory of Hilbert-Kunz density functions is more involved. Set
\[f_{n}(s)=h_{n,I,J}(s+\frac{1}{p^{n}})-h_{n}(s).\]
When \((R,\mathfrak{m},J)\) comes from a graded pair \((S,\mathfrak{a})\), where \(\dim(S)\geq 2\), we point out that the sequence of functions
\[\frac{f_{n}(s)}{(p^{n})^{d-1}}\]
converges uniformly to the Hilbert-Kunz density function of \((S,\mathfrak{a})\); see Theorem 6.6. But for arbitrary ideals \(I,J\) of a local ring \((R,\mathfrak{m})\), the pointwise convergence of \(f_{n}(s)/(p^{n})^{d-1}\) at every \(s\) is not clear; in fact, when \(I=0\) the sequence does not converge; see Example 5.11. In this direction, we relate the convergence of \(f_{n}(s)/(p^{n})^{d-1}\) to the differentiability of \(h_{I,J}\) at \(s\). We prove:
_If \(h_{I,J}\) is differentiable at \(s\), then \(f_{n}(s)/(p^{n})^{d-1}\) converges to \(h^{\prime}_{I,J}(s)\); see Theorem 5.8._
In the direction of differentiability of \(h\), we prove:
**Theorem B:** (Theorem 5.4, (3), (4)) Let \(h_{I,J}\) be as before.
1. The left and right hand derivatives of \(h\) exist at all non-zero points.
2. Outside a countable subset of \(\mathbb{R}\), \(h\) is differentiable; if \(h\) is differentiable on an open interval, then it is continuously differentiable on the same interval.
**Thm B**, (2) implies that for any \(I,J\) in the local setting, \(f_{n}(s)/(p^{n})^{d-1}\) converges outside a countable subset of \(\mathbb{R}\) and the limit coincides with the derivative of \(h_{I,J}(s)\); thus outside this countable set the limit of \(f_{n}(s)/(p^{n})^{d-1}\) yields a well-defined notion of density function. In Theorem 5.4, we actually prove existence of the density function more generally for a family satisfying **Condition C**. This generalization in particular yields a density function for \(F\)-signature. When \((R,\mathfrak{m})\) comes from a graded pair \((S,\mathfrak{a})\) with \(\dim(S)\geq 2\), we prove that the corresponding \(h\)-function is continuously differentiable and its derivative coincides with the Hilbert-Kunz density function that Trivedi defines. We moreover extend the existence and continuity of the density function to the case where \(\mathfrak{a}\) does not have finite colength; see Theorem 6.7. Our work shows that the \(h\)-function is twice differentiable outside a set of measure zero, contributing to Trivedi's question about the order of differentiability of the Hilbert-Kunz density function; see [19, Question 1], Remark 5.5.
**Thm B** is a consequence of a 'convexity technique' that we introduce. For fixed \(s_{0}>0\), in Theorem 5.3, we construct a function \(H(s,s_{0})\) which we prove to be convex and show that
\[H(s,s_{0})=h(s)/c(s)-h(s_{0})/c(s_{0})+\int_{s_{0}}^{s}h(t)c^{\prime}(t)/c^{2} (t)dt,\]
where \(c(s)=s^{\mu-1}/(\mu-1)!\), \(\mu\) being the cardinality of a set of generators of \(I\). **Thm B** then follows from general properties of convex functions. The underlying idea of the same convexity argument is used to prove Lipschitz continuity of \(h\)-functions stated in **Thm A**.
The behaviour of \(h_{I,J}\) near zero is more subtle. We prove \(h_{I,J}\) is continuous at zero if and only if \(I\) is non-zero. In fact our result implies,
**Theorem C:**(Theorem 8.11) Suppose \(\dim(R/I)=d^{\prime}\). Denote the set of minimal primes of \(R/I\) of dimension \(d^{\prime}\) by \(\operatorname{Assh}(R/I)\). Then
\[\lim_{s\to 0+}\frac{h(s)}{s^{d-d^{\prime}}}=\frac{1}{(d-d^{\prime})!}\sum_{P\in \operatorname{Assh}(R/I)}e_{HK}(J,R/P)e(I,R_{P}),\]
where \(e(I,\underline{\ })\) denotes the Hilbert-Samuel multiplicity with respect to \(I\). In particular, the order of vanishing of \(h(s)\) at \(s=0\) is \(d-d^{\prime}\). **Thm C** extends part of [1, Thm 4.6], where \(R\) is assumed to be regular, \(I\) a principal ideal and \(J=\mathfrak{m}\). The \(h\)-function treats different numerical invariants of \((R,I,J)\) on an equal footing. When \(J\) is \(\mathfrak{m}\)-primary, \(h_{I,J}(s)=e_{HK}(R,J)\) for large \(s\); when \(I\) is \(\mathfrak{m}\)-primary, \(h_{I,J}(s)=e(I,R)\frac{s^{d}}{d!}\) for \(s>0\) close to zero; see [13, Lemma 3.3]. Moreover,
**Theorem D:**(Theorem 8.6)Suppose \(J\) is \(\mathfrak{m}\)-primary, \(R\) is reduced and formally equidimensional (e.g. \((R,\mathfrak{m})\) is a complete domain or localization of a graded domain). Let \(\alpha_{R,I,J}=\sup\{s\in\mathbb{R}\,|\,s>0\,,h_{I,J}(s)\neq e_{HK}(J,R)\}.\) Consider the sequence of numbers,
\[r_{I}^{J}(n)=\max\{t\in\mathbb{N}|I^{t}\nsubseteq(J^{[p^{n}]})^{*}\},\]
where \((J^{[p^{n}]})^{*}\) denotes the tight closure of the ideal \((J^{[p^{n}]})\); see Definition 2.5. Then
\[\lim_{n\to\infty}\frac{r_{I}^{J}(n)}{p^{n}}=\alpha_{R,I,J}.\]
We prove that, under suitable hypotheses, e.g. strong \(F\)-regularity at every point of \(\operatorname{Spec}(R)-\{\mathfrak{m}\}\), \(r_{I}^{J}(n)/p^{n}\) in fact converges to the \(F\)-threshold \(c^{J}(I)\); see Theorem 8.9. The \(F\)-threshold is an invariant extensively studied in prime characteristic singularity theory; see [12], [10]; it is closely related to the log canonical threshold via reduction modulo \(p\); see [11], [12]. Whenever \(h_{I,J}\) is differentiable, the support of \(\frac{d}{ds}h_{I,J}\), which agrees with the Hilbert-Kunz density function of \((R,I,J)\), is \([0,\alpha_{R,I,J}]\). This generalizes Trivedi and Watanabe's description of the support of the Hilbert-Kunz density function, given when \(R\) is strongly \(F\)-regular and graded; see Remark 8.8, [11, Thm 4.9].
**Notation and conventions:** All rings are commutative and noetherian. The symbol \(p\) denotes a positive prime number. Unless otherwise said, the pair \((R,\mathfrak{m})\) denotes a noetherian local ring \(R\), not necessarily a domain, with maximal ideal \(\mathfrak{m}\). By saying \((R,\mathfrak{m})\) is graded, we mean \(R\) is a standard graded ring with homogeneous maximal ideal \(\mathfrak{m}\). When \((R,\mathfrak{m})\) is assumed to be graded, \(R\)-modules and ideals are always assumed to be \(\mathbb{Z}\)-graded. We assume \(R\) has characteristic \(p\) and \(R\) is \(F\)-finite, i.e. the Frobenius endomorphism of \(R\) is finite. We index sequences of numbers and functions by \(n\). Whenever the letter \(q\) appears in such a sequence, \(q\) denotes \(p^{n}\). For an ideal \(I\subset R\), \(I^{[p^{n}]}\) or \(I^{[q]}\) denotes the ideal generated by \(\{f^{q}\,|\,f\in I\}\) and is called the \(q\)-th or \(p^{n}\)-th _Frobenius power_ of \(I\). The operator \(l_{R}(\underline{\ })\) or simply \(l(\underline{\ })\) denotes the length function. For an \(R\)-module \(M\), \(F_{*}^{n}M\) denotes the \(R\)-module whose underlying abelian group is \(M\), but whose \(R\)-action comes from restriction of scalars through the iterated Frobenius morphism \(F^{n}:R\to R\).
## 2. Background material
Let \((R,\mathfrak{m})\) be a noetherian local or graded ring, \(J\) be an \(\mathfrak{m}\)-primary ideal, \(M\) be a finitely generated \(R\)-module. Although the germ of Hilbert-Kunz multiplicity was present in Kunz's seminal work [14], its existence was not proven until Monsky's work:
**Theorem 2.1**.: _(see [10]) There is a real number denoted by \(e_{HK}(J,M)\) such that,_
\[l(\frac{M}{J^{[p^{n}]}M})=e_{HK}(J,M)(p^{n})^{\dim(M)}+O((p^{n})^{\dim(M)-1}).\]
_The number \(e_{HK}(J,M)\) is called the Hilbert-Kunz multiplicity of \(M\) with respect to \(J\)._
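As a quick illustration of the theorem in the simplest monomial setting (a standard computation, not taken from this paper): let \(R=k[[x,y]]\) and \(J=(x^{2},y)\). Then \(J^{[q]}=(x^{2q},y^{q})\) and the monomials \(x^{a}y^{b}\) with \(a<2q\), \(b<q\) form a basis of \(R/J^{[q]}\), so

\[l(\frac{R}{J^{[q]}})=2q^{2}=2(p^{n})^{2},\]

whence \(e_{HK}(J,R)=2\), here with vanishing error term.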
Smaller values of \(e_{HK}(R,\mathfrak{m})\) predict milder singularities of \((R,\mathfrak{m})\); see e.g. [1, Cor 3.6], [10]. It is imperative to consider Hilbert-Kunz multiplicity with respect to arbitrary ideals, e.g. to realize \(F\)-signature (see Example 3.10), an invariant characterizing strong \(F\)-regularity of \((R,\mathfrak{m})\), in terms of Hilbert-Kunz multiplicity; see [12, Cor 6.5]. We refer the reader to [10], [11, Chapter 2] and the references therein for a survey of the state of the art.
When \((R,\mathfrak{m})\) is graded, Trivedi's Hilbert-Kunz density function refines the notion of Hilbert-Kunz multiplicity:
**Theorem 2.2**.: _(see [11]) Let \((R,\mathfrak{m})\) be graded, \(J\) be a finite co-length homogeneous ideal, \(M\) be a finitely generated \(\mathbb{Z}\)-graded \(R\)-module. Consider the sequence of functions of a real variable \(s\),_
\[\tilde{g}_{n,M,J}(s)=l([\frac{M}{J^{[q]}M}]_{\lfloor sq\rfloor}).\]
1. _There is a compact subset of_ \(\mathbb{R}\) _containing the supports of all_ \(\tilde{g}_{n}\)_'s._
2. _When_ \(\text{dim}(M)\geq 1\)_, there is a function-denoted by_ \(\tilde{g}_{M,J}\)_- such that_ \((\frac{1}{q})^{\text{dim}M-1}\tilde{g}_{n,M,J}(s)\) _converges pointwise to_ \(\tilde{g}_{M,J}(s)\) _for all_ \(s\in\mathbb{R}\)_._
3. _When_ \(\text{dim}(M)\geq 2\)_, the above convergence is uniform and_ \(\tilde{g}_{M,J}\) _is continuous._
4. \[e_{HK}(J,M)=\int\limits_{0}^{\infty}\tilde{g}_{M,J}(s)ds.\]
**Definition 2.3**.: The function \(\tilde{g}_{M,J}\) is called the _Hilbert-Kunz density function_ of \((M,J)\).
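For a concrete instance (a standard example, verifiable by counting monomials): take \(S=k[x,y]\) and \(J=\mathfrak{m}=(x,y)\), so \(\dim(S)=2\). Counting monomials gives

\[l([\frac{S}{\mathfrak{m}^{[q]}}]_{j})=\#\{(a,b)\in\mathbb{Z}_{\geq 0}^{2}\,:\,a+b=j,\,a,b<q\}=\begin{cases}j+1&0\leq j<q,\\ 2q-1-j&q\leq j\leq 2q-2,\end{cases}\]

so \(\frac{1}{q}\tilde{g}_{n,S,\mathfrak{m}}(s)\) converges to the 'tent' function \(\tilde{g}_{S,\mathfrak{m}}(s)=s\) on \([0,1]\), \(2-s\) on \([1,2]\), and \(0\) elsewhere, whose integral is \(1=e_{HK}(\mathfrak{m},S)\), consistent with (4) above.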
For a graded ring \((R,\mathfrak{m})\), the _Frobenius-Poincare_ function produces another refinement of the Hilbert-Kunz multiplicity. Frobenius-Poincare functions are essentially limits of the Hilbert series of \(\frac{M}{J^{[q]}M}\) in the variable \(e^{-iy}\); see [11, Rmk 3.6].
**Theorem 2.4**.: _(see [11]) Let \(M\) be a finitely generated \(\mathbb{Z}\)-graded module over a graded \((R,\mathfrak{m})\), \(J\) be a finite colength homogeneous ideal. Consider the sequence of entire functions on \(\mathbb{C}\)_
\[G_{n,M,J}(y)=(\frac{1}{q})^{\text{dim}(M)}\sum_{j\in\mathbb{Z}}l([\frac{M}{J^{[q]}M}]_{j})e^{-iyj/q}.\]
1. _The sequence of functions_ \(G_{n,M,J}(y)\) _converges to an entire function_ \(G_{M,J}(y)\) _on_ \(\mathbb{C}\) _(note the difference in notation from [11]). The convergence is uniform on every compact subset of_ \(\mathbb{C}\)_._
2. \[G_{M,J}(0)=e_{HK}(J,M).\]
The last theorem holds for graded rings which are not necessarily standard graded. For the notion of Hilbert-Kunz density function in the non-standard graded setting, see [12]. By [11, Thm 8.3.2], for a standard graded \((R,\mathfrak{m})\) of Krull dimension at least one, the holomorphic Fourier transform of \(\tilde{g}_{M,J}\) is \(G_{M,J}\), i.e.
\[G_{M,J}(y)=\int\limits_{0}^{\infty}\tilde{g}_{M,J}(s)e^{-iys}ds.\]
Thus when \(\dim(M)\) is at least two, the Hilbert-Kunz density function and the Frobenius-Poincare function determine each other; see [12, Rmk 8.2.4]. Both the Hilbert-Kunz density function and the Frobenius-Poincare function capture more subtle graded invariants of \((M,J)\) than the Hilbert-Kunz multiplicity. For example, when \(R\) is two dimensional and normal and \(J\) is generated by forms of the same degree, \(\tilde{g}_{R,J}\) and \(G_{R,J}\) determine and are determined by the slopes and ranks of the factors in the Harder-Narasimhan filtration of a syzygy bundle associated to \(J\) on \(\operatorname{Proj}(R)\); see [13], [14, Example 3.3], [12, Chap 6]. For other results on Hilbert-Kunz density functions and Frobenius-Poincare functions, see the reference section of [12]. These two functions and the Hilbert-Kunz multiplicity of \((R,J)\) detect \(J\) up to its tight closure. Recall:
**Definition 2.5**.: ([12, Def 3.1]) Let \(A\) be a ring of characteristic \(p>0\). We say \(x\in A\) is in the tight closure of an ideal \(I\) if there is a \(c\) not in any minimal primes of \(A\) such that \(cx^{p^{n}}\in I^{[p^{n}]}\) for all large \(n\). The elements in the tight closure of \(I\) form an ideal; denoted \(I^{*}\).
**Theorem 2.6**.: _Let \(I\subseteq J\) be two ideals in \((R,\mathfrak{m})\)._
1. _If_ \(I^{*}=J^{*}\)_,_ \(e_{HK}(I,R)=e_{HK}(J,R)\)_._
2. _Conversely, when_ \(R\) _is formally equidimensional, i.e. all the minimal primes of the completion_ \(\hat{R}\) _have the same dimension,_ \(e_{HK}(I,R)=e_{HK}(J,R)\) _implies_ \(I^{*}=J^{*}\)_. When_ \((R,\mathfrak{m})\) _is a graded ring where all the minimal primes have the same dimension,_ \(\tilde{g}_{I,R}=\tilde{g}_{J,R}\) _or_ \(G_{I,R}=G_{J,R}\) _implies_ \(I^{*}=J^{*}\)_._
## 3. \(h\)-function
Given ideals \(I,J\) of a local ring \((R,\mathfrak{m})\) such that \(I+J\) is \(\mathfrak{m}\)-primary and a finitely generated \(R\)-module \(M\), we assign a real-valued function \(h_{M,I,J}\) of a real variable, which we refer to as the corresponding \(h\)_-function_. The existence and continuity of \(h_{M,I,J}\) are proven in Section 3.4. When \(R\) is additionally a domain and \(M=R\), given an ideal \(I\) and a family of ideals \(\{J_{n}\}_{n\in\mathbb{N}}\) satisfying what we call **Condition C** below, we associate in Section 3.1 a corresponding \(h\)-function which is continuous on \(\mathbb{R}_{>0}\).
### \(h\)-functions of a domain
**Definition 3.1**.: Let \(\{I_{n}\}_{n\in\mathbb{N}}\) be a family of ideals of the \(F\)-finite local ring \(R\).
1. \(I_{\bullet}\) is called a _weak \(p\)-family_ if there exists \(c\in R\), not contained in any minimal prime of maximal dimension of \(R\), such that \(cI_{n}^{[p]}\subseteq I_{n+1}\).
2. \(I_{\bullet}\) is called a _weak \(p^{-1}\)-family_ if there exists a nonzero \(\phi\in Hom_{R}(F_{*}R,R)\) such that \(\phi(F_{*}I_{n+1})\subset I_{n}\).
3. A _big \(p\)-family (resp. big \(p^{-1}\) family)_ is a weak \(p\) (resp. \(p^{-1}\))-family \(I_{\bullet}\) such that there is an \(\alpha\in\mathbb{N}\) for which \(\mathfrak{m}^{[p^{n+\alpha}]}\subseteq I_{n}\) for all \(n\).
A family of ideals where (1) holds with \(c=1\) and \(\mathfrak{m}^{[p^{n}]}\subseteq I_{n}\), has been called a \(p\)-family of ideals; see [12]. Notions of \(p\) and \(p^{-1}\) families provide an abstract framework for proving existence of asymptotic numerical invariants:
**Theorem 3.2**.: _(see [14, Theorem 4.3]) Let \((R,\mathfrak{m},k)\) be an \(F\)-finite local domain of dimension \(d\), \(\{I_{n}\}_{n\in\mathbb{N}}\) a sequence of ideals such that \(\mathfrak{m}^{[p^{n}]}\subset I_{n}\) for all \(n\in\mathbb{N}\)._
1. _If there exists a nonzero_ \(c\in R\) _such that_ \(cI_{n}^{[p]}\subset I_{n+1}\) _for all_ \(n\in\mathbb{N}\)_, then_ \(\eta=\lim_{n\to\infty}1/p^{nd}l_{R}(R/I_{n})\) _exists, and there exists a positive constant_ \(C\) _that only depends on_ \(c\) _such that_ \(\eta-1/p^{nd}l_{R}(R/I_{n})\leq C/p^{n}\) _for all_ \(n\in\mathbb{N}\)
_._
2. _If there exists a non-zero_ \(\phi\in Hom_{R}(F_{*}R,R)\) _such that_ \(\phi(F_{*}I_{n+1})\subset I_{n}\) _for all_ \(n\in\mathbb{N}\)_, then_ \(\eta=\lim_{n\to\infty}1/p^{nd}l_{R}(R/I_{n})\) _exists, and there exists a positive constant_ \(C\) _that only depends on_ \(\phi\) _such that_ \(1/p^{nd}l_{R}(R/I_{n})-\eta\leq C/p^{n}\) _for all_ \(n\in\mathbb{N}\)_._
3. _If the conditions in (1) and (2) are both satisfied then there exists a constant_ \(C\) _that only depends on_ \(c\) _and_ \(\phi\) _such that_ \(|1/p^{nd}l_{R}(R/I_{n})-\eta|\leq C/p^{n}\)_._
**Lemma 3.3**.: _Let \((R,\mathfrak{m})\) be a local domain. If \(I_{n}\), \(J_{n}\) are two weak \(p\)-families, then so is the family \(I_{n}+J_{n}\); if \(I_{n}\), \(J_{n}\) are two weak \(p^{-1}\)-families, then so is the family \(I_{n}+J_{n}\). When one of the families is big (\(p\) or \(p^{-1}\)), so is their sum._
Proof.: Suppose there are nonzero elements \(c_{1},c_{2}\) such that \(c_{1}I_{n}^{[p]}\subset I_{n+1}\) and \(c_{2}J_{n}^{[p]}\subset J_{n+1}\); then \(c=c_{1}c_{2}\) is still nonzero and satisfies \(cI_{n}^{[p]}\subset I_{n+1}\), \(cJ_{n}^{[p]}\subset J_{n+1}\). So \(c(I_{n}+J_{n})^{[p]}\subset I_{n+1}+J_{n+1}\). Now suppose there are non-zero elements \(\phi_{1},\phi_{2}\in Hom_{R}(F_{*}R,R)\) such that \(\phi_{1}(F_{*}I_{n+1})\subset I_{n}\) and \(\phi_{2}(F_{*}J_{n+1})\subset J_{n}\). For \(\phi\in\mathrm{Hom}_{R}(F_{*}R,R)\) and \(r\in R\), define \(r\phi\in\mathrm{Hom}_{R}(F_{*}R,R)\) by the formula \(r\phi(s)=\phi(rs)\). This puts an \(R\)-module structure on \(\mathrm{Hom}_{R}(F_{*}R,R)\), which turns out to be a torsion free module of rank one. So the \(R\)-submodules of \(Hom_{R}(F_{*}R,R)\) generated by \(\phi_{1}\) and \(\phi_{2}\) have a nonzero intersection; in other words, there exist nonzero \(c_{1},c_{2}\in R\) and a nonzero element \(\phi\in Hom_{R}(F_{*}R,R)\) such that \(\phi=\phi_{1}(F_{*}(c_{1}\cdot))=\phi_{2}(F_{*}(c_{2}\cdot))\). Thus, \(\phi(F_{*}I_{n+1})\subset I_{n}\) and \(\phi(F_{*}J_{n+1})\subset J_{n}\). So \(\phi(F_{*}(I_{n+1}+J_{n+1}))\subset I_{n}+J_{n}\).
To prove the 'big'ness, assume that there is an \(\alpha\) such that \(\mathfrak{m}^{[p^{n+\alpha}]}\subseteq I_{n}\). Then we have \(\mathfrak{m}^{[p^{n+\alpha}]}\subseteq I_{n}+J_{n}\).
**Condition C:** Let \((R,\mathfrak{m})\) be an \(F\)-finite local ring, \(I\) is an ideal and \(J_{\bullet}=\{J_{n}\}_{n\in\mathbb{N}}\) be a family of ideals in \(R\). We say \(I,J_{\bullet}\) satisfies **Condition C** if
1. The family \(J_{\bullet}\) is weakly \(p\) and also weakly \(p^{-1}\).
2. For each real number \(t\), there is an \(\alpha\) such that \(\mathfrak{m}^{[p^{\alpha+n}]}\subseteq I^{\lceil tq\rceil}+J_{n}\) for all \(n\).
**Condition C** provides the right framework where we can prove existence of \(h\)-functions; see Theorem 3.7.
**Definition 3.4**.: Let \((R,\mathfrak{m})\) be a local or graded ring. Let \(I\) be an ideal and \(J_{\bullet}=\{J_{n}\}_{n\in\mathbb{N}}\) be a family of ideals in \(R\), homogeneous when \(R\) is graded, such that \(I+J_{n}\) is \(\mathfrak{m}\)-primary for all \(n\). For a finitely generated \(R\)-module \(M\) (homogeneous when \(R\) is graded) and \(s\in\mathbb{R}\), set
1. \(h_{n,M,I,J_{\bullet}}(s)=l(\frac{M}{(I^{\lceil sq\rceil}+J_{n})M})\).
2. For an integer \(d\), set \[h_{n,M,I,J_{\bullet},d}(s)=\frac{1}{q^{d}}l(\frac{M}{(I^{\lceil sq\rceil}+J_{n})M}).\]
3. We denote the limit of the sequence of numbers \(h_{n,M,I,J_{\bullet},d}(s)\), whenever it exists, by \(h_{M,I,J_{\bullet},d}(s)\).
Whenever one or more of the parameters \(M,I,J_{\bullet}\) is clear from the context, we suppress those from \(h_{n,M,I,J_{\bullet}}(s)\), \(h_{n,M,I,J_{\bullet},d}(s)\) or \(h_{M,I,J_{\bullet},d}(s)\). In the absence of an explicit \(d\), it should be understood that \(d=\dim(M).\) When \(J_{n}=J^{[p^{n}]}\) for some ideal \(J\), \(h_{n,M,I,J},h_{n,M,I,J,d},h_{M,I,J}\) stand for \(h_{n,M,I,J_{\bullet}}\), \(h_{n,M,I,J_{\bullet},d}\) and \(h_{M,I,J_{\bullet},d}\) respectively.
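As an elementary illustration of the definition (our example): let \(R=k[[x]]\), \(I=(x^{a})\) and \(J_{n}=(x^{b})^{[p^{n}]}=(x^{bq})\) for integers \(a,b>0\). Then \(I^{\lceil sq\rceil}+J_{n}=(x^{\min(a\lceil sq\rceil,bq)})\), so for \(s\geq 0\) and \(d=\dim(R)=1\),

\[h_{n,R,I,J_{\bullet},d}(s)=\frac{\min(a\lceil sq\rceil,bq)}{q}\longrightarrow\min(as,b)\ \text{as}\ n\to\infty,\]

a piecewise linear function which is constant for \(s\geq b/a\), in line with Lemma 3.8 below.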
_Remark 3.5_.: (1) With the notational conventions and suppression of parameters declared above, \(h_{n,M,I,J}\) stands for both \(l(\frac{M}{(I^{[sq]}+J_{n})M})\) and \(\frac{1}{q^{\dim(M)}}l(\frac{M}{(I^{[sq]}+J_{n})M})\). But in the article, it is always clear from the context what \(h_{n,M,I,J}\) denotes. So we do not introduce further conventions.
(2) When \((R,\mathfrak{m})\) is graded, \(M,I\) and \(J_{\bullet}\) are homogeneous, \(h_{n,M,I,J}=h_{n,M_{\mathfrak{m}},IR_{\mathfrak{m}},J_{\mathfrak{m}}}\). So once we prove statements involving \(h_{n}\)'s in the local setting, the corresponding statements in the graded setting follow.
The following comparison between ordinary powers and Frobenius powers is used throughout this article:
**Lemma 3.6**.: _Let \(R\) be a ring of characteristic \(p>0\), \(J\) be an \(R\)-ideal generated by \(\mu\) elements, \(k\in\mathbb{N}\), and \(q=p^{n}\) is a power of \(p\). Then \(J^{q(\mu+k-1)}\subset(J^{[q]})^{k}\subset J^{qk}\)._
Proof.: The second containment is trivial. We prove the first containment. Let \(J=(a_{1},...,a_{\mu})\), then \(J^{q(\mu+k-1)}\) is generated by \(a_{1}^{u_{1}}...a_{\mu}^{u_{\mu}}\) where \(\sum u_{i}=q(\mu+k-1)\). Let \(a=a_{1}^{u_{1}}...a_{\mu}^{u_{\mu}}\), \(v_{i}=\lfloor u_{i}/q\rfloor\) and \(b=a_{1}^{v_{1}}...a_{\mu}^{v_{\mu}}\), then since \(qv_{i}\leq u_{i}\), \(b^{q}\) divides \(a\). Now \(qv_{i}\geq u_{i}-q+1\), so \(\sum qv_{i}\geq q(\mu+k-1)+(-q+1)\mu=q(k-1)+\mu>q(k-1)\), so \(\sum v_{i}\geq k\). This means \(b\in J^{k}\) and \(a\in J^{k[q]}=J^{[q]k}\).
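For a concrete instance of the first containment (our illustration): take \(J=(x,y)\subset k[x,y]\), so \(\mu=2\), and let \(q=2\), \(k=1\). Then \(q(\mu+k-1)=4\) and

\[J^{4}=(x^{4},x^{3}y,x^{2}y^{2},xy^{3},y^{4})\subseteq(x^{2},y^{2})=J^{[2]},\]

since every generator \(x^{u_{1}}y^{u_{2}}\) with \(u_{1}+u_{2}=4\) has \(u_{1}\geq 2\) or \(u_{2}\geq 2\); the proof above extracts exactly such a divisor \(b^{q}\).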
**Theorem 3.7**.: _Let \((R,\mathfrak{m},k)\) be an \(F\)-finite local domain of dimension \(d\). Let \(J_{\bullet}\) be a family of ideals such that there are a non-zero \(c\in R\) and \(\phi\in\text{Hom}_{R}(F_{*}R,R)\) satisfying \(c.J_{n}^{[p]}\subseteq J_{n+1}\) and \(\phi(F_{*}J_{n+1})\subseteq J_{n}\). Let \(I\) be an ideal such that for each \(s\in\mathbb{R}\), there is an integer \(\alpha\) such that \(\mathfrak{m}^{[p^{n+\alpha}]}\subseteq I^{\lceil sq\rceil}+J_{n}\) for all \(n\). Set \(I_{n}(s)=I^{\lceil sq\rceil}+J_{n}\)._
1. _Fix_ \(t\in\mathbb{R}\)_. Choose_ \(\alpha\in\mathbb{N}\) _such that_ \(\mathfrak{m}^{[p^{n+\alpha}]}\subseteq I^{\lceil tq\rceil}+J_{n}\) _for all_ \(n\)_. Then there exists a positive constant_ \(C\) _depending only on_ \(c,\phi,I\) _and_ \(\alpha\) _such that for any_ \(s\in(-\infty,t]\)_,_ \[h_{R,I,J_{\bullet},d}(s)=\lim_{n\to\infty}1/p^{nd}l_{R}(R/I_{n}(s))\,\text{ exists, and}\] (3.1) \[|1/p^{nd}l_{R}(R/I_{n}(s))-h_{R,I,J_{\bullet},d}(s)|\leq C/p^{n}\,\text{for all}\,n\in\mathbb{N}.\]
2. _Given choices_ \(I,J_{\bullet}\) _and_ \(t\in\mathbb{R}\)_, one can choose_ \(C\) _depending only on_ \(t\)_, such that Equation (_3.1_) holds on_ \([0,t]\)_._
3. _On every bounded subset of_ \(\mathbb{R}\)_, the sequence of functions_ \(h_{n,I,J_{\bullet},d}(s)\) _converges uniformly to_ \(h_{R,I,J_{\bullet}}(s)\)_._
Proof.: (1) When \(I=0\), \(I_{n}(s)=J_{n}\), so everything follows from Theorem 3.2.
We assume \(I\) is non-zero for the rest of the proof. Note \(I_{n}(s)^{[p]}=I^{\lceil sq\rceil[p]}+J_{n}^{[p]}\subseteq I^{\lceil sq\rceil p}+J_{n}^{[p]}\subseteq I^{\lceil sqp\rceil}+J_{n}^{[p]}\) as \(\lceil sq\rceil p\geq\lceil sqp\rceil\). So
\[c.I_{n}(s)^{[p]}\subseteq I_{n+1}(s). \tag{3.2}\]
Suppose \(I\) is generated by \(\mu\)-many elements. Then
\[I^{\lceil spq\rceil}\subseteq I^{\lceil sq\rceil p-p}\subseteq I^{[p](\lceil sq\rceil-\mu)};\ \text{see Lemma 3.6}.\]
Fix a non-zero \(r\in(I^{\mu})^{[p]}\). Then the last containment implies,
\[\phi(F_{*}r.F_{*}I_{n+1}(s))=\phi(F_{*}(rI^{\lceil spq\rceil}))+\phi(F_{*}(rJ_{ n+1}))\subseteq\phi(F_{*}(I^{\lceil sq\rceil[p]}))+J_{n}\subseteq I_{n}(s)\, \text{for all}\,s\in\mathbb{R}\,. \tag{3.3}\]
Equation (3.2) and Equation (3.3) imply that, for all \(s\), the non-zero elements \(c\in R\) and \(\phi(F_{*}r.\underline{\underline{\phantom{-}}})\in\text{Hom}_{R}(F_{*}R,R)\) endow \(I_{n}(s)\) with weakly \(p\) and \(p^{-1}\) family structures, respectively. The ideal \(m^{[p^{n+\alpha}]}\) is contained in \(I_{n}(t)\) and hence in \(I_{n}(s)\) for \(s\leq t\). The rest follows by applying Theorem 3.2 to the family \(I_{n+\alpha}(s)\) for every \(s\leq t\). The feasibility of choosing \(C\) depending only on \(c,\phi,\alpha\) and \(r\) also follows from Theorem 3.2. Since \(r\in(I^{\mu})^{[p]}\) can be chosen depending only on \(I\), the choice of \(C\) depends only on \(c,\phi,\alpha\)
and \(I\).
(2) Once \(I\), \(J_{\bullet}\) satisfying the hypothesis and \(t\in\mathbb{R}\) are given, \(c,\phi,\alpha\) can be chosen depending only on \(I,J_{\bullet},t\).
(3) Every bounded subset of \(\mathbb{R}\) is contained in some interval \((-\infty,t]\). The dependence of \(C\) only on \(I,J_{\bullet}\) and \(t\) implies (3).
The domain assumption is made in the above theorem just so that we can apply Theorem 3.2.
**Lemma 3.8**.: _Suppose \(I\) and \(J_{\bullet}\) satisfy the hypothesis of Theorem 3.7. Suppose there is an integer \(r\) such that \(I^{rp^{n}}\subseteq J_{n}\). Then \(h_{n,I,J_{\bullet}}(s)\) and \(h_{I,J_{\bullet},d}\) are constant on \([r,\infty)\)._
The next two propositions produce examples of an ideal \(I\) and an ideal family \(J_{\bullet}\) satisfying **Condition C**. For specific choices of \(J_{\bullet}\) and \(I\), the corresponding functions \(h_{I,J_{\bullet},d}\) encode widely studied invariants of a prime characteristic ring, such as the Hilbert-Kunz multiplicity, \(F\)-signature and \(F\)-threshold. We do not assume \(R\) is a domain in the next two examples.
**Proposition 3.9**.: _Let \(J_{\bullet}\) be a family of ideals which is both a big \(p\) and a big \(p^{-1}\) family. Then for any ideal \(I\), the pair \(I\), \(J_{\bullet}\) satisfies **Condition C**._
Proof.: Since \(J_{\bullet}\) is big, there is an \(\alpha\) such that \(\mathfrak{m}^{[p^{n+\alpha}]}\subseteq J_{n}\). Thus for every \(s\in\mathbb{R}\), \(\mathfrak{m}^{[p^{n+\alpha}]}\subseteq I^{\lceil sq\rceil}+J_{n}\).
When \(R\) is a domain, a big \(p\),\(p^{-1}\) family \(J_{\bullet}\) thus produces an \(h\)-function. Thanks to Lemma 3.8 such an \(h_{I,J_{\bullet}}\) is eventually constant.
**Example 3.10**.: Examples of \(J_{\bullet}\) which are both big \(p\) and also \(p^{-1}\) include \(J_{n}=J^{[p^{n}]}\), where \(J\) is an \(\mathfrak{m}\)-primary ideal. Another example of interest is when \(J_{n}\) is the sequence of ideals defining \(F\)-signature of \((R,\mathfrak{m})\) which we now recall. Set \(p^{\alpha}=[k:k^{p}]\). Take
\[J_{n}=\{x\in R\,|\,\phi(F_{*}^{n}x)\in\mathfrak{m}\,\text{for all}\,\phi\in\text{Hom}_{R}(F_{*}^{n}R,R)\}.\]
Then \(p^{\alpha n}l(R/J_{n})\) coincides with the free rank of \(F_{*}^{n}R\): the maximal rank of a free module \(M\) such that there is an \(R\)-module surjection \(F_{*}^{n}R\to M\); see [14, Prop 4.5]. The family \(J_{n}\) is both \(p\) and \(p^{-1}\), and \(J_{n}\) contains \(\mathfrak{m}^{[p^{n}]}\). Thanks to Theorem 3.2, the limit
\[s(R):=\lim_{n\to\infty}(\frac{1}{q})^{\dim(R)}l(\frac{R}{J_{n}})\]
exists. The number \(s(R)\), measuring the asymptotic growth of the free rank of \(F_{*}^{n}R\), is called the \(F\)-signature of \(R\). The ring \((R,\mathfrak{m})\) is strongly \(F\)-regular if and only if \(s(R)\) is positive; see [1, Thm 0.2]. When \(R\) is a domain, for any nonzero ideal \(I\), we get a function \(h_{I,J_{\bullet}}(s)\) whose value for large \(s\) is \(s(R)\). The continuity and left-right differentiability of such an \(h_{I,J_{\bullet}}\) are consequences of Theorem 5.4.
The examples of \(h\)-functions produced by the result below are central to extending theories of Frobenius-Poincare and Hilbert-Kunz density functions to the local setting.
**Proposition 3.11**.: _For any pair of ideals \(I,J\) such that \(I+J\) is \(\mathfrak{m}\)-primary, the ideal \(I\) and the family \(J_{n}=J^{[p^{n}]}\) satisfies **Condition C**._
Proof.: Since \(I+J\) is \(\mathfrak{m}\)-primary, given a real number \(s\), \(\mathfrak{m}^{[p^{\alpha}]}\subseteq I^{\lceil s\rceil}+J\) for some \(\alpha\). Then \(\mathfrak{m}^{[p^{\alpha+n}]}\subseteq(I^{\lceil s\rceil}+J)^{[p^{n}]}\subseteq I^{\lceil sq\rceil}+J^{[q]}\). So \(I^{\lceil sq\rceil}+J^{[q]}\) is a big \(p\) and \(p^{-1}\) family.
For two \(\mathfrak{m}\)-primary ideals \(I,J\), Taylor considers in [14] an \(s\)-multiplicity (function) which is a scalar multiple of the corresponding \(h_{I,J}\). When \(J_{n}=J^{[q]}\), our proof of the existence of the \(h\)-function in Theorem 3.7 is not only different from the proof of Theorem 2.1 of [14], but is also valid when \(I\) and \(J\) are not necessarily \(\mathfrak{m}\)-primary. Moreover, in Theorem 3.7, the flexibility of choosing \(C\) depending only on \(\phi\) and \(c\) is a byproduct of our proof; this flexibility is crucial in Theorem 3.13 and later.
### Growth of \(h\)-function, \(\mathfrak{m}\)-adic continuity
Next, we investigate how \(h_{n,I,J_{\bullet}}(s)\) changes when \(I\) or \(J_{\bullet}\) is replaced by another ideal or ideal family which is \(\mathfrak{m}\)-adically close to the initial one. The results we prove are used later in Section 6, for example, to prove continuity of the Hilbert-Kunz density function \(\tilde{g}_{M,J}\) for non-\(\mathfrak{m}\)-primary \(J\); see Theorem 6.7.
**Lemma 3.12**.: _Let \(R\) be a noetherian local ring, \(I,J\) be two \(R\)-ideals such that \(I+J\) is \(\mathfrak{m}\)-primary. Let \(I^{\prime}\), \(J^{\prime}\) be two ideals such that \(I\subset I^{\prime}\), \(J\subset J^{\prime}\). Then \(h_{n,M,I,J}(s)\geq h_{n,M,I^{\prime},J^{\prime}}(s)\)._
Proof.: If \(I\subset I^{\prime}\) and \(J\subset J^{\prime}\), then \((I^{\lceil sq\rceil}+J^{[q]})M\subset(I^{\prime\lceil sq\rceil}+J^{\prime[q]})M\), so \(l(M/(I^{\lceil sq\rceil}+J^{[q]})M)\geq l(M/(I^{\prime\lceil sq\rceil}+J^{\prime[q]})M)\), which is precisely \(h_{n,M,I,J}(s)\geq h_{n,M,I^{\prime},J^{\prime}}(s)\).
**Theorem 3.13**.: _Let \((R,\mathfrak{m})\) be a noetherian local ring. Assume \(I,J_{\bullet}\) satisfy **Condition C**. Below, for an ideal \(J\), \(J_{\bullet}+J^{[p^{\bullet}]}\) denotes the family whose \(n\)-th member is \(J_{n}+J^{[p^{n}]}\). (1) Fix \(s_{0}\in\mathbb{R}\). We can choose \(t\) depending only on \(I,J_{\bullet},s_{0}\) such that for any ideals \(J\subset\mathfrak{m}^{t}\), \(I\subset I^{\prime}\), and all \(n\),_

\[h_{n,M,I^{\prime},J_{\bullet}}(s)=h_{n,M,I^{\prime},J_{\bullet}+J^{[p^{\bullet}]}}(s)\ \text{for $s\leq s_{0}$}.\]
_(2) Assume \(J_{\bullet}\) is both big \(p\) and \(p^{-1}\) family. There exists a constant \(c\) such that for any ideals \(I^{\prime}\subset\mathfrak{m}^{t}\), \(t\in\mathbb{N}\) and \(s\in\mathbb{R}\),_
\[h_{n,M,I,J}(s-c/t)\leq h_{n,M,I+I^{\prime},J}(s)\leq h_{n,M,I,J}(s)\leq h_{n, M,I+I_{t},J}(s+c/t).\]
_(3) Fix \(s_{0}>0\). There exist a \(t_{0}\) and a constant \(c\), both depending only on \(s_{0},I,J_{\bullet}\), such that for any \(t\geq t_{0}\) and any ideal \(I_{t}\subseteq\mathfrak{m}^{t}\),_
\[h_{n,M,I,J_{\bullet}}(s-c/t)\leq h_{n,M,I+I_{t},J_{\bullet}}(s)\leq h_{n,M,I, J_{\bullet}}(s)\leq h_{n,M,I+I_{t},J_{\bullet}}(s+c/t),\]
_for \(s\leq s_{0}\)._
Proof.: (1) Let \(t\) be the smallest integer such that \(\mathfrak{m}^{t[q]}\subset I^{\lceil s_{0}q\rceil}+J_{n}\) for all \(n\). By the previous lemma, it suffices to consider the case where \(J=\mathfrak{m}^{t}\). So for \(I\subseteq I^{\prime}\),
\[I^{\prime\lceil sq\rceil}+J_{n}=I^{\prime\lceil sq\rceil}+J_{n}+\mathfrak{m}^ {t[q]}\,\text{for}\,s\leq s_{0}\,\text{and all}\,n\in\mathbb{N},\]
proving the desired statement.
(2) Since \(J_{\bullet}\) is a big family, we can choose \(t_{0}\) such that \(\mathfrak{m}^{t_{0}[q]}\subseteq J_{n}\) for all \(n\). We may also assume \(I^{\prime}=\mathfrak{m}^{t}\). Let \(\mathfrak{m}\) be generated by \(\mu\) elements and set \(\epsilon_{t}=t_{0}\mu/t\). Then \(\mathfrak{m}^{t\lceil\epsilon_{t}q\rceil}\subseteq\mathfrak{m}^{t_{0}\mu q}\subseteq\mathfrak{m}^{t_{0}[q]}\subseteq J_{n}\) for all \(n\). So
\[(I+\mathfrak{m}^{t})^{\lceil sq\rceil}=\sum_{0\leq j\leq\lceil sq\rceil}I^{ \lceil sq\rceil-j}\mathfrak{m}^{tj}\subset I^{\lceil sq\rceil-\lceil\epsilon_{ t}q\rceil}+\mathfrak{m}^{t\lceil\epsilon_{t}q\rceil}\subset I^{\lceil sq\rceil- \lceil\epsilon_{t}q\rceil}+J_{n}\subseteq I^{\lceil(s-t_{0}\mu/t)q\rceil}+J_{n}\]
Thus we have
\[l(M/(I^{\lceil(s-t_{0}\mu/t)q\rceil}+J_{n})M)\leq l(M/((I+\mathfrak{m}^{t})^{ \lceil sq\rceil}+J_{n})M)\leq l(M/(I^{\lceil sq\rceil}+J_{n})M).\]
So taking \(c=t_{0}\mu\) verifies the first two inequalities. These equalities are independent of \(s\), so we may replace \(s\) by \(s+c/t\) to get the third inequality.
(3) By (1) we can choose \(t_{1}\) depending on \(s_{0},I,J_{\bullet}\) such that \(h_{n,M,I^{\prime},J_{\bullet}+\mathfrak{m}^{t_{1}[p^{\bullet}]}}(s)=h_{n,M,I^{\prime},J_{\bullet}}(s)\) whenever \(I\subset I^{\prime}\) and \(s\leq s_{0}+1\). By (2), we can choose \(c\) depending on \(J_{\bullet}+\mathfrak{m}^{t_{1}[p^{\bullet}]}\) such that \(h_{n,M,I,J_{\bullet}+\mathfrak{m}^{t_{1}[p^{\bullet}]}}(s-c/t)\leq h_{n,M,I+I_{t},J_{\bullet}+\mathfrak{m}^{t_{1}[p^{\bullet}]}}(s)\leq h_{n,M,I,J_{\bullet}+\mathfrak{m}^{t_{1}[p^{\bullet}]}}(s)\leq h_{n,M,I+I_{t},J_{\bullet}+\mathfrak{m}^{t_{1}[p^{\bullet}]}}(s+c/t)\) for \(I_{t}\subseteq\mathfrak{m}^{t}\). Take \(t_{0}=c\). Since for \(t\geq t_{0}\) and \(s\leq s_{0}\) we have \(s+\frac{c}{t}\leq s_{0}+1\), the above chain of inequalities implies
\[h_{n,M,I,J_{\bullet}}(s-c/t)\leq h_{n,M,I+I_{t},J_{\bullet}}(s)\leq h_{n,M,I,J _{\bullet}}(s)\leq h_{n,M,I+I_{t},J_{\bullet}}(s+c/t).\]
Assertion (1) of the theorem above allows us to replace \(J_{\bullet}\) by a big \(p\) and \(p^{-1}\) family in questions involving local structure of \(h\)-functions. This observation is repeatedly used later; see Theorem 6.7.
Next we prove that the sequence \(h_{n,I,J_{\bullet},d}(s)\) is uniformly bounded on every compact subset. When \(J_{n}=J^{[p^{n}]}\) for some \(J\), we refine the bound and show in Theorem 3.16 that \(h_{n,I,J_{\bullet},d}(s)\) is bounded above by a polynomial of degree \(\dim(\frac{R}{J})\). The uniform (in \(n\)) polynomial bound on \(h_{n}\) is used in the extension of the theory of Frobenius-Poincare functions in Lemma 4.1, Theorem 4.3.
**Lemma 3.14**.: _In a local ring \((R,\mathfrak{m})\), let \(I,J_{\bullet}\) satisfy **Condition C**. Let \(M\) be a finitely generated \(R\)-module. Given \(s_{0}\in\mathbb{R}\), there is a constant \(C\) depending only on \(s_{0}\) such that_

\[h_{n,M,I,J_{\bullet}}(s)\leq Cq^{d}\,\text{for all}\,n\,\text{and all}\,s\leq s_{0}.\]
Proof.: Choose \(\alpha\) such that \(\mathfrak{m}^{[p^{n+\alpha}]}\subseteq I^{\lceil s_{0}q\rceil}+J_{n}\) for all \(n\). So for \(s\leq s_{0}\),

\[h_{n,M,I,J_{\bullet}}(s)\leq l(\frac{M}{\mathfrak{m}^{[p^{n+\alpha}]}M})\leq Cq^{d}.\]

The last inequality is a consequence of [11].
_Remark 3.15_.: Given a noetherian local ring \((R,\mathfrak{m},k)\) containing \(\mathbb{F}_{p}\) and a field extension \(k\subseteq L\), denote by \(S\) the \(\mathfrak{m}\)-adic completion of \(L\otimes_{k}\hat{R}\). Here \(\hat{R}\) is the \(\mathfrak{m}\)-adic completion of \(R\), and \(\hat{R}\) can be treated as a \(k\)-algebra thanks to the existence of a coefficient field of \(\hat{R}\); see [14, tag 0323]. The residue field of the local ring \(S\) is isomorphic to \(L\). The natural map \(R\to S\) is faithfully flat. Now given a finite length \(R\)-module \(M\), \(l_{R}(M)=l_{S}(S\otimes_{R}M)\). We use this observation to make simplifying assumptions on the residue field of \(R\).
**Theorem 3.16**.: _Let \((R,\mathfrak{m},k)\) be a noetherian local ring of dimension \(d\), \(I,J\) be two \(R\)-ideals such that \(I+J\) is \(\mathfrak{m}\)-primary. Assume \(I\) is generated by \(\mu\) elements, \(M\) is generated by \(\nu\) elements, and \(d^{\prime}=\dim R/J\). Then:_
1. _There exists a polynomial_ \(P_{1}(s)\) _of degree_ \(d^{\prime}\) _such that for any_ \(s\geq 0\)_,_ \[\frac{l(M/(I^{\lceil sq\rceil}+J^{[q]})M)}{l(R/\mathfrak{m}^{[q]})}\leq P_{1}(s).\] _Moreover if_ \(d^{\prime}>0\)_, the leading coefficient of_ \(P_{1}\) _can be taken to be_ \(\frac{\nu e(I,R/J)}{d^{\prime}!}\)__
2. _There exists a polynomial_ \(P_{2}(s)\) _such that_ \[\frac{l(M/(I^{\lceil sq\rceil}+J^{[q]})M)}{q^{d}}\leq P_{2}(s).\] _In other words,_ \(h_{n,M,d}(s)\leq P_{2}(s)\)_._
3. _There exists a polynomial_ \(P_{3}\) _of degree_ \(d^{\prime}\) _and leading coefficient_ \(\frac{\nu e(I,R/J)e_{HK}(R)}{d^{\prime}!}\) _such that for any_ \(s\geq 0\)_,_ \[\overline{\lim}_{n\to\infty}\frac{l(M/(I^{\lceil sq\rceil}+J^{[q]})M)}{q^{d}}\leq P_{3}(s).\]
Proof.: We may assume that the residue field is perfect by using Remark 3.15.
1. Suppose \(M\) is generated by \(\nu\) many elements. Then \[l(M/(I^{\lceil sq\rceil}+J^{[q]})M)\leq\nu l(R/(I^{\lceil sq\rceil}+J^{[q]}))\leq\nu l(R/((I^{\lceil s\rceil})^{[q]}+J^{[q]}))\leq\nu l(F_{*}^{n}R/(I^{\lceil s\rceil}+J)F_{*}^{n}R)\leq\nu\mu_{R}(F_{*}^{n}R)l(R/(I^{\lceil s\rceil}+J)).\] Since the residue field is perfect, \(\mu_{R}(F_{*}^{n}R)=l(F_{*}^{n}R/\mathfrak{m}F_{*}^{n}R)=l(R/\mathfrak{m}^{[q]})\). Let \(P_{0}\) be the Hilbert-Samuel polynomial of the \(I\)-adic filtration on \(R/J\); \(P_{0}\) has degree \(d^{\prime}\) and leading coefficient \(\frac{e(I,R/J)}{d^{\prime}!}\). Fix \(s_{0}\) such that \(l(R/(I^{\lceil s\rceil}+J))=P_{0}(\lceil s\rceil)\) for \(s\geq s_{0}\) and \(P_{0}\) is non-decreasing on \([s_{0},\infty)\). Thus for \(s\geq s_{0}\), \[l(R/(I^{\lceil s\rceil}+J))\leq P_{0}(s+1).\] When \(\frac{R}{J}\) has Krull dimension zero, \(P_{0}(s)=l(R/J)\) and \(l(R/(I^{\lceil s\rceil}+J))\leq P_{0}(s+1)\) for all \(s\), so we can take the desired \(P_{1}\) to be \(\nu P_{0}(s+1)\). When \(R/J\) has positive Krull dimension, we can add a suitable positive constant to \(\nu P_{0}(s+1)\) to get a \(P_{1}\) so that \(\nu l(R/(I^{\lceil s\rceil}+J))\leq P_{1}(s)\) on \([0,s_{0}]\) and thus on \([0,\infty)\).
2. Since \(\lim_{n\to\infty}l(R/\mathfrak{m}^{[q]})/q^{d}\) exists, \[C=\sup_{n}l(R/\mathfrak{m}^{[q]})/q^{d}\] exists. So for any \(n\), \(l(R/\mathfrak{m}^{[q]})/q^{d}\leq C\), and \(P_{2}=CP_{1}\) satisfies (2).
3. \[\overline{\lim}_{n\to\infty}\frac{l(M/(I^{\lceil sq\rceil}+J^{[q]})M)}{q^{d}}\leq\overline{\lim}_{n\to\infty}\frac{l(M/(I^{\lceil sq\rceil}+J^{[q]})M)}{l(R/\mathfrak{m}^{[q]})}\cdot\overline{\lim}_{n\to\infty}\frac{l(R/\mathfrak{m}^{[q]})}{q^{d}}\leq e_{HK}(R)P_{1}(s).\] So \(P_{3}=e_{HK}(R)P_{1}\) works.
### Lipschitz continuity of \(h\)-functions, application of a 'convexity technique'
Proving continuity of \(h_{R,I,J_{\bullet}}\) when \(R\) is a domain is more involved than proving its existence. In this subsection, we develop results aiding the proof of Lipschitz continuity of \(h_{R,I,J_{\bullet}}\); see Theorem 3.20. When \(J_{n}=J^{[q]}\), these results are used to prove existence and continuity of the \(h\)-function of a finitely generated module in Theorem 3.30, by reducing the problem to the case where \(R\) is reduced. The key result aiding these applications is Theorem 3.19. We prove it by utilizing the monotonicity of a certain numerical function. This 'convexity technique' is repeatedly used later, for instance to prove left and right differentiability of the \(h\)-function, among other properties. The required monotonicity result appears in Lemma 3.17; it is an adaptation and generalization of Boij-Smith's result in [1] suitable for our purpose.
**Lemma 3.17**.: _Let \((R,\mathfrak{m})\) be a noetherian local ring, \(I\) be an \(\mathfrak{m}\)-primary ideal generated by \(\mu\) elements, \(M\) be a finitely generated \(R\)-module, \(S\) be the polynomial ring of \(\mu\)-variables over \(\frac{R}{\mathfrak{m}}\). Then the function \(i\to l(I^{i}M/I^{i+1}M)/l(S_{i})\) is decreasing for \(i\geq 0\)._
Proof.: Consider the associated graded ring \(gr_{I}(R)\). Since \(I\) is generated by a set of \(\mu\) elements, as a graded ring \(gr_{I}(R)\) is a quotient of the standard graded polynomial ring \(R/I[T_{1},...,T_{\mu}]\) over \(R/I\). Recall \(S=\frac{R}{\mathfrak{m}}[T_{1},...,T_{\mu}]\). Since \(M/IM\) is Artinian, there exists a filtration
\[0=N_{0}\subset N_{1}\subset...\subset N_{l}=M/IM,\,\text{such that}\,N_{j+1}/N_{j}= \frac{R}{\mathfrak{m}}\text{ for}\,0\leq j\leq l-1.\]
Let \(M_{j}\) be the \(gr_{I}(R)\)-submodule of \(gr_{I}(M)\) spanned by \(N_{j}\). Then \(M_{j+1}/M_{j}\) is annihilated by \(\mathfrak{m}gr_{I}(R)\). So it is naturally a \(gr_{I}(R)/\mathfrak{m}gr_{I}(R)\)-module, hence an \(S\)-module, and it is generated in degree \(0\). So by Theorem 1.1 of [1], for any \(i\geq 0\),

\[l((M_{j+1}/M_{j})_{i})/l(S_{i})\geq l((M_{j+1}/M_{j})_{i+1})/l(S_{i+1}).\]

Since truncation at degree \(i\) is an exact functor from \(gr_{I}(R)\)-modules to \(R\)-modules, taking the sum over \(0\leq j\leq l-1\) we get \(l((M_{l})_{i})/l(S_{i})\geq l((M_{l})_{i+1})/l(S_{i+1})\). Since \(M_{l}=gr_{I}(R)N_{l}=gr_{I}(M)\), we are done.
When \(I\) is a principal ideal, the above lemma manifests into the following easily verifiable result.
**Example 3.18**.: Let \(R\) be a noetherian local ring, \(f\) be an element in \(R\) such that \(R/fR\) has finite length. Then for any \(j\geq i\), \(l(f^{i}R/f^{i+1}R)\geq l(f^{j}R/f^{j+1}R)\). This means that the function \(i\to l(R/f^{i}R)\) is convex on \(\mathbb{N}\); see Definition 5.2.
**Theorem 3.19**.: _Let \(R\) be a noetherian local ring, \(M\) be a finitely generated module of dimension \(d\). Suppose \(I,J_{\bullet}\) satisfy **Condition C**. Fix real numbers \(0<s_{1}<s_{2}<\infty\). Then there are a constant \(C\) and a power \(q_{0}=p^{n_{0}}\), depending on \(s_{1},s_{2}\) but independent of \(n\), such that for any \(s_{1}\leq s\leq s_{2}-1/q\) and \(q\geq q_{0}\),_
\[l(\frac{(I^{\lceil sq\rceil}+J_{n})M}{(I^{\lceil sq\rceil+1}+J_{n})M})\leq Cq ^{d-1}\]
_In other words, whenever \(s_{1}\leq s\leq s_{2}-1/q\) and \(q\geq q_{0}\),_
\[|h_{n,M}(s+1/q)-h_{n,M}(s)|\leq Cq^{d-1}.\]
Proof.: We may assume \(s_{1},s_{2}\in\mathbb{Z}[1/p]\). Otherwise, since \(\mathbb{Z}[1/p]\) is dense in \(\mathbb{R}\), we can choose \(s^{\prime}_{1}\in(0,s_{1})\cap\mathbb{Z}[1/p]\), \(s^{\prime}_{2}\in(s_{2},\infty)\cap\mathbb{Z}[1/p]\) and replace \(s_{1},s_{2}\) by \(s^{\prime}_{1},s^{\prime}_{2}\). Choose \(s_{3}\in\mathbb{Z}[1/p]\) such that \(0<s_{3}<s_{1}\) and choose \(q_{0}\) such that \(s_{1}q_{0},s_{2}q_{0},s_{3}q_{0}\in\mathbb{Z}\). Let \(I\) be generated by a set of \(\mu\) many elements. Applying Lemma 3.17 to the module \(M/J_{n}M\) we know for any \(0\leq t\leq\lceil sq\rceil\),
\[\frac{l(\frac{I^{\lceil sq\rceil}(M/J_{n}M)}{I^{\lceil sq\rceil+1}(M/J_{n}M)})}{\binom{\mu+\lceil sq\rceil-1}{\mu-1}}\leq\frac{l(\frac{I^{t}(M/J_{n}M)}{I^{t+1}(M/J_{n}M)})}{\binom{\mu+t-1}{\mu-1}}.\]
Rewritten, the above inequality yields
\[\frac{l(\frac{(I^{\lceil sq\rceil}+J_{n})M}{(I^{\lceil sq\rceil+1}+J_{n})M})}{\binom{\mu+\lceil sq\rceil-1}{\mu-1}}\leq\frac{l(\frac{(I^{t}+J_{n})M}{(I^{t+1}+J_{n})M})}{\binom{\mu+t-1}{\mu-1}}.\]
Thus for \(s_{1}\leq s\leq s_{2}-\frac{1}{q}\),
\[(\lceil sq\rceil-s_{3}q)l(\frac{(I^{\lceil sq\rceil}+J_{n})M}{(I^{\lceil sq\rceil+1}+J_{n})M})\leq\binom{\mu+\lceil sq\rceil-1}{\mu-1}\sum_{t=s_{3}q}^{\lceil sq\rceil-1}\frac{l(\frac{(I^{t}+J_{n})M}{(I^{t+1}+J_{n})M})}{\binom{\mu+t-1}{\mu-1}}\leq\frac{\binom{\mu+\lceil sq\rceil-1}{\mu-1}}{\binom{\mu+s_{3}q-1}{\mu-1}}l(\frac{(I^{s_{3}q}+J_{n})M}{(I^{\lceil sq\rceil}+J_{n})M})=\frac{\binom{\mu+\lceil sq\rceil-1}{\mu-1}}{\binom{\mu+s_{3}q-1}{\mu-1}}[l(\frac{M}{(I^{\lceil sq\rceil}+J_{n})M})-l(\frac{M}{(I^{s_{3}q}+J_{n})M})]\leq\frac{\binom{\mu+s_{2}q-1}{\mu-1}}{\binom{\mu+s_{3}q-1}{\mu-1}}[l(\frac{M}{(I^{\lceil sq\rceil}+J_{n})M})-l(\frac{M}{(I^{s_{3}q}+J_{n})M})].\]
Therefore for \(s_{1}\leq s\leq s_{2}-\frac{1}{q}\) and \(q\geq q_{0}\),
\[l(\frac{(I^{\lceil sq\rceil}+J_{n})M}{(I^{\lceil sq\rceil+1}+J_{n})M})\leq \frac{1}{s_{1}q-s_{3}q}\frac{\binom{\mu+s_{2}q-1}{\mu-1}}{\binom{\mu+s_{3}q-1 }{\mu-1}}[l(\frac{M}{(I^{s_{2}q}+J_{n})M})-l(\frac{M}{(I^{s_{3}q}+J_{n})M})] \leq Cq^{d-1}.\]
By Lemma 3.14, we can choose a constant \(C^{\prime}\) depending only on \(s_{2}\) such that for \(s\leq s_{2}\),
\[l(\frac{M}{(I^{sq}+J_{n})M})\leq C^{\prime}q^{d}.\]
Since \(\binom{\mu+s_{2}q-1}{\mu-1}/\binom{\mu+s_{3}q-1}{\mu-1}\) is bounded above by a constant depending only on \(s_{2},s_{3}\), we can choose \(C\) depending only on \(s_{1},s_{2},s_{3}\) such that for all \(n\) and \(q\geq q_{0}\),
\[l(\frac{(I^{\lceil sq\rceil}+J_{n})M}{(I^{\lceil sq\rceil+1}+J_{n})M})\leq Cq ^{d-1}.\]
Here \(C\) depends only on \(s_{1},s_{2},s_{3}\), and since \(s_{3}\) was chosen depending only on \(s_{1}\), the constant \(C\) in fact depends only on \(s_{1},s_{2}\).
Therefore, whenever \(h_{M,I,J_{\bullet}}\) exists, it is locally Lipschitz continuous away from zero.
**Theorem 3.20**.: _Let \(I\) be an ideal and \(J_{\bullet}\) be a family of ideals satisfying **Condition C** in a domain \((R,\mathfrak{m})\) of Krull dimension \(d\). Given real numbers \(0<s_{1}<s_{2}\), there is a constant \(C\) depending only on \(s_{1},s_{2}\) such that for any \(x,y\in[s_{1},s_{2}]\),_
\[|h_{R}(x)-h_{R}(y)|\leq C|x-y|.\]
Proof.: Given \(s_{1},s_{2}\) as above and \(x,y\) in \([s_{1},s_{2}]\), by Theorem 3.19, we can choose a constant \(C\) depending only on \(s_{1},s_{2}\) such that
\[|h_{n,R}(x)-h_{n,R}(y)|=|h_{n,R}(\frac{\lceil qx\rceil}{q})-h_{n,R}(\frac{ \lceil qy\rceil}{q})|\leq C|\frac{\lceil qx\rceil}{q}-\frac{\lceil qy\rceil} {q}|q^{d}\,\text{for all}\,n.\]
Divide both sides by \(q^{d}\) and take limit as \(n\) approaches infinity. Since for any real number \(s\), \(\frac{h_{n}(s)}{q^{d}}\) and \(\lceil qs\rceil/q\) converge to \(h_{R}(s)\) and \(s\) respectively,
\[|h_{R}(x)-h_{R}(y)|\leq C|x-y|.\]
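For a minimal sanity check of the Lipschitz bound, take the domain \(R=k[[x,y]]\), \(I=(x)\) and the family \(J_{n}=\mathfrak{m}^{[q]}=(x^{q},y^{q})\). Then for \(s\geq 0\),
\[h_{n,R}(s)=l(R/(x^{\min\{\lceil sq\rceil,q\}},y^{q}))=\min\{\lceil sq\rceil,q\}\,q,\]
so \(h_{R}(s)=\min\{s,1\}\), which is Lipschitz continuous with constant \(1\) on every interval \([s_{1},s_{2}]\subseteq(0,\infty)\).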
**Lemma 3.21**.: _Assume the residue field of \(R\) is perfect and \(M\) is a module of dimension \(d\). For each integer \(n_{0}\geq 0\) and fixed real numbers \(0<s_{1}<s_{2}<\infty\), there is a constant \(C\) independent of \(n\) such that_
\[|h_{n+n_{0},M,I,J}(s)-h_{n,F_{*}^{n_{0}}M,I,J}(s)|\leq Cq^{d-1}\]
_for any \(s_{1}\leq s\leq s_{2}\)._
Proof.: For any \(q_{0}\), \(\lceil sqq_{0}\rceil\leq\lceil sq\rceil q_{0}\leq\lceil sqq_{0}\rceil+q_{0}\). We have,
\[|h_{n+n_{0},M,I,J}(s)-h_{n,F_{*}^{n_{0}}M,I,J}(s)|\]
\[=|l(M/(I^{\lceil sqq_{0}\rceil}+J^{[qq_{0}]})M)-l(F^{n_{0}}_{*}M/(I^{\lceil sq\rceil}+J^{[q]})F^{n_{0}}_{*}M)|\]
\[=|l(M/(I^{\lceil sqq_{0}\rceil}+J^{[qq_{0}]})M)-l(M/(I^{\lceil sq\rceil[q_{0}]}+J^{[qq_{0}]})M)|\]
\[=l((I^{\lceil sqq_{0}\rceil}+J^{[qq_{0}]})M/(I^{\lceil sq\rceil q_{0}}+J^{[qq_{0}]})M)+l((I^{\lceil sq\rceil q_{0}}+J^{[qq_{0}]})M/(I^{\lceil sq\rceil[q_{0}]}+J^{[qq_{0}]})M).\]
Note that \(1/q_{0}\lceil sqq_{0}\rceil\geq sq\geq\lceil sq\rceil-1\), so \(\lceil sq\rceil q_{0}\leq\lceil sqq_{0}\rceil+q_{0}\), so \(I^{\lceil sqq_{0}\rceil+q_{0}}\subset I^{\lceil sq\rceil q_{0}}\).
Suppose \(I\) is generated by \(\mu\) elements, then by Lemma 3.6, \(I^{\lceil sq\rceil q_{0}}\subset I^{(\lceil sq\rceil-\mu+1)[q_{0}]}\). Now by Theorem 3.19, we can choose a constant \(C\) depending only on \(s_{1},s_{2}\) such that for all \(s\in[s_{1},s_{2}]\),
\[l(\frac{(I^{\lceil sqq_{0}\rceil}+J^{[qq_{0}]})M}{(I^{\lceil sq\rceil q_{0}}+J^{[qq_{0}]})M})+l(\frac{(I^{\lceil sq\rceil q_{0}}+J^{[qq_{0}]})M}{(I^{\lceil sq\rceil[q_{0}]}+J^{[qq_{0}]})M})\]
\[\leq l(\frac{(I^{\lceil sqq_{0}\rceil}+J^{[qq_{0}]})M}{(I^{\lceil sqq_{0}\rceil+q_{0}}+J^{[qq_{0}]})M})+l(\frac{(I^{(\lceil sq\rceil-\mu+1)[q_{0}]}+J^{[qq_{0}]})M}{(I^{\lceil sq\rceil[q_{0}]}+J^{[qq_{0}]})M})\leq Cq^{d-1}.\]
The lemma above allows us to replace \(M\) by \(F^{n_{0}}_{*}M\). Since we may replace \(R\) by \(R/\mathrm{ann}\,F^{n_{0}}_{*}M\), and for large enough \(n_{0}\) the ideal \(\mathrm{ann}\,F^{n_{0}}_{*}M\) contains the nilradical of \(R\), we may assume \(R\) is reduced while proving the existence of \(h_{M,I,J}\).
**Corollary 3.22**.: Assume the residue field of \(R\) is perfect. For each \(n_{0}\geq 0\), \(h_{M,I,J,d}(s)\) exists if and only if \(h_{F^{n_{0}}_{*}M,I,J,d}(s)\) exists, and if they both exist then
\[q_{0}^{d}h_{M,I,J,d}(s)=h_{F^{n_{0}}_{*}M,I,J,d}(s).\]
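For instance, if \(k\) is perfect, \(R=k[[x]]\) and \(M=R\), then \(F_{*}^{n_{0}}R\cong R^{\oplus q_{0}}\) as an \(R\)-module; since lengths are additive on direct sums, \(h_{F_{*}^{n_{0}}R,I,J,d}(s)=q_{0}\,h_{R,I,J,d}(s)=q_{0}^{d}\,h_{R,I,J,d}(s)\) with \(d=\dim R=1\), in agreement with the corollary.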
### Existence of \(h_{M,I,J}\)
For a noetherian local ring \((R,\mathfrak{m})\), \(R\)-ideals \(I,J\) such that \(I+J\) is \(\mathfrak{m}\)-primary, and a finitely generated \(R\)-module \(M\), we prove the existence of \(h_{M,I,J}\) in Theorem 3.30. We first prove preparatory results that reduce the problem to the case where \(M=R\) and \(R\) is a domain. We prove the local Lipschitz continuity of \(h_{M,I,J}\) in Theorem 3.31. Recall:
**Definition 3.23**.: Set \(\text{Assh}R=\{P\in\text{Spec}R:\dim R=\dim R/P\}\).
**Lemma 3.24**.: _([Mon83, Proof of Lemma 1.3]) If \(M,N\) are two \(R\)-modules such that \(M_{P}\cong N_{P}\) for all \(P\in\text{Assh}R\), then there is an exact sequence_
\[0\to N_{1}\to M\to N\to N_{2}\to 0\]
_such that \(\dim N_{1},\dim N_{2}\leq\text{dim}(R)-1\). Moreover it breaks up into two short exact sequences:_
\[0\to N_{1}\to M\to N_{3}\to 0\]
\[0\to N_{3}\to N\to N_{2}\to 0\]
_where \(N_{3}\) denotes the image of \(M\) in \(N\)._
**Lemma 3.25**.: _Let \(N\subset M\) be two \(R\)-modules of finite length and let \(a\in R\). Then \(l(M/aM)\geq l(N/aN)\)._
Proof.: Comparing the exact sequences \(0\to 0:_{N}a\to N\xrightarrow{a}N\to N/aN\to 0\) and \(0\to 0:_{M}a\to M\xrightarrow{a}M\to M/aM\to 0\) via the inclusion \(N\subseteq M\), we see that the map \(0:_{N}a\to 0:_{M}a\) is injective. By the additivity of length on exact sequences, \(l(M/aM)=l(0:_{M}a)\geq l(0:_{N}a)=l(N/aN)\).
**Lemma 3.26**.: _Let \(M_{1},M_{2},M_{3},M_{4}\) be four submodules of an \(R\)-module \(M\) such that \(M_{3}\subset M_{1}\) and \(M_{4}\subset M_{2}\). Then \((M_{1}+M_{2})/(M_{3}+M_{4})\) has a filtration whose factors are quotients of \(M_{1}/M_{3}\) and \(M_{2}/M_{4}\). In particular, if \(M_{1}/M_{3}\) and \(M_{2}/M_{4}\) have finite lengths then so does \((M_{1}+M_{2})/(M_{3}+M_{4})\), and \(l((M_{1}+M_{2})/(M_{3}+M_{4}))\leq l(M_{1}/M_{3})+l(M_{2}/M_{4})\)._
Proof.: Consider the filtration
\[0\subseteq\frac{M_{3}+M_{2}}{M_{3}+M_{4}}\subseteq\frac{M_{1}+M_{2}}{M_{3}+M_ {4}}\]
The factors in the above filtration, namely \((M_{3}+M_{2})/(M_{3}+M_{4})\) and \((M_{1}+M_{2})/(M_{3}+M_{2})\), are quotients of \(M_{2}/M_{4}\) and \(M_{1}/M_{3}\) respectively.
**Lemma 3.27**.: _Let \((R,\mathfrak{m},k)\) be a local ring of dimension \(d\). Suppose \(I,J_{\bullet}\) satisfy **Condition C**, and \(M\) is a module of dimension \(d^{\prime}\leq d-1\). Fix \(s_{0}\in\mathbb{R}\). Then there are constants \(C_{1},C_{2}\) depending on \(s_{0}\) but independent of \(n\) such that \(l(\operatorname{Tor}_{0}^{R}(R/(I^{\lceil sq\rceil}+J_{n}),M))\leq C_{1}q^{d-1}\) and \(l(\operatorname{Tor}_{1}^{R}(R/(I^{\lceil sq\rceil}+J_{n}),M))\leq C_{2}q^{d-1}\) for any \(s\leq s_{0}\). Moreover if \(J_{\bullet}\) is big, \(C_{1},C_{2}\) can be chosen independent of \(s_{0}\)._
Proof.: Since \(I,J_{\bullet}\) satisfy **Condition C**, we can find an \(\mathfrak{m}\)-primary ideal \(J\) such that for \(s\leq s_{0}\), \(J^{[q]}\subseteq I^{\lceil sq\rceil}+J_{n}\) for all \(n\). As \(M/J^{[q]}M\) surjects onto \(\operatorname{Tor}_{0}^{R}(R/(I^{\lceil sq\rceil}+J_{n}),M)\) and we can find a constant \(C_{1}\) such that \(l(M/J^{[q]}M)\leq C_{1}q^{\dim M}\leq C_{1}q^{d-1}\), the bound on \(\operatorname{Tor}_{0}\) follows.
To see the bound on \(\operatorname{Tor}_{1}\), for a fixed \(s\leq s_{0}\), consider the exact sequence:
\[0\to(I^{\lceil sq\rceil}+J_{n})/J^{[q]}\to R/J^{[q]}\to R/(I^{\lceil sq\rceil}+J_{n})\to 0.\]
So by the long exact sequence of Tor, it suffices to show that we can choose \(C_{2}\) satisfying
\[l(\operatorname{Tor}_{1}^{R}(R/J^{[q]},M))\leq C_{2}q^{d-1}\ \text{ and }\ l(\frac{I^{\lceil sq\rceil}+J_{n}}{J^{[q]}}\otimes M)\leq C_{2}q^{d-1}.\]
Choosing a \(C_{2}\) satisfying the first inequality is possible thanks to [1, Lemma 1.1]. For the remaining inequality, by taking a prime cyclic filtration of \(M\), we may assume \(M=R/P\) for some \(P\in\operatorname{Spec}(R)\) with \(\dim M\leq\dim R-1\). In this case, \(P\notin\operatorname{Assh}(R)\), so we can choose \(b\in P\) such that \(\dim R/bR\leq\dim R-1\). Taking \(M=R/J^{[q]}\) and \(N=(I^{\lceil sq\rceil}+J_{n})/J^{[q]}\) in Lemma 3.25, we see that we can enlarge \(C_{2}\) independently of \(s\) and \(q\) so that
\[l(\frac{I^{\lceil sq\rceil}+J_{n}}{J^{[q]}}\otimes_{R}R/P)\leq l(\frac{I^{\lceil sq\rceil}+J_{n}}{J^{[q]}}\otimes_{R}R/bR)\leq l(R/J^{[q]}\otimes_{R}R/bR)=l(R/(bR+J^{[q]}))\leq C_{2}q^{d-1}.\]
So we are done.
**Lemma 3.28**.: _Let \(M,N\) be two finitely generated \(R\)-modules that are isomorphic at every \(P\in\operatorname{Assh}R\). Then for any \(t>0\), there is a constant \(C\) depending on \(M,N,I,J,t\) but independent of \(n\) such that for any \(s<t\)_
\[|h_{n,M,d}(s)-h_{n,N,d}(s)|\leq C/q\]
_Moreover if \(J\) is \(\mathfrak{m}\)-primary, then \(C\) can be chosen independently of \(t\)._
Proof.: By Lemma 3.24, there is an exact sequence
\[0\to N_{1}\to M\to N\to N_{2}\to 0\]
such that \(\dim N_{1},\dim N_{2}\leq d-1\). And it breaks up into two short exact sequences:
\[0\to N_{1}\to M\to N_{3}\to 0\]
\[0\to N_{3}\to N\to N_{2}\to 0\]
Now by the long exact sequence of Tor we get
\[|l(M/(I^{\lceil sq\rceil}+J^{[q]})M)-l(N_{3}/(I^{\lceil sq\rceil}+J^{[q]})N_{ 3})|\leq l(N_{1}/(I^{\lceil sq\rceil}+J^{[q]})N_{1})\]
\[|l(N_{3}/(I^{\lceil sq\rceil}+J^{[q]})N_{3})-l(N/(I^{\lceil sq\rceil}+J^{[q]} )N)|\leq l(N_{2}/(I^{\lceil sq\rceil}+J^{[q]})N_{2})+l(\operatorname{Tor}_{1 }^{R}(R/(I^{\lceil sq\rceil}+J^{[q]}),N_{2}))\]
Thus by Lemma 3.27, there is a constant \(C\) such that
\[|l(M/(I^{\lceil sq\rceil}+J^{[q]})M)-l(N/(I^{\lceil sq\rceil}+J^{[q]})N)| \leq Cq^{d-1}\]
**Lemma 3.29**.: _Let \((R,\mathfrak{m},k)\) be a local ring, \(I,J\) be two ideals such that \(I+J\) is \(\mathfrak{m}\)-primary, and \(M\) be a finitely generated \(R\)-module. For any \(0<s_{1}<s_{2}<\infty\), there is a constant \(C\) depending on \(M,I,J,s_{1},s_{2}\) but independent of \(n\) such that for any \(s_{1}\leq s\leq s_{2}\)_
\[|h_{n+1,M,d}(s)-h_{n,M,d}(s)|\leq C/q\]
Proof.: We may assume that the residue field is perfect by using Remark 3.15. Choose \(n_{0}\) sufficiently large such that \(R/\mathrm{ann}\,F_{*}^{n_{0}}M\) is reduced. The positive constants \(C_{1},C_{2},C_{3}\) chosen below depend only on \(M,I,J,s_{1},s_{2}\) and are independent of \(n\). By Lemma 3.21,
\[|h_{n+n_{0},M,I,J}(s)-h_{n,F_{*}^{n_{0}}M,I,J}(s)|\leq C_{1}q^{d-1}\]
and
\[|h_{n+n_{0}+1,M,I,J}(s)-h_{n+1,F_{*}^{n_{0}}M,I,J}(s)|\leq C_{1}q^{d-1}\]
So it suffices to prove existence of a suitable \(C\) such that
\[|h_{n+1,F_{*}^{n_{0}}M,d}(s)-h_{n,F_{*}^{n_{0}}M,d}(s)|\leq C/q.\]
Replacing \(M\) by \(F_{*}^{n_{0}}M\) and \(R\) by \(R/\mathrm{ann}\,F_{*}^{n_{0}}M\), we may assume \(R\) is reduced. In this case, by Lemma 3.21 again (with \(n_{0}=1\)),
\[|h_{n+1,M,I,J}(s)-h_{n,F_{*}M,I,J}(s)|\leq C_{2}q^{d-1}.\]
Thanks to the reducedness of \(R\) (and the perfectness of the residue field), the localizations of \(M^{\oplus p^{d}}\) and \(F_{*}M\) are isomorphic at all \(P\in\operatorname{Assh}R\). So by Lemma 3.28,
\[|h_{n,F_{*}M,I,J}(s)-p^{d}h_{n,M,I,J}(s)|\leq C_{3}q^{d-1}.\]
Thus one can choose a \(C\) which depends only on \(M,I,J,s_{1},s_{2}\) such that for all \(s\in[s_{1},s_{2}]\) and \(n\in\mathbb{N}\),
\[|h_{n+1,M,I,J}(s)-p^{d}h_{n,M,I,J}(s)|\leq Cq^{d-1}.\]
Dividing by \((pq)^{d}\), we get
\[|h_{n+1,M,I,J,d}(s)-h_{n,M,I,J,d}(s)|\leq C/q.\]
**Theorem 3.30**.: _Let \((R,\mathfrak{m},k)\) be a noetherian local ring, \(I,J\) be two \(R\)-ideals such that \(I+J\) is \(\mathfrak{m}\)-primary, and \(M\) be a finitely generated \(R\)-module. Then for every \(s\in\mathbb{R}\),_
\[h_{M,I,J}(s)=\lim_{n\to\infty}\frac{h_{n,M,I,J}(s)}{q^{\dim(M)}}\]
_exists. Moreover the convergence is uniform on \([s_{1},s_{2}]\) for any \(0<s_{1}<s_{2}<\infty\)._
Proof.: By replacing \(R\) by \(R/\mathrm{ann}(M)\), we may assume \(\dim(M)=\dim(R)\). Given \(s_{1},s_{2}\) as in the statement, it follows from Lemma 3.29 that \(h_{n,M,I,J}(s)/q^{\dim(M)}\) is uniformly Cauchy on \([s_{1},s_{2}]\). So the theorem follows.
We also have:
**Theorem 3.31**.: _Let \((R,\mathfrak{m},k)\) be a local ring of dimension \(d\), \(I,J\) be two \(R\)-ideals such that \(I+J\) is \(\mathfrak{m}\)-primary, and \(M\) be a finitely generated \(R\)-module. Then:_
1. \(h_{M}(s)\) _is Lipschitz continuous on_ \([s_{1},s_{2}]\) _for any_ \(0<s_{1}<s_{2}<\infty\)_. Consequently, it is continuous on_ \((0,\infty)\)_._
2. \(h_{M}(s)\) _is increasing. It is_ \(0\) _on_ \((-\infty,0]\)_. It is continuous if and only if it is continuous at_ \(0\)_, if and only if_ \(\lim_{s\to 0^{+}}h_{M}(s)=0\)_. The limit_ \(\lim_{s\to 0^{+}}h_{M}(s)\) _always exists and is nonnegative._
3. _Assume_ \(J\) _is_ \(\mathfrak{m}\)_-primary. Then for_ \(s>>0\)_,_ \(h_{M}(s)=e_{HK}(J,M)\) _is constant._
4. _There is a polynomial_ \(P(s)\) _of degree_ \(\dim R/J\) _such that_ \(h_{M}(s)\leq P(s)\) _on_ \(\mathbb{R}\)_._
Proof.:
1. An argument similar to that in the proof of Theorem 3.20 with \(R\) replaced by \(M\) and \(J_{n}=J^{[q]}\) yields a proof. The difference is that when \(J_{n}=J^{[q]}\), we know the existence of \(h_{M,I,J}\).
2. If \(s_{1}\leq s_{2}\), then \(\lceil s_{1}q\rceil\leq\lceil s_{2}q\rceil\), so \(I^{\lceil s_{2}q\rceil}\subset I^{\lceil s_{1}q\rceil}\). This implies \[l(M/(I^{\lceil s_{1}q\rceil}+J^{[q]})M)\leq l(M/(I^{\lceil s_{2}q\rceil}+J^{[q]})M),\] which is just \[h_{n,M}(s_{1})\leq h_{n,M}(s_{2}).\] Dividing by \(p^{n\dim M}\) and letting \(n\to\infty\), we get \(h_{M}(s_{1})\leq h_{M}(s_{2})\). This shows \(h_{M}(s)\) is increasing; in particular the limit \(\lim_{s\to 0^{+}}h_{M}(s)\) always exists and is at least \(h_{M}(0)\). If \(s\leq 0\), then \(\lceil sq\rceil\leq 0\), so \(I^{\lceil sq\rceil}=R\). Thus \(M/(I^{\lceil sq\rceil}+J^{[q]})M=0\) and \(h_{n,M}(s)=0\) for any \(n\), so \(h_{M}(s)=0\). Hence \(h_{M}(s)\) is continuous on \((-\infty,0)\) and \((0,\infty)\), and \(\lim_{s\to 0^{-}}h_{M}(s)=0=h_{M}(0)\); this proves (2).
3. Let \(J\) be generated by \(\mu\) elements. For \(s>>0\), \(I^{\lfloor s/\mu\rfloor}\subset J\). So \(I^{\lceil sq\rceil}\subset I^{\lfloor s/\mu\rfloor q\mu}\subset J^{q\mu}\subset J ^{[q]}\), so \(h_{n,M}(s)=l(M/J^{[q]}M)\) and \(h_{M}(s)=\lim_{n\to\infty}\frac{l(M/J^{[q]}M)}{q^{d}}=e_{HK}(J,M)\). If \(s=0\) then \(I^{\lceil sq\rceil}=R\) so \(h_{n,M}(0)=0\).
4. This is a corollary of Theorem 3.16 and Theorem 3.30.
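The \(\mathfrak{m}\)-primary hypothesis in (3) cannot be dropped. For example, take \(R=k[[x,y]]\), \(I=(x)\) and \(J=(y)\), so that \(I+J\) is \(\mathfrak{m}\)-primary while \(J\) is not; then for \(s\geq 0\),
\[h_{n,R}(s)=l(R/(x^{\lceil sq\rceil},y^{q}))=\lceil sq\rceil\,q,\qquad\text{so}\quad h_{R}(s)=s,\]
and the \(h\)-function is strictly increasing on \([0,\infty)\), never eventually constant.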
The associativity formula for the \(h\)-function below follows directly from Lemma 3.28.
**Proposition 3.32**.: _Let \(M\) be a \(d\)-dimensional finitely generated \(R\)-module. Let \(P_{1},P_{2},\ldots,P_{t}\) be the \(d\)-dimensional minimal primes in the support of \(M\). Then,_
\[h_{M,I,J,d}(s)=\sum_{j=1}^{t}l_{R_{P_{j}}}(M_{P_{j}})h_{R/P_{j},IR/P_{j},JR/P_{j },d}(s).\]
## 4. Frobenius-Poincare function in the local setting
We prove the existence of Frobenius-Poincare functions in the local setting. Given an ideal \(I\) and a family \(J_{\bullet}\) and a finitely generated \(R\)-module \(M\), set
\[f_{n,M,I,J_{\bullet}}(s)=h_{n,M,I,J_{\bullet}}(s+\frac{1}{q})-h_{n,M,I,J_{ \bullet}}(s).\]
When \(J_{n}=J^{[q]}\), we write \(f_{n,M,I,J}(s)\) for \(f_{n,M,I,J_{\bullet}}(s)\). We drop one or more parameters in \(f_{n,M,I,J_{\bullet}}\) when there is no resulting confusion. For the rest of this article, we denote the imaginary part of a complex number \(y\) by \(\Im y\) and the open lower half complex plane by \(\Omega\), i.e. \(\Omega=\{y\in\mathbb{C}\,|\,\Im y<0\}\).
**Lemma 4.1**.: _Let \((R,\mathfrak{m},k)\) be a local ring of dimension \(d\), \(I,J\) be two \(R\)-ideals such that \(I+J\) is \(\mathfrak{m}\)-primary, and \(M\) be a finitely generated \(R\)-module. Consider the function defined by the infinite series_
\[F_{n,M,I,J}(y):=\sum_{j=0}^{\infty}f_{n,M,I,J}(j/q)e^{-iyj/q}\]
_Then \(F_{n,M,I,J}(y)\) defines a holomorphic function on \(\Omega\). We often drop one or more parameters in \(F_{n,M,I,J}\) when there is no chance of confusion._
Proof.: There is a polynomial \(P\) such that \(f_{n,M}(s)\leq h_{n,M}(s+1)\leq P(s)\) for any \(s\); see Theorem 3.16 and Theorem 3.31, assertion (2). Thus
\[|f_{n,M,I,J}(j/q)e^{-iyj/q}|\leq P(j/q)e^{j\Im y/q}.\]
Since for fixed \(\epsilon>0\) the series \(\sum_{0\leq j<\infty}P(j/q)e^{-j\epsilon/q}\) converges, the series \(\sum_{j=0}^{\infty}f_{n,M,I,J}(j/q)e^{-iyj/q}\) converges uniformly on the region where \(\Im y<-\epsilon\). The limit function is thus holomorphic [1, Thm 1, Chap 5]. Taking the union over all \(\epsilon>0\), we see \(F_{n,M}(y)\) exists and is holomorphic on \(\Omega\).
_Remark 4.2_.: For a big \(p,p^{-1}\) family \(J_{\bullet}\), the analogous \(F_{n,M,I,J_{\bullet}}(y)\) defined using \(f_{n,M,I,J_{\bullet}}\) is entire, since the corresponding sum is a finite sum.
Now we check the convergence of the sequence \((F_{n,M,I,J}(y)/q^{\dim(M)})_{n}\). We will repeatedly use the dominated convergence theorem: if a sequence of measurable functions \(f_{n}\) converges to \(f\) pointwise on a measurable set \(\Sigma\) and there is a measurable function \(g\) such that \(|f_{n}|\leq g\) on \(\Sigma\) for any \(n\) and \(\int_{\Sigma}|g|<\infty\), then \(\int_{\Sigma}|f_{n}-f|\) converges to \(0\); in particular \(\int_{\Sigma}f_{n}\) converges to \(\int_{\Sigma}f\).
**Theorem 4.3**.: _Let \((R,\mathfrak{m},k)\) be a local ring, \(I,J\) be two \(R\)-ideals such that \(I+J\) is \(\mathfrak{m}\)-primary, and \(M\) be a finitely generated \(R\)-module of dimension \(d\). (1) Assume \(J\) is \(\mathfrak{m}\)-primary. Then \(F_{M,I,J}(y)=\lim_{n\to\infty}F_{n,M}(y)/p^{n\dim M}\) exists for all \(y\in\mathbb{C}\). This convergence is uniform on any compact set of \(\mathbb{C}\). If \(h_{M}(s)\) is constant for \(s\geq C\), then \(F_{M,I,J}(y)=\int_{0}^{C}h_{M}(t)iye^{-iyt}dt+h_{M}(C)e^{-iyC}\). (2) Assume \(J\) is not necessarily \(\mathfrak{m}\)-primary. Then for every \(y\in\Omega\), \(F_{n,M}(y)/p^{n\dim M}\) converges to_
\[F_{M,I,J}(y)=\int_{0}^{\infty}h_{M}(t)e^{-iyt}iydt.\]
_Moreover, this convergence is uniform on any compact subset of \(\Omega\) and \(F_{M}(y):=F_{M,I,J}(y)\) is holomorphic on \(\Omega\)._
Proof.:
(1) Since \(J\) is \(\mathfrak{m}\)-primary, \(h_{M}(s)=h_{M}(C)\) for some fixed \(C>0\) and any \(s\geq C\); see Lemma 3.8 and Proposition 3.32. Then,
\[F_{n,M}(y)=\sum_{j=0}^{\infty}f_{n,M}(j/q)e^{-iyj/q}=\sum_{j=0}^{\infty}(h_{n,M}((j+1)/q)-h_{n,M}(j/q))e^{-iyj/q}\]
\[=\sum_{j=0}^{Cq-1}(h_{n,M}((j+1)/q)-h_{n,M}(j/q))e^{-iyj/q}\]
\[=\sum_{j=0}^{Cq-1}h_{n,M}(j/q)(e^{-iy(j-1)/q}-e^{-iyj/q})+h_{n,M}(C)e^{-iy(C-\frac{1}{q})}\]
\[=\sum_{j=0}^{Cq-1}h_{n,M}(j/q)e^{-iyj/q}(e^{iy/q}-1)+h_{n,M}(C)e^{-iy(C-\frac{1}{q})}\]
\[=\int_{0}^{C}h_{n,M}(t)e^{-iy\lceil tq\rceil/q}q(e^{iy/q}-1)dt+h_{n,M}(C)e^{-iy(C-\frac{1}{q})}.\]
Fix a compact subset \(K\) of \(\mathbb{C}\). Given \(\delta>0\), choose \(b>0\) such that for all \(y\in K\), \(t\in\mathbb{R}\) and \(n\in\mathbb{N}\)
\[\int_{0}^{b}(\frac{1}{q^{d}}|h_{n,M}(t)e^{-iy\lceil tq\rceil/q}q(e^{iy/q}-1)|+|h_{M}(t)e^{-iyt}(iy)|)dt\leq\frac{\delta}{2}.\]
We have
\[|\frac{1}{q^{d}}F_{n,M}(y)-\int_{0}^{C}h_{M}(t)e^{-iyt}(iy)dt-h_{M}(C)e^{-iyC}|\]
\[\leq\int_{0}^{C}|h_{n,M,d}(t)e^{-iy\lceil tq\rceil/q}q(e^{iy/q}-1)-h_{M}(t)(iy)e^{-iyt}|dt+|h_{n,M,d}(C)e^{-iy(C-\frac{1}{q})}-h_{M}(C)e^{-iyC}|\]
\[\leq\int_{0}^{b}(|h_{n,M,d}(t)e^{-iy\lceil tq\rceil/q}q(e^{iy/q}-1)|+|h_{M}(t)e^{-iyt}(iy)|)dt\]
\[+\int_{b}^{C}|h_{n,M,d}(t)e^{-iy\lceil tq\rceil/q}q(e^{iy/q}-1)-h_{M}(t)(iy)e^{-iyt}|dt+|h_{n,M,d}(C)e^{-iy(C-\frac{1}{q})}-h_{M}(C)e^{-iyC}|.\]
Moreover for \(y\in K\), there is a constant \(C^{\prime}\) independent of \(n\) such that for all \(t\in[b,C]\)
\[|h_{n,M,d}(t)-h_{M}(t)|\leq C^{\prime}/q\ \text{ and }\ |e^{-iy\lceil tq\rceil/q}q(e^{iy/q}-1)-e^{-iyt}(iy)|\leq C^{\prime}/q.\]
Thus we can choose \(N_{0}\) such that for all \(n\geq N_{0}\) and \(y\in K\),
\[|\frac{1}{q^{d}}F_{n,M}(y)-\int_{0}^{C}h_{M}(t)e^{-iyt}(iy)dt-h_{M}(C)e^{-iyC} |\leq\delta.\]
This proves the desired uniform convergence.
(2) We prove uniform convergence of \(F_{n,M}/q^{\dim(M)}\) to the integral on every compact subset of \(\Omega\); the holomorphicity of \(F_{M}\) is then a consequence of [1, Thm 1, Chap 5]. We have
\[F_{n,M}(y)=\sum_{j=0}^{\infty}f_{n,M}(j/q)e^{-iyj/q}=\sum_{j=0}^{\infty}(h_{n,M}((j+1)/q)-h_{n,M}(j/q))e^{-iyj/q}\]
\[=\sum_{j=0}^{\infty}h_{n,M}(j/q)(e^{-iy(j-1)/q}-e^{-iyj/q})=\sum_{j=0}^{\infty}h_{n,M}(j/q)e^{-iyj/q}(e^{iy/q}-1)\]
\[=\int_{0}^{\infty}h_{n,M}(t)e^{-iy\lceil tq\rceil/q}q(e^{iy/q}-1)dt.\]
The rearrangements leading to the second and third equalities are possible thanks to the absolute convergence implied by Theorem 3.16. Fix any compact \(K\subseteq\Omega\). Using the triangle inequality, we get
\[|h_{n,d}(t)e^{-iy\frac{\lceil tq\rceil}{q}}q(e^{iy/q}-1)-h(t)e^{-iyt}(iy)|\]
\[\leq|h_{n,d}(t)-h(t)||e^{-iy\frac{\lceil tq\rceil}{q}}q(e^{iy/q}-1)|+|h(t)||e^{-iy\frac{\lceil tq\rceil}{q}}-e^{-iyt}||q(e^{iy/q}-1)|+|h(t)||e^{-iyt}||q(e^{iy/q}-1)-iy|\]
\[=|h_{n,d}(t)-h(t)||e^{-iy\frac{\lceil tq\rceil}{q}}q(e^{iy/q}-1)|+|h(t)e^{-iyt}||e^{-iy(\frac{\lceil tq\rceil}{q}-t)}-1||q(e^{iy/q}-1)|+|h(t)e^{-iyt}||q(e^{iy/q}-1)-iy|.\]
It follows from the power series expansion of \(e^{z}\) at zero and the boundedness of \(K\) that there are constants \(C_{1}\), \(C_{2}\) such that for all \(y\in K\), \(t\in\mathbb{R}\) and \(n\in\mathbb{N}\)
\[|q(e^{iy/q}-1)|\leq C_{1}|y|,\quad|q(e^{iy/q}-1)-iy|\leq C_{2}\frac{|y|^{2}}{q},\quad|e^{-iy(\frac{\lceil tq\rceil}{q}-t)}-1|\leq C_{1}|y(\frac{\lceil tq\rceil}{q}-t)|.\]
Choose \(\epsilon>0\) such that \(K\subseteq\{y\in\mathbb{C}\,|\,\Im y<-\epsilon\}\). Using the comparisons above, we get for all \(y\in K\), \(t\in\mathbb{R}\) and \(n\in\mathbb{N}\),
\[|h_{n,d}(t)e^{-iy\frac{\lceil tq\rceil}{q}}q(e^{iy/q}-1)-h(t)e^{-iyt}(iy)|\]
\[\leq|h_{n,d}(t)-h(t)|e^{-\epsilon t}C_{1}|y|+|h(t)e^{-\epsilon t}|C_{1}^{2}|y|^{2}|\frac{\lceil tq\rceil}{q}-t|+|h(t)e^{-\epsilon t}|C_{2}\frac{|y|^{2}}{q}\]
\[\leq|h_{n,d}(t)-h(t)|e^{-\epsilon t}C_{1}|y|+|h(t)e^{-\epsilon t}|C_{1}^{2}\frac{|y|^{2}}{q}+|h(t)e^{-\epsilon t}|C_{2}\frac{|y|^{2}}{q},\]
where the last step uses \(|\frac{\lceil tq\rceil}{q}-t|\leq\frac{1}{q}\).
Taking integral on \(\mathbb{R}_{\geq 0}\), we get for \(y\in K\) and all \(n\in\mathbb{N}\)
\[|\frac{1}{q^{d}}F_{n,M}(y)-F_{M,I,J}(y)|\] \[\leq C_{1}|y|\int_{0}^{\infty}|h_{n,d}(t)-h(t)|e^{-\epsilon t}dt+( C_{1}^{2}+C_{2})\frac{|y|^{2}}{q}\int_{0}^{\infty}|h(t)|e^{-\epsilon t}dt\.\]
Thanks to Theorem 3.16, (2), we can choose a polynomial \(P_{2}\in\mathbb{R}[t]\) such that \(|h_{n,d}(t)|\leq|P_{2}(t)|\) for all \(n\) and \(t\in\mathbb{R}\). Since \(|P_{2}(t)e^{-\epsilon t}|\) is integrable on \(\mathbb{R}_{\geq 0}\), by dominated convergence
\[\lim_{n\to\infty}\int\limits_{0}^{\infty}|h_{n,d}(t)-h(t)|e^{-\epsilon t}dt=0.\]
Using this in the last inequality implies uniform convergence of \(\frac{1}{q^{d}}F_{n,M}(y)\) to \(F_{M,I,J}(y)\) on \(K\).
_Remark 4.4_.: Suppose \(h_{M}(t)\) is constant for \(t\geq C\). Since for \(y\in\Omega\), \(e^{-iyt}\) converges to zero as \(t\) approaches infinity, the two descriptions of \(F_{M}\) in Theorem 4.3 agree on \(\Omega\). When \(J_{\bullet}\) is both big and a \(p,p^{-1}\) family, our argument actually produces a corresponding entire function \(F_{M,I,J_{\bullet}}(y)\).
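As a concrete instance of Theorem 4.3, (1), take \(R=k[[x]]\) and \(I=J=(x)\), so that \(h_{R}(t)=\min\{t,1\}\) (see Example 5.13 below) is constant for \(t\geq C=1\). Integrating by parts,
\[F_{R,I,J}(y)=\int_{0}^{1}t\,iy\,e^{-iyt}dt+e^{-iy}=\frac{1-e^{-iy}}{iy},\]
which is indeed entire (the singularity at \(y=0\) is removable), as predicted when \(J\) is \(\mathfrak{m}\)-primary.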
**Definition 4.5**.: Let \(I,J\) be two ideals in \((R,\mathfrak{m})\) such that \(I+J\) is \(\mathfrak{m}\)-primary. For a finitely generated \(R\)-module \(M\), the function \(F_{M,I,J}(y)\) is called the _Frobenius-Poincare function_ of \((M,I,J)\).
We drop one or more parameters from \(F_{M,I,J}\) when there is no possible source of confusion.
The next result directly follows from Proposition 3.32.
**Corollary 4.6**.: Let \(M,N\) be two \(R\)-modules whose localizations are isomorphic at all \(P\in\operatorname{Assh}R\). Then \(F_{M}(y)=F_{N}(y)\).
Proof.: This holds because \(h_{M}(s)=h_{N}(s)\) by Proposition 3.32.
## 5. Differentiability of \(h\)-function, density function in the local setting
In this section, we discuss the extension of the theory of Hilbert-Kunz density function in the local setting.
**Definition 5.1**.: Let \(I\) be an ideal and \(J_{\bullet}\) be a family of ideals in \((R,\mathfrak{m})\) satisfying **Condition C**. For a finitely generated \(R\)-module \(M\) and \(s\in\mathbb{R}\), recall
\[f_{n,M,I,J_{\bullet}}(s)=h_{n,M,I,J_{\bullet}}(s+\frac{1}{q})-h_{n,M,I,J_{\bullet}}(s)=l(\frac{(I^{\lceil sq\rceil}+J_{n})M}{(I^{\lceil sq\rceil+1}+J_{n})M}).\]
Whenever \(((\frac{1}{p^{n}})^{\dim(M)-1}f_{n,M,I,J_{\bullet}}(s))_{n}\) converges, we call the limit the _density function_ of \((M,I,J_{\bullet})\) at \(s\) and denote the limit by \(f_{M,I,J_{\bullet}}(s)\). Whenever \(f_{M,I,J_{\bullet}}(s)\) exists for all \(s\in\mathbb{R}\), the resulting function \(f_{M,I,J_{\bullet}}\) is called the _density function_ of \((M,I,J_{\bullet})\).
We often drop one or more parameters from \(f_{n,M,I,J_{\bullet}}(s),f_{M,I,J_{\bullet}}(s),f_{M,I,J_{\bullet}}\) whenever those are clear from the context.
In Theorem 5.8, we relate the existence of \(f_{M,I,J_{\bullet}}(s)\) to the differentiability of \(h_{M,I,J_{\bullet}}\) at \(s\), whenever \(h_{M,I,J_{\bullet}}\) exists. We show that \(h_{M,I,J_{\bullet}}\) is always left and right differentiable everywhere on the real line. The new ingredient is our 'convexity technique'. Being Lipschitz continuous, the \(h\)-function is automatically differentiable outside a set of measure zero; our method shows that it is in fact differentiable outside a countable set. Recall:
**Definition 5.2**.: Let \(S\) be a subset of \(\mathbb{R}\). We call a function \(\lambda:S\to\mathbb{R}\) to be _convex_ if for elements of \(S\), \(s_{1}<s_{2}\leq t_{1}<t_{2}\),
\[\frac{\lambda(s_{2})-\lambda(s_{1})}{s_{2}-s_{1}}\geq\frac{\lambda(t_{2})- \lambda(t_{1})}{t_{2}-t_{1}}.\]
Convexity is a notion that appears naturally in mathematical analysis. For references on convex functions, see [20].
Let \(I,J_{\bullet},M\) be as above. Now we lay the groundwork for the construction of the convex function \(\mathcal{H}(s,s_{0})\) in Theorem 5.3. Fix \(\mu\) such that \(I\) is generated by \(\mu\) elements. Set \(M_{q}=M/J_{n}M\) and let \(S\) be the polynomial ring in \(\mu\) variables over \(R/\mathfrak{m}\). Given a compact interval \([a,b]\subseteq(0,\infty)\), thanks to Theorem 3.19 we can choose \(C\) such that for all \(x\in[a,b]\) and \(n\in\mathbb{N}\)
\[l(\frac{I^{\lceil xq\rceil}M_{q}}{I^{\lceil xq\rceil+1}M_{q}})=h_{n}(x+\frac{1}{q})-h_{n}(x)\leq Cq^{\dim M-1}.\]
Recall,
\[l(S_{\lceil xq\rceil})=\binom{\mu+\lceil xq\rceil-1}{\mu-1}=\frac{1}{(\mu-1)!}(\lceil xq\rceil)^{\mu-1}+O(\lceil xq\rceil^{\mu-2}).\]
Fix \(s_{0}\in\mathbb{R}\). Taking cues from these two estimates, for \(s>s_{0}\) we define
\[\mathcal{H}_{n}(s,s_{0})=\sum_{j=\lceil s_{0}q\rceil}^{\lceil sq\rceil-1}q^{\mu-\dim(M)-1}l(I^{j}M_{q}/I^{j+1}M_{q})/l(S_{j}). \tag{5.1}\]
**Theorem 5.3**.: _Let \(I,J_{\bullet}\) in the local ring \((R,\mathfrak{m})\) satisfy **Condition C**, \(M\) be a finitely generated \(R\)-module of Krull dimension \(d\), and \(I\) be generated by a set of \(\mu\) elements. Set \(M_{q}=M/J_{n}M\) and fix \(s_{0}\in\mathbb{R}_{>0}\). Consider the two situations:_
1. \(R\) _is a domain and_ \(M=R\)_._
2. \(J_{n}=J^{[q]}\) _for some ideal_ \(J\) _such that_ \(I+J\) _is_ \(\mathfrak{m}\)_-primary and_ \(M\) _is any finitely generated_ \(R\)_-module._
_Set \(c(s)=\frac{s^{\mu-1}}{(\mu-1)!}\). In the context of (A) or (B) (in either of which \(h_{M,I,J_{\bullet}}\) exists), set_
\[\mathcal{H}(s,s_{0})=h_{M,I,J_{\bullet}}(s)/c(s)-h_{M,I,J_{\bullet}}(s_{0})/c(s_{0})+\int_{s_{0}}^{s}h_{M,I,J_{\bullet}}(t)c^{\prime}(t)/c^{2}(t)dt.\]
1. _On any compact subset of_ \((s_{0},\infty)\)_,_ \(\mathcal{H}_{n}(s,s_{0})\) _uniformly converges to_ \(\mathcal{H}(s,s_{0})\)_._
2. _The function_ \(\mathcal{H}(s,s_{0})\) _is a convex function on_ \((s_{0},\infty)\)_._
Proof.: (1) We have
\[\mathcal{H}_{n}(s,s_{0})=\sum_{j=\lceil s_{0}q\rceil}^{\lceil sq\rceil-1}q^{\mu-d-1}l(I^{j}M_{q}/I^{j+1}M_{q})/l(S_{j})=\sum_{j=\lceil s_{0}q\rceil}^{\lceil sq\rceil-1}q^{\mu-d-1}(l(M_{q}/I^{j+1}M_{q})-l(M_{q}/I^{j}M_{q}))/l(S_{j})\]
\[=q^{\mu-d-1}l(M_{q}/I^{\lceil sq\rceil}M_{q})/l(S_{\lceil sq\rceil-1})-q^{\mu-d-1}l(M_{q}/I^{\lceil s_{0}q\rceil}M_{q})/l(S_{\lceil s_{0}q\rceil})\]
\[\qquad+\sum_{j=\lceil s_{0}q\rceil+1}^{\lceil sq\rceil-1}q^{\mu-d-1}l(M_{q}/I^{j}M_{q})(1/l(S_{j-1})-1/l(S_{j})).\]
Since we are in the context of (A) or (B), \(q^{\mu-d-1}l(M_{q}/I^{\lceil sq\rceil}M_{q})/l(S_{\lceil sq\rceil-1})\) converges to \(h(s)/c(s)\) and \(q^{\mu-d-1}l(M_{q}/I^{\lceil s_{0}q\rceil}M_{q})/l(S_{\lceil s_{0}q\rceil})\) converges to \(h(s_{0})/c(s_{0})\). Also,
\[\sum_{j=\lceil s_{0}q\rceil+1}^{\lceil sq\rceil-1}q^{\mu-d-1}l(M_{q}/I^{j}M_{q})(1/l(S_{j-1})-1/l(S_{j}))=\int_{s_{0}}^{s-1/q}\frac{l(M_{q}/I^{\lceil tq\rceil}M_{q})}{q^{d}}(\frac{1}{l(S_{\lceil tq\rceil-1})}-\frac{1}{l(S_{\lceil tq\rceil})})(q^{\mu})dt.\]
When \(q\) approaches infinity, \(\frac{l(M_{q}/I^{\lceil tq\rceil}M_{q})}{q^{d}}\) converges to \(h_{M}(t)\), and \((\frac{1}{l(S_{\lceil tq\rceil-1})}-\frac{1}{l(S_{\lceil tq\rceil})})(q^{\mu})\) converges to \(c^{\prime}(t)/c^{2}(t)\). Moreover, all these convergences are uniform on any compact subset of \((0,\infty)\). So we get a convergence, uniform in \(s\) on any compact subset of \((s_{0},\infty)\):
\[\int_{s_{0}}^{s-1/q}\frac{l(M_{q}/I^{\lceil tq\rceil}M_{q})}{q^{d}}(\frac{1}{l(S_{\lceil tq\rceil-1})}-\frac{1}{l(S_{\lceil tq\rceil})})(q^{\mu})dt\to\int_{s_{0}}^{s}h(t)c^{\prime}(t)/c^{2}(t)dt.\]
This proves that \(\mathcal{H}_{n}(s,s_{0})\) converges to \(\mathcal{H}(s,s_{0})\) and the convergence is uniform on any compact subset of \((s_{0},\infty)\).
(2) We claim \(\mathcal{H}_{n}\) is convex on \(1/p^{n}\mathbb{Z}\cap(s_{0},\infty)\). To this end, it suffices to show
\[\mathcal{H}_{n}(\frac{i+1}{p^{n}},s_{0})-\mathcal{H}_{n}(\frac{i}{p^{n}},s_{0})\geq\mathcal{H}_{n}(\frac{i+2}{p^{n}},s_{0})-\mathcal{H}_{n}(\frac{i+1}{p^{n}},s_{0}).\]
By definition, this is equivalent to showing
\[l(I^{i}M_{q}/I^{i+1}M_{q})/l(S_{i})\geq l(I^{i+1}M_{q}/I^{i+2}M_{q})/l(S_{i+1}),\]
which follows from Lemma 3.17. This convexity of \(\mathcal{H}_{n}(s,s_{0})\) implies the convexity of the limit function \(\mathcal{H}(s,s_{0})\) on \((s_{0},\infty)\cap\mathbb{Z}[1/p]\). Therefore for \(s_{1}<s_{2}\leq t_{1}<t_{2}\) in \((s_{0},\infty)\cap\mathbb{Z}[1/p]\),
\[\frac{\mathcal{H}(s_{2},s_{0})-\mathcal{H}(s_{1},s_{0})}{s_{2}-s_{1}}\geq\frac{\mathcal{H}(t_{2},s_{0})-\mathcal{H}(t_{1},s_{0})}{t_{2}-t_{1}}.\]
Since \(\mathcal{H}(s,s_{0})\) is continuous on \((s_{0},\infty)\), the map \((s,t)\to(\mathcal{H}(t,s_{0})-\mathcal{H}(s,s_{0}))/(t-s)\) is continuous. Moreover, as \(\mathbb{Z}[1/p]\cap(s_{0},\infty)\) is dense in \((s_{0},\infty)\), the slope inequality defining a convex function (see Definition 5.2) holds for \(\mathcal{H}(s,s_{0})\) at all points of \((s_{0},\infty)\).
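When \(\mu=1\), i.e. when \(I\) is principal, the construction is transparent: \(c(s)\equiv 1\) and \(c^{\prime}\equiv 0\), so \(\mathcal{H}(s,s_{0})=h_{M,I,J_{\bullet}}(s)-h_{M,I,J_{\bullet}}(s_{0})\), and part (2) asserts that \(h_{M,I,J_{\bullet}}\) itself is convex in the sense of Definition 5.2 on \((s_{0},\infty)\); this is the limiting analogue of Example 3.18.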
**Theorem 5.4**.: _With notations set in the statement of Theorem 5.3, set \(\mathcal{H}(s)=\mathcal{H}(s,s_{0})\). Denote the left and right derivative of a function \(\lambda\) at \(s\in\mathbb{R}\) by \(\lambda^{\prime}_{-}(s)\) and \(\lambda^{\prime}_{+}(s)\) respectively. In the context of situation (A) or (B) stated in Theorem 5.3,_
1. _On the interval_ \((s_{0},\infty)\)_, the derivative of_ \(\mathcal{H}\) _exists except for countably many points. The left and right derivative of_ \(\mathcal{H}\) _exists everywhere. The second derivative of_ \(\mathcal{H}\) _exists almost everywhere, i.e. outside a set of Lebesgue measure zero._
2. _The left and right derivatives of_ \(\mathcal{H}\) _are both decreasing in terms of_ \(s\)_. We have_ \(\mathcal{H}^{\prime}_{+}(s)\leq\mathcal{H}^{\prime}_{-}(s)\)_, and if_ \(s_{1}<s_{2}\)_,_ \(\mathcal{H}^{\prime}_{-}(s_{2})\leq\mathcal{H}^{\prime}_{+}(s_{1})\)_._
3. _On the interval_ \((0,\infty)\)_, the derivative of_ \(h\) _exists except for countably many points. The left and right derivative of_ \(h\) _exists everywhere. The second derivative of_ \(h\) _exists almost everywhere._
4. _On_ \((s_{0},\infty)\)_,_ \(h^{\prime}_{+}(s)=\mathcal{H}^{\prime}_{+}(s)c(s)\) _and_ \(h^{\prime}_{-}(s)=\mathcal{H}^{\prime}_{-}(s)c(s)\) _exist, and_ \(h^{\prime}_{+}(s)\leq h^{\prime}_{-}(s)\) _for any_ \(s\in(0,\infty)\)_._
Proof.: (1) and (2) follow from properties of convex functions and the convexity of \(\mathcal{H}\) established in Theorem 5.3, (2).
(3), (4): Recall
\[\mathcal{H}(s,s_{0})=h_{M,I,J_{\bullet}}(s)/c(s)-h_{M,I,J_{\bullet}}(s_{0})/c(s _{0})+\int_{s_{0}}^{s}h_{M,I,J_{\bullet}}(t)c^{\prime}(t)/c^{2}(t)dt.\]
Since in the context of (A) and (B) \(h_{M,I,J_{\bullet}}\) is continuous on \((0,\infty)\), the part of \(\mathcal{H}(s,s_{0})\) given by the integral is always differentiable. So (3) follows from the analogous properties of \(\mathcal{H}(s,s_{0})\) in (1) by varying \(s_{0}\). The formulas in (4) follow from a direct computation. That \(h^{\prime}_{+}(s)\leq h^{\prime}_{-}(s)\) follows from these formulas and (2).
_Remark 5.5_.: Trivedi asks when the Hilbert-Kunz density function of a graded pair \((R,J)\) is \(\dim(R)-2\) times continuously differentiable; see [17, Question 1]. In general the Hilbert-Kunz density function need not be \(\dim(R)-2\) times continuously differentiable; see [14, Example 8.3.2]. Our work shows that the Hilbert-Kunz density function is always differentiable outside a set of measure zero. Indeed, a convex function on an interval is twice differentiable outside a set of measure zero; see [15, Section 1.4]. Thus from Theorem 5.3, it follows that outside a set of measure zero the \(h\)-function is twice differentiable. Now from Theorem 6.7, we conclude that the Hilbert-Kunz density function of a graded domain of dimension at least two is differentiable outside a set of measure zero.
_Remark 5.6_.: The conclusions of Theorem 5.3 and Theorem 5.4 are deduced in the context of situation (A) or (B), because we prove existence and continuity of \(h_{M,I,J_{\bullet}}\) in those two contexts. So even outside the context of (A) or (B) whenever there is an \(h\)-function continuous on \((0,\infty)\), we have a corresponding version of Theorem 5.3 and Theorem 5.4.
We return to the question of existence of \(f_{M,I,J_{\bullet}}(s)\) at a given \(s\in\mathbb{R}\). We compare the limsup and liminf of the sequence defining \(f_{M,I,J_{\bullet}}(s)\) with the corresponding \(h^{\prime}_{+}(s)\) and \(h^{\prime}_{-}(s)\).
**Lemma 5.7**.: _With the notation set in Theorem 5.3, set_
\[D_{n,t}=f_{n,M,I,J_{\bullet}}(t/p^{n})=h_{n,M,I,J_{\bullet}}((t+1)/p^{n})-h_{n,M,I,J_{\bullet}}(t/p^{n}).\]
_In the context of situation (A) or (B),_
\[h^{\prime}_{+}(s)=\lim_{m\to\infty}\lim_{n\to\infty}\frac{\sum\limits_{t=\lceil sp^{m}p^{n}\rceil}^{\lceil sp^{m}p^{n}\rceil+p^{n}-1}D_{m+n,t}}{p^{m(d-1)}p^{nd}}. \tag{1}\]
\[h^{\prime}_{-}(s)=\lim_{m\to\infty}\lim_{n\to\infty}\frac{\sum\limits_{t=\lceil sp^{m}p^{n}\rceil-p^{n}}^{\lceil sp^{m}p^{n}\rceil-1}D_{m+n,t}}{p^{m(d-1)}p^{nd}}. \tag{2}\]
Proof.: (1) Note
\[\sum_{t=\lceil sp^{m}p^{n}\rceil}^{\lceil sp^{m}p^{n}\rceil+p^{n}-1}D_{m+n,t}\]
\[=\sum_{t=\lceil sp^{m}p^{n}\rceil}^{\lceil sp^{m}p^{n}\rceil+p^{n}-1}f_{m+n,M}(t/ p^{m}p^{n})\]
\[=h_{m+n,M}(\lceil sp^{m}p^{n}\rceil/p^{m}p^{n}+1/p^{m})-h_{m+n,M}(\lceil sp^{m}p^ {n}\rceil/p^{m}p^{n})\.\]
Since in the context of (A) or (B), the \(h\)-function exists, the right hand side of the desired equation in (1) is
\[\lim_{m\to\infty}\lim_{n\to\infty}\frac{h_{m+n,M}(\lceil sp^{m}p^{n}\rceil/p^{m}p^{n}+\frac{1}{p^{m}})-h_{m+n,M}(\lceil sp^{m}p^{n}\rceil/p^{m}p^{n})}{p^{m(d-1)}p^{nd}}=\lim_{m\to\infty}\frac{h_{M}(s+1/p^{m})-h_{M}(s)}{1/p^{m}}=h^{\prime}_{+}(s).\]
(2) Note
\[\sum_{t=\lceil sp^{m}p^{n}\rceil-p^{n}}^{\lceil sp^{m}p^{n} \rceil-1}D_{m+n,t}\] \[=\sum_{t=\lceil sp^{m}p^{n}\rceil-p^{n}}^{\lceil sp^{m}p^{n} \rceil-1}f_{m+n,M}(t/p^{m}p^{n})\] \[=h_{m+n,M}(\lceil sp^{m}p^{n}\rceil/p^{m}p^{n})-h_{m+n,M}(\lceil sp ^{m}p^{n}\rceil/p^{m}p^{n}-1/p^{m})\]
Thus the right hand side of the desired equation in (2) is
\[\lim_{m\to\infty}\lim_{n\to\infty}\frac{h_{m+n,M}(\lceil sp^{m}p^{n}\rceil/p^{m}p^{n})-h_{m+n,M}(\lceil sp^{m}p^{n}\rceil/p^{m}p^{n}-1/p^{m})}{p^{m(d-1)}p^{nd}}=\lim_{m\to\infty}\frac{h_{M}(s)-h_{M}(s-1/p^{m})}{1/p^{m}}=h^{\prime}_{-}(s).\]
**Theorem 5.8**.: _With the same notation as in Theorem 5.3, in the context of situation (A) or (B),_
1. _for any_ \(s>0\)_,_ \[h^{\prime}_{+}(s)\leq\varliminf_{n\to\infty}f_{n,M,I,J_{\bullet}}(s)/p^{n(d-1)} \leq\varlimsup_{n\to\infty}f_{n,M,I,J_{\bullet}}(s)/p^{n(d-1)}\leq h^{\prime}_ {-}(s),\] _where_ \(\varliminf\) _and_ \(\varlimsup\) _denote liminf and limsup respectively._
2. _At_ \(s>0\)_, if_ \(h_{M}\) _is differentiable, then_ \(f_{M,I,J_{\bullet}}(s)\)_- the density function of_ \((M,I,J_{\bullet})\) _at_ \(s\) _exists and is equal to_ \(h^{\prime}_{M,I,J_{\bullet}}(s)\)_. If_ \(h_{M}(s)\) _is a_ \(C^{1}\)_-function, then_ \(f_{M}(s)\) _is continuous._
3. _There is a countable subset of_ \((0,\infty)\) _outside which_ \(f_{M,I,J_{\bullet}}(s)\) _exists and is equal to_ \(h^{\prime}_{M,I,J_{\bullet}}(s)\)_._
Proof.: (1) In the proof, we also use the notation set in Lemma 5.7, (1). Set
\[\alpha_{\mu,t}=\binom{\mu+t-1}{\mu-1}.\]
Note \(D_{n,t}=l((I^{t}+J_{n})M/(I^{t+1}+J_{n})M)\). For a fixed \(n\), \(D_{n,t}/\alpha_{\mu,t}\) is a decreasing function of \(t\), thanks to Lemma 3.17. So for \(\lceil sp^{m}p^{n}\rceil\leq t\leq\lceil sp^{m}p^{n}\rceil+p^{n}-1\), \(D_{m+n,t}/\alpha_{\mu,t}\leq D_{m+n,\lceil sp^{m}p^{n}\rceil}/\alpha_{\mu, \lceil sp^{m}p^{n}\rceil}\), so
\[D_{m+n,t} \leq D_{m+n,\lceil sp^{m}p^{n}\rceil}\frac{\alpha_{\mu,t}}{ \alpha_{\mu,\lceil sp^{m}p^{n}\rceil}}\] \[\leq D_{m+n,\lceil sp^{m}p^{n}\rceil}\frac{\alpha_{\mu,\lceil sp ^{m}p^{n}\rceil+p^{n}}}{\alpha_{\mu,\lceil sp^{m}p^{n}\rceil}}\.\]
Also \(\alpha_{\mu,t}\) is a polynomial of degree \(\mu-1\) in \(t\), so
\[\lim_{m\to\infty}\lim_{n\to\infty}\frac{\alpha_{\mu,\lceil sp^{m} p^{n}\rceil+p^{n}}}{\alpha_{\mu,\lceil sp^{m}p^{n}\rceil}} =\lim_{m\to\infty}\lim_{n\to\infty}\frac{(\lceil sp^{m}p^{n} \rceil+p^{n})^{\mu-1}}{\lceil sp^{m}p^{n}\rceil^{\mu-1}}\] \[=\lim_{m\to\infty}\frac{(sp^{m}+1)^{\mu-1}}{(sp^{m})^{\mu-1}}\] \[=1.\]
So
\[h^{\prime}_{+}(s)=\lim_{m\to\infty}\lim_{n\to\infty}\frac{\sum\limits_{t=\lceil sp^{m}p^{n}\rceil}^{\lceil sp^{m}p^{n}\rceil+p^{n}-1}D_{m+n,t}}{p^{m(d-1)}p^{nd}}\leq\varliminf_{m\to\infty}\varliminf_{n\to\infty}\frac{p^{n}D_{m+n,\lceil sp^{m}p^{n}\rceil}}{p^{m(d-1)}p^{nd}}\,\frac{\alpha_{\mu,\lceil sp^{m}p^{n}\rceil+p^{n}}}{\alpha_{\mu,\lceil sp^{m}p^{n}\rceil}}\]
\[=\varliminf_{m\to\infty}\varliminf_{n\to\infty}\frac{p^{n}D_{m+n,\lceil sp^{m}p^{n}\rceil}}{p^{m(d-1)}p^{nd}}=\varliminf_{m\to\infty}\varliminf_{n\to\infty}\frac{D_{m+n,\lceil sp^{m}p^{n}\rceil}}{p^{m(d-1)}p^{n(d-1)}}.\]
For a sequence of real numbers \(\beta_{n}\) and any \(m\), \(\varliminf_{n\to\infty}\beta_{m+n}=\varliminf_{n\to\infty}\beta_{n}\) is independent of \(m\), so \(\varliminf_{m\to\infty}\varliminf_{n\to\infty}\frac{D_{m+n,\lceil sp^{m}p^{n}\rceil}}{p^{m(d-1)}p^{n(d-1)}}=\varliminf_{n\to\infty}\frac{D_{n,\lceil sp^{n}\rceil}}{p^{n(d-1)}}\). Therefore we have
\[h^{\prime}_{+}(s)\leq\varliminf_{n\to\infty}\frac{D_{n,\lceil sp^{n}\rceil} }{p^{n(d-1)}}=\varliminf_{n\to\infty}\frac{f_{n}(s)}{p^{n(d-1)}}.\]
The proof of the last inequality is similar. If \(\lceil sp^{m}p^{n}\rceil-p^{n}\leq t\leq\lceil sp^{m}p^{n}\rceil-1\), then \(D_{m+n,t}/\alpha_{\mu,t}\geq D_{m+n,\lceil sp^{m}p^{n}\rceil}/\alpha_{\mu,\lceil sp^{m}p^{n}\rceil}\), so
\[D_{m+n,t} \geq D_{m+n,\lceil sp^{m}p^{n}\rceil}\frac{\alpha_{\mu,t}}{ \alpha_{\mu,\lceil sp^{m}p^{n}\rceil}}\] \[\geq D_{m+n,\lceil sp^{m}p^{n}\rceil}\frac{\alpha_{\mu,\lceil sp ^{m}p^{n}\rceil-p^{n}}}{\alpha_{\mu,\lceil sp^{m}p^{n}\rceil}}\.\]
Also \(\alpha_{\mu,t}\) is a polynomial of degree \(\mu-1\) in \(t\), so
\[\lim_{m\to\infty}\lim_{n\to\infty}\frac{\alpha_{\mu,\lceil sp^{m}p^{n}\rceil-p^{n}}}{\alpha_{\mu,\lceil sp^{m}p^{n}\rceil}}=\lim_{m\to\infty}\lim_{n\to\infty}\frac{(\lceil sp^{m}p^{n}\rceil-p^{n})^{\mu-1}}{\lceil sp^{m}p^{n}\rceil^{\mu-1}}=\lim_{m\to\infty}\frac{(sp^{m}-1)^{\mu-1}}{(sp^{m})^{\mu-1}}=1.\]
So
\[h^{\prime}_{-}(s)=\lim_{m\to\infty}\lim_{n\to\infty}\frac{\sum\limits_{t=\lceil sp^{m}p^{n}\rceil-p^{n}}^{\lceil sp^{m}p^{n}\rceil-1}D_{m+n,t}}{p^{m(d-1)}p^{nd}}\geq\varlimsup_{m\to\infty}\varlimsup_{n\to\infty}\frac{p^{n}D_{m+n,\lceil sp^{m}p^{n}\rceil}}{p^{m(d-1)}p^{nd}}\,\frac{\alpha_{\mu,\lceil sp^{m}p^{n}\rceil-p^{n}}}{\alpha_{\mu,\lceil sp^{m}p^{n}\rceil}}\]
\[=\varlimsup_{m\to\infty}\varlimsup_{n\to\infty}\frac{D_{m+n,\lceil sp^{m}p^{n}\rceil}}{p^{m(d-1)}p^{n(d-1)}}.\]
For a sequence of real numbers \(\beta_{n}\) and any \(m\), \(\varlimsup_{n\to\infty}\beta_{m+n}=\varlimsup_{n\to\infty}\beta_{n}\) is independent of \(m\), so \(\varlimsup_{m\to\infty}\varlimsup_{n\to\infty}\frac{D_{m+n,\lceil sp^{m}p^{n}\rceil}}{p^{m(d-1)}p^{n(d-1)}}=\varlimsup_{n\to\infty}\frac{D_{n,\lceil sp^{n}\rceil}}{p^{n(d-1)}}\). Therefore we have
\[h^{\prime}_{-}(s)\geq\varlimsup_{n\to\infty}\frac{D_{n,\lceil sp^{n}\rceil}}{p^{n(d-1)}}=\varlimsup_{n\to\infty}\frac{f_{n}(s)}{p^{n(d-1)}}.\]
(2) If \(h_{M}\) is differentiable at \(s\), then \(h^{\prime}_{+}(s)=h^{\prime}_{-}(s)\). Thus (1) implies that \(\lim_{n\to\infty}f_{n,M}(s)/q^{d-1}\) exists and is equal to \(h^{\prime}_{M}(s)\); the rest of (2) is clear.
(3) follows from Theorem 5.4, (3).
_Remark 5.9_.: We prove Theorem 5.8 in the context of situations (A) and (B) defined in Theorem 5.3, which are precisely the contexts in which we prove the existence of \(h_{M,I,J_{\bullet}}\) in this article. Thus when \((R,\mathfrak{m})\) is a domain and \(I,J_{\bullet}\) satisfy **Condition C**, we get a corresponding density function which is well-defined outside a countable subset of \((0,\infty)\). One particular special case, potentially important for its applications to prime characteristic singularity theory, is when \(J_{\bullet}\) is the ideal sequence that defines the \(F\)-signature of \((R,\mathfrak{m})\); see Example 3.10.
When \(J_{n}=J^{[q]}\), Theorem 5.8 yields a Hilbert-Kunz density function of \((I,J)\) well defined outside a countable subset of \((0,\infty)\).
The function \(h_{M,I,J_{\bullet}}\) need not be continuous or differentiable at zero. In Theorem 8.11, we prove that \(h_{R,I,J}\) is continuous at zero if and only if \(\dim R-\dim R/I\geq 1\) and differentiable at zero if and only if \(\dim R-\dim R/I\geq 2\).
**Example 5.10**.: We point out that the \(h\)-function need not be differentiable on \((0,\infty)\). Our example of a non-differentiable \(h\)-function comes from [1]. Fix a regular local domain \((R,\mathfrak{m})\) of dimension \(d\) and a non-zero \(f\in R\). For \(t\in\mathbb{R}\), [1] considers the function \(t\to s(R,f^{t})\), the \(F\)-signature of the pair \((R,f^{t})\), which is shown to be the same as
\[s(R,f^{t})=\lim_{n\to\infty}\frac{1}{q^{d}}l(\frac{R}{\mathfrak{m}^{[p^{n}]}:f^{\lceil tp^{n}\rceil}}).\]
With \(I=(f)\), \(h_{R,I,\mathfrak{m}}(t)=1-s(R,f^{t})\); see [1, section 4]. At \(t=1\), the left hand derivative of \(h_{I}\) is the \(F\)-signature of \(R/f\); see [1, Thm 4.6], while the right hand derivative is zero since \(h(s)=1\) for \(s\geq 1\). So \(h\) is not differentiable at one if and only if the \(F\)-signature of \(R/f\) is non-zero, that is, precisely when \(R/f\) is strongly \(F\)-regular. A concrete example: take \(f=x^{2}+y^{2}+z^{2}\) in \(R=\mathbb{F}_{p}[[x,y,z]]\) with \(p\geq 3\), so that \(R/f\) is strongly \(F\)-regular.
**Example 5.11**.: We point out that the limit defining the density function at a particular \(s\in\mathbb{R}\), i.e. the limit of \(f_{n,M,I,J}(s)/q^{\dim(M)-1}\), may not converge. For example, when \(I=0\) and \(M=R\), then \(f_{n,M,I,J}(0)=l(R/J^{[q]})\); thus \(f_{n,M,I,J}(0)/q^{\dim R}\) converges to \(e_{HK}(J,R)\), a nonzero real number, so \(f_{n,M,I,J}(0)/q^{\dim R-1}\) goes to infinity. This example implies that some assumption is necessary to guarantee the existence of the density function at every point.
**Example 5.12**.: In the definition of the density function, if we replace \(\lceil sq\rceil\) by \(\lfloor sq\rfloor\), then we have more examples where the density function does not exist. We recall Ohta's example mentioned in [17, sec 3], which produces such instances. Let \(R\) be the power series ring \(k[[x_{1},\ldots,x_{d+1}]]\), \(\alpha_{1}\leq\ldots\leq\alpha_{d+1}\) be a sequence of positive integers, \(I=(x_{1}^{\alpha_{1}}\ldots x_{d+1}^{\alpha_{d+1}})\) be a principal monomial ideal, and \(J=(x_{1},\ldots,x_{d+1})\) be the maximal ideal of \(R\). Assume moreover that \(\alpha_{d}<\alpha_{d+1}\) and that \(p\) is coprime to \(\alpha_{d+1}\), and let \(\epsilon_{n}\) be the residue of \(p^{n}\) modulo \(\alpha_{d+1}\). Let \(\tilde{f}\) be the density function defined using \(\lfloor sq\rfloor\); then \(\lim_{n\to\infty}\tilde{f}_{n,R,I,J}/(p^{nd}\epsilon_{n})\) exists and is nonzero, so \(\lim_{n\to\infty}\tilde{f}_{n,R,I,J}/p^{nd}\) exists if and only if \(\epsilon_{n}\) is a constant sequence, which is false in general. In general, \(\epsilon_{n}\) is a periodic function of \(n\) whose period is the order of \(p+\alpha_{d+1}\mathbb{Z}\) in the multiplicative group \((\mathbb{Z}/\alpha_{d+1}\mathbb{Z})^{*}\).
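For instance, if \(p=2\) and \(\alpha_{d+1}=3\), then \(\epsilon_{n}=2^{n}\bmod 3\) alternates \(2,1,2,1,\ldots\), so \(\epsilon_{n}\) is non-constant of period \(2\), the order of \(2\) in \((\mathbb{Z}/3\mathbb{Z})^{*}\).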
**Example 5.13**.: We give an example where the density function exists everywhere although the \(h\)-function is not differentiable everywhere. Note that the resulting density function is not continuous in this case; compare with Theorem 6.4. Let \(M=R=k[[x]]\) be the power series ring and \(I=J=(x)\). Then \(h_{n}(s)=l(R/(I^{\lceil sq\rceil}+J^{[q]}))=\min\{\lceil sq\rceil,q\}\) for \(s\geq 0\). A simple calculation gives \(f_{n}(s)=1\) when \(-1/q<s\leq 1-1/q\) and \(f_{n}(s)=0\) otherwise. So \(f(s)=1\) when \(0\leq s<1\) and \(f(s)=0\) otherwise.
Here \(f_{n}\) converges pointwise but not uniformly. Outside arbitrary neighborhoods of \(0\) and \(1\), however, \(f_{n}\) converges uniformly.
On the other hand, \(h(s)\) is \(0\) when \(s\leq 0\), \(s\) when \(0\leq s\leq 1\), and \(1\) when \(s\geq 1\); in particular it is continuous. We have \(f(s)=h^{\prime}(s)\) when \(s\neq 0,1\); when \(s=0,1\), \(h^{\prime}(s)\) does not exist and \(f(s)=h^{\prime}_{+}(s)\). This leads us to guess that whenever the density function exists at \(s\), it coincides with the right hand derivative \(h^{\prime}_{+}(s)\).
_Remark 5.14_.: Assume \(J_{\bullet}\) is big and \(h_{M,I,J_{\bullet}}\) is differentiable everywhere. Since \(h_{M,I,J_{\bullet}}\) is eventually constant (Lemma 3.8), the resulting density function \(f_{M,I,J_{\bullet}}=h^{\prime}_{M,I,J_{\bullet}}\) is supported on some compact interval \([0,b]\). So the density function must both increase and decrease on \([0,b]\). By Theorem 5.4, \(f_{M,I,J_{\bullet}}(s)=h^{\prime}(s)=\mathcal{H}^{\prime}(s)s^{\mu-1}/(\mu-1)!\), where \(\mathcal{H}^{\prime}\) is decreasing since \(\mathcal{H}\) is convex; so this gives a natural way to represent \(f_{M,I,J_{\bullet}}\) as a product of a decreasing function and the explicit increasing function \(c(s)\). This may help in analyzing the monotonicity of the density function.
## 6. Relation among \(h\), density, Frobenius-Poincare functions
In Section 4 we developed a notion of Frobenius-Poincare function in the local setting. Work of Section 5 gives a notion of Hilbert-Kunz density function in the local setting, at least outside a countable subset of \((0,\infty)\). When \((R,\mathfrak{m})\) is graded, we compare these local notions defined using the \(\mathfrak{m}\)-adic filtration with the classical notion of Frobenius-Poincare function and Hilbert-Kunz density function defined (see Section 2) using the graded structure of the underlying objects.
**Lemma 6.1**.: _Let \((R,\mathfrak{m})\) be a standard graded ring, \(M\) be a finitely generated \(\mathbb{Z}\)-graded module of dimension \(d\), and \(J\) be a homogeneous ideal of finite colength. Set_
\[g_{n,M,J,d-1}(s)=\frac{1}{q^{d-1}}l(\frac{M}{J^{[q]}M})_{\lceil sq\rceil},\quad g_{n,M,J}(s)=l(\frac{M}{J^{[q]}M})_{\lceil sq\rceil}.\]
1. _When_ \(M\) _is generated in degree zero, for any graded submodule_ \(N\subseteq M\ (M/N)_{j}=\mathfrak{m}^{j}(M/N)/\mathfrak{m}^{j+1}(M/N)\)_._
2. _When_ \(M\) _is generated in degree zero,_ \(g_{n,M,J}(s)=l(\frac{M}{J^{[d]}M})_{\lceil sq\rceil}=f_{n,M,\mathfrak{m},J}(s)\)_._
Proof.: Let \(N\) be any graded submodule of \(M\); then \(M/N\) is also generated in degree \(0\), so \((M/N)_{\geq j}=\mathfrak{m}^{j}(M/N)\) and \((M/N)_{j}=\mathfrak{m}^{j}(M/N)/\mathfrak{m}^{j+1}(M/N)\) for any \(j\). This implies \(g_{n,M,J}(s)=f_{n,M,\mathfrak{m},J}(s)\).
**Lemma 6.2**.: _Define an equivalence relation \(\sim\) on graded modules over a standard graded ring \(R\) of positive dimension over a field: we say \(M\sim N\) when there is a homogeneous map \(\phi:M\to N\) such that \(\dim\operatorname{Ker}\phi,\dim\operatorname{Coker}\phi\leq\dim R-1\), and let \(\sim\) also denote the minimal equivalence relation generated by such relations. Then every finitely generated graded \(R\)-module \(M\) is equivalent to a module generated in degree \(0\)._
Proof.: We can choose an element \(c\in R_{1}\) such that \(\dim R/cR\leq\dim R-1\). First, we find a sufficiently large \(n>0\) such that \(M\) is generated in degrees at most \(n\). Then we truncate at degree \(n\) to get \(M_{\geq n}:=\oplus_{j=n}^{\infty}M_{j}\), which is generated in degree \(n\). The module \(M/M_{\geq n}\) is Artinian, so the inclusion \(M_{\geq n}\hookrightarrow M\) shows \(M_{\geq n}\sim M\). The map \(M_{\geq n}\to M_{\geq n}[n]\) given by multiplication by \(c^{n}\) has its kernel and cokernel annihilated by \(c^{n}\), so the kernel and cokernel have dimension less than \(\dim R\). Thus \(M\sim M_{\geq n}\sim M_{\geq n}[n]\). Since \(M_{\geq n}[n]\) is generated in degree zero, we are done.
The next result follows directly from the lemma above and Proposition 3.32.
**Lemma 6.3**.: _Let \((R,\mathfrak{m})\) be standard graded, \(M\) be a finitely generated \(\mathbb{Z}\)-graded \(R\)-module, and \(I\), \(J_{\bullet}\) be homogeneous; assume that the corresponding objects obtained by localizing at \(\mathfrak{m}\) satisfy condition (A) or (B) stated in Theorem 5.3. Then there is a finitely generated \(\mathbb{N}\)-graded \(R\)-module \(M^{\prime}\) generated in degree zero such that \(h_{M,I,J_{\bullet}}=h_{M^{\prime},I,J_{\bullet}}\)._
In the context of (A) or (B) stated in Theorem 5.3 there is an \(h\)-function and an associated density function defined outside a countable subset of \((0,\infty)\). Although the limit defining the density function may not exist at every point of \((0,\infty)\), we can define the integral of \(f\) on any bounded measurable subset \(\Sigma\) of \([0,\infty)\) by integrating the class in \(L^{1}(\Sigma)\) represented by the density function. Fix the maximal subset \(\Lambda\) of \([0,\infty)\) where the density function \(f_{M,I,J_{\bullet}}\) exists. The continuity of \(f_{M}\) at \(s\in\Lambda\) refers to the notion of continuity coming from the subspace topology on the domain \(\Lambda\) inherited from \(\mathbb{R}\). With this understanding, we have the following theorem.
**Theorem 6.4**.: _Let \((R,\mathfrak{m})\), \(I\), \(J_{\bullet}\), \(M\) be as in Theorem 5.3. Then in the context of situation (A) or (B) as stated in Theorem 5.3, we have for any \(s>0\),_
\[h_{M,I,J_{\bullet}}(s)-\lim_{s_{0}\to 0^{+}}h_{M,I,J_{\bullet}}(s_{0})=\int_{0}^ {s}f_{M,I,J_{\bullet}}(t)dt.\]
_Moreover if the density \(f_{M,I,J_{\bullet}}\) exists and is continuous at \(s>0\), then \(h_{M,I,J_{\bullet}}\) is differentiable at \(s\) and \(f_{M}(s)=h_{M}^{\prime}(s)\)._
Proof.: Given \(s>0\), choose \([a,b]\subseteq\mathbb{R}_{>0}\) containing \(s\). For a fixed \(s_{0}\) in \([a,b]\) and \(s>s_{0}\), we have
\[h_{n}(s)-h_{n}(s_{0})=\sum_{j=\lceil s_{0}q\rceil}^{\lceil sq\rceil-1}f_{n}(\frac{j}{q}).\]
Thus
\[\frac{1}{q^{d}}h_{n}(s)-\frac{1}{q^{d}}h_{n}(s_{0})=\int\limits_{s_{0}-\frac{1}{ q}}^{s-\frac{1}{q}}\frac{f_{n}(t)}{q^{d-1}}dt\]
By Theorem 3.19, we can choose a constant \(C\) such that for any \(n\in\mathbb{N}\) and \(t\in[a,b]\),
\[\frac{1}{q^{d-1}}f_{n}(t)\leq C.\]
Thus taking limit as \(n\) approaches infinity and using dominated convergence, we get
\[h_{M,I,J_{\bullet}}(s)-h_{M,I,J_{\bullet}}(s_{0})=\int_{s_{0}}^{s}f_{M,I,J_{ \bullet}}(t)dt.\]
Taking the limit as \(s_{0}\to 0^{+}\), we get the conclusion involving integrals. Note that \(\lim_{s_{0}\to 0^{+}}h_{M,I,J_{\bullet}}(s_{0})\) exists as \(h\) is increasing.
Whenever \(f_{M}\) exists at \(s\) and is continuous at \(s\), the differentiability of \(h_{M}\) at \(s\) and the equality \(h^{\prime}_{M}(s)=f_{M}(s)\) follow from the fundamental theorem of calculus.
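For instance, in the situation of Example 5.13, \(h(s)=\min\{s,1\}\), \(\lim_{s_{0}\to 0^{+}}h(s_{0})=0\) and \(f=\mathbf{1}_{[0,1)}\), and indeed
\[\int_{0}^{s}f(t)dt=\min\{s,1\}=h(s)\qquad\text{for all }s>0.\]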
**Proposition 6.5**.: _Continue with the same notation as in Lemma 6.1, but with \(M\) not necessarily generated in degree zero. Set_
\[\tilde{g}_{n,M,J,d-1}(s)=l(M/J^{[q]}M)_{\lfloor sq\rfloor}/q^{d-1}.\]
_If additionally \(d=\text{dim}(M)\geq 2\), the two limits below exist for all \(s\in\mathbb{R}\):_
\[\tilde{g}_{M,J}(s)=\lim_{n\to\infty}\tilde{g}_{n,M,J,d-1}(s),\,g_{M,J}(s)= \lim_{n\to\infty}g_{n,M,J,d-1}(s).\]
_Moreover \(\tilde{g}_{M,J}(s)=g_{M,J}(s)\)._
Proof.: By [17], \(\tilde{g}_{n,M,J,d-1}(s)\) converges for all \(s\in\mathbb{R}\). For \(s\in\mathbb{Z}[1/p]\), \(g_{n,M,J,d-1}(s)=\tilde{g}_{n,M,J,d-1}(s)\) for \(q\) large; so we conclude convergence of \(g_{n,M,J,d-1}(s)\). When \(s\) is not in \(\mathbb{Z}[1/p]\),
\[g_{n,M,J,d-1}(s)=\tilde{g}_{n,M,J,d-1}(s+\frac{1}{q}).\]
Now for \(d\geq 2\), the uniform convergence of the sequence of functions \(\tilde{g}_{n,M,J,d-1}\) and continuity of \(\tilde{g}_{M,J}\) imply that the sequence \(\tilde{g}_{n,M,J,d-1}(s+\frac{1}{q})\) converges to \(\tilde{g}_{M,J}(s)\).
**Theorem 6.6**.: _Let \((R,\mathfrak{m})\) be standard graded, \(J\) be a homogeneous \(\mathfrak{m}\)-primary ideal, and \(M\) be a finitely generated graded \(R\)-module of dimension \(d\geq 2\). Then_
1. \(h_{M,\mathfrak{m},J}\) _is differentiable on_ \(\mathbb{R}\)_. The density function_ \(f_{M,\mathfrak{m},J}(s)\) _exists everywhere on_ \(\mathbb{R}\) _and is the same as_ \(h^{\prime}_{M,\mathfrak{m},J}(s)\)_._
2. _Moreover_ \(f_{M,\mathfrak{m},J}\) _is the same as Trivedi's Hilbert-Kunz density function_ \(\tilde{g}_{M,J}(s)\)_; see Section_ 2_._
Proof.: (1) It follows from [18, Lemma 3.3] that for \(s\leq 1\), \(h_{M}(s)=e(\mathfrak{m},M)s^{d}/d!\). So \(h_{M}\) is differentiable at zero and the derivative is zero. A direct computation shows that the density function at zero exists and is zero. So we can restrict to \((0,\infty)\). Thanks to Theorem 5.8, (2), it is enough to show that \(h_{M}\) is differentiable on \((0,\infty)\). By using Lemma 6.3, we can assume that \(M\) is generated in degree zero. Thus by Lemma 6.1
\[f_{n,M,\mathfrak{m},J}(s)=g_{n,M,J}(s):=l([\frac{M}{J^{[q]}M}]_{\lceil sq\rceil})\ \text{for all}\ s\in\mathbb{R}.\]
As \(d\geq 2\), by Proposition 6.5, \(g_{n,M,J}(s)/q^{d-1}\) converges to Trivedi's density function \(\tilde{g}_{M,J}(s)\) for all \(s\). Since \(\tilde{g}_{M,J}(s)\) is continuous, \(f_{M,\mathfrak{m},J}(s)\) is also continuous. Now by the last assertion of Theorem 6.4, \(h_{M,\mathfrak{m},J}\) is differentiable on \((0,\infty)\).
(2) Fix an \(M^{\prime}\) which is generated in degree zero and equivalent to \(M\) in the sense of Lemma 6.2. Thanks to Lemma 6.3 and part (1)
\[h_{M}=h_{M^{\prime}}\,,f_{M}=f_{M^{\prime}}.\]
The associativity formula for Trivedi's density function (see [17, Prop 2.14]) implies \(\tilde{g}_{M,J}=\tilde{g}_{M^{\prime},J}\). Since \(M^{\prime}\) is generated in degree zero and has dimension at least two, by Lemma 6.1 and Proposition 6.5, \(\tilde{g}_{M^{\prime},J}=f_{M^{\prime},\mathfrak{m},J}\). Putting these together, we conclude that \(f_{M,\mathfrak{m},J}=\tilde{g}_{M,J}\).
We further strengthen the above theorem by proving it for any homogeneous \(J\), not necessarily of finite colength.
**Theorem 6.7**.: _Let \((R,\mathfrak{m})\) be standard graded, \(J\) be a homogeneous ideal, \(s\in\mathbb{R}\), and \(M\) be a finitely generated graded module of dimension \(d\geq 2\). Set \(\tilde{g}_{n,M,J,d-1}(s)=l(M/J^{[q]}M)_{\lfloor sq\rfloor}/q^{d-1}\). Then:_
1. _The sequence_ \((\tilde{g}_{n,M,J,d-1}(s))_{n}\) _converges uniformly on every compact subset of_ \(\mathbb{R}\)_. The limiting function is continuous._
2. \(h_{M,\mathfrak{m},J}\) _is differentiable and_ \[h^{\prime}_{M,\mathfrak{m},J}(s)=f_{M,\mathfrak{m},J}(s)=\lim_{n\to\infty} \tilde{g}_{n,M,J,d-1}(s).\]
Proof.: (1) For a positive integer \(N\), set \(J^{\prime}=J+\mathfrak{m}^{N+1}\). Then on \([0,N]\), \(\tilde{g}_{n,M,J,d-1}=\tilde{g}_{n,M,J^{\prime},d-1}\). Since \(J^{\prime}\) is \(\mathfrak{m}\)-primary, by [17], \(\tilde{g}_{n,M,J^{\prime},d-1}\) converges uniformly to a continuous function. Thus on \([0,N]\), \(\tilde{g}_{n,M,J,d-1}\) converges uniformly to a continuous function.
(2) Fix a compact interval \([a,b]\subseteq\mathbb{R}\). By Theorem 3.13, (1), we can choose \(t_{0}\) such that for all \(t\geq t_{0}\), \(h_{M,\mathfrak{m},J}=h_{M,\mathfrak{m},J+\mathfrak{m}^{t}}\) on \([a,b]\). Fix an integer \(t\geq t_{0}\) large enough that, as in the argument of part (1), \(\tilde{g}_{n,M,J,d-1}=\tilde{g}_{n,M,J+\mathfrak{m}^{t},d-1}\) on \([a,b]\) for all \(n\). By Theorem 6.6, \(h_{M,\mathfrak{m},J+\mathfrak{m}^{t}}\) is differentiable on \(\mathbb{R}\) with derivative \(\tilde{g}_{M,J+\mathfrak{m}^{t}}\). Thus on \((a,b)\), \(h_{M,\mathfrak{m},J}\) is differentiable with derivative the continuous function \(\tilde{g}_{M,J}\). Since by Theorem 5.8, \(h^{\prime}_{M}=f_{M}\) on \((a,b)\), we are done.
We point out below that, in the graded context, the Frobenius-Poincare functions defined using the underlying grading and using the maximal-ideal-adic filtration coincide. Recall that by \(\Omega\) we denote the open lower half complex plane.
**Proposition 6.8**.: _Let \((R,\mathfrak{m})\) be standard graded, \(M\) an \(\mathbb{N}\)-graded \(R\)-module of dimension \(d\), \(J\) be a homogeneous ideal. Consider the sequence of functions on the open lower half plane_
\[G_{n,M,J}(y)=\sum_{j=0}^{\infty}l([\frac{M}{J^{[q]}M}]_{j})e^{-iyj/q}\]
1. \(\frac{1}{q^{d}}G_{n,M,J}(y)\) _defines a holomorphic function on_ \(\Omega\) _for every_ \(n\)_._
2. _Recall that_ \(F_{M,\mathfrak{m},J}\) _denotes the Frobenius-Poincare function defined in Definition_ 4.5_. The sequence_ \((\frac{1}{q^{d}}G_{n,M,J}(y))_{n}\) _converges to_ \(F_{M,\mathfrak{m},J}(y)\) _on_ \(\Omega\)_._
3. _When_ \(J\) _is_ \(\mathfrak{m}\)_-primary,_ \(G_{n,M,J}(y)/q^{d}\) _converges to_ \(F_{M,\mathfrak{m},J}(y)\) _on_ \(\mathbb{C}\)_._
Proof.: Fix an \(\mathbb{N}\)-graded module \(M^{\prime}\) generated in degree zero and equivalent to \(M\) in the sense of Lemma 6.2.
(3) Since \(J\) is \(\mathfrak{m}\)-primary, \(G_{n}\) is a sum of finitely many entire functions. So \(G_{n}\) is entire. Fix a compact subset \(K\) of \(\mathbb{C}\). By [14, Lemma 3.2.5], we can find a constant \(D\) such that
\[|\frac{1}{q^{d}}G_{n,M,J}(y)-\frac{1}{q^{d}}G_{n,M^{\prime},J}(y)|\leq\frac{D} {q}\,\text{for all}\,n\,\text{and}\,y\in K.\]
Since \(M^{\prime}\) is generated in degree zero, \(F_{n,M^{\prime},\mathfrak{m},J}=G_{n,M^{\prime},J}\). Since \(F_{n,M^{\prime},\mathfrak{m},J}/q^{d}\) uniformly converges to \(F_{M^{\prime},\mathfrak{m},J}\) on \(K\), the last inequality implies that \(\frac{1}{q^{d}}G_{n,M,J}\) converges uniformly to \(F_{M^{\prime},\mathfrak{m},J}\) on \(K\); see Theorem 4.3. Thanks to Lemma 6.3 and Theorem 4.3, \(F_{M^{\prime},\mathfrak{m},J}=F_{M,\mathfrak{m},J}\) on \(\mathbb{C}\).
(1) There is a polynomial \(P\) of degree \(d\) with non-negative coefficients such that
\[l([\frac{M}{J^{[q]}M}]_{j})\leq l(M_{j})\leq P(j).\]
Fix a compact subset \(K\subseteq\Omega\). Choose \(\epsilon>0\) such that \(\Im y<-\epsilon\) for every \(y\in K\). Since
\[\sum_{j=0}^{\infty}\frac{1}{q^{d}}|P(j)|e^{-j\epsilon/q}\]
is convergent, we conclude that the sequence of holomorphic functions
\[(\frac{1}{q^{d}}\sum_{j=0}^{N}l([\frac{M}{J^{[q]}M}]_{j})e^{-iyj/q})_{N}\]
converges uniformly to \(\frac{1}{q^{d}}G_{n,M,J}(y)\) on \(K\). This proves the holomorphicity of \(\frac{1}{q^{d}}G_{n,M,J}\) on \(\Omega\).
(2) When \(d=0\), the conclusion follows from a direct computation. Assume \(d\geq 1\). Since
\[l([\frac{M}{J^{[q]}M}]_{j})=l([\frac{M}{J^{[q]}M}]_{\leq j})-l([\frac{M}{J^{[q ]}M}]_{\leq j-1}),\]
a direct computation using the equation above shows that,
\[\sum_{j=0}^{\infty}l([\frac{M}{J^{[q]}M}]_{j})e^{-iyj/p^{n}}=\sum_{j=0}^{ \infty}l([\frac{M}{J^{[q]}M}]_{\leq j})e^{-iyj/p^{n}}(1-e^{-iy/p^{n}}). \tag{6.1}\]
Since
\[l(\frac{(\mathfrak{m}^{j}+J^{[q]})M}{(\mathfrak{m}^{j+1}+J^{[q]})M})=l([\frac{ M}{(\mathfrak{m}^{j+1}+J^{[q]})M}])-l([\frac{M}{(\mathfrak{m}^{j}+J^{[q]})M}]),\]
a direct computation shows that,
\[\sum_{j=0}^{\infty}l(\frac{(\mathfrak{m}^{j}+J^{[q]})M}{(\mathfrak{m}^{j+1}+J ^{[q]})M})e^{-iyj/p^{n}}=\sum_{j=0}^{\infty}l(\frac{M}{(\mathfrak{m}^{j+1}+J^{ [q]})M})e^{-iyj/p^{n}}(1-e^{-iy/p^{n}}) \tag{6.2}\]
Choose \(a\) such that as an \(R\)-module \(M\) is generated by homogeneous elements of degree at most \(a\). Therefore
\[\mathfrak{m}^{j}M\subseteq M_{\geq j}\subseteq\mathfrak{m}^{j-a}M.\]
So,
\[l(\frac{M}{(\mathfrak{m}^{j+1}+J^{[q]})M})-l([\frac{M}{J^{[q]}M}]_{\leq j}) =l(\frac{M_{\geq j+1}+J^{[q]}M}{\mathfrak{m}^{j+1}M+J^{[q]}M})\] \[\leq l(\frac{\mathfrak{m}^{j+1-a}M+J^{[q]}M}{\mathfrak{m}^{j+1}M+J^{[q]}M})\] \[\leq l(\frac{\mathfrak{m}^{j+1-a}M}{\mathfrak{m}^{j+1}M})\] \[\leq Cj^{d-1},\]
for some \(C\), which is independent of \(q\) and \(j\). Using Equation (6.1), Equation (6.2) and the comparison above, we get that for any \(y\in\Omega\),
\[|\frac{1}{q^{d}}G_{n,M,J}(y)-\frac{1}{q^{d}}F_{n,\mathfrak{m},J}(y)| \leq\sum_{j=0}^{\infty}C\frac{1}{q}(\frac{j}{q})^{d-1}e^{\Im y\,j/q}\,|1-e^{-iy/q}|\] \[=C|1-e^{-iy/q}|\int\limits_{0}^{\infty}(\frac{\lfloor sq\rfloor}{q})^{d-1}e^{\Im y\lfloor sq\rfloor/q}\,ds\] \[\leq C|1-e^{-iy/q}|\int\limits_{0}^{\infty}s^{d-1}e^{\Im y(s-1)}\,ds.\]
Since \(\Im y<0\) for \(y\in\Omega\), the last integral is convergent. It follows from the last chain of inequalities that on a compact subset of \(\Omega\),
\[|\frac{1}{q^{d}}G_{n,M,J}(y)-\frac{1}{q^{d}}F_{n,\mathfrak{m},J}(y)|\]
uniformly converges to zero. This finishes the proof of (2).
## 7. Arithmetic properties
In this section, we record some arithmetic properties of the function we have constructed in the previous sections.
### \(\mathfrak{m}\)-adic continuity
We now prove that the \(h\)-function is continuous with respect to the \(\mathfrak{m}\)-adic topology on the set of ideals of \(R\).
**Theorem 7.1**.: _Let \((I_{t})_{t\in\mathbb{N}}\) and \((J_{t})_{t\in\mathbb{N}}\) be two sequences of ideals such that \(I_{t}+J_{t}\subset\mathfrak{m}^{t}\). Then for any \(s\), \(\lim_{t\to\infty}h_{M,I+I_{t},J+J_{t}}(s)=h_{M,I,J}(s)\). This convergence is uniform with respect to \(s\) on any compact set in \((0,\infty)\)._
Proof.: If \(s\leq 0\), then both sides are \(0\), so there is nothing to prove. Fix \(0<s_{1}<s_{2}<\infty\); it suffices to prove the uniform convergence on \([s_{1},s_{2}]\), which follows from Theorem 3.13 and Theorem 3.20.
The Frobenius-Poincare function also satisfies a similar property:
**Proposition 7.2**.: _Let \(t\in\mathbb{N}\), \(I_{t}\), \(J_{t}\) be two sequences of ideals such that \(I_{t}+J_{t}\subset\mathfrak{m}^{t}\). Then for any \(y\in\Omega\): the open lower half complex plane, \(\lim_{t\to\infty}F_{M,I+I_{t},J+J_{t}}(y)=F_{M,I,J}(y)\). If \(J\) is \(\mathfrak{m}\)-primary, then the above holds for \(y\in\mathbb{C}\). In either case, the convergence is uniform on a compact subset of \(\Omega\) or \(\mathbb{C}\)._
Proof.: Fix a compact subset \(K\) of \(\Omega\). Choose \(\epsilon>0\) such that \(\Im y<-\epsilon\) for all \(y\in K\). Recall from Theorem 3.16 that there is a polynomial \(P\in\mathbb{R}[t]\) such that \(h_{n,M,I,J}(s)\leq P(s)\) for all \(s\in\mathbb{R}\) and all \(n\); so \(h_{M,I+I_{t},J+J_{t}}(s)\leq P(s)\) for all \(s\). Notice \(|P(s)e^{-\epsilon s}|\) is integrable on \(\mathbb{R}_{\geq 0}\) and the sequence \(h_{M,I+I_{t},J+J_{t}}\) converges to \(h_{M,I,J}\); the convergence is uniform on every compact subset of \((0,\infty)\); see Theorem 3.13. Say the absolute values of elements of \(K\) are bounded above by \(D\). Given \(\delta>0\), the observations above allow us to choose an interval \([a,b]\subseteq(0,\infty)\) and \(t_{0}\in\mathbb{N}\) such that,
1. \(2\int_{0}^{a}|P(s)|e^{-\epsilon s}ds+2\int_{b}^{\infty}|P(s)|e^{-\epsilon s}ds \leq\frac{\delta}{2D}\).
2. \(|h_{M,I+I_{t},J+J_{t}}(s)-h_{M,I,J}(s)|\leq\frac{\delta}{2D\int_{a}^{b}e^{-\epsilon s}ds}\) for all \(t\geq t_{0}\) and all \(s\in[a,b]\).
Therefore by using Theorem 4.3, for \(y\in K\) and all \(t\geq t_{0}\)
\[|F_{M,I+I_{t},J+J_{t}}(y)-F_{M,I,J}(y)| \leq\int_{0}^{\infty}|y||h_{M,I+I_{t},J+J_{t}}(s)-h_{M,I,J}(s)|e^{ -\epsilon s}ds\] \[\leq D[2\int_{0}^{a}|P(s)|e^{-\epsilon s}ds+2\int_{b}^{\infty}|P( s)|e^{-\epsilon s}ds\] \[+\int_{a}^{b}|h_{M,I+I_{t},J+J_{t}}(s)-h_{M,I,J}(s)|e^{-\epsilon s }ds]\] \[\leq\delta\.\]
This proves uniform convergence of \((F_{M,I+I_{t},J+J_{t}}(y))_{t}\) to \(F_{M,I,J}(y)\) on every compact subset of \(\Omega\). The assertion for \(\mathfrak{m}\)-primary \(J\) follows from a similar argument.
### Basic properties
Let \(R\) be a local ring, \(t\) be an indeterminate, \(I,J\) be \(\mathfrak{m}\)-primary ideals, \(M\) be a finitely generated \(R\)-module.
**Theorem 7.3**.: _[_10_, Proposition 2.6]_ _Assume \(I,J\) are two \(\mathfrak{m}\)-primary ideals. Then_
1. _If_ \(\dim M<d\)_, then_ \(h_{M,I,J,d}(s)=0\)_._
2. \(h_{M,I,J}\) _is increasing._
3. \(h_{M,I,J}(s)\leq e(I,M)s^{d}/d!\)_._
4. \(h_{M,I,J}(s)\leq e_{HK}(J,M)\)_._
**Theorem 7.4**.: _Items (1) and (2) above remain true if only \(I+J\) is \(\mathfrak{m}\)-primary; (3) remains valid when \(I\) is \(\mathfrak{m}\)-primary, and (4) remains valid when \(J\) is \(\mathfrak{m}\)-primary._
Proof.: By \(\mathfrak{m}\)-adic continuity \(\lim_{t\to\infty}h_{M,I+\mathfrak{m}^{t},J+\mathfrak{m}^{t}}(s)=h_{M,I,J}(s)\) and \(I+\mathfrak{m}^{t}\), \(J+\mathfrak{m}^{t}\) are \(\mathfrak{m}\)-primary. We have:
1. If \(\dim M<d\), then \(h_{M,I+\mathfrak{m}^{t},J+\mathfrak{m}^{t},d}(s)=0\). Letting \(t\to\infty\) gives \(h_{M,I,J,d}(s)=0\).
2. For \(s_{1}<s_{2}\), \(h_{M,I+\mathfrak{m}^{t},J+\mathfrak{m}^{t}}(s_{1})\leq h_{M,I+\mathfrak{m}^{t},J+\mathfrak{m}^{t}}(s_{2})\). Letting \(t\to\infty\) gives \(h_{M,I,J}(s_{1})\leq h_{M,I,J}(s_{2})\).
3. \(h_{M,I,J+\mathfrak{m}^{t}}(s)\leq e(I,M)s^{d}/d!\). Letting \(t\to\infty\), we get \(h_{M,I,J}(s)\leq e(I,M)s^{d}/d!\).
4. \(h_{M,I+\mathfrak{m}^{t},J}(s)\leq e_{HK}(J,M)\). Letting \(t\to\infty\), we get \(h_{M,I,J}(s)\leq e_{HK}(J,M)\).
**Proposition 7.5** (Additivity).: _Let \(0\to M^{\prime}\to M\to M^{\prime\prime}\to 0\) be an exact sequence of modules of dimension at most \(d\). Let \(I,J\) be ideals such that \(I+J\) is \(\mathfrak{m}\)-primary. Recall that the Kronecker delta \(\delta_{a,b}\) is zero if \(a\neq b\) and \(1\) if \(a=b\)._
1. \(\mathcal{F}_{M,I,J}=\delta_{\dim(M),\dim(M^{\prime})}\mathcal{F}_{M^{\prime},I,J}+\delta_{\dim(M),\dim(M^{\prime\prime})}\mathcal{F}_{M^{\prime\prime},I,J}\) _for_ \(\mathcal{F}=h,F\)_;_
2. \(f_{M}(s)=\delta_{\dim(M),\dim(M^{\prime})}f_{M^{\prime}}(s)+\delta_{\dim(M),\dim(M^{\prime\prime})}f_{M^{\prime\prime}}(s)\)_, whenever_ \(h_{M,I,J}\)_,_ \(h_{M^{\prime},I,J}\)_,_ \(h_{M^{\prime\prime},I,J}\) _are all differentiable at_ \(s\)_._
Proof.: (1) When \(\mathcal{F}=h\), this is true by Proposition 3.32. Then Theorem 4.3 implies the statement for \(\mathcal{F}=F\).
(2) follows from Theorem 5.8.
**Corollary 7.6** (Associativity formula).: The \(h\)-function, density function and Frobenius-Poincare function satisfy the associativity formula. To be precise,
1. let \(\mathcal{F}\in\{h,F\}\), then \[\mathcal{F}_{M,d}(s)=\sum_{P\in\operatorname{Spec}(R),\dim R/P=\dim R}\lambda_ {R_{P}}(M_{P})\mathcal{F}_{R/P}(s),\] for all \(s\in\mathbb{R}\).
2. At a point \(s\) where \(h_{R/P}\) is differentiable for all \(P\in\operatorname{Assh}(R)\), the same associativity formula holds for the density function (i.e. \(\mathcal{F}=f\)) at \(s\).
**Theorem 7.7**.: _Let \((R,\mathfrak{m},k)\) be a noetherian local ring of dimension \(d\), \(M\) be a finitely generated module of dimension \(d\), \(I,I^{\prime},J,J^{\prime}\) be \(R\)-ideals such that \(I^{\prime}\subset I,J^{\prime}\subset J\), \(I^{\prime}+J^{\prime}\) is \(\mathfrak{m}\)-primary. Then \(h_{M,I^{\prime},J^{\prime}}(s)\geq h_{M,I,J}(s)\) and equality holds if \(I\subset\bar{I}^{\prime}\) and \(J\subset J^{\prime*}\)._
Proof.: The inequality is clear. Both sides are additive in \(M\), so by the associativity formula we can replace \(M\) with \(R/P\) where \(\dim R/P=d\). The containment hypotheses on the ideals also hold for their images in \(R/P\) for any prime ideal \(P\). So we may assume \(M=R\) and \(R\) is a domain. By the definitions of integral closure and tight closure, we can choose a nonzero \(c\in R\) such that \(cI^{n}\subset I^{\prime n}\) for all \(n\) and \(cJ^{[q]}\subset J^{\prime[q]}\) for all \(q\); thus \((I^{\lceil sq\rceil}+J^{[q]})/(I^{\prime\lceil sq\rceil}+J^{\prime[q]})\) is annihilated by \(c\). So
\[l(\frac{I^{\lceil sq\rceil}+J^{[q]}}{I^{\prime\lceil sq\rceil}+J^{\prime[q]}})\leq l(0:_{\frac{R}{I^{\prime\lceil sq\rceil}+J^{\prime[q]}}}c)=l(\frac{R}{cR+I^{\prime\lceil sq\rceil}+J^{\prime[q]}})\leq Cq^{d-1}\]
The last inequality holds because \(\dim R/cR<\dim R\). This means
\[0\leq h_{n,M,I^{\prime},J^{\prime}}(s)-h_{n,M,I,J}(s)\leq Cq^{d-1}.\]
Dividing by \(q^{d}\) and taking the limit as \(q\to\infty\), we get \(h_{M,I^{\prime},J^{\prime}}(s)=h_{M,I,J}(s)\).
**Theorem 7.8**.: _Let \(n_{0}\in\mathbb{N}\). Then_
\[h_{M,I^{n_{0}},J}(s)=h_{M,I,J}(sn_{0}),\qquad h_{M,I,J^{[p^{n_{0}}]}}(s)=p^{n_{0}d}h_{M,I,J}(s/p^{n_{0}}).\]
Proof.: If \(s\leq 0\) then both sides of the equation are \(0\) and the equality holds. Now we assume \(s>0\). By definition \(h_{n,M,I^{n_{0}},J}(s)=l(M/(I^{n_{0}\lceil sq\rceil}+J^{[q]})M)\). Since \(\lceil sqn_{0}\rceil\leq n_{0}\lceil sq\rceil\leq\lceil sqn_{0}\rceil+n_{0}-1\), we get \(h_{n,M,I,J}(sn_{0})\leq h_{n,M,I^{n_{0}},J}(s)\leq h_{n,M,I,J}(sn_{0}+(n_{0}-1)/q)\). We have \(\lim_{n\to\infty}(h_{n,M,I,J}(sn_{0}+(n_{0}-1)/q)-h_{n,M,I,J}(sn_{0}))/q^{d}=0\) by Theorem 3.20. So
\[\lim_{n\to\infty}h_{n,M,I^{n_{0}},J}(s)/q^{d}=\lim_{n\to\infty}h_{n,M,I,J}(sn_{0})/q^{d},\]
which means \(h_{M,I^{n_{0}},J}(s)=h_{M,I,J}(sn_{0})\). We have \(h_{n,M,I,J^{[p^{n_{0}}]}}(s)=l(M/(I^{\lceil sq\rceil}+J^{[qp^{n_{0}}]})M)=l(M/(I^{\lceil(s/p^{n_{0}})\cdot qp^{n_{0}}\rceil}+J^{[qp^{n_{0}}]})M)\). So
\[\lim_{n\to\infty}\frac{h_{n,M,I,J^{[p^{n_{0}}]}}(s)}{q^{d}}=p^{n_{0}d}\lim_{n\to\infty}\frac{h_{n+n_{0},M,I,J}(s/p^{n_{0}})}{q^{d}p^{n_{0}d}}=p^{n_{0}d}h_{M,I,J}(s/p^{n_{0}}).\]
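As a quick numerical sanity check of the first identity (a toy illustration we add here; the example ring \(\mathbb{F}_{p}[x,y]\), the helper `length`, and the chosen parameters are our assumptions, not part of the paper), one can count the lengths directly:

```python
# Toy check of h_{n,I^{n0},J}(s) vs h_{n,I,J}(n0*s) in R = F_p[x,y] with
# I = J = m = (x,y): l(R/(m^t + m^[q])) = #{(a,b) : a,b < q, a + b < t}.
import math

def length(t, q):
    return sum(max(0, min(t - a, q)) for a in range(q))

q, n0, s = 2 ** 9, 3, 0.45
lhs = length(n0 * math.ceil(s * q), q) / q ** 2  # uses the ideal I^{n0} at s
rhs = length(math.ceil(n0 * s * q), q) / q ** 2  # uses the ideal I at n0*s
print(lhs, rhs)  # the two values agree up to O(1/q), as in the proof above
```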
### Integration and \(h\)-function
Let \(R\) be a local ring of characteristic \(p\), \(R[[t]]\) be a power series ring with indeterminate \(t\). Let \(M\) be a finitely generated \(R\)-module, \(I,J\) be two \(R\)-ideals such that \(I+J\) is \(\mathfrak{m}\)-primary. Let \(M[[t]]=M\otimes_{R}R[[t]]\). We want to express \(h_{M[[t]],R[[t]],(I,t^{\alpha}),(J,t^{\beta})}\) in terms of \(h_{M,R,I,J}\).
**Theorem 7.9**.:
1. _\(h_{M[[t]],R[[t]],(I,t^{\alpha}),(J,t^{\beta})}(s)=\alpha\int_{s-\beta/\alpha}^{s}h_{M,R,I,J}(x)dx\)_
2. _\(h_{M[[t]],R[[t]],(I,t^{\alpha}),J}(s)=\alpha\int_{0}^{s}h_{M,R,I,J}(x)dx\)_
3. _\(h_{M[[t]],R[[t]],I,(J,t^{\beta})}(s)=\beta h_{M,R,I,J}(s)\)_._
Proof.: We will use the convention \(I^{s}=R\) when \(s\leq 0\). To prove the equality we may assume \(s=s_{0}/q_{0}\in\mathbb{Z}[1/p]\) because the functions on both sides are continuous when \(s>0\). Then for \(q\geq q_{0}\), \(sq\) is an integer.
\[h_{n,M[[t]],R[[t]],(I,t^{\alpha}),(J,t^{\beta})}=l(\frac{M[[t]]}{((I,t^{\alpha })^{sq}+(J^{[q]},t^{\beta q}))M[[t]]})\]
The above length is also equal to
\[l(\frac{M[[t]]}{(\sum_{0\leq j\leq sq}I^{sq-j}t^{\alpha j}+(J^{[q]},t^{\beta q} ))M[[t]]})\]
But by the convention, it is also
\[l(M[[t]]/\sum_{0\leq j\leq\infty}I^{sq-j}t^{\alpha j}+(J^{[q]},t^{\beta q})M[[t ]])\]
and because of the existence of the \(t^{\beta q}\) term, it is also equal to
\[l(M[[t]]/(\sum_{0\leq j\leq\lfloor\beta q/\alpha\rfloor}I^{sq-j}t^{\alpha j}+(J^{[q]},t^{\beta q}))M[[t]])\]
Note that the quotient module is nonzero only in \(t\)-degrees at most \(\beta q-1\). So, summing the lengths in each \(t\)-degree, the above length is also equal to the following sum:
\[\sum_{0\leq x\leq\beta q-1}l(M/(J^{[q]}+I^{sq-\lfloor x/\alpha\rfloor})M)\]
Let \(y=\lfloor x/\alpha\rfloor\). Up to adding a term of \(O(q^{d})\), it is equal to
\[\alpha\sum_{0\leq y\leq\lfloor\beta q/\alpha\rfloor}l(M/J^{[q]}+I^{sq-y}M)\]
which is exactly
\[\alpha\sum_{0\leq y\leq\lfloor\beta q/\alpha\rfloor}h_{n,M,I,J}(s-y/q)\]
\[=\alpha q\int_{s-\lfloor\beta q/\alpha\rfloor/q-1/q}^{s}h_{n,M,I,J}(x)dx\]
Now we divide by \(q^{d+1}\) and take the limit; the \(O(q^{d})\) term disappears, and we are left with
\[=\alpha\int_{s-\beta/\alpha}^{s}h_{M,I,J}(x)dx.\]
Since the equation
\[h_{M[[t]],R[[t]],(I,t^{\alpha}),(J,t^{\beta})}(s)=\alpha\int_{s-\beta/\alpha}^{s}h_{M,R,I,J}(x)dx\]
is true on \(\mathbb{Z}[1/p]\) and both sides are continuous in \(s\), they are equal on all of \(\mathbb{R}\). The remaining two equations are obtained by letting \(\alpha\) or \(\beta\) go to infinity and using the \(\mathfrak{m}\)-adic continuity proven in Theorem 3.13.
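As a quick sanity check of (1) (a toy verification we add here, not part of the original argument), take \(R=M=k\) and \(I=J=(0)\), so that \(h_{k}\) is the indicator function of \((0,\infty)\). For \(k[[t]]\) with the ideals \((t^{\alpha})\) and \((t^{\beta})\), a direct count gives, for \(s>0\),

\[h_{k[[t]],k[[t]],(t^{\alpha}),(t^{\beta})}(s)=\lim_{n\to\infty}\frac{l(k[[t]]/(t^{\alpha\lceil sq\rceil}+t^{\beta q}))}{q}=\lim_{n\to\infty}\frac{\min(\alpha\lceil sq\rceil,\beta q)}{q}=\min(\alpha s,\beta),\]

which agrees with \(\alpha\int_{s-\beta/\alpha}^{s}h_{k}(x)dx=\alpha\cdot(s-\max(0,s-\beta/\alpha))=\min(\alpha s,\beta)\).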
### Ring extension
**Proposition 7.10**.: _Let \((R,\mathfrak{m})\to(S,\mathfrak{n})\) be a local map such that \(\mathfrak{m}S\) is \(\mathfrak{n}\)-primary and \(\dim R=\dim S\). Then_
\[h_{M\otimes_{R}S,S,IS,JS}(s)\leq l_{S}(S/\mathfrak{m}S)h_{M,R,I,J}(s).\]
_The equality holds when \(S\) is flat over \(R\)._
Proof.: For any \(\mathfrak{m}\)-primary ideal \(\mathfrak{a}\), we have \(l_{S}(M\otimes_{R}S/\mathfrak{a}(M\otimes_{R}S))\leq l_{R}(M/\mathfrak{a}M)\,l_{S}(S/\mathfrak{m}S)\). This means \(h_{n,M\otimes_{R}S,S,IS,JS}(s)\leq l_{S}(S/\mathfrak{m}S)h_{n,M,R,I,J}(s)\). All these inequalities become equalities if \(S\) is flat over \(R\).
## 8. Head and Tail of the \(h\)-function
In this section, we discuss the behaviour of \(h(s)\) near zero and for \(s\) large enough. The regions near zero and away from zero where the \(h\)-function often shows interesting behaviour are marked by two already known invariants, namely the \(F\)-imbus and the \(F\)-threshold. The \(F\)-threshold is a well-known numerical invariant in characteristic \(p\) which compares ordinary powers with Frobenius powers; it was defined as a limsup in [10] and [11], and is shown to be a limit in [13]. The \(F\)-imbus is less known; it is defined in [12].
**Definition 8.1**.: Let \(R\) be a ring of characteristic \(p>0\) which is not necessarily local, and let \(I,J\) be ideals of \(R\). Define
\[c_{I}^{J}(n)=\sup\{t\in\mathbb{N}:I^{t}\nsubseteq J^{[p^{n}]}\}\] \[c^{J}(I)=\lim_{n\to\infty}\frac{\sup\{t\in\mathbb{N}:I^{t}\nsubseteq J^{[p^{n}]}\}}{p^{n}}\] \[b_{I}^{J}(n)=\inf\{t\in\mathbb{N}:J^{[p^{n}]}\nsubseteq I^{t}\}\] \[b^{J}(I)=\lim_{n\to\infty}\frac{\inf\{t\in\mathbb{N}:J^{[p^{n}]} \nsubseteq I^{t}\}}{p^{n}}\]
The number \(c^{J}(I)\) is called the \(F\)-threshold of \(I\) with respect to \(J\), and the number \(b^{J}(I)\) is called the \(F\)-imbus of \(I\) with respect to \(J\). The following properties are well known; see, for example, [12, Lemma 3.2].
**Lemma 8.2**.: _Let \(R\) be a ring of characteristic \(p>0\), and let \(I,J\) be proper ideals of \(R\)._
1. _For any_ \(I,J\)_, any limit above either exists or goes to infinity._
2. _If_ \(I\) _is contained in the Jacobson radical of_ \(R\)_,_ \(I\nsubseteq nil(R)\)_, then_ \(b^{J}(I)\leq c^{J}(I)\)_._
3. _If_ \(I\nsubseteq\sqrt{J}\) _then_ \(c^{J}(I)=\infty\)_._
4. _If_ \(I\subset\sqrt{J}\) _then_ \(0\leq c^{J}(I)<\infty\)_._
5. _If_ \(J\nsubseteq\sqrt{I}\) _then_ \(b^{J}(I)=0\)_._
6. _If_ \(J\subset\sqrt{I}\) _then_ \(0<b^{J}(I)\leq\infty\)_._
7. _If_ \(I\subset Rad(R)\)_,_ \(I\nsubseteq nil(R)\)_,_ \(I\subset\sqrt{J}\)_,_ \(J\subset\sqrt{I}\)_, then_ \(0<b^{J}(I)\leq c^{J}(I)<\infty\)_._
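The invariants of Definition 8.1 can be computed directly in small monomial examples. The following sketch is our illustration, not part of the paper: it assumes monomial ideals in \(\mathbb{F}_{p}[x,y]\), where ideal membership reduces to divisibility of exponent vectors, and all helper names are ours. For \(I=J=\mathfrak{m}=(x,y)\) it exhibits \(c_{I}^{J}(n)/p^{n}\to 2\) and \(b_{I}^{J}(n)/p^{n}\to 1\), consistent with Lemma 8.2, (7).

```python
# Illustrative computation for monomial ideals in F_p[x,y].
def power_gens(gens, t):
    """Exponent vectors spanning I^t: all sums of t generators."""
    out = {(0, 0)}
    for _ in range(t):
        out = {(u[0] + g[0], u[1] + g[1]) for u in out for g in gens}
    return out

def frobenius(gens, q):
    """Generators of the Frobenius power J^[q]."""
    return [(q * g[0], q * g[1]) for g in gens]

def contained(A, B):
    """A <= B for monomial ideals: every generator of A divisible by one of B."""
    return all(any(u[0] >= v[0] and u[1] >= v[1] for v in B) for u in A)

def c_n(I, J, p, n):
    """c_I^J(n) = sup{t : I^t not in J^[p^n]}; containment is monotone in t."""
    q, t = p ** n, 1
    while not contained(power_gens(I, t), frobenius(J, q)):
        t += 1
    return t - 1

def b_n(I, J, p, n):
    """b_I^J(n) = inf{t : J^[p^n] not in I^t}."""
    q, t = p ** n, 1
    while contained(frobenius(J, q), power_gens(I, t)):
        t += 1
    return t

p, m = 2, [(1, 0), (0, 1)]  # the maximal ideal m = (x, y)
for n in range(1, 6):
    print(n, c_n(m, m, p, n) / p ** n, b_n(m, m, p, n) / p ** n)
# c_n/p^n = 2 - 2/p^n -> c^m(m) = 2 = dim R, while b_n/p^n -> b^m(m) = 1.
```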
**Lemma 8.3**.: _Let \((R,\mathfrak{m})\) be a local ring of dimension \(d\) and characteristic \(p\), let \(I,J\) be two proper ideals of \(R\), and let \(M\) be a finitely generated \(R\)-module._
1. _If_ \(I\) _is_ \(\mathfrak{m}\)_-primary, then_ \(b^{J}(I)>0\) _and for_ \(s\leq b^{J}(I)\)_,_ \(h_{M}(s)=\frac{s^{d}}{d!}e(I,M)\)_._
2. _If_ \(J\) _is_ \(\mathfrak{m}\)_-primary, then_ \(c^{J}(I)<\infty\) _and for_ \(s\geq c^{J}(I)\)_,_ \(h_{M}(s)=e_{HK}(J,M)\)_._
Proof.: The above lemma is a generalization of Lemma 3.3 of [10]. The proof is identical, since it only uses the containment relations, which do not depend on whether \(I,J\) are \(\mathfrak{m}\)-primary. If \(I\) is \(\mathfrak{m}\)-primary, then \(J\subset\sqrt{I}\), so \(b^{J}(I)>0\); if \(J\) is \(\mathfrak{m}\)-primary, then \(I\subset\sqrt{J}\), so \(c^{J}(I)<\infty\).
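The head and tail behaviour described in Lemma 8.3 can be seen concretely in a toy example (our illustration; the ring, ideals, and normalizations below are our assumptions, not part of the paper):

```python
# Toy profile of h_n(s)/q^d for R = F_p[x,y], I = J = m = (x,y):
# l(R/(m^t + m^[q])) counts monomials x^a y^b with a, b < q and a + b < t.
import math

def h_n(s, q):
    t = math.ceil(s * q)
    return sum(1 for a in range(q) for b in range(q) if a + b < t)

q = 2 ** 7
for s in [0.5, 1.0, 1.5, 2.0, 2.5]:
    print(s, h_n(s, q) / q ** 2)
# The values approach e(I,R)s^d/d! = s^2/2 for s <= 1 = b^J(I), and the
# plateau e_HK(J,R) = 1 for s >= 2 = c^J(I), matching Lemma 8.3.
```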
### Tail of the \(h\)-function: \(F\)-threshold, minimal stable point and maximal support
Let \((R,\mathfrak{m})\) be a local ring of characteristic \(p>0\) and let \(I,J\) be \(R\)-ideals with \(J\) \(\mathfrak{m}\)-primary. By Lemma 8.3, (2), the \(h\)-function becomes the constant \(e_{HK}(J,M)\) for \(s\gg 0\). Since \(h(s)\) is increasing, \(h_{M}(s)\leq e_{HK}(J,M)\) for every \(s\), and there is a minimal point after which \(h_{M,I,J}(s)\) becomes constant. Define
\[\alpha_{M,I,J}=\sup\{s|h_{M,I,J}(s)\neq e_{HK}(J,M)\}=\min\{s|h_{M,I,J}(s)=e_{ HK}(J,M)\}.\]
We relate \(\alpha_{R,I,J}\) to other seemingly unrelated invariants of \((I,J)\).
**Definition 8.4**.: Let \((R,\mathfrak{m},k)\) be a local ring of characteristic \(p>0\) and let \(I,J\) be two \(R\)-ideals with \(I\subset\sqrt{J}\). Let
\[r_{I}^{J}(n)=\max\{t\in\mathbb{N}|I^{t}\nsubseteq(J^{[p^{n}]})^{*}\},\]
where \((J^{[p^{n}]})^{*}\) denotes the tight closure of \(J^{[p^{n}]}\); see Definition 2.5.
\[r^{J}(I)^{+}=\overline{\lim}_{n\to\infty}\frac{r_{I}^{J}(n)}{p^{n}}.\]
\[r^{J}(I)^{-}=\underline{\lim}_{n\to\infty}\frac{r_{I}^{J}(n)}{p^{n}}.\]
Under mild hypotheses, in Theorem 8.6, we show that \(r^{J}(I)^{+}=r^{J}(I)^{-}=\alpha_{R,I,J}\).
**Lemma 8.5**.: _Let \((R,\mathfrak{m},k)\) be a reduced \(d\)-dimensional local ring of characteristic \(p>0\), \(I,J\) be two \(R\)-ideals. Then \(e_{HK}(J,R)=\lim_{n\to\infty}l(R/(J^{[q]})^{*})/q^{d}\)._
Proof.: It suffices to show \(\lim_{n\to\infty}l((J^{[q]})^{*}/J^{[q]})/q^{d}=0\). By assumption \(R\) is reduced, \(F\)-finite. So there is a test element \(c\in R\), which is in particular not contained in any minimal prime of \(R\); see [11, sec 6]. Since \(c(J^{[q]})^{*}\subseteq J^{[q]}\) for all \(n\), we have \(l((J^{[q]})^{*}/J^{[q]})\leq l(0_{R/J^{[q]}}:c)=l(R/cR+J^{[q]})\leq Cq^{d-1}\) for some constant \(C\), so \(\lim_{n\to\infty}l((J^{[q]})^{*}/J^{[q]})/q^{d}=0\).
**Theorem 8.6**.: _Let \((R,\mathfrak{m},k)\) be a reduced formally equidimensional ring4 of characteristic \(p>0\), \(I\) be an \(R\)-ideal, \(J\) be an \(\mathfrak{m}\)-primary \(R\)-ideal. Then \(r^{J}(I)^{+}=r^{J}(I)^{-}=\alpha_{R,I,J}\). In particular, \(r^{J}(I)=\lim_{n\to\infty}r_{I}^{J}(n)/p^{n}\) exists._
Footnote 4: i.e. the minimal primes of \(\hat{R}\) have the same dimension
Proof.: Obviously \(r^{J}(I)^{+}\geq r^{J}(I)^{-}\), so it suffices to prove \(r^{J}(I)^{+}\leq\alpha_{R,I,J}\leq r^{J}(I)^{-}\). Since \(\mathbb{Z}[1/p]\) is dense in \(\mathbb{R}\), it suffices to prove:
1. For \(x\in\mathbb{Z}[1/p]\), if \(x>r^{J}(I)^{-}\), then \(x\geq\alpha_{R,I,J}\);
2. For \(x\in\mathbb{Z}[1/p]\), if \(x<r^{J}(I)^{+}\), then \(x\leq\alpha_{R,I,J}\).
(1): If \(x>r^{J}(I)^{-}\), then there is an infinite sequence \((n_{i})_{i}\) such that \(xp^{n_{i}}>r_{I}^{J}(n_{i})\) and \(xp^{n_{i}}\) is an integer for all \(i\). By definition of \(r_{I}^{J}(n)\), \(I^{xp^{n_{i}}}\subset(J^{[p^{n_{i}}]})^{*}\). So
\[h_{R,I,J}(x)=\lim_{i\to\infty}l(R/(I^{xp^{n_{i}}}+(J^{[p^{n_{i}}]})^{*}))/p^{n_{i}d}=\lim_{i\to\infty}l(R/(J^{[p^{n_{i}}]})^{*})/p^{n_{i}d}=e_{HK}(J,R).\]
So \(x\geq\alpha_{R,I,J}\).
(2): If \(x<r^{J}(I)^{+}\), then there is an integer \(n_{0}\) such that \(xp^{n_{0}}\leq r_{I}^{J}(n_{0})\) and \(xp^{n_{0}}\) is an integer. Let \(q_{0}=p^{n_{0}}\). By definition of \(r_{I}^{J}(n)\), \(I^{xq_{0}}\nsubseteq(J^{[q_{0}]})^{*}\). Choose \(f\in I^{xq_{0}}\backslash(J^{[q_{0}]})^{*}\).
Let \(\tilde{J}=J^{[q_{0}]}+fR\); then \(e_{HK}(\tilde{J},R)<e_{HK}(J^{[q_{0}]},R)\); see [11, Thm 5.5], [12, Thm 8.17]. Now fix an \(s<xq_{0}\). Then for any \(q=p^{n}\), \(sq<xqq_{0}\). Since \(f\in I^{xq_{0}}\), we get \(f^{q}\in I^{xqq_{0}}\subseteq I^{\lceil sq\rceil}\). So
\[I^{\lceil sq\rceil}+(J^{[q_{0}]}+fR)^{[q]}=I^{\lceil sq\rceil}+(J^{[q_{0}]})^{[q]}.\]
This means \(h_{R,I,\tilde{J}}(s)=h_{R,I,J^{[q_{0}]}}(s)\). So for \(s<xq_{0}\), \(h_{R,I,J^{[q_{0}]}}(s)=h_{R,I,\tilde{J}}(s)\leq e_{HK}(\tilde{J},R)<e_{HK}(J^{[q_{0}]},R)\). This means \(\alpha_{R,I,J^{[q_{0}]}}\geq xq_{0}\). By Theorem 7.8, \(h_{R,I,J^{[q_{0}]}}(s)=q_{0}^{d}h_{R,I,J}(s/q_{0})\), so \(\alpha_{R,I,J}=\frac{\alpha_{R,I,J^{[q_{0}]}}}{q_{0}}\geq x\).
Since \(h_{M}(s)\) is the integral of \(f_{M}(s)\), the minimal stable point of \(h_{M}\) is the supremum of the support of \(f_{M}\). Precisely:
**Corollary 8.7**.: Let \((R,\mathfrak{m},k)\) be a local ring of characteristic \(p>0\), \(I\) be an \(R\)-ideal, \(J\) be an \(\mathfrak{m}\)-primary \(R\)-ideal. Then \(\alpha_{R,I,J}=\sup\{s\,|\,f_{R,I,J}(s)\,\text{exists and is nonzero}\}\). Moreover, for \(s>\alpha_{R,I,J}\), \(f_{R,I,J}(s)\) is zero.
Proof.: For \(s>\alpha_{R,I,J}\), \(h_{I,J}(s)\) is constant, so by Theorem 5.8, \(f_{I,J}\) exists and is zero there. Since \(h_{I,J}\) is the integral of the density function (Theorem 6.4) and \(h\) is non-constant and increasing on \((a,\alpha_{R,I,J})\) for any \(0<a<\alpha_{R,I,J}\), \(f_{I,J}\) has to be nonzero on a set of nonzero measure in \((a,\alpha_{R,I,J})\).
_Remark 8.8_.: Recall from Theorem 6.7 that for standard graded \((R,\mathfrak{m})\) of Krull dimension at least two and a finite colength homogeneous ideal \(J\), Trivedi's density function \(\tilde{g}_{R,J}\) coincides with \(f_{R,\mathfrak{m},J}\) and both are continuous. So Theorem 8.6 gives a precise description of the support of \(\tilde{g}_{R,J}\). Thus Theorem 8.6 and the theorem below extend [13, Thm 4.9], where \(\alpha_{R,J}\) is shown to coincide with the \(F\)-threshold \(c^{J}(\mathfrak{m})\) under suitable hypotheses.
**Theorem 8.9**.: _Let \((R,\mathfrak{m},k)\) be a local ring of characteristic \(p>0\), \(I\) be an \(R\)-ideal, \(J\) be an \(\mathfrak{m}\)-primary \(R\)-ideal. Then \(c^{J}(I)=r^{J}(I)\) is true under either of the assumptions below:_
1. _There exists a sequence of positive numbers_ \(r^{\prime}_{n}\) _such that_ \(I^{r^{\prime}_{n}}\subset J^{[q]}:(J^{[q]})^{*}\) _for infinitely many_ \(q\gg 0\)_, and_ \(\lim_{n}r^{\prime}_{n}/p^{n}=0\)_._
2. _There exists a constant_ \(n_{0}\) _such that_ \(I^{n_{0}}\subset J^{[q]}:(J^{[q]})^{*}\) _for infinitely many_ \(q\gg 0\)_._
3. \(R\) _is_ \(F\)_-rational (see [12], [13]), i.e., the tight closure of every parameter ideal coincides with the ideal, and_ \(J\) _is a parameter ideal._
4. \(I\subset\sqrt{\tau(R)}\)_, where_ \(\tau(R)=\cap_{\mathfrak{a}\subset R}\mathfrak{a}:\mathfrak{a}^{*}\) _is the test ideal of_ \(R\)_; see [12, Definition 8.22, Proposition 8.23] for details on the test ideal._
5. _(Theorem 4.9, [13])_ \(R\) _is strongly_ \(F\)_-regular on the punctured spectrum._
Proof.:
1. By definition \(r^{J}_{I}(n)\leq c^{J}_{I}(n)\), and the condition implies \(c^{J}_{I}(n)\leq r^{J}_{I}(n)+r^{\prime}_{n}\), so \(\lim_{n}(c^{J}_{I}(n)-r^{J}_{I}(n))/p^{n}=0\) and \(c^{J}(I)=r^{J}(I)\).
2. By (1) and the fact that \(\lim_{n}n_{0}/p^{n}=0\).
3. If \(J\) is a parameter ideal, so is \(J^{[q]}\). Since \(R\) is \(F\)-rational, \(J^{[q]}:(J^{[q]})^{*}=R\) for any \(q\), so \(n_{0}=1\) satisfies the assumption of (2).
4. There exists an \(n_{0}\) such that \(I^{n_{0}}\subset\tau(R)\subset\cap_{q}J^{[q]}:(J^{[q]})^{*}\), and this \(n_{0}\) satisfies the assumption of (2).
5. In this case \(\tau(R)\) is either \(\mathfrak{m}\)-primary or the unit ideal, so \(I\subset\sqrt{\tau(R)}\) always holds, and we conclude by (4).
### Head of the \(h\)-function: order of \(h_{M}\) at 0 and Hilbert-Kunz multiplicity of quotient rings
So far we have proven continuity of the \(h\)-function on \(\mathbb{R}_{>0}\); see Theorem 3.20, Theorem 3.31. In this section we determine when \(h_{M,I,J}\) is continuous at \(s=0\); see Theorem 8.13. In Theorem 8.11, we determine the order of vanishing of \(h\)-functions near the origin and show that the asymptotic behaviour of \(h_{I,J}\) near the origin captures other numerical invariants of \((R,I,J)\). A major intermediate step involved in proving Theorem 8.11 is Theorem 8.10, which boils down to proving commutation of the order of a double limit. We lay the groundwork for that.
Let \((R,\mathfrak{m},k)\) be a local ring of characteristic \(p>0\), \(I,J\) be two \(R\)-ideals such that \(I+J\) is \(\mathfrak{m}\)-primary. Let \(d=\dim R\), \(d^{\prime}=\dim R/I\). For a positive integer \(s_{0}\), consider the sequence of real numbers:
\[\Gamma_{s_{0},m,n}=\frac{l(R/I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})}{p^{nd}p^{md^{ \prime}}s_{0}^{d-d^{\prime}}}.\]
\[\lim_{n\to\infty}\Gamma_{s_{0},m,n} =\frac{h_{R}(s_{0}/p^{m})}{(s_{0}/p^{m})^{d-d^{\prime}}}. \tag{8.1}\]
\[\lim_{m\to\infty}\Gamma_{s_{0},m,n} =\frac{e_{HK}(J^{[p^{n}]},R/I^{s_{0}p^{n}})}{p^{nd}s_{0}^{d-d^{ \prime}}}\] \[=\frac{e_{HK}(J,R/I^{s_{0}p^{n}})}{(s_{0}p^{n})^{d-d^{\prime}}}\] \[=\frac{1}{(s_{0}p^{n})^{d-d^{\prime}}}\sum_{P\in\operatorname{ Assh}(R/I)}e_{HK}(J,R/P)l_{R_{P}}(R_{P}/I^{s_{0}p^{n}}R_{P})\.\]
For \(P\in\operatorname{Assh}(R/I)\), we have \(ht(P)\leq\dim R-\dim R/P=\dim R-\dim R/I=d-d^{\prime}\). So
\[\lim_{n\to\infty}\lim_{m\to\infty}\Gamma_{s_{0},m,n} =\lim_{n\to\infty}\frac{1}{(s_{0}p^{n})^{d-d^{\prime}}}\sum_{P\in \operatorname{Assh}(R/I)}e_{HK}(J,R/P)l_{R_{P}}(R_{P}/I^{s_{0}p^{n}}R_{P})\] \[=\frac{1}{(d-d^{\prime})!}\sum_{P\in\operatorname{Assh}(R/I)}e_{ HK}(J,R/P)e(I,R_{P})\.\]
Since \(R\) is an \(F\)-finite domain and hence an excellent domain (see [10]), for all \(P\in\operatorname{Assh}(R/I)\), \(ht(P)=d-d^{\prime}\). So the above quantity is
\[\frac{1}{(d-d^{\prime})!}\sum_{P\in\operatorname{Assh}(R/I)}e_{HK}(J,R/P)e(IR _{P},R_{P}).\]
When \(R\) is a Cohen-Macaulay domain and \(I\) is generated by part of a system of parameters, the above quantity recovers the Hilbert-Kunz multiplicity \(e_{HK}(J,R/I)\), as
\[\sum_{P\in\operatorname{Assh}(R/I)}e_{HK}(J,R/P)e(IR_{P},R_{P})\] \[=\sum_{P\in\operatorname{Assh}(R/I)}e_{HK}(J,R/P)l(R_{P}/IR_{P})\] \[=e_{HK}(J,R/I)\.\]
**Theorem 8.10**.: _Assume \(R\) is a domain, \(I\neq 0\), and \(J\) is such that \(I+J\) is \(\mathfrak{m}\)-primary. Fix a positive integer \(s_{0}\). Set \(\dim(R/I)=d^{\prime}\). Then_
\[\lim_{m\to\infty}\frac{h(s_{0}/p^{m})}{(s_{0}/p^{m})^{d-d^{\prime}}}=\frac{1}{( d-d^{\prime})!}\sum_{P\in\text{Assh}(R/I)}e_{HK}(J,R/P)e(I,R_{P})\.\]
Proof.: We use the notation set above in this subsection. It follows from Equation (8.1) and above that we need to show
\[\lim_{m\to\infty}\lim_{n\to\infty}\Gamma_{s_{0},m,n}=\lim_{n\to\infty}\lim_{m \to\infty}\Gamma_{s_{0},m,n}.\]
We have already seen that \(\lim_{n\to\infty}\Gamma_{s_{0},m,n}\), \(\lim_{m\to\infty}\Gamma_{s_{0},m,n}\), and \(\lim_{n\to\infty}\lim_{m\to\infty}\Gamma_{s_{0},m,n}\) exist. We claim that the sequence \(n\mapsto\Gamma_{s_{0},m,n}\) converges uniformly in \(m\); then, by a standard analysis argument, \(\lim_{m\to\infty}\lim_{n\to\infty}\Gamma_{s_{0},m,n}\) exists and is equal to \(\lim_{n\to\infty}\lim_{m\to\infty}\Gamma_{s_{0},m,n}\).
To this end, we prove that there exists a constant \(C\) such that \(|\Gamma_{s_{0},m,n+1}-\Gamma_{s_{0},m,n}|\leq C/p^{n}\) for all \(m\), which implies that \(|\lim_{n\to\infty}\Gamma_{s_{0},m,n}-\Gamma_{s_{0},m,n}|\leq 2C/p^{n}\) for all \(m\). We prove this in two steps: first we prove there is a constant \(C_{1}\) such that \(\Gamma_{s_{0},m,n+1}-\Gamma_{s_{0},m,n}\leq C_{1}/p^{n}\); then we prove there is a constant \(C_{2}\) such that \(\Gamma_{s_{0},m,n}-\Gamma_{s_{0},m,n+1}\leq C_{2}/p^{n}\); then \(C=\max\{|C_{1}|,|C_{2}|\}\) satisfies the claim. Without loss of generality we assume \(R/\mathfrak{m}\) is a perfect field; see Remark 3.15.
Choice of \(C_{1}\): since \(\text{dim}\,R=d\), there is an exact sequence
\[0\to R^{\oplus p^{d}}\to F_{*}R\to N\to 0\]
where \(N\) is an \(R\)-module with \(\text{dim}\,N<d\). Then we have
\[(R/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]}))^{\oplus p^{d}}\to F_{*}R/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})F_{*}R\to N/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})N\to 0.\]
This means
\[l(\frac{R}{I^{s_{0}p^{n+1}}+J^{[p^{n+1}p^{m}]}})\leq l(\frac{R}{(I^{s_{0}p^{n}})^{[p]}+J^{[p^{n+1}p^{m}]}})\leq p^{d}l(\frac{R}{I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]}})+l(\frac{N}{(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})N}).\]
So dividing by \(p^{(n+1)d}p^{md^{\prime}}s_{0}^{d-d^{\prime}}\), we get
\[\Gamma_{s_{0},m,n+1}\leq\Gamma_{s_{0},m,n}+l(N/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})N)/p^{(n+1)d}p^{md^{\prime}}s_{0}^{d-d^{\prime}}.\]
Now we claim that there is a constant \(C_{1}>0\) that depends on \(N,I,J\) and \(s_{0}\) but is independent of \(m,n\) such that \(l(N/I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]}N)/p^{n(d-1)+d}p^{md^{\prime}}s_{0}^{d-d^{ \prime}}\leq C_{1}\). We have
\[l(N/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})N)\leq l(N/((I^{s_{0}})^{[p^{n}]}+J^{[p^{n}p^{m}]})N)=l(F_{*}^{n}N/(I^{s_{0}}+J^{[p^{m}]})F_{*}^{n}N)\leq\mu_{R}(F_{*}^{n}N)\,l(R/(I^{s_{0}}+J^{[p^{m}]})).\]
Since \(\dim N\leq d-1\) and \(\dim R/I=d^{\prime}\), \(\mu_{R}(F_{*}^{n}N)/p^{n(d-1)}\) and \(l(R/(I^{s_{0}}+J^{[p^{m}]}))/p^{md^{\prime}}\) are both bounded, and \(p^{-d}s_{0}^{-(d-d^{\prime})}\) is independent of \(m,n\). This means there is a constant \(C_{1}>0\) that depends on \(N,I,J\) and \(s_{0}\) but is independent of \(m,n\) such that \(l(N/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})N)/p^{n(d-1)+d}p^{md^{\prime}}s_{0}^{d-d^{\prime}}\leq C_{1}\). Thus we have
\[\Gamma_{s_{0},m,n+1}\leq\Gamma_{s_{0},m,n}+C_{1}/p^{n}.\]
Choice of \(C_{2}\): since \(\text{dim}\,R=d\), there is an injection \(F_{*}R\xrightarrow{\phi}R^{\oplus p^{d}}\) where \(\text{dim}\,Coker\phi<\text{dim}\,R\). Let \(\mu\) be the minimal number of generators of \(I\). Choose \(0\neq c\in I\) and let \(\psi=c^{\mu}\phi\). Since \(R\) is a domain, \(\psi\) is still an injection, and we have a short exact sequence
\[0\to F_{*}R\xrightarrow{\psi}R^{\oplus p^{d}}\to N^{\prime}\to 0\]
and we have \(\dim N^{\prime}<\dim R\).
\[F_{*}R/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})F_{*}R\xrightarrow{\bar{\psi}}(R/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]}))^{\oplus p^{d}}\to N^{\prime}/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})N^{\prime}\to 0\]
We claim that \(\bar{\psi}\) induces an \(R\)-linear map \(\Psi:F_{*}(R/(I^{s_{0}p^{n+1}}+J^{[p^{n+1}p^{m}]}))\to(R/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]}))^{\oplus p^{d}}\). It suffices to show \(\psi(F_{*}(I^{s_{0}p^{n+1}}+J^{[p^{n+1}p^{m}]}))\subseteq(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})^{\oplus p^{d}}\). We have \(I^{s_{0}p^{n+1}}=I^{s_{0}p^{n}\cdot p}\subset(I^{s_{0}p^{n}-\mu})^{[p]}\). So
\[\psi(F_{*}(I^{s_{0}p^{n+1}}+J^{[p^{n+1}p^{m}]}))\subset\psi(F_{*}((I^{s_{0}p^{n}-\mu})^{[p]}+J^{[p^{n+1}p^{m}]}))\subset(I^{s_{0}p^{n}-\mu}+J^{[p^{n}p^{m}]})\psi(F_{*}R)\] \[=c^{\mu}(I^{s_{0}p^{n}-\mu}+J^{[p^{n}p^{m}]})\phi(F_{*}R)\subset(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})\phi(F_{*}R)\subset(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})^{\oplus p^{d}}.\]
This induces an exact sequence
\[F_{*}(R/(I^{s_{0}p^{n+1}}+J^{[p^{n+1}p^{m}]}))\to(R/I^{s_{0}p^{n}}+J^{[p^{n}p^ {m}]})^{\oplus p^{d}}\to N^{\prime}/I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]}N^{\prime}\to 0\]
Therefore,
\[p^{d}\,l(R/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]}))\leq l(R/(I^{s_{0}p^{n+1}}+J^{[p^{n+1}p^{m}]}))+l(N^{\prime}/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})N^{\prime})\]
So dividing by \(p^{(n+1)d}p^{md^{\prime}}s_{0}^{d-d^{\prime}}\), we get
\[\Gamma_{s_{0},m,n}\leq\Gamma_{s_{0},m,n+1}+l(N^{\prime}/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})N^{\prime})/p^{(n+1)d}p^{md^{\prime}}s_{0}^{d-d^{\prime}}\]
Since \(\dim N^{\prime}<\dim R\), the same argument as in the previous step shows that there is a constant \(C_{2}>0\) that depends on \(N^{\prime},I,J\) and \(s_{0}\) but is independent of \(m,n\) such that \(l(N^{\prime}/(I^{s_{0}p^{n}}+J^{[p^{n}p^{m}]})N^{\prime})/p^{n(d-1)+d}p^{md^{\prime}}s_{0}^{d-d^{\prime}}\leq C_{2}\), so
\[\Gamma_{s_{0},m,n}\leq\Gamma_{s_{0},m,n+1}+C_{2}/p^{n}.\]
**Theorem 8.11**.: _Let \((R,\mathfrak{m},k)\) be a local domain, \(I,J\) be two \(R\)-ideals, \(I\neq 0\), \(I+J\) is \(\mathfrak{m}\)-primary. Let \(d=\dim R\), \(d^{\prime}=\dim R/I\). Then:_
1. \(\lim_{s\to 0+}h(s)/s^{d-d^{\prime}}=\frac{1}{(d-d^{\prime})!}\sum_{P\in \operatorname{Assh}(R/I)}e_{HK}(J,R/P)e(I,R_{P})\)_._
2. _The order of vanishing_ \(h(s)\) _at_ \(s=0\) _is exactly_ \(d-d^{\prime}\)_._
3. \(h(s)\) _is continuous at_ \(0\)_._
Proof.: (1) Let \(c=c_{I,J}=\frac{1}{(d-d^{\prime})!}\sum_{P\in\operatorname{Assh}(R/I)}e_{HK}(J,R/P)e(I,R_{P})\), a constant that depends only on \(I\) and \(J\). Theorem 8.10 implies that for any fixed \(s_{0}\),
\[\lim_{m\to\infty}h(s_{0}/p^{m})/(s_{0}/p^{m})^{d-d^{\prime}}=c\]
Choose a sequence \(\{s_{i}\}_{i}\subset(0,\infty)\) such that \(\lim_{i\to\infty}s_{i}=0\) and \(\lim_{i\to\infty}h(s_{i})/s_{i}^{d-d^{\prime}}\) exists. Below we argue that \(\lim_{i\to\infty}h(s_{i})/s_{i}^{d-d^{\prime}}=c\); then (1) follows. Fix any \(n_{0}\in\mathbb{N}\). For each \(s_{i}\) there exists an integer \(\alpha_{i}\) such that \(s_{i}p^{\alpha_{i}}\in(p^{n_{0}-1},p^{n_{0}}]\). Since \(h(s)\) is an increasing function,
\[\frac{h(\lfloor s_{i}p^{\alpha_{i}}\rfloor/p^{\alpha_{i}})}{((\lfloor s_{i}p^{\alpha_{i}}\rfloor+1)/p^{\alpha_{i}})^{d-d^{\prime}}}\leq\frac{h(s_{i})}{s_{i}^{d-d^{\prime}}}\leq\frac{h(\lceil s_{i}p^{\alpha_{i}}\rceil/p^{\alpha_{i}})}{((\lceil s_{i}p^{\alpha_{i}}\rceil-1)/p^{\alpha_{i}})^{d-d^{\prime}}}\] \[\implies(\frac{\lfloor s_{i}p^{\alpha_{i}}\rfloor}{\lfloor s_{i}p^{\alpha_{i}}\rfloor+1})^{d-d^{\prime}}\frac{h(\lfloor s_{i}p^{\alpha_{i}}\rfloor/p^{\alpha_{i}})}{(\lfloor s_{i}p^{\alpha_{i}}\rfloor/p^{\alpha_{i}})^{d-d^{\prime}}}\leq\frac{h(s_{i})}{s_{i}^{d-d^{\prime}}}\leq(\frac{\lceil s_{i}p^{\alpha_{i}}\rceil}{\lceil s_{i}p^{\alpha_{i}}\rceil-1})^{d-d^{\prime}}\frac{h(\lceil s_{i}p^{\alpha_{i}}\rceil/p^{\alpha_{i}})}{(\lceil s_{i}p^{\alpha_{i}}\rceil/p^{\alpha_{i}})^{d-d^{\prime}}}\] \[\implies(\frac{p^{n_{0}-1}}{p^{n_{0}-1}+1})^{d-d^{\prime}}\frac{h(\lfloor s_{i}p^{\alpha_{i}}\rfloor/p^{\alpha_{i}})}{(\lfloor s_{i}p^{\alpha_{i}}\rfloor/p^{\alpha_{i}})^{d-d^{\prime}}}\leq\frac{h(s_{i})}{s_{i}^{d-d^{\prime}}}\leq(\frac{p^{n_{0}-1}}{p^{n_{0}-1}-1})^{d-d^{\prime}}\frac{h(\lceil s_{i}p^{\alpha_{i}}\rceil/p^{\alpha_{i}})}{(\lceil s_{i}p^{\alpha_{i}}\rceil/p^{\alpha_{i}})^{d-d^{\prime}}}.\]
Let \(i\to\infty\); then \(s_{i}\to 0\) and \(\alpha_{i}\to\infty\). Since \(\lfloor s_{i}p^{\alpha_{i}}\rfloor\) and \(\lceil s_{i}p^{\alpha_{i}}\rceil\) lie in \([p^{n_{0}-1},p^{n_{0}}]\), there are only finitely many possible values of \(\lfloor s_{i}p^{\alpha_{i}}\rfloor,\lceil s_{i}p^{\alpha_{i}}\rceil\). So by Theorem 8.10,
\[\lim_{i\to\infty}\frac{h(\lfloor s_{i}p^{\alpha_{i}}\rfloor/p^{\alpha_{i}})}{(\lfloor s_{i}p^{\alpha_{i}}\rfloor/p^{\alpha_{i}})^{d-d^{\prime}}}=\lim_{i\to\infty}\frac{h(\lceil s_{i}p^{\alpha_{i}}\rceil/p^{\alpha_{i}})}{(\lceil s_{i}p^{\alpha_{i}}\rceil/p^{\alpha_{i}})^{d-d^{\prime}}}=c.\]
This means
\[(\frac{p^{n_{0}-1}}{p^{n_{0}-1}+1})^{d-d^{\prime}}c\leq\lim_{i\to\infty}h(s_{i })/s_{i}^{d-d^{\prime}}\leq(\frac{p^{n_{0}-1}}{p^{n_{0}-1}-1})^{d-d^{\prime}}c.\]
Since this is true for arbitrary \(n_{0}\), we get
\[\lim_{i\to\infty}h(s_{i})/s_{i}^{d-d^{\prime}}=c.\]
This finishes the proof of (1).
(2) follows from (1).
(3) Since \(R\) is a domain and \(I\neq 0\), \(d^{\prime}=\dim R/I<\dim R=d\), \(d-d^{\prime}\geq 1\). So the order of \(h(s)\) at \(0\) is at least \(1\); in particular, \(\lim_{s\to 0^{+}}h(s)=0=h(0)\).
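As an illustration of part (1) (a toy computation we add; the example and all numerical choices are ours, not part of the paper), take \(R=\mathbb{F}_{p}[x,y]\), \(I=(x,y^{2})\) and \(J=\mathfrak{m}=(x,y)\), so \(d=2\), \(d^{\prime}=0\), and the predicted limit of \(h(s)/s^{2}\) as \(s\to 0^{+}\) is \(e_{HK}(\mathfrak{m},R/\mathfrak{m})\,e(I,R)/2!=(1\cdot 2)/2=1\):

```python
# Toy check of Theorem 8.11(1): x^a y^b lies in I^t = (x, y^2)^t iff
# a + floor(b/2) >= t, and in m^[q] iff a >= q or b >= q.
import math

def h_n(s, q):
    t = math.ceil(s * q)
    return sum(1 for a in range(q) for b in range(q) if a + b // 2 < t)

q = 2 ** 9
for s in [0.4, 0.2, 0.1]:
    print(s, h_n(s, q) / q ** 2 / s ** 2)  # each ~1, up to O(1/(s*q)) rounding
```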
**Lemma 8.12**.: _Let \((R,\mathfrak{m})\) be a noetherian local domain, \(I,J\) be two \(R\)-ideals such that \(I+J\) is \(\mathfrak{m}\)-primary. Then \(h_{R,I,J}(s)\) is continuous at \(0\) if and only if \(I\neq 0\)._
Proof.: If \(I\neq 0\), then it is continuous at \(0\) by the previous theorem. If \(I=0\), then \(h_{R}(s)=e_{HK}(J,R)\neq 0=h_{R}(0)\) for \(s>0\), so it is discontinuous at \(0\).
**Theorem 8.13**.: _Let \((R,\mathfrak{m})\) be a noetherian local ring, \(I,J\) be two \(R\)-ideals such that \(I+J\) is \(\mathfrak{m}\)-primary, \(M\) be a finitely generated \(R\)-module. Then \(h_{M,I,J}(s)\) is continuous at \(0\) if and only if \(I\nsubseteq P\) for any \(P\in\operatorname{Supp}(M)\) with \(\dim R/P=\dim M\). In particular, \(h_{R,I,J}(s)\) is continuous at \(0\) if and only if \(\dim R>\dim R/I\). If \(h_{M}\) is discontinuous at \(0\) then we have_
\[\lim_{s\to 0^{+}}h_{M}(s)=\sum_{P\in\operatorname{Supp}(M),I\subset P,\dim R/P= \dim M}l_{R_{P}}(M_{P})e_{HK}(J,R/P).\]
Proof.: By the associativity formula for \(h\)-function in Corollary 7.6,
\[h_{M}(s)=\sum_{P\in\operatorname{Supp}(M),\dim R/P=\dim M}l_{R_{P}}(M_{P})h_{R /P}(s).\]
For any \(P\in\operatorname{Supp}(M)\), \(\lim_{s\to 0+}h_{R/P,I,J}(s)\) is always non-negative; the limit is positive if and only if \(I\subseteq P\), in which case the limit is \(e_{HK}(J,R/P)\); see Lemma 8.12. Taking the limit as \(s\) approaches zero from the right, we get the expression for the right-hand limit of \(h_{M}\). Since \(h_{M}\) is continuous at \(0\) if and only if \(\lim_{s\to 0^{+}}h_{R/P}(s)=0\) for every \(P\in\operatorname{Supp}(M)\) with \(\dim R/P=\dim M\), the continuity of \(h_{M}\) at zero is equivalent to asking that \(I\nsubseteq P\) for any such \(P\). If \(M=R\), this means \(I\nsubseteq P\) for any \(P\in\operatorname{Assh}(R)\), which means \(\dim R>\dim R/I\).
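As a minimal example of the displayed jump (a toy case we add here, not from the paper), take \(R=k[[x]]\), \(M=R\), \(I=(0)\) and \(J=(x)\). Then \(I^{\lceil sq\rceil}=0\) for every \(s>0\), so

\[h_{R,0,(x)}(s)=\lim_{n\to\infty}\frac{l(R/(x^{q}))}{q}=1=e_{HK}((x),R),\]

while \(h_{R,0,(x)}(0)=0\); the right-hand limit at zero is \(l_{R_{(0)}}(R_{(0)})\,e_{HK}((x),R/(0))=1\), matching the formula above.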
## 9. Questions
Inspired by Trivedi's question [11, Question 2], we ask
**Question 9.1**.: Let \(I,J\) be \(\mathfrak{m}\)-primary ideals of a noetherian local ring \(R\). Is \(h_{R,I,J}\) a piecewise polynomial? In other words, does there exist a countable subset \(S\) of \(\mathbb{R}\) and a covering \(\mathbb{R}\setminus S=\coprod\limits_{n\in\mathbb{N}}(a_{n},b_{n})\) such that on each \((a_{n},b_{n})\), \(h_{R,I,J}\) is given by a polynomial function?
We point out that, in the context of the question, \(h_{R,I,J}(s)\) is \(e_{HK}(J,R)\) for large \(s\), \(e(I,R)s^{\dim(R)}/\dim(R)!\) on some interval \((0,a]\) and zero for \(s\) nonpositive.
## 10. Acknowledgements
The first author was supported in part by NSF-FRG grant DMS-1952366. The second author thanks support of NSF DMS # 1952399 and # 2101075. We thank Linquan Ma for supporting our collaboration.
|
2308.05351 | Occupancy-driven Zeeman suppression and inversion in trapped polariton
condensates | We study the magneto-photoluminescence of an optically trapped
exciton-polariton condensate in a planar semiconductor microcavity with
multiple In0.08Ga0.92As quantum wells. Extremely high condensate coherence time
and continuous control over the polariton confinement are among the advantages
provided by optical trapping. This allows us to resolve magnetically induced
µeV fine-energy shifts in the condensate and identify unusual dynamical
regions in its parameter space. We observe polariton Zeeman splitting and, in
small traps with tight confinement, demonstrate its full parametric screening
when the condensate density exceeds a critical value, reminiscent of the
spin-Meissner effect. For larger optical traps, we observe a complete inversion
in the Zeeman splitting as a function of power, underlining the importance of
condensate confinement and interactions with its background reservoir excitons. | Krzysztof Sawicki, Dmitriy Dovzhenko, Yuan Wang, Helgi Sigurðsson, Pavlos G. Lagoudakis | 2023-08-10T05:44:49Z | http://arxiv.org/abs/2308.05351v2 | # Occupancy-driven Zeeman suppression and inversion in trapped polariton condensates
###### Abstract
We study the magneto-photoluminescence of an optically trapped exciton-polariton condensate in a planar semiconductor microcavity with multiple In\({}_{0.08}\)Ga\({}_{0.92}\)As quantum wells. Extremely high condensate coherence time and continuous control over the polariton confinement are amongst the advantages provided by optical trapping. This allows us to resolve magnetically induced \(\sim\mu\)eV fine-energy shifts in the condensate, and identify unusual dynamical regions in its parameter space. We observe polariton Zeeman splitting and, in small traps with tight confinement, demonstrate its full parametric screening when the condensate density exceeds a critical value, reminiscent of the spin-Meissner effect. For larger optical traps, we observe a complete inversion in the Zeeman splitting as a function of power, underlining the importance of condensate confinement and interactions with its background reservoir excitons.
## I Introduction
Reconfigurable and highly nonlinear cavity-polariton systems controlled by external fields could be key for the development of flexible spinoptronic semiconductor devices [1] and optical topological insulators and lasers [2]. In contrast to weakly interactive pure photonic systems, a medley of active materials embedded in planar microcavities [3] has given unprecedented insight into strong light-matter physics under variable electric, magnetic, and optical fields [4]. This feature results from the hybrid nature of exciton-polaritons (hereinafter called polaritons), which are formed by the strong interaction of light and matter and inherit the properties of both their components - photons and excitons (i.e., bound electron-hole pairs) [5]. Possessing extremely light effective mass and strong interactions, polaritons can accumulate in a non-equilibrium analogue of a Bose-Einstein condensate [6] forming a strongly nonlinear polariton laser that can be electrically driven [7; 8] at both cryogenic and room temperatures.
Spinor polaritons \(\Psi=(\psi_{+},\psi_{-})^{\mathrm{T}}\) possess two integer spin projections \(s=\pm 1\) on the growth axis of the cavity, explicitly related to the two circular polarizations \(\sigma^{\pm}\) of emitted light [5]. Hence, when the polaritonic system is illuminated with a linearly polarized nonresonant pump, the condensate also becomes linearly polarized, with a polarization vector determined by the cavity strain or anisotropic disorder. The situation changes when an external magnetic field is applied parallel to the growth axis (i.e., Faraday geometry). Due to the underlying electron and hole constituents in the exciton wavefunction, the magnetic field results in a Zeeman effect between the \(|\psi_{+}\rangle\) and \(|\psi_{-}\rangle\) polariton states, with the splitting denoted \(E_{\mathrm{ZS}}=E_{+}-E_{-}\). The appearance of fine-structure energy splitting is a fundamental manifestation of the influence of the magnetic field on the polariton structure. In particular, the dynamical interplay between the real magnetic field and an effective magnetic field caused by spin-anisotropic polariton-polariton interactions can lead to the full parametric screening of the former [9; 10]. This effect, known as the polariton spin-Meissner effect, is a manifestation of collective quantum behaviour in driven-dissipative polariton fluids.
For more than a decade, the magnetic properties of polaritons in planar microcavities or micropillars have attracted considerable attention, culminating in the observation of the Zeeman and spin-Meissner effects [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Moreover, investigations using elliptically polarized excitation led to an optical analogue of the Zeeman effect [21; 22; 23; 24; 25; 26; 27; 28]. In recent years, the effect of a magnetic field on polaritons has also been studied in microcavities with semi-magnetic quantum wells, with reports of _giant_ Zeeman splitting [17; 18; 20; 29]. However, an inherent limitation of the systems studied so far has been their lack of reconfigurability, large disorder, and low coherence times, which restricts the potential of polaritons for spinoptronic applications [1].
Remarkable progress in the technology of the epitaxial growth of GaAs-based layers enables fabrication of high-quality optical microcavities exhibiting significantly low disorder, wherein optically trapped condensates [30; 31; 32] display the self-induced Larmor precession with remarkably long spinor coherence times of up to \(\sim 9\) ns [26; 27]. When dynamically driven, the spin coherence time can even reach hundreds of ns [33]. However, so far, the influence of the magnetic field on optically trapped polariton condensates has not been investigated. In particular, spin-related phenomena such as the spin-Meissner effect, which are difficult to observe due to the relatively small Zeeman effect of GaAs and InGaAs quantum wells (QWs), have not been observed in optically trapped condensates.
In this article, we present experimental evidence of both the Zeeman and spin-Meissner effects in an optically trapped polariton condensate. Observation of the
abovementioned effects is possible due to the decreased overlap between the pump-induced background reservoir of incoherent excitons and the stimulated coherent polariton condensate, consequently decreasing the linewidth of the condensate emission. This makes it possible to detect even minor variations in the polariton fine energy structure. Moreover, using the all-optical reconfigurability of the investigated system, we show that we can move from suppressed Zeeman splitting to inverted splitting simply by tuning the optical trap size and the condensate density. Our findings are supported by a generalized spinor Gross-Pitaevskii equation coupled to a reservoir rate equation.
## II Experimental details
The studied sample is a strain-compensated, high-\(Q\) (\(Q\sim 12000\) [34]) GaAs-based \(2\lambda\) microcavity with embedded \(6\,\)nm In\({}_{0.08}\)Ga\({}_{0.92}\)As QWs (see the schematic in Fig. 1). In the cavity region, three pairs of QWs are located at the central three anti-nodes of the electric field. Two additional QWs positioned at the extreme nodes of the cavity serve for carrier collection. The top (bottom) distributed Bragg reflector is made of 23 (26) pairs of alternating refractive-index GaAs and AlAs\({}_{0.98}\)P\({}_{0.02}\) layers. The microcavity is of a wedge type, which allows tuning of the light and matter fractions of polaritons by choosing the appropriate in-plane location on the sample. In this study, the experimental measurements are conducted at an exciton-cavity mode detuning of around \(-1.9\,\)meV in the absence of the magnetic field.
The experiments are performed at cryogenic temperature (\(\sim 7\,\)K) in a closed-cycle cryostat equipped with a superconducting magnet producing a magnetic field parallel to the optical axis in the range from \(-5\) to \(5\,\)T. The sample is excited non-resonantly with a linearly polarized continuous-wave Ti:Sapphire laser tuned to the Bragg reflector's minimum (\(\lambda_{\text{exc}}=758.8\,\)nm) of the sample and modulated by an acousto-optic modulator to avoid heating. The pump beam excites a co-localized high-energy charge-carrier distribution, which undergoes fast energy relaxation to form an incoherent exciton reservoir that, in turn, feeds the condensate [5]. Because the photoexcited reservoir of excitons produces not only local gain for polaritons but also a local blueshift, the resultant condensate can be confined when the laser beam is structured into an annular profile [30]. For this purpose, we use a spatial light modulator (SLM) to create a ring-shaped pumping beam, similar to the case of a set of axicon lenses [35], which forms the transverse trap. The photoluminescence (PL) is collected through a microscope with a numerical aperture of 0.4, allowing for the collection of light from the condensate within the trap and the surrounding exciton reservoir. The combination of a quarter-wave plate and a Wollaston prism is used to simultaneously detect signals from both circular polarization components.
## III Experimental results
### Zeeman splitting of optically trapped polaritons
In the absence of a magnetic field, the linearly excited sample has equal buildup of spin-up and spin-down exciton reservoir populations, resulting in the formation of a linearly polarized polariton condensate when pumped above the threshold [6]. When an external magnetic field is introduced the Zeeman effect splits the energy of the spin-up and spin-down excitons, and the corresponding exciton-polariton spin states \(|\psi_{\pm}\rangle\) as shown in Fig. 1 (also referred by \(\sigma^{\pm}\)). The quasi-equilibrium spin populations of the reservoir excitons become unequal in the magnetic field because of the different spin relaxation rates [36] with reservoir spins favouring antiparallel arrangement to the field (i.e., negative heavy-hole exciton \(g\)-factor [37; 38]). Consequently, polaritons will preferentially condense into the same dominant spin-state, resulting in strong circularly polarized emission of definite handedness [14; 16].
We first study the linear regime at low pumping powers below the condensation threshold. We note that the threshold power depends on the magnetic field, and we will use the notation \(P_{\text{th},B}\) where the second index refers to the field value the threshold is evaluated at. The evolution of the circularly polarized PL spectra, extracted at \(k\sim 0\), for consecutive values of the magnetic field is shown in Fig. 2(a). The higher energy line corresponds to polaritons in the ground state of the optical trap, whereas the lower energy line is attributed to the linear PL of
Figure 1: Schematic illustration of the investigated sample. The linearly polarized, non-resonant CW-laser was used to create equal populations of the \(\psi_{\pm}\) polaritons. The corresponding \(\sigma^{\pm}\) emission (blue and red spirals) is detected simultaneously. The magnetic field applied parallel to the sample growth axis lifts the degeneracy of the polariton spins, manifesting in a detectable energy difference (i.e., Zeeman splitting) between the emitted circularly polarized photons.
polaritons from the pumping area and outside the trap. Scrutinizing the higher energy line (i.e., trapped polaritons), we observe a parabolic diamagnetic shift in both spin components [39] alongside the fine-structure splitting due to the Zeeman effect [12; 15; 16].
We next fix the magnetic field at \(B=5\) T in Fig. 2(b) and increase the excitation power. We note that here each subpanel is normalized independently to keep better track of the peak location. For low pumping power values, the upper spin peaks corresponding to the trap ground state monotonically blueshift as the density of reservoir excitons increases. The splitting of the two spin peaks can be clearly resolved all the way up to a critical value of \(P\approx 1.5P_{\text{th,5T}}\). The lower peaks remain fixed, since they correspond to residual low-energy polaritons outside the trap. After reaching this critical value, the energy difference is eliminated completely by parametric screening of the magnetic field. This effect is known as the nonequilibrium spin-Meissner effect [12; 18; 40] and, so far, has not been reported in optically trapped polariton condensates.
In Fig. 3, we compare the polarization- and momentum-resolved spectra for two different pump powers; \(P=0.3P_{\text{th,0T}}\) and \(P=P_{\text{th,0T}}\); and two field values \(B=0\) T and \(B=5\) T. At low powers and zero field the polaritons are mostly unconfined and two spin degenerate dispersion branches are observed [see Fig. 3(a)]. Introducing a \(B=5\) T magnetic field at low powers results in diamagnetic energy shift of \(0.2\,\)meV and Zeeman splitting of around \(24\,\mu\)eV [see Fig. 3(b)], same as in Fig. 2(a). Figures 3(c) and 3(d) represent the same experiment conducted at higher excitation power, which reveals a qualitative change in the dispersion when a magnetic field is applied. The magnetic field lowers the condensation threshold power [41] in both spin components, with polaritons antiparallel to the magnetic field condensing first in the trap ground state. We note the much weaker residual emission coming from the trap excited states.
### Suppression of the Zeeman splitting
We now perform a systematic scan over both the pump power and the magnetic field and scrutinize the changes in the position of the polarization-resolved spectral peaks in the trapped condensate. Figure 4(a) shows the average Zeeman splitting of the condensate as a function of excitation power and magnetic field
Figure 2: (a) Polarization resolved PL spectra around \(k\approx 0\) below condensation threshold under a changing magnetic field and constant excitation power of \(P=0.32P_{\text{th,5T}}\) The diameter of the trap is \(16\,\mu\)m. The lower and upper pair of peaks correspond to untrapped and trapped polaritons, respectively. The applied magnetic field causes a diamagnetic shift visible as a parabolic change in the emission energy of both spins. The tuning range is \(\approx 0.2\,\)meV. Another manifestation of the influence of the magnetic field is the polariton Zeeman splitting detected as an energy difference between the two opposite circularly polarized emission peaks, reaching a splitting value around \(\approx 24\,\mu\)eV at \(5\,\)T for the trapped polaritons. (b) Same as the previous panel but for varying pump powers and fixed field of \(B=5\) T, showing narrowing and blueshift of the trapped spin peaks. The latter is attributed to the growing densities of interacting reservoir excitons and condensate polaritons within the trap. When the critical value (\(P_{\text{crit}}\approx 1.5P_{\text{th,5T}}\)) is reached, suppression of the Zeeman splitting occurs as a result of the parametric screening of the magnetic field.
(i.e., each pixel represents the average over many condensate realizations). Two regimes of opposite splitting can be clearly seen, separated by a region of negligible splitting. In the latter, the condensate components become equal in terms of population and emission energy, resulting in a fully linearly polarized emission from the condensate. The nongradual change in the splitting implies an abrupt change in the condensate dynamics, wherein the Zeeman splitting becomes fully suppressed. The experimental result is in good agreement with numerical simulations, presented in Fig. 4(b), using a generalized Gross-Pitaevskii equation describing a spinor condensate order parameter coupled to excitonic reservoirs (see Sec. IV). In the simulation, we scan a slightly higher range of powers where a stable elliptically polarized condensate solution appears with two distinct energy peaks [see Eq. (14)].
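To make the mechanism behind Fig. 4(b) concrete, the following is a minimal zero-dimensional sketch of this class of mean-field models. It is our illustration, not the authors' simulation of Sec. IV: the forward-Euler scheme, all parameter values, and the pump spin-imbalance factor \(\eta\) (mimicking field-dependent reservoir spin relaxation) are assumptions chosen for clarity.

```python
# Minimal 0D spinor condensate psi_± coupled to spin-polarized exciton
# reservoirs n_±; all numbers below are illustrative assumptions.
import numpy as np

HBAR = 0.658  # meV ps

def zeeman_splitting(P, B, T=3000.0, dt=0.01, seed=0):
    g_mu = 0.005                 # bare splitting Delta_z = g_mu * B  [meV/T]
    alpha, G = 0.01, 0.02        # polariton-polariton / polariton-exciton [meV]
    R, gamma, gamma_R = 0.05, 0.1, 0.5   # gain, condensate/reservoir decay [1/ps]
    eta = 0.05 * B               # assumed pump spin imbalance at field B
    rng = np.random.default_rng(seed)
    psi = 1e-2 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
    n, dz = np.zeros(2), g_mu * B
    for _ in range(int(T / dt)):
        rho = np.abs(psi) ** 2
        E = np.array([dz / 2, -dz / 2]) + alpha * rho + G * n   # spin energies
        psi = psi + dt * ((-1j * E / HBAR + 0.5 * (R * n - gamma)) * psi)
        n = n + dt * (0.5 * P * np.array([1 - eta, 1 + eta])
                      - (gamma_R + R * rho) * n)
    rho = np.abs(psi) ** 2
    return dz + alpha * (rho[0] - rho[1]) + G * (n[0] - n[1])   # E_+ - E_-

for P in [1.0, 2.0, 4.0, 8.0]:
    print(P, zeeman_splitting(P, B=5.0))  # splitting shrinks, then inverts
```

In this toy model the steady-state splitting sits below the bare value \(g_{\mu}B\) already through the reservoir term, shrinks further as the condensate imbalance builds up with power, and eventually inverts, qualitatively echoing the suppression and inversion regimes discussed above.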
We select three horizontal and two vertical cross-sections from Fig. 4(a) and plot them in Fig. 4(c) and Fig. 4(d) for clarity. Fig. 4(c) shows a non-monotonic behaviour in the Zeeman splitting as a function of power, whose origin may be related to a pump-induced ellipticity with increasing the magnetic field due to birefringence of the pumping optics.
For a given external magnetic field \(\mathbf{B}_{\mathrm{ext}}\), the suppres
Figure 3: Polarization resolved PL spectra (extracted around \(k\approx 0\)) and corresponding dispersion of the polaritons measured both below and at the threshold, for both zero and finite magnetic field. The diameter of the trap is \(16\,\mathrm{\mu m}\). Approaching the condensation threshold increases the density of reservoir excitons observed as a blueshift of trap ground state [compare the right peaks in (a) and (c)] with enhanced polariton intensity. When a magnetic field is applied, the condensation threshold lowers with an enhanced emission intensity of trapped polaritons [compare e.g. (c) and (d)]. The diamagnetic blueshift is also clearly visible for all peaks [compare e.g. (a) and (b)]. Notably, a ladder of trap modes starts forming with the magnetic field due to the larger exciton fraction of polaritons (i.e., the trap effectively becomes deeper).
Figure 4: (a) Zeeman splitting of the trapped polariton condensate as a function of excitation power and magnetic field. The diameter of the trap is \(18\,\mathrm{\mu m}\). The splitting is clearly visible for high magnetic field strengths and low excitation power, but vanishes at certain critical boundaries in the \(B\)-\(P\) plane (white region), becoming parametrically screened by the condensate interactions. (b) Corresponding numerical simulations using mean field modelling (see Sec. IV). (c) and (d) show cross-sections from panel (a) at 0T, 3T, 5T and at \(P=1.05P_{\mathrm{th,0T}}\) and \(P=1.80P_{\mathrm{th,0T}}\), respectively. The plots show that with increasing the excitation power, the range of magnetic field value in which Zeeman splitting is completely suppressed expands.
sion of the fine structure splitting happens above a certain critical pump power \(P>P_{\rm crit}\) due to the spin-anisotropic interactions of polaritons [42; 43]. When the external magnetic field splits the spin levels (see Fig. 1) condensate polaritons start accumulating in the spin ground state. As the condensate becomes more circularly polarized, it produces its own interaction-induced magnetic field written [44],
\[\mathbf{B}_{\rm int}\propto(\alpha_{+}\rho_{+}-\alpha_{-}\rho_{-}+G_{+}n_{+}-G_ {-}n_{-})\mathbf{\hat{z}}. \tag{1}\]
Here, \(\alpha_{\pm}\) and \(G_{\pm}\) are the pairwise polariton-polariton and polariton-exciton interaction strengths in the singlet configuration, respectively. We have tacitly accounted for the slightly different matter (i.e., exciton) component in the polariton spins when they split in energy, which modifies their interaction strength. \(\rho_{\pm}\) and \(n_{\pm}\) denote the spin populations of condensate polaritons and background reservoir excitons. We note that anti-parallel spin interactions are known to be much weaker and therefore safe to neglect [45]. With increasing power, the condensate spin population imbalance increases, eventually generating a strong enough effective field [27] which cancels against the external magnetic field
\[\mathbf{B}_{\rm ext}+\mathbf{B}_{\rm int}\approx 0. \tag{2}\]
This condition represents the white region in Figs. 2(a) and 2(b). When the out-of-plane fields cancel the condensate polarization will instead become linear, pinned in the direction of any finite optical anisotropy present in the sample [6; 24].
### Magnetically induced spin inversion
The Zeeman suppression presented in the previous section occurs for relatively small optical traps (around \(\lesssim 20\)\(\mu\)m). In this section, we describe a qualitatively different regime by increasing the size of the optical trap, consequently reducing the confinement of the condensate. In particular, the Zeeman suppression is now accompanied by a regime of power induced inverted Zeeman splitting, as shown in Fig. 5(a). A similar inversion was reported recently in optically trapped condensates where an elliptically polarized excitation beam replaced the role of a real magnetic field [46]. Also, an inversion of the Zeeman splitting was reported for the excited quantum-confined states in wider InGaAs quantum wells [47] and quantum wires under high magnetic field [48]. Mean field simulations taking into account the increased size of the trap also reproduce the experimental observation as shown in Fig. 5(b). We select three horizontal and two vertical cross-sections from Fig. 5(a) and plot them in Fig. 5(c) and Fig. 5(d) for clarity. We note that the slight asymmetry of the observed splitting about \(B=0\) in Fig. 5(a) is due to the small polarization ellipticity in the pump beam.
In Fig. 6 we compare the Zeeman splitting inversion for three different sizes of the trap: (a) \(27.2\,\mu\)m (b) \(26.4\,\mu\)m (c) \(25.\,\mu\)m. We observe a decrease in the inversion region with decreasing trap size. In addition, the boundary between regions with positive and negative Zeeman splitting is noticeably blurred and the area in which Zeeman splitting is completely suppressed noticeably increases. This can be attributed to the beginning of the transition from the inversion regime to the suppression regime. As we mentioned before, the asymmetry of the splitting pattern about \(B=0\) in Fig. 6 is due to the small polarization ellipticity in the pump beam.
Figure 5: Power induced inversion of the Zeeman splitting. The diameter of the trap is here \(27.2\,\mu\)m. (a) Energy splitting as a function of the magnetic field and excitation power. The emission energy of both circular polarization components is measured simultaneously. The plot presents the partial suppression of Zeeman splitting and the reversal of sign after exceeding the critical value of the excitation power. (b) Corresponding energy splitting from mean field simulations of the spinor condensate [see Sec. IV]. (c) and (d) show horizontal and vertical cross-sections of the data in panel (a), respectively.
## IV Theoretical model
In this section, we describe how the magnetic field changes the exciton spin levels and how these effects enter into the polariton states through strong light-matter coupling. We then introduce a zero-dimensional generalized Gross-Pitaevskii equation describing the condensate spinor order parameter coupled to excitonic reservoirs. Numerically solving the equations of motion allows us to produce qualitatively the experimental observations in Fig. 4(b) and Fig. 5(b).
### Spinor polaritons
The energy of excitons in III-V semiconductor is modified by the magnetic field \(B\) in the following way,
\[E_{\mathrm{X,\pm}}=E_{\mathrm{X,0}}\mp g_{\mathrm{X}}\mu_{\mathrm{B}}B+\gamma _{\mathrm{dia}}B^{2}. \tag{3}\]
Here, \(E_{\mathrm{X,0}}\) is the bare exciton energy, \(\mu_{\mathrm{B}}\) is the Bohr magneton, \(g_{\mathrm{X}}\) is the exciton Lande \(g\)-factor, and \(\gamma_{\mathrm{dia}}\) quantifies the exciton diamagnetic shift. Both of the coefficients determining the magnetic response of the exciton level can be estimated from the fit of the experiment, which gives \(g_{\mathrm{X}}=-0.364\), in good agreement for 6 nm In\({}_{0.08}\)Ga\({}_{0.92}\)As QWs [38], and \(\gamma_{\mathrm{dia}}=0.117\,\mathrm{meV}\,\mathrm{T}^{-2}\) (see Appendix A for details).
The energies of the spin-up and spin-down lower-branch polaritons, at normal incidence \(k=0\) and assuming no losses, follow directly from a standard coupled oscillator model [5]:
\[E_{\pm}=\frac{E_{\mathrm{ph}}+E_{\mathrm{X,\pm}}}{2}-\frac{1}{2}\sqrt{4\Omega ^{2}+\Delta_{\pm}^{2}}. \tag{4}\]
Here, \(\Omega\) is the Rabi energy (i.e., light-matter coupling strength), \(E_{\mathrm{ph}}\) is the cavity photon energy, and \(\Delta_{\pm}=E_{\mathrm{ph}}-E_{\mathrm{X,\pm}}\) is the detuning. In the absence of any pumping, the polariton Zeeman splitting is simply,
\[E_{\mathrm{ZS}}^{(0)}=-g_{\mathrm{X}}\mu_{\mathrm{B}}B+\frac{1}{2}\left[\sqrt{ \Delta_{-}^{2}+4\Omega^{2}}-\sqrt{\Delta_{+}^{2}+4\Omega^{2}}\right]. \tag{5}\]
The squared brackets are related to the matter content of the polaritons which is given by the exciton Hopfield coefficient,
\[|X_{\pm}|^{2}=\frac{1}{2}\left(1+\frac{\Delta_{\pm}}{\sqrt{\Delta_{\pm}^{2}+4 \Omega^{2}}}\right). \tag{6}\]
Moreover, under the influence of the magnetic field, the wave function of the exciton decreases, which results in an increase in the strength of the exciton oscillator and, consequently, Rabi splitting [49; 50]. The Rabi energy also depends on the magnetic field as follows [51],
\[\Omega=\frac{\Omega_{0}}{\sqrt{2}}\left[\sqrt{1+\frac{3}{2}\left(\frac{e^{2}a _{0}^{4}B^{2}}{\hbar^{2}}\right)}+1\right]^{\frac{1}{2}}, \tag{7}\]
in which \(\Omega_{0}\) and \(a_{0}\) are bare Rabi energy and Bohr radius of exciton, respectively.
### Generalized Gross-Pitaevskii model
We define the condensate spinor order parameter as \(\Psi=(\psi_{+},\psi_{-})^{T}\), describing the phase and population \(|\psi_{\pm}|^{2}=\rho_{\pm}\) of each spin component in the trap ground state. The corresponding zero-dimensional generalized Gross-Pitaevskii equation coupled to the rate equation describing a hot background exciton reservoir \(n_{\pm}\) can be written [32]:
\[i\hbar\frac{d\psi_{\pm}}{dt} = \left[E_{\pm}+\frac{i\hbar}{2}(Rn_{\pm}-\gamma)+\alpha_{\pm}|\psi _{\pm}|^{2}\right. \tag{8}\] \[\left.+G_{\pm}\left(n_{\pm}+\frac{\mathcal{P}_{\pm}}{W}\right) \right]\psi_{\pm}+\frac{E_{\mathrm{XY}}}{2}\psi_{\mp},\] \[\frac{dn_{\pm}}{dt} = -(\Gamma+\Gamma_{\mathrm{s,\pm}}+R|\psi_{\pm}|^{2})n_{\pm}\] (9) \[+\Gamma_{\mathrm{s,\mp}}n_{\mp}+\mathcal{P}_{\pm},\]
Figure 6: The map of Zeeman splitting for different sizes of the trap: (a) 27.2 \(\mu\)m (b) 26.4 \(\mu\)m (c) 25.6 \(\mu\)m. In contrast to the suppression regime, where after exceeding the critical value of power, the Zeeman splitting is fully suppressed, in the inversion regime, exceeding the critical value results in the reversal of the Zeeman splitting sign. The value of the critical power that triggers the inversion depends on the applied magnetic field and the size of the optical trap.
where \(\gamma^{-1}\) is the polariton lifetime and \(R\) quantifies the scattering rate of reservoir excitons into the condensate. We will neglect opposite-spin polariton interactions, since they are much weaker than same-spin interactions [42; 45]. We only include the same-spin polariton-reservoir \(G_{\pm}=2u_{\rm X}|X_{\pm}|^{2}\) and polariton-polariton interaction strengths \(\alpha_{\pm}=\xi u_{\rm X}|X_{\pm}|^{4}\). The parameter \(u_{\rm X}\) stands for the exciton-exciton Coulomb interaction strength normalized over the number of quantum wells and \(\xi\) is the quantum confinement integral of the trap ground state (increases for smaller traps). We also account for finite linear polarization splitting by the parameter \(E_{\rm XY}\) coming from small random birefringence due to sample strain and disorder [24; 32; 6]. \(\Gamma\) is the reservoir decay rate, \(\Gamma_{\rm s,\pm}\) describes the rate of spin relaxation for each spin component,
The parameter \(W>0\) quantifies the conversion of dark and high-momentum inactive excitons \(\mathcal{P}_{\pm}\) into the active "bottleneck" exciton reservoir \(n_{\pm}\) which sustains the condensate. Under continuous-wave excitation we can approximate the steady state of the inactive reservoir as [52]:
\[\begin{pmatrix}\mathcal{P}_{+}\\ \mathcal{P}_{-}\end{pmatrix}=\frac{P}{W+\Gamma_{\rm s,+}+\Gamma_{\rm s,-}} \begin{pmatrix}W\cos^{2}(\theta)+\Gamma_{\rm s,-}\\ W\sin^{2}(\theta)+\Gamma_{\rm s,+}\end{pmatrix} \tag{10}\]
Here, \(P\) denotes the power of the nonresonant pump, and \(\theta\) is analogous to the quarter wave plate angle for the incident excitation, which defines the ellipticity of the excitation. Unless when stated otherwise, the excitation is linearly polarized with \(\theta=\pi/4\).
We account for the influence of the magnetic field on the exciton spin relaxation [53] using polynomial regression to fit the different spin-relaxation rates to the behaviour of the exciton reservoir degree of circular polarization below threshold. The general form of the spin relaxation parameters can be written,
\[\Gamma_{\rm s,\pm} = \Gamma_{\rm s}\pm\eta(B), \tag{11}\]
where the details of determining the function \(\eta\) are presented in Appendix B.
Numerically solving Eqs. (8) and (9) in time we are able to qualitatively reproduce the experimental observations [see Fig. 4(b) and Fig. 5(b)]. Notably, we find that two parameters are important to separate the results of the Zeeman screening from the inversion. For the screening, we introduced a weak birefringent field quantified by \(E_{XY}\) which helps pin the condensate linear polarization along a specific direction when the screening condition is fulfilled. In order to produce finite inversion in simulation, we needed to set \(E_{XY}\approx 0\) and decrease the quantum confinement parameter \(\xi\) [see Appendix C], corresponding to the larger trap sizes. By only adjusting these two parameters, we are able to transition from one effect to the other.
In the following, we additionally derive an analytical expression for the Zeeman splitting in Eq. (20) as a comparison to our full numerical simulations.
#### iii.1.1 Below-threshold solution
Below the condensation threshold, we have \(n_{\pm}>0\) but \(\rho_{\pm}=0\). The Zeeman splitting felt by any uncondensed polaritons is then modified by the reservoir in the following fashion,
\[E_{\rm ZS} = E_{\rm ZS}^{(0)}+G_{+}\left(n_{+}+\frac{\mathcal{P}_{+}}{W} \right)-G_{-}\left(n_{-}+\frac{\mathcal{P}_{-}}{W}\right). \tag{12}\]
The steady state solution of (9) under linearly polarized pumping can be trivially obtained which gives an analytical expression for the reservoir contribution to the different blueshifts of the polariton spins,
\[\begin{split}& n_{\sigma}+\frac{\mathcal{P}_{\sigma}}{W}=\frac{P }{W+2\Gamma_{s}}\bigg{[}\frac{1}{2}+\frac{\Gamma_{s,-\sigma}}{W}+\\ &\frac{(\Gamma+\Gamma_{s,-\sigma})(\frac{W}{2}+\Gamma_{s,-\sigma })+\Gamma_{s,-\sigma}(\frac{W}{2}+\Gamma_{s,\sigma})}{\Gamma(\Gamma+2\Gamma_ {s})}\bigg{]}.\end{split} \tag{13}\]
Here, we have used \(\sigma=\pm\) for brevity.
#### iii.1.2 Bichromatic above-threshold solution
Above the condensation threshold, where one or both condensate spin populations \(\rho_{\pm}\) are finite and positive, the Zeeman splitting (5) becomes modified due to reservoir saturation (i.e., gain clamping) and interactions from the condensate. Focusing on the optically isotropic cavity where \(E_{XY}=0\) and assuming that condensate spin populations have reached a steady state \(\hat{\rho}_{\pm}=0\) for given pump power and magnetic field, we can write a general ansatz,
\[\Psi=\begin{pmatrix}\sqrt{\rho_{+}}e^{-iE_{c,+}t/\hbar}\\ \sqrt{\rho_{-}}e^{-iE_{c,-}t/\hbar}\end{pmatrix}, \tag{14}\]
where
\[E_{c,\pm}=E_{\pm}+\alpha_{\pm}\rho_{\pm}+G_{\pm}\left(n_{\pm}+\frac{\mathcal{ P}_{\pm}}{W}\right). \tag{15}\]
Substituting Eq. (14) into Eq. (8), we arrive at
\[\rho_{\pm}=\left\{\frac{\mathcal{P}_{\pm}}{\gamma}+\frac{\Gamma_{s,\mp}}{R}- \frac{\Gamma+\Gamma_{s,\pm}}{R}\right\}H[\mathcal{P}_{\pm}-\mathcal{P}_{\rm th,\pm}], \tag{16}\]
where \(H[\dots]\) is the Heaviside function and \(\mathcal{P}_{\rm th,\pm}\) denotes the condensation threshold. We also obtain a rather cumbersome expression for the reservoir necessarily describing its occupation when both condensate spin components are above threshold corresponding to an elliptically polarized condensate (first term); one component is above threshold corresponding to a fully circularly polarized condensate (second term); and both components below threshold corresponding to no condensate (third
term),
\[n_{\pm} =\frac{\gamma}{R}H[\mathcal{P}_{\pm}-\mathcal{P}_{\mathrm{th},\pm}]\] \[+\frac{\gamma\Gamma_{s,\mp}/R+\mathcal{P}_{\pm}}{\Gamma+\Gamma_{s, \pm}}H[\mathcal{P}_{\mp}-\mathcal{P}_{\mathrm{th},\mp}]H[\mathcal{P}_{\mathrm{ th},\pm}-\mathcal{P}_{\pm}]\] \[+\frac{(\Gamma+\Gamma_{s,\mp})\mathcal{P}_{\pm}+\Gamma_{s,\mp} \mathcal{P}_{\mp}}{\Gamma(\Gamma+2\Gamma_{s})}H[\mathcal{P}_{\mathrm{th},\pm} -\mathcal{P}_{\pm}]. \tag{17}\]
Note that the zeros of the curled brackets in (16) define the threshold of each spin component. In terms of pump power, these thresholds are (for linearly polarized excitation)
\[P_{\mathrm{th},\pm}=\frac{\gamma}{R}\times\frac{\left(W+2\Gamma_{s}\right) \left(\Gamma+\Gamma_{s,\pm}-\Gamma_{s,\mp}\right)}{\frac{W}{2}+\Gamma_{s,\mp}}. \tag{18}\]
At zero magnetic field the two thresholds coincide and can be written simply,
\[P_{\mathrm{th},0\mathrm{T}}=\frac{2\gamma\Gamma}{R}. \tag{19}\]
The Zeeman splitting now becomes
\[E_{\mathrm{ZS}} = E_{\mathrm{ZS}}^{(0)}+G_{+}\left(n_{+}+\frac{\mathcal{P}_{+}}{ W}\right)-G_{-}\left(n_{-}+\frac{\mathcal{P}_{-}}{W}\right) \tag{20}\] \[+\alpha_{+}\rho_{+}-\alpha_{-}\rho_{-}.\]
In Fig. 7 we plot the above analytical expression for the same set of parameters as used in the full numerical simulation of Fig. 5. Our model correctly identifies the inversion of the Zeeman splitting as a function of power but lacks some quantitative features appearing in experiment such as the shape of the inversion boundary. Nevertheless, our analytical expression allows us to efficiently locate regions of interesting behaviours in the condensate energy structure, which saves a great deal of time when locating relevant parameters in full numerical simulation of (8) and (9). We also plot the condensation thresholds predicted by our model (black dashed and dash-dotted curves) which don't exactly coincide with the inversion boundary because the exciton reservoir (17) is also contributing to the inversion. We note that our model contains multiple power- and magnetic field dependent effects in order to remain as inclusive as possible. A more exhaustive analysis of all the terms contributing to Eq. (20) is beyond the scope of the current study but will be addressed in the future. In particular, the stimulated scattering rate \(R\) is taken as the same constant for each spin since but could possess a more complicated dependence on the magnetic field and pump power.
## V Conclusions
In conclusion, we have demonstrated the appearance of Zeeman splitting of an optically trapped polariton condensate subjected to an external magnetic field along the optical axis of a planar microcavity embedded with several pairs of InGaAs quantum wells. We explain the conditions needed to obtain two operation regimes, where we observe two qualitatively different spin-related effects: 1) the full parametric screening of external magnetic field and the suppression of the Zeeman splitting; 2) the inversion of the spin-population and sign reversal of the Zeeman splitting. We develop a mean field model based on a zero-dimensional generalized Gross-Pitaevskii equation coupled to a rate equation for the exciton reservoir, which qualitatively captures the observed effects.
Optical trapping of polariton condensate offers a powerful tool for magneto-optical studies of microcavities, which has not been explored so far. The advantage of the technique used in this work is its tunability and reconfigurability. The efficient manipulating of spins with a magnetic field in systems with arbitrarily arranged pump geometry opens up new possibilities for a wide class of magnetically controlled polariton devices. The single optical trap system presented in this study is the building block, which can be adapted to complex polariton systems, such as polariton lattices or coupled trapped condensates. As an example, a natural next step could be investigating magnetic manipulation of synchronization and emission properties between spatially coupled [54; 55] or time modulated [33] optical traps.
## Appendix A Estimation of the excitonic Lande \(g\)-factor and diamagnetic coefficient
Here we present the data used to determine \(g_{\mathrm{X}}\), the exciton Lande \(g\)-factor, and \(\gamma_{\mathrm{dia}}\), the exciton diamagnetic shift parameter, in Eq. (3). Figure 8(a) shows polarization resolved exciton line (below threshold) and 8(b) shows the peak value of each spectral component as a function of magnetic field. Solid lines are the fit using
Figure 7: Inversion of the Zeeman splitting as predicted by our analytical model (20). Condensation threshold for spin-up and spin-down polaritons (18) are indicated by the dot-dashed and just dashed curves, respectively. Regions of different condensate polarization are indicated. Parameters are given in Appendix C.
Eq. (3) giving \(g_{\rm X}=-0.364\), in good agreement for 6 nm In\({}_{0.08}\)Ga\({}_{0.92}\)As QWs [38], and \(\gamma_{\rm dia}=0.117\) meV T\({}^{-2}\).
## Appendix B Dependence of reservoir spin relaxation on magnetic field
In this section, we determine the function \(\eta(B)\) in Eq. (11) which describes how the magnetic field modifies the exciton spin relaxation rate \(\Gamma_{s}\). This effect leads to different steady state populations of the reservoir exciton spins under linearly polarized pumping \(\theta=\pi/4\) below condensation threshold (\(|\psi_{\pm}|^{2}=0\)). Consequently, it contributes to the different blueshifts experienced by the trapped polariton spins when a magnetic field is present [see the term proportional to \(G_{\pm}\) in Eq. (8)].
In order to estimate the modified spin relaxation rates of the excitons we scrutinize the degree of circular polarization (DCP) of the emitted photons below threshold. Assuming that most of the below-threshold emission is coming from reservoir excitons in the bottleneck region we can define the below-threshold DCP in terms of the reservoir densities \(n_{\pm}\) in a standard way,
\[{\rm DCP}=\frac{n_{+}-n_{-}}{n_{+}+n_{-}}. \tag{20}\]
The steady states of \(n_{\pm}\) below the threshold can be trivially obtained from Eq. (9). We can fit the above equation to our experimental data (see Fig. 9) using a simple polynomial model for the change in spin relaxation,
\[\eta(B)=\eta_{1}B+\eta_{3}B^{3}, \tag{21}\]
where \(\eta_{1}\) and \(\eta_{3}\) are fitting parameters.
## Appendix C Parameters of simulations
Parameters to obtain the result presented in Fig. 4(b): \(\theta=\pi/4\), \(\tau=0.540\), \(\xi=45\), \(E_{\rm ph}=0\), \(E_{\rm X,0}=1.9\) meV, \(E_{\rm XY}=0.8\) \(\mu\)eV, \(\Omega_{0}=4\) meV, \(u_{\rm X}=1.7\mu\)eV, \(R=4.9\times 10^{-3}\) ps\({}^{-1}\), \(\gamma^{-1}=5.5\) ps, \(\Gamma=\Gamma_{\rm s}=5\times 10^{-2}\) ps\({}^{-1}\), \(W=0.5\Gamma\), \(a_{0}=10\) nm, \(\eta_{1}=3.2\times 10^{-4}\) ps\({}^{-1}\) T\({}^{-1}\), and \(\eta_{3}=1.0\times 10^{-5}\) ps\({}^{-1}\) T\({}^{-3}\).
Parameters to obtain the result presented in Fig. 5(b) and Fig. 7 are the same as in Fig. 4(b) except: \(\tau=0.755\), \(\xi=25\), \(R=1.55\times 10^{-3}\) ps\({}^{-1}\) and \(E_{\rm XY}=0\).
## Acknowledgments
This work was supported by the European Union Horizon 2020 program, through a Future and Emerging Technologies (FET) Open research and innovation action under Grant Agreement No. 964770 (TopoLight) and No. 899141 (PoLLoC). Y.W.'s studentship was financed by the Royal Society, Grant No. RGF\(\backslash\)EA\(\backslash\)180062. H.S. acknowledges the project No. 2022/45/P/ST3/00467 co-funded by the Polish National Science Centre and the European Union Framework Programme for Research and Innovation Horizon 2020 under the Marie Sklodowska-Curie grant agreement No. 945339.
Figure 8: (a) PL spectra of the emission from excitonic level at an increasing magnetic field. With the magnetic field, the Zeeman splitting value increases, reaching \(\sim 0.21\) meV at 5T. (b) Shift of exciton energies with applied magnetic field for \(\sigma^{+}\) (red) and \(\sigma^{-}\) (blue) polarization of detection; solid lines show fits using Eq. (8).
Figure 9: Degree of circular polarization below the threshold as function of magnetic field. |
2306.14928 | Simulating the phase behavior of the Kuramoto tree | The Kuramoto model is a versatile mathematical framework that explains
phenomena resulting from interactions among phase oscillators. It finds
applications in various scientific and engineering domains. In this study, we
focused on a Y-shaped network, which serves as the fundamental unit of a tree
network. By simulating oscillators on the network, we generated heat maps for
different numbers of nodes and coupling strengths and demonstrated the
occurrence of different phases. Our findings reveal transitions between
synchronization, wave state, and chaos within the system. | Mohammad Javad Nouhi, Javad Noorbakhsh | 2023-06-24T16:31:41Z | http://arxiv.org/abs/2306.14928v1 | ## Simulating the phase behavior of the Kuramoto tree
### Abstract:
The Kuramoto model is a versatile mathematical framework that explains phenomena resulting from interactions among phase oscillators. It finds applications in various scientific and engineering domains. In this study, we focused on a Y-shaped network, which serves as the fundamental unit of a tree network. By simulating oscillators on the network, we generated heat maps for different numbers of nodes and coupling strengths and demonstrated the occurrence of different phases. Our findings reveal transitions between synchronization, wave state, and chaos within the system.
### Introduction:
The Kuramoto model is a mathematical model used to simulate oscillatory behavior in a complex system, and is based on a system of coupled differential equations that describe the behavior of a network of phase oscillators. This model has been applied to a wide range of problems in biology, physics, electrical engineering, and a myriad of other fields [1]; including but not limited to the regulation of circadian rhythms in the brain [2], oscillatory behavior in gene regulatory networks [3][4], frequency synchronizations across a network of generators on a power grid [5], and synchronized flashing of firefly populations [6].
The dynamics and phases of Kuramoto oscillators on networks have been studied extensively [7, 8, 9], however, such studies mostly focus on fully and randomly connected networks. Some authors have explored other network structures such as trees. For example Dekkar, et al [10] provide a comprehensive study of Kuramoto oscillators on a tree in the synchronized regime, but overall the topic remains less explored due to a lack of closed-form equations. Nevertheless, these structures can arise in many real-life systems where three or more populations of oscillators interact. |
2308.15142 | A Multimodal Visual Encoding Model Aided by Introducing Verbal Semantic
Information | Biological research has revealed that the verbal semantic information in the
brain cortex, as an additional source, participates in nonverbal semantic
tasks, such as visual encoding. However, previous visual encoding models did
not incorporate verbal semantic information, contradicting this biological
finding. This paper proposes a multimodal visual information encoding network
model based on stimulus images and associated textual information in response
to this issue. Our visual information encoding network model takes stimulus
images as input and leverages textual information generated by a text-image
generation model as verbal semantic information. This approach injects new
information into the visual encoding model. Subsequently, a Transformer network
aligns image and text feature information, creating a multimodal feature space.
A convolutional network then maps from this multimodal feature space to voxel
space, constructing the multimodal visual information encoding network model.
Experimental results demonstrate that the proposed multimodal visual
information encoding network model outperforms previous models under the exact
training cost. In voxel prediction of the left hemisphere of subject 1's brain,
the performance improves by approximately 15.87%, while in the right
hemisphere, the performance improves by about 4.6%. The multimodal visual
encoding network model exhibits superior encoding performance. Additionally,
ablation experiments indicate that our proposed model better simulates the
brain's visual information processing. | Shuxiao Ma, Linyuan Wang, Bin Yan | 2023-08-29T09:21:48Z | http://arxiv.org/abs/2308.15142v1 | # A Multimodal Visual Encoding Model Aided by Introducing Verbal Semantic Information
###### Abstract
Biological research has revealed that the verbal semantic information in the brain cortex, as an additional source, participates in nonverbal semantic tasks, such as visual encoding. However, previous visual encoding models did not incorporate verbal semantic information, contradicting this biological finding. This paper proposes a multimodal visual information encoding network model based on stimulus images and associated textual information in response to this issue. Our visual information encoding network model takes stimulus images as input and leverages textual information generated by a text-image generation model as verbal semantic information. This approach injects new information into the visual encoding model. Subsequently, a Transformer network aligns image and text feature information, creating a multimodal feature space. A convolutional network then maps from this multimodal feature space to voxel space, constructing the multimodal visual information encoding network model. Experimental results demonstrate that the proposed multimodal visual information encoding network model outperforms previous models under the exact training cost. In voxel prediction of the left hemisphere of subject 1's brain, the performance improves by approximately 15.87%, while in the right hemisphere, the performance improves by about 4.6%. The multimodal visual encoding network model exhibits superior encoding performance. Additionally, ablation experiments indicate that our proposed model better simulates the brain's visual information processing.
fMRI, multimodal network, Transformer, visual information encoding +
Footnote †: journal: Computer Vision
## 1 Introduction
Vision, the primary source for humans to gather external information, is crucial to understanding how the human brain processes visual information. A critical approach to comprehending brain mechanisms is visual information
encoding models, which simulate how the human visual cortex processes information to predict the response changes of different voxels under various external stimuli (Kay et al., 2008; Naselaris et al., 2011). Investigating neural encoding of visual information is essential in unraveling the brain's visual processing mechanisms and enhancing artificial visual models' perceptual and cognitive abilities.
In the philosophy of science, the collective human knowledge and perspectives on objects, attributes, and actions are called verbal information (Ivanova, n.d.). Through the visual system, humans can perceive various objects and actions in the natural world. Moreover, they can communicate and reason about these verbal categories through language and text. This suggests a connection between semantic information obtained through visual sensory input and language and text (Barsalou, 1999; Damasio, 1989; Ralph et al., 2017). Recent studies have revealed that verbal information is not solely localized to specific brain regions but distributed across the entire cortex (Anderson et al., 2016; Huth et al., 2016; Pereira et al., 2018; Xu et al., 2018). A graduate thesis from the Massachusetts Institute of Technology in 2022 (Ivanova, n.d.) explicitly mentioned that the brain's language regions are activated for nonverbal semantic tasks, such as image tasks. Researchers propose that in individuals with normal neural development, the brain may reencode the stimulus components of images into verbal forms as an additional source of task-relevant information (Connell and Lynott, 2013; Greene and Fei-Fei, 2014; Trueswell and Papafragou, 2010). As part of a control experiment, the researchers also conducted similar experiments with individuals with global aphasia. The results indicated that despite severe language impairments, these subjects could still participate in the experiment, albeit with lower efficiency than regular subjects. This indirectly confirms that the brain's cortical areas in regular subjects tend to transform image stimulus components into linguistic forms as an additional source of information (Ivanova, n.d.). Indeed, text-based computational models developed in recent years have successfully executed a wide range of " verbal " tasks, such as reasoning, paraphrasing, and question-answering tasks (Bao et al., 2022; Brown et al., 2020). In summary, text-based verbal information can provide a more comprehensive source of information for visual encoding within the brain's visual cortex, greatly enriching the
"database" of visual information encoding models.
In the field of visual information encoding, whether it has the circular symmetric Gaussian difference-of-Gaussians (DoG) model improved upon by Zuiderbaan et al. in 2012(Zuiderbaan et al., 2012), the Bayesian population receptive field estimation model proposed by Zeidman et al. in 2018(Zeidman et al., 2018), or the "What" and "Where" models introduced by Wang et al. in 2021(Wang et al., 2021) based on hierarchical deep features of receptive fields, these models have undergone diverse improvements at various levels. However, the original input source for these models remains a single stimulus image.
In response to the current state of single-input visual information encoding models and inspired by the benefits of verbals in nonverbal tasks observed in modern biology, we propose a multi-modal visual information encoding model based on both stimulus image features and their related textual semantic features. Our model differs from traditional visual information encoding models in that while using the stimulus image features as the information source, we additionally introduce verbal information related to the stimulus image as linguistic semantic features. Subsequently, our model employs a Vision Transformer (Dosovitskiy et al., 2021) network to align image and text features, forming a multi-modal feature space. A CNN network then processes this space to predict the visual cortical voxel space, completing the mapping from stimulus image to visual cortical voxel.
Our work presents three key innovations:
1. The multi-modal visual information encoding model, for the first time, introduces textual semantic features as an additional information source to the encoding model instead of relying solely on stimulus images. This design closely resembles the processing pattern of visual information in the brain's visual cortical regions.
2. We utilize the Vision Transformer model to process multi-modal feature information, achieving alignment between features from different modalities.
3. Through cross-validation, we demonstrate significant performance improvements in our model compared to previous encoding models.
## 2 Methods
From the architectural perspective, the Multi-Modal Visual Information Encoding Model is an end-to-end framework. Its input consists of stimulus images and their corresponding verbal, while the output is the predicted voxel values generated by the encoding model. The multi-modal model comprises three components:
1. Graphical and Textual Feature Extraction Component: This component involves extracting graphical and textual features.
2. Multi-Modal Information Interaction Component: This part involves the interaction between graphical and textual features. It is a crucial step for integrating information from both modalities.
3. Feature-Voxel Mapping Component: This component is common in traditional encoding tasks and maps features to voxel responses.
Following a single-stream approach, the Multi-Modal Visual Information Encoding Model employs the Vision Transformer (ViT) as the backbone network. ViT-B/32 model is utilized, with a hidden size of 768, 12 layers, patch size 32, 3072 MLP size, and 12 self-attention heads.
Regarding textual features, diverse types of textual information are considered. Different pre-trained models, including BERT and GPT-2, are employed to process distinct textual features. These models are referred to as Textual Feature Extraction Models.
The information processing flow of the multi-modal model is as follows: First of all, The input stimulus image \(I_{img}\in\mathbb{R}^{\mathbb{C}\wedge H\wedge W}\) is sliced and flattened into a vector \(v_{img}\in\mathbb{R}^{N\left(P^{2}\cdot C\right)}\), where \(N\) is \(HW/P^{2}\). Followed by linear projection \(V\in\mathbb{R}^{\left(P^{2}\cdot C\right)\wedge H}\) and position embedding \(V^{pos}\in\mathbb{R}^{\left(N+1\right)\wedge H}\), \(v_{img}\) is embedded into \(\overline{v}_{img}\in\mathbb{R}^{N\wedge H}\). For textual information, due to varying lengths of descriptive text between different images, we adopt a fixed-length approach to constrain the length of textual information. We use an empirical value of L=256 for the text length. The input text \(t_{text}\in\mathbb{R}^{L\left\backslash\mathcal{H}\right\rvert}\) is embedded to \(\overline{t}_{text}\in\mathbb{R}^{L\times H}\) with a word embedding matrix \(T\in\mathbb{R}^{\left\backslash\mathcal{H}\right\rangle}\) and a position embedding matrix \(T^{pos}\in\mathbb{R}^{\left(L+1\right)\wedge H}\). The text (\(\overline{v}_{img}\in\mathbb{R}^{N\wedge H}\)) and image (\(\overline{t}_{text}\in\mathbb{R}^{L\times H}\)) embeddings are summed with their corresponding modal-type embedding vectors \(t^{spe},\nu^{spe}\in\mathbb{R}^{H}\), then are concatenated into a combined sequence \(\zeta^{0}\). The contextualized vector \(z\) is iteratively updated through D-depth transformer layers up until the final contextualized sequence \(z^{D}\). \(p\) is a pooled representation of the whole
multimodal input, and is obtained by applying linear projection \(W_{\textit{pad}}\in\mathbb{R}^{H\times H}\) and hyperbolic tangent upon the first index of sequence \(Z^{D}\). Subsequently, we pass the vector \(z^{D}\) through a CNN convolutional network for feature reduction. Then, we use FC layers to complete the mapping from multi-modal feature information to voxel responses. This process generates the model's predicted voxel values. The overall architecture diagram of the multi-modal visual information encoding model is illustrated below:
### Train Objectives
Researchers commonly employ the Pearson correlation coefficient as an
Figure 1: The overall architecture diagram of the multi-modal visual information encoding model. Figure 1A is the graphical and verbal feature extraction component; Figure 1B is the feature-voxel mapping component.
evaluation metric for the visual encoding model. Therefore, to closely align with the final evaluation metric in this experiment, we also utilize the Pearson correlation coefficient as the loss function.
The formula for the Pearson correlation coefficient is as follows:
\[R_{v}=\text{corr}(G_{v},P_{v})=\] \[=\frac{\sum_{t}\bigl{(}G_{v,t}-\bar{G}_{v}\bigr{)}\bigl{(}P_{v,t}- \bar{P}_{v}\bigr{)}}{\sqrt{\sum_{t}\big{(}G_{v,t}-\bar{G}_{v}\big{)}^{2}\sum_{t }\big{(}P_{v,t}-\bar{P}_{v}\big{)}^{2}}},\]
where \(v\) is the index of vertices (over all subjects and hemispheres), \(t\) is the index of the test stimuli images, \(G\) and \(P\) correspond to, respectively, the ground truth and predicted fMRI test data, \(\bar{G}\) and \(\bar{P}\) are the ground truth and predicted fMRI test data averaged across test stimuli images, \(R\) is the Pearson correlation coefficient between \(G\) and \(P\).
## 3 Experiments
### Dataset
This experiment is based on the largest and richest dataset of fMRI responses to natural scenes, the Natural Scenes Dataset (NSD). Please visit the NSD website for more details. Briefly, NSD provides data acquired from a 7-Tesla fMRI scanner over 30-40 sessions during which each subject viewed 9,000-10,000 color natural scenes (22,000-30,000 trials)(Allen et al., 2022). We analyzed data for four of the eight subjects who completed all imaging sessions (subj01, subj02). The images used in the NSD experiments were retrieved from MS COCO and cropped to 224*224. The training sets for these two subjects each contain 9841 images. The test sets also consist of 159 images, respectively.
The fMRI data is z-scored at each NSD scan session and averaged across image repeats, resulting in 2D arrays with the amount of images as rows and as columns a selection of the vertices that showed reliable responses to images during the NSD experiment. The left (LH) and right (RH) hemisphere files consist of, respectively, 19,004 and 20,544 vertices, with the exception of subjects 6 (18,978 LH and 20,220 RH vertices) and 8 (18,981 LH and 20,530 RH vertices) due to missing data.
Previous research has revealed that the visual cortex is divided into multiple distinct areas having different functional properties, referred to here
as regions-of-interest (ROIs). Some of those are functionally defined, for example by their preferred response to a particular category such as faces or houses. Others are defined by anatomical criteria. Following is the list of ROIs (ROI class file names in parenthesis):
Early retinotopic visual regions (prf-visualrois): V1v, V1d, V2v, V2d, V3v, V3d, hV4.
Body-selective regions (floc-bodies): EBA, FBA-1, FBA-2, mTL-bodies.
Face-selective regions (floc-faces): OFA, FFA-1, FFA-2, mTL-faces, aTL-faces.
Place-selective regions (floc-places): OPA, PPA, RSC.
Word-selective regions (floc-words): OWFA, VWFA-1, VWFA-2, mfs-words, mTL-words.
Anatomical streams (streams): early, midventral, midlateral, midparietal, ventral, lateral, parietal.
3.2 Implementation Details
For all experiments, we use AdamW(Loshchilov & Hutter, 2019) optimizer with base learning rate of \(\leavevmode\ 10^{4}\)and weight decay of \(\leavevmode\ 10^{2}\). The learning rate is set to decay by 0.8 every 5 epochs to ensure the model learns more effectively. We resized the input images to 224x224 while maintaining the aspect ratio. For the ViT-B/32 model, this generates 49 image patches. To ensure accuracy in our experiments, we employed a 5-fold cross-validation approach throughout our study.
The stimulus images in the NSD dataset are, in fact, a subset of the Microsoft COCO dataset. Each image in the COCO dataset is accompanied by five text descriptions, which multiple annotators independently provide. Recognizing the diversity of these text descriptions, we employed the CLIP model in our experiments to select the best-matching textual information for each natural image from the five available descriptions. This selected text information was used as the relevant textual information for our stimulus images in this study.
We utilized pre-trained models from the Hugging Face repository to achieve better initialization of model parameters and reduce the training cost. Specifically, we employed the publicly available BERT model for our study.
3.3 Experimental Results
Due to the large size of the NSD dataset, to enhance experimental efficiency and ensure result accuracy, we randomly selected data from both hemispheres of subjects 1 and 2, resulting in a total of 4 experimental data sets. We performed 5-fold cross-validation to validate the experimental data.
Table 1 presents the predictions of the four datasets in various ROIs using our model. It is worth noting that there are significant variations in predicted values among different voxels within each ROI. To mitigate the impact of extreme values, in this experiment, we use the median value of all voxels within each ROI to showcase our model's voxel predictions.
Figures 2A, 2B, 2C, and 2D depict the results of the four datasets using both the traditional encoding model (with only stimulus images as input, indicated by blue markers) and our multimodal model (indicated by red markers).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Subject 1} & \multicolumn{2}{c|}{Subject 2} \\ \cline{2-5} & LH & RH & LH & RH \\ \hline Early & 0.142042 & 0.239547 & 0.189492 & 0.289508 \\ \hline Midventral & 0.192652 & 0.232347 & 0.265559 & 0.298260 \\ \hline Midlateral & 0.202285 & 0.256350 & 0.257363 & 0.303731 \\ \hline Midparietal & 0.296695 & 0.245398 & 0.178395 & 0.280645 \\ \hline Ventral & 0.304879 & 0.233720 & 0.416443 & 0.288365 \\ \hline Lateral & 0.418475 & 0.261894 & 0.373886 & 0.325633 \\ \hline Parietal & 0.328863 & 0.262444 & 0.268750 & 0.309305 \\ \hline All vertices & 0.262446 & 0.246072 & 0.290660 & 0.299426 \\ \hline \end{tabular}
\end{table}
Table 1: Median voxel predictions across various ROIs for the four datasets are as follows:
Comparing Figure 2A and Figure 2C, we observe that in both the left hemisphere data of Subject 1 and Subject 2, our model demonstrates improved encoding accuracy compared to the traditional encoding model that employs a single stimulus image as input source. Our model exhibits significantly better encoding accuracy starting from higher-level regions like hV4. For instance, in the PPA and RSC regions, the encoding accuracy is improved by 73.35% and
Figure 2: The comparison of encoding performance between our model and the traditional model. Figures 2A and 2B respectively illustrate the comparison of encoding performance between our model and the traditional model in the left and right hemispheres of subject 1; Figures 2C and 2D respectively illustrate the comparison of encoding performance between our model and the traditional model in the left and right hemispheres of subject 2.
64.24%, respectively. However, in the lower-level visual areas V1-V3, the traditional encoding model has a slight advantage in Subject 1's left hemisphere, while this advantage is not prominent in Subject 2's left hemisphere.
Upon comparing Figure 2B and Figure 2D, it is evident that the performance of both models remains consistent in the right hemisphere data of both Subjects. Specifically, our model consistently outperforms the traditional model regarding encoding performance.
### Ablation Study
Building upon the experiments in Section 3.3, we conducted additional ablation experiments to illustrate further the advantages of textual features in the multi-modal network model.
In Section 3.4, the textual information used was derived from the original COCO dataset's descriptions. However, human-generated descriptions can vary significantly due to individual interpretations of stimulus images. We employed a text-image generation model to mitigate this variability to generate textual information for the stimulus images. This text-image generation model is based on the pre-trained ViT-GPT2 model in the Hugging Face community. In this setup, ViT serves as the image decoder, and GPT2 is utilized for textual encoding. For a detailed description of the ViT-GPT2 model, please refer to this website: [https://huggingface.co/nlpconnect/vit-gpt2-image-captioning](https://huggingface.co/nlpconnect/vit-gpt2-image-captioning).
From Figure 3A, 3B, 3C and 3D, it is evident that the four sets of test data show significant improvement in the text-image generation model. This observation suggests that the performance of our encoding model is reliant on the quality of the textual information.
\begin{table}
\begin{tabular}{c
In Table 2, we randomly selected 4 images from the training dataset. We displayed the best-matched descriptions from COCO's description using the CLIP model and the descriptions generated using our ViT-GPT2 model. The descriptions generated by ViT-GPT2 are more specific and provide more straightforward descriptions of the main objects in the images.
As mentioned earlier, text information is considered an additional source that participates in non-verbal semantic tasks in the visual cortex. Similarly, for our visual encoding task, is text information also an "additional" source of information? To address this question, we retrained the encoding model using only stimulus images and extended the training iterations appropriately.
Figure 4A
Figure 4B
In this ablation experiment, we focused on using only stimulus images as input information sources. We adjusted the number of epochs from 30 to 120 while keeping other experimental parameters and model hyperparameters unchanged. In Figure 4, we observe the comparison between data from the left and right brain of Subject 1. In Figure 4A, the performance of the 120 Epochs experiment is dominant in the early stages of the visual coding, up to the hV4 region, and shows a reversed trend beyond the hV4 region. The performance tends to align in the "All Vertices" category. This trend is even more pronounced in the right brain data of Subject 1, where the performance of 120 Epochs and the multi-modal model becomes consistent.
Through this ablation experiment, we confirm that textual features as non-linguistic semantic features in visual coding models are indeed extra information from linguistic semantic tasks. Suppose we do not introduce this kind of additional information in the laboratory, although we can achieve a certain level of coding effect by extending the training time. In that case, it increases the time and training cost. By incorporating textual information as an additional information source, we can significantly reduce training costs and time, enabling coding models with equivalent parameter settings to achieve
better performance at lower training costs.
## 4 Conclusion and Future Work
In this paper, inspired by fundamental biological research highlighting the involvement of text semantic information as an additional source in visual information processing, we introduce a multi-modal visual information encoding network model based on both image feature information and verbal semantic information. This model thoroughly considers the role of textual information in the brain's visual processing. We align verbal feature information with image data through Transformer networks, integrating text semantic information as an extra information source into the visual encoding model. This approach allows the model to closely mimic the brain's visual information processing pattern.
Experimental results demonstrate that our proposed multi-modal network model outperforms traditional single-modal models in terms of performance, even at the same training cost. Crucially, our model presents a novel encoding paradigm for the future of visual coding. By introducing aligned textual feature information, we enhance the prominence of text semantic information in the visual encoding model. This encourages the model to achieve better encoding performance with lower training costs. This contribution opens up a new avenue for the field of visual coding, suggesting that incorporating aligned textual feature information could greatly improve encoding performance while minimizing training expenses. This could enable a broader and more comprehensive understanding of this field.
|
2304.07693 | Translating Simulation Images to X-ray Images via Multi-Scale Semantic
Matching | Endovascular intervention training is increasingly being conducted in virtual
simulators. However, transferring the experience from endovascular simulators
to the real world remains an open problem. The key challenge is the virtual
environments are usually not realistically simulated, especially the simulation
images. In this paper, we propose a new method to translate simulation images
from an endovascular simulator to X-ray images. Previous image-to-image
translation methods often focus on visual effects and neglect structure
information, which is critical for medical images. To address this gap, we
propose a new method that utilizes multi-scale semantic matching. We apply
self-domain semantic matching to ensure that the input image and the generated
image have the same positional semantic relationships. We further apply
cross-domain matching to eliminate the effects of different styles. The
intensive experiment shows that our method generates realistic X-ray images and
outperforms other state-of-the-art approaches by a large margin. We also
collect a new large-scale dataset to serve as the new benchmark for this task.
Our source code and dataset will be made publicly available. | Jingxuan Kang, Tudor Jianu, Baoru Huang, Binod Bhattarai, Ngan Le, Frans Coenen, Anh Nguyen | 2023-04-16T04:49:46Z | http://arxiv.org/abs/2304.07693v1 | # Translating Simulation Images to X-ray Images via Multi-Scale Semantic Matching
###### Abstract
Endovascular intervention training is increasingly being conducted in virtual simulators. However, transferring the experience from endovascular simulators to the real world remains an open problem. The key challenge is the virtual environments are usually not realistically simulated, especially the simulation images. In this paper, we propose a new method to translate simulation images from an endovascular simulator to X-ray images. Previous image-to-image translation methods often focus on visual effects and neglect structure information, which is critical for medical images. To address this gap, we propose a new method that utilizes multi-scale semantic matching. We apply self-domain semantic matching to ensure that the input image and the generated image have the same positional semantic relationships. We further apply cross-domain matching to eliminate the effects of different styles. The intensive experiment shows that our method generates realistic X-ray images and outperforms other state-of-the-art approaches by a large margin. We also collect a new large-scale dataset to serve as the new benchmark for this task. Our source code and dataset will be made publicly available.
Keywords:Sim2Xray GAN Interventional Simulation Systems
## 1 Introduction
Image-to-image translation involves converting an image into a different modality or style [11, 37, 38]. In medical imaging, this task is related to the translation between various medical image modalities, such as MRI to X-ray [29], MRI to CT [23], or between MRI modalities [6]. Medical image translation is challenging due to the need for preserving semantic and structural information, as well as the details during the translation process. In practice, medical images often share similarities, with only minor differences. Effective translation methods can significantly aid medical training [3, 35], surgical planning [28], or sim-to-real learning [24]. However, challenges such as data privacy and incompleteness hinder medical image transfer, while deep learning algorithms require extensive data, compounding these issues [32].
The recent development of surgical simulators [16; 30] facilitates the acquisition of medical skills of aspiring surgeons. Compared to real-world setup, training learning algorithms in simulation is inexpensive and expeditious [21; 33]. However, most of the current medical simulators consider gray-scale as X-ray images. This assumption causes a challenging problem when we apply the learned knowledge from the medial simulators to the operating theater [8]. To bridge the gap between simulation images from medical simulators and real medical images, several works have proposed GAN-based methods for medical image translation through adversarial training [5; 10; 23]. However, these methods usually have the collapsed pattern problem or fail to yield valuable results [17].
In this paper, we propose a simple, yet effective method translate _unpaired simulation images_ from an endovascular simulator [16] into X-ray images. Unlike previous works that focus on X-ray images with clear and static human body parts (e.g., X-ray images of the hand) [13], our input are endovascular simulation images which contain dynamic motion of the catheter [22; 20]. Therefore, we need to learn both the "style" information of the real X-ray image, while maintaining the _structure of the input_ (e.g., the position of the catheter). To this end, we introduce a multi-scale domain matching method to learn both the style and preserve the structure information during the translation. As shown in Fig.1, our model archives realistic results that are almost indistinguishable between the translated image and the real X-ray image. Furthermore, our model's simple architecture enables rapid training and inference, making it well-suitable for real-time endovascular simulators. Additionally, we introduce a new and challenging dataset of unpaired images, consisting of \(1,607\) real X-ray images and \(2,000\) simulation images. This dataset is essential for developing and evaluating robust image translation models, particularly in the under-studied task of translating simulation images to real X-ray images.
Figure 1: We present a new method to translate the images from an endovascular simulator to X-ray images. Our method preserves the _structure information of the input simulation image_ and learns the _“X-ray style” from the real X-ray image_. (a) input image (b) our generated image, (c) an example of a real X-ray image.
## 2 Related Work
### Image to Image Translation
Numerous works have focused on conditional GANs [2, 10, 36] for image-to-image translation since the introduction of GANs [12]. CycleGAN [38] proposed a cycle consistency loss to constrain the model, allowing it to translate an image back to the original domain after being translated to the target domain. Another well-known work was GcGAN [11]. The limitation of CycleGAN and GcGAN was the lack of explicit constraints on the conversion process, which may admit multiple solutions and thus fail to meet the requirements of medical image translation. DistanceGAN [4] solved the model collapse problem, but it did not impose any constraints on semantic information. FLSeSim [37] used self-similarity to define the structure of the scene, but its strong constraints make it difficult to train and risk the loss of useful semantic information.
### Medical Image Translation
An early GAN-based implementation of medical image translation was introduced in [10] for synthesizing different classes of lesion patches in liver CT images [27]. Furthermore, given that CT imaging puts patients at risk of cellular damage and radiation-induced cancer, CAGAN [23] implemented a pixel-by-pixel reconstruction loss and an image gradient loss to synthesize CT images from MR images. Nevertheless, it required one-to-one paired data for training. The subsequent Deep MR to CT Synthesis [34] used unpaired data and achieved acceptable results. MedGAN [3] utilized a discriminator network as a trainable feature extractor to penalize differences between the translated medical image and the desired modality. A stylistic transfer loss was used to match the texture and fine structure of the desired target image to the translated image. Based on the theory of loss correction [26], RegGAN [19] assumed that aligned data could be treated as noisy labels, and that an additional alignment network on the generator could adaptively fit this noisy distribution. Compared to the above works, we directly translate simulation images into X-ray images without the need for paired data.
Many methods have been proposed to translate RGB images directly to medical images [1, 13]. Pix2xray [13] utilized CGANs to generate synthetic X-rays. However, obtaining the required dataset for pix2xray is time-consuming as it necessitates RGB images, pose images, and X-ray images. Our proposed method, on the other hand, only requires simulation images that do not need to be paired with real X-ray images. In addition to pix2xray, other approaches such as GDR [31] used domain randomization to synthesize realistic images. The authors in [1] proposed a method for Cardiac MRI simulation-to-real translation using unsupervised GAN.
## 3 Method
Given a collection of simulation images \(\mathcal{X}\) and real X-ray images \(\mathcal{Y}\), our goal is to find a generator \(\mathcal{G}\) mapping the \(\mathcal{X}\) domain to the \(\mathcal{Y}\) domain, denoted as \(\mathcal{G}:\mathcal{X}\rightarrow\mathcal{Y}\). The translated result is \(\hat{\mathbf{y}}=\mathcal{G}(\mathbf{x})\), \(\mathbf{x}\in\mathcal{X}\). We aim to convert unpaired simulation images into X-ray images. Because the details of the input simulation images must be kept, we need to maintain the semantic and structural information during the translation process while changing the style of the input simulation image to the style of the X-ray image.
As shown in Fig. 2, we input a simulation image \(\mathbf{x}\in\mathcal{X},\mathbf{x}\in\mathds{R}^{\mathrm{H}\times\mathrm{W}\times\mathrm{C}}\) and use adversarial training to generate an X-ray image \(\hat{\mathbf{y}}\). The pre-trained ViT network [9] extracts high-level structural features by splitting the image into patches that act as tokens. Our multi-scale semantic matching approach maps and learns structural information between the input and output by simultaneously matching a query token to all tokens at other positions of the image. The final semantic matching result is a weighted average of the matches from different blocks. A single multilayer perceptron (MLP) that takes the features from the ViT is used as the discriminator \(\mathcal{D}\) to classify the fake and the real input.
Figure 2: An overview of our framework. We feed the simulation images to the Generator to obtain the translated images, and two images enter the same pre-trained ViT network. The features are extracted from the intermediate blocks of the ViT. We match the self-domain and cross-domain respectively to maintain the structure information of the input and learn the style of the X-ray image. Finally, a discriminator is used to classify the fake and real images.
### Multi-Scale Semantic Matching
The adversarial training can learn domain mapping but may generate random permutations of the target domain [17], hence changing the structure of the input image. To address this, we propose Multi-Scale Semantic Matching to maintain semantic structure relationships and reduce the impact of diverse styles during adversarial training.
**Feature Extractor.** We extract features from simulation images using a pre-trained ViT model [9] that divides images into patches. Each patch is treated as a query token, with all patches serving as key tokens. We select the output of multiple intermediate blocks and match them separately between domains. In practice, we find that the transformer-based ViT is particularly well-suited to the simulation images in our problem, which mainly feature catheters and guidewires that span the entire image but occupy a relatively small number of pixels [14].
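To make the extractor concrete, the sketch below taps patch tokens from several intermediate blocks of a pre-trained ViT using forward hooks. This is a minimal illustration under our own assumptions: the use of `timm`, the `vit_base_patch16_224` variant, and the tapped block indices are not taken from the paper.

```python
import torch
import timm  # assumed source of a pre-trained ViT; the paper uses the ViT of [9]

vit = timm.create_model("vit_base_patch16_224", pretrained=True)
for p in vit.parameters():
    p.requires_grad_(False)  # frozen extractor; gradients can still reach the generator

features = []  # one (B, n, d) token tensor per tapped block

def save_tokens(module, inputs, output):
    features.append(output[:, 1:, :])  # drop the class token, keep the n patch tokens

block_ids = [3, 7, 11]  # which intermediate blocks to tap is an assumption
hooks = [vit.blocks[i].register_forward_hook(save_tokens) for i in block_ids]

images = torch.randn(2, 3, 224, 224)  # stand-in for a batch of (generated) images
features.clear()
_ = vit(images)  # `features` now holds the multi-scale tokens used for matching
```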
**Multi-Scale Self-Domain Matching.** We maintain semantic information on multiple intermediate blocks of ViT [9] to keep the structure information at different scales. To make the simulator image and the generated X-ray image have the same structure relationship, we perform the matching between all tokens in \(\mathbf{x}\) and \(\hat{\mathbf{y}}\), respectively. We call it _self-domain_ matching as the tokens are matched _within the same image_. We use \(\mathbf{s}_{i}\) to denote the i-th token in \(\mathbf{x}\), and \(\mathbf{s}_{*}\) to denote all the tokens in \(\mathbf{x}\). Similarly, we denote \(\mathbf{t}_{i}\) as the i-th token in the \(\hat{\mathbf{y}}=\mathcal{G}(\mathbf{x})\), and \(\mathbf{t}_{*}\) to denote all the tokens in \(\hat{\mathbf{y}}\). Each token can also be matched with itself. We formulate this process as follows:
\[\mathbf{v}_{s_{i}}=\mathbf{s}_{i}\cdot\mathbf{s}_{*} \tag{1}\]
\[\mathbf{v}_{t_{i}}=\mathbf{t}_{i}\cdot\mathbf{t}_{*} \tag{2}\]
where \(\mathbf{s}_{i},\mathbf{t}_{i}\in\mathds{R}^{d}\), and \(\mathbf{s}_{*},\mathbf{t}_{*}\in\mathds{R}^{n\times d}\). \(n\) represents the number of tokens. We match the query token to all tokens to obtain a vector \(\mathbf{v}\in\mathds{R}^{n}\). We repeat this process for each query token in the input image to obtain the matrix that contains the semantic relationships between all tokens in the image. We express the matrices as \(\mathbf{X}_{\mathrm{self}}=[\mathbf{v}_{s_{1}},\mathbf{v}_{s_{2}},...,\mathbf{v}_{s_{n}}]\), \(\mathbf{\hat{Y}}_{\mathrm{self}}=[\mathbf{v}_{t_{1}},\mathbf{v}_{t_{2}},...,\mathbf{v}_{t_{n}}]\). The aim of this process is to achieve semantic alignment between the simulation image and the X-ray image, which is achieved by minimizing the distance between the two matrices \(\mathbf{X}_{\mathrm{self}}\) and \(\mathbf{\hat{Y}}_{\mathrm{self}}\).
**Multi-Scale Cross-Domain Matching.** The use of self-domain matching does guarantee similar semantic relationships, but the process is inevitably interfered with by style information [18]. The images from the simulator and the generated X-ray-style images have completely different style information, which influences the effectiveness of the translation. To avoid the interference of style information [7], we propose multi-scale cross-domain matching to decouple the content and the style. As shown in Fig. 2, we match a _token from the simulation image with all tokens from the X-ray image_ and vice versa. These matching results contain not only the gap between different tokens, but also the gap between different styles. Similarly, we apply this process to all tokens to obtain two semantic representation matrices. In contrast to self-domain matching, the two matrices carry information gaps arising from the different styles. By optimizing the disparity between these two matrices, the effect of style information can be reduced. Specifically, we use the tokens in \(\mathbf{x}\) to match with all the tokens in \(\hat{\mathbf{y}}=\mathcal{G}(\mathbf{x})\). The matched semantic information contains the gap between different positions and the difference between styles. Then we use the tokens in \(\hat{\mathbf{y}}\) to match with all the tokens in \(\mathbf{x}\). We formulate this process as follows:
\[\mathbf{u}_{s_{i}}=\mathbf{s}_{i}\cdot\mathbf{t}_{*} \tag{3}\]
\[\mathbf{u}_{t_{i}}=\mathbf{t}_{i}\cdot\mathbf{s}_{*} \tag{4}\]
where \(\mathbf{s}_{i},\mathbf{t}_{i}\in\mathds{R}^{d}\), and \(\mathbf{s}_{*},\mathbf{t}_{*}\in\mathds{R}^{n\times d}\). \(n\) is the number of tokens. The two matrices are expressed as \(\mathbf{X}_{\text{cross}}=[\mathbf{u}_{s_{1}},\mathbf{u}_{s_{2}},...,\mathbf{u}_{s_{n}}]\), \(\mathbf{\hat{Y}}_{\text{cross}}=[\mathbf{u}_{t_{1}},\mathbf{u}_{t_{2}},...,\mathbf{u}_{t_{n}}]\).
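A minimal sketch of both matching steps follows, assuming `x_tok` and `y_tok` hold the \((n,d)\) patch tokens of the simulation image \(\mathbf{x}\) and of the translation \(\hat{\mathbf{y}}=\mathcal{G}(\mathbf{x})\) for one tapped block; the dot products mirror Eqs. (1)-(4).

```python
import torch

def self_domain_matrix(tokens: torch.Tensor) -> torch.Tensor:
    # Row i is s_i · s_* (Eqs. 1-2): token i matched against every token,
    # including itself, within the same image.
    return tokens @ tokens.t()  # (n, n)

def cross_domain_matrix(tok_a: torch.Tensor, tok_b: torch.Tensor) -> torch.Tensor:
    # Row i matches token i of one domain against all tokens of the other
    # domain (Eqs. 3-4); this gap mixes positional and style differences.
    return tok_a @ tok_b.t()  # (n, n)

x_tok, y_tok = torch.randn(196, 768), torch.randn(196, 768)  # stand-in tokens
X_self, Y_self = self_domain_matrix(x_tok), self_domain_matrix(y_tok)
X_cross, Y_cross = cross_domain_matrix(x_tok, y_tok), cross_domain_matrix(y_tok, x_tok)
```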
### Training
We express the multi-scale domain matching objective as follows:
\[\mathcal{L}_{\text{self}}=\frac{1}{N}\sum_{i=1}^{N}\xi\left(\mathbf{X}_{\text {self}},\mathbf{\hat{Y}}_{\text{self}}\right) \tag{5}\]
\[\mathcal{L}_{\text{cross}}=\frac{1}{N}\sum_{i=1}^{N}\xi\left(\mathbf{X}_{\text {cross}},\mathbf{\hat{Y}}_{\text{cross}}\right) \tag{6}\]
\[\mathcal{L}_{\text{sem}}=\alpha\mathcal{L}_{\text{self}}+(1-\alpha)\mathcal{L }_{\text{cross}} \tag{7}\]
where \(N\) represents the number of extracted feature blocks. \(\xi\) is the cosine similarity distance. \(\alpha\) is a hyperparameter that controls the intensity of self-domain matching and cross-domain matching.
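A sketch of Eqs. (5)-(7) is given below; reading \(\xi\) as a cosine distance between corresponding rows of the two matrices, averaged over tokens, is our interpretation rather than a detail stated in the paper.

```python
import torch
import torch.nn.functional as F

def xi(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Cosine similarity distance between corresponding rows of two (n, n) matrices.
    return (1.0 - F.cosine_similarity(a, b, dim=-1)).mean()

def semantic_loss(self_pairs, cross_pairs, alpha: float = 0.5) -> torch.Tensor:
    # self_pairs / cross_pairs: lists of (X, Y_hat) matrix pairs, one per block (N).
    l_self = torch.stack([xi(x, y) for x, y in self_pairs]).mean()    # Eq. (5)
    l_cross = torch.stack([xi(x, y) for x, y in cross_pairs]).mean()  # Eq. (6)
    return alpha * l_self + (1.0 - alpha) * l_cross                   # Eq. (7)
```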
We follow adversarial training with the Generator and Discriminator to train our model. We express the objective as:
\[\begin{split}\mathcal{L}_{D}=&-E_{\mathbf{y}\sim p _{\text{data}}(\mathbf{y})}\left[\log\mathcal{D}(\mathbf{y})\right]\\ &-E_{\mathbf{x}\sim p_{\text{data}}(\mathbf{x})}\left[\log\left( 1-\mathcal{D}(\mathcal{G}(\mathbf{x}))\right)\right]\end{split} \tag{8}\]
\[\mathcal{L}_{\mathcal{G}}= E_{\mathbf{x}\sim p_{\text{data}}(\mathbf{x})}\left[\log \left(1-\mathcal{D}(\mathcal{G}(\mathbf{x}))\right)\right]+\lambda\cdot \mathcal{L}_{\text{sem}} \tag{9}\]
where \(\mathcal{L}_{\text{sem}}\) is the multi-scale domain matching loss, and a hyperparameter \(\lambda\) is used to control the semantic loss contribution. Our optimization goal is to increase \(\log\mathcal{D}(\mathbf{y})\) for real images and decrease \(\mathcal{D}(\mathcal{G}(\mathbf{x}))\) for simulation images, resulting in realistic X-ray image generation.
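One adversarial update following Eqs. (8)-(9) could look as follows. This is a sketch under two assumptions: the discriminator outputs a probability, and `semantic_loss_fn` wraps the multi-scale matching of Section 3.1 to return \(\mathcal{L}_{\text{sem}}\) for \((\mathbf{x},\mathcal{G}(\mathbf{x}))\).

```python
import torch
import torch.nn.functional as F

def training_step(G, D, x, y, semantic_loss_fn, opt_g, opt_d, lam=8.0):
    # --- Discriminator (Eq. 8): raise log D(y) on real X-rays, lower D(G(x)) ---
    fake = G(x)
    real_pred, fake_pred = D(y), D(fake.detach())
    d_loss = (F.binary_cross_entropy(real_pred, torch.ones_like(real_pred))
              + F.binary_cross_entropy(fake_pred, torch.zeros_like(fake_pred)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Generator (Eq. 9): the GAN term plus the weighted semantic loss ---
    g_loss = (torch.log((1.0 - D(fake)).clamp(min=1e-6)).mean()
              + lam * semantic_loss_fn(x, fake))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```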
## 4 Experiments
### Experimental Setup
**Setup.** We create a new dataset with unpaired simulation images and real X-ray images. We use CathSim [16] to capture 2000 simulation images, and collect 1607 real X-ray images using the C-arm (Siemens, Germany) and two vascular soft silicone phantoms (Elastrat, Switzerland). We set the hyperparameter \(\alpha\) in Equation 7 to 0.5 to give equal weight to cross-domain and self-domain factors. The term \(\lambda\) in Equation 9 is set to 8. Please refer to the Supplementary Material for more details about our dataset and implementation.
**Baselines.** We compare our simulation-to-X-ray (Sim2Xray) method with several recent works based on visual effects, FID scores [15], training time, and the number of parameters used. The compared methods include CycleGAN [38], GcGAN [11], FastCUT [25], and FLSeSim [37]. We do not compare our approach with pix2xray [13], although both methods perform X-ray image translation, because we use unpaired data while pix2xray uses paired data, and there is no public source code of pix2xray available for testing.
### Results
The results of our model and other methods are shown in Table 1. They show that our proposed method significantly outperforms other approaches in FID score. In addition, our model is the smallest in terms of the number of parameters. Furthermore, our training time is much shorter than that of the other methods, and our inference time is only 11ms per image, compared to around 16ms for the other methods.
In Fig. 3, we show qualitative comparisons of our method and all other models. It can be seen that some models, such as CycleGAN [38] and GcGAN [11], suffer from pattern collapse: the generated results do not correspond to the input images and do not learn the structural information we expect. FastCUT [25] and FLSeSim [37] do not successfully learn the target-domain style of the X-ray images. On the other hand, our method successfully transfers the style
\begin{table}
\begin{tabular}{l|c|c|c} \hline
**Method** & **FID Score** & **\#Param** & **Training Time** \\ \hline
CycleGAN [38] & 337.80 & 28.27M & 220.8 \\
GcGAN [11] & 285.37 & 28.27M & 131.2 \\
FastCUT [25] & 297.09 & 14.70M & 101.0 \\
FLSeSim [37] & 307.09 & 14.68M & 448.3 \\ \hline
Sim2Xray (w/o self-domain) & 128.84 & **12.55M** & **95.8** \\
Sim2Xray (w/o cross-domain) & 138.68 & **12.55M** & 96.4 \\
Sim2Xray (ours) & **115.77** & **12.55M** & 96.1 \\ \hline
\end{tabular}
\end{table}
Table 1: Performance of different methods. Training time is in seconds/epoch.
of the simulation image into the X-ray image while still retaining the structure information of the input images.
### Multi-Scale Semantic Matching Analysis
In Table 1, we also demonstrate the contribution of our proposed multi-scale semantic matching. Based on the FID score, we can see that when we use only self-domain or only cross-domain matching, the FID score improves but is not optimal. We obtain the best FID score when we use both self-domain and cross-domain matching. This confirms the importance of both learning the style information and keeping the structure information of the input. In Fig. 4, we visualize the results of our model when we do not use self-domain matching or cross-domain matching. From the visualization, we see that the results without cross-domain matching have a poor X-ray style, while the results without self-domain matching cannot keep the structure information of the input image.
## 5 Conclusions
We propose a new and effective method to translate simulation images from an endovascular simulator to X-ray images using multi-scale semantic matching. Our approach has fast training and inference times, making it well-suited for real-time endovascular simulators. Additionally, we introduce a new dataset that can be used to develop and evaluate image translation models. Our source code and dataset will be made publicly available for future study.
Figure 3: The translation results of different models. |
2303.02994 | Fighting noise and imbalance in Action Unit detection problems | Action Unit (AU) detection aims at automatically characterizing facial
expressions with the muscular activations they involve. Its main interest is to
provide a low-level face representation that can be used to assist higher level
affective computing tasks learning. Yet, it is a challenging task. Indeed, the
available databases display limited face variability and are imbalanced toward
neutral expressions. Furthermore, as AU involve subtle face movements they are
difficult to annotate so that some of the few provided datapoints may be
mislabeled. In this work, we aim at exploiting label smoothing ability to
mitigate noisy examples impact by reducing confidence [1]. However, applying
label smoothing as it is may aggravate imbalance-based pre-existing
under-confidence issue and degrade performance. To circumvent this issue, we
propose Robin Hood Label Smoothing (RHLS). RHLS principle is to restrain label
smoothing confidence reduction to the majority class. In that extent, it
alleviates both the imbalance-based over-confidence issue and the negative
impact of noisy majority class examples. From an experimental standpoint, we
show that RHLS provides a free performance improvement in AU detection. In
particular, by applying it on top of a modern multi-task baseline we get
promising results on BP4D and outperform state-of-the-art methods on DISFA. | Gauthier Tallec, Arnaud Dapogny, Kevin Bailly | 2023-03-06T09:41:40Z | http://arxiv.org/abs/2303.02994v1 | # Fighting Noise and Imbalance in Action Unit Detection Problems.
###### Abstract
Action Unit (AU) detection aims at automatically characterizing facial expressions with the muscular activations they involve. Its main interest is to provide a low-level face representation that can be used to assist higher-level affective computing tasks. Yet, it is a challenging task. Indeed, the available databases display limited face variability and are imbalanced toward neutral expressions. Furthermore, as AU involve subtle face movements, they are difficult to annotate, so that some of the few provided datapoints may be mislabeled. In this work, we aim at exploiting label smoothing's ability to mitigate the impact of noisy examples by reducing confidence [1]. However, applying label smoothing as it is may aggravate the pre-existing, imbalance-based under-confidence issue and degrade performance. To circumvent this issue, we propose Robin Hood Label Smoothing (RHLS). RHLS's principle is to restrain label smoothing's confidence reduction to the majority class. To that extent, it alleviates both the imbalance-based over-confidence issue and the negative impact of noisy majority-class examples. From an experimental standpoint, we show that RHLS provides a free performance improvement in AU detection. In particular, by applying it on top of a modern multi-task baseline we get promising results on BP4D and outperform state-of-the-art methods on DISFA.
Gauthier Tallec\({}^{1}\), Arnaud Dapogny\({}^{2}\), and Kevin Bailly\({}^{1,2}\)
\({}^{1}\) Sorbonne Universite, 4 Place Jussieu, 75005 Paris, France
\({}^{2}\) Datakalab, 114 Boulevard Malesherbes, 75017 Paris, France
Keywords: Computer Vision, Affective Computing, Action Unit Detection.
## 1 Introduction
Facial expressions convey abundant information about how humans feel. Consequently, efficient computational face representations could unlock better automatic comprehension of human behaviours and in turn improve human-machine interaction. For that purpose, the Facial Action Coding System (FACS) provides an anatomical representation that decomposes faces into muscular activations called Action Units (AU).
From a machine learning point of view, AU detection can be formulated as a multi-task problem where each task consists in the detection of a single AU. In practice, its performance is hindered by data scarcity. Indeed, the AU labeling process consists in frame-by-frame video annotation of subtle facial activations and is therefore hardly scalable. As a result, existing AU datasets display low face variability with only few positively annotated examples (because AU are short events). Finally, as shown in Figure 1, AU are often so subtle that even expertly trained annotators may mis-annotate borderline examples [2], resulting in label noise. Altogether, training in such a setting is prone to overfitting and to model over-confidence toward predicting neutral expressions.
To tackle low face variability, the main line of research makes use of prior geometric information (typically facial keypoints) to help the learning process by either guiding the network attention [3, 4, 5, 6] or normalizing face geometry [7]. In the same vein, several works [8, 9, 10] attempted to incorporate prior AU dependencies to better structure predictions. For imbalance problems, the widely adopted approach is loss frequency reweighting [11, 12, 4, 6].
Interestingly, very few methods address the label noise problem. The work in [13] is among the few that take AU uncertainty into account, by using the method in [14] to learn to adapt the contribution of each AU to the total loss.
This work lies in the continuity of [13] since it focuses on AU noise modelling. Yet, we depart from it, arguing that uncertainty learning intuitively requires large amounts of data
Figure 1: Noisy examples from DISFA and BP4D. For example, the first face in DISFA is annotated with neither smile (AU12) nor eyebrow raise (AU1-2). Fitting on those examples may prevent the network from properly understanding which zones are involved in each AU and degrade performance.
[15] and may therefore not be fully effective in the low-data regime of AU detection.
Instead, we aim at taking advantage of the recent success of label smoothing [16] at mitigating noise [1] by reducing over-confidence. However, vanilla label smoothing reduces over-confidence uniformly in all classes. Therefore, applying it in imbalanced situations may worsen the pre-existing under-confidence in the minority class. For that purpose, we propose Robin Hood Label Smoothing (RHLS), which takes its name from the fact that, by smoothing only the majority class, it introduces a probability to take examples from the rich class and give them to the poor. By doing so, it reduces both the imbalance-based over-confidence issue and the negative impact of noisy majority-class examples. To summarize, our contributions are as follows:
* We introduce RHLS, which adapts label smoothing to imbalanced situations by restraining over-confidence reduction to the majority class. Consequently, it mitigates both the imbalance-based over-confidence issue and the negative impact of majority-class noisy examples.
* Experimentally, we show that AU detection performance benefits from the use of RHLS without any additional computational overhead. More precisely, we demonstrate that applying RHLS on a modern multi-task baseline is competitive on BP4D and significantly outperforms state-of-the-art results on DISFA.
## 2 Methodology
In this work, we use a multi-task binary classification dataset composed of pairs \((\mathbf{x},\mathbf{y})\), with \(\mathbf{x}\in\mathbf{R}^{H\times W\times 3}\) a face image and \(\mathbf{y}\in\{0,1\}^{T}\) the labels for each of the \(T\) target AU.
### Vanilla Label Smoothing
AU detection involves subtle changes in skin texture that are difficult to detect, even for expertly trained annotators. As a consequence, the main available annotated datasets display noise. Figure 1 highlights the existence of this noise by showing several arguably wrongly annotated examples. Prior work [1] showed that label smoothing could help mitigate the influence of annotation noise by reducing model confidence [16] and consequently preventing the model from over-fitting on noisy examples. For that purpose, label smoothing introduces uniform noise into the ground truth labels with probability \(\alpha\). Concretely, label smoothing with coefficient \(\alpha\) modifies the ground truth label of task \(i\) as follows:
\[\tilde{y}_{i}=(1-\alpha)y_{i}+\frac{\alpha}{2}. \tag{1}\]
However, we experimentally show that label smoothing degrades AU detection performance (see Section 3.3). We hypothesize that this performance drop is due to AU dataset imbalance. Indeed, as shown in Figure 2, several AU display low empirical frequencies in today's most popular AU datasets. In particular, such imbalance has been shown [17] to push the model toward under-confident predictions for the minority class. Therefore, by reducing the confidence of both positive and negative examples, label smoothing may worsen the pre-existing confidence problem on the minority class, which may in turn explain the observed performance gap.
### Robin Hood Label Smoothing (RHLS)
In order to address that problem, we extend label smoothing to Robin Hood Label Smoothing (RHLS). RHLS takes its name from the fact that, by smoothing only the majority class, it introduces a probability to steal examples from the majority (the rich) class and give them to the minority (the poor) class. Formally, it first introduces \(\alpha_{i}^{+}\) and \(\alpha_{i}^{-}\), which respectively denote the uniform noise probability for positive and negative values of task \(i\), so that:
\[\tilde{y}_{i}=y_{i}(1-\frac{\alpha_{i}^{+}}{2})+(1-y_{i})\frac{\alpha_{i}^{- }}{2}. \tag{2}\]
Then it parametrizes \(\alpha_{i}^{+}\) and \(\alpha_{i}^{-}\) with respect to the empirical frequency \(f_{i}\) of task \(i\) so that only the majority class is smoothed:
\[\alpha_{i}^{-}=\beta\max(0,\frac{1-2f_{i}}{1-f_{i}}),\alpha_{i}^{+}=\beta \max(0,\frac{2f_{i}-1}{f_{i}}), \tag{3}\]
Figure 2: Action Unit frequencies for BP4D and DISFA. BP4D is slightly imbalanced toward negative labels compared to DISFA, where most AU appear in fewer than 1 in 10 frames. Minimizing cross-entropy in such imbalanced situations tends to push the network toward over-confidence in the majority class, i.e., toward ignoring the minority class and predicting the majority class with high probability.
where \(\beta\in[0,1]\) quantifies the amount of noise introduced in the majority class from \(0\) (No noise is applied) to \(1\) (noise is applied so that the resulting dataset is balanced).
Through the introduction of noise in the majority class using \(\beta>0\), RHLS encourages less-confident predictions for negative examples. It consequently reduces the negative impact of noisy negative examples, alleviates the imbalance-based over-confidence problem, and may improve performance w.r.t. vanilla label smoothing.
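A minimal sketch of RHLS as a label transform (Eqs. 2-3) is shown below; the frequency clipping is our own guard to keep the divisions well defined, and the default \(\beta=0.25\) follows the value validated in Section 3.3.

```python
import numpy as np

def rhls_targets(y: np.ndarray, f: np.ndarray, beta: float = 0.25) -> np.ndarray:
    """y: (B, T) binary AU labels; f: (T,) empirical positive frequencies.
    Only the majority class of each task is smoothed."""
    f = np.clip(f, 1e-6, 1.0 - 1e-6)  # guard against frequencies of exactly 0 or 1
    alpha_neg = beta * np.maximum(0.0, (1 - 2 * f) / (1 - f))  # smooth negatives if f < 1/2
    alpha_pos = beta * np.maximum(0.0, (2 * f - 1) / f)        # smooth positives if f > 1/2
    return y * (1 - alpha_pos / 2) + (1 - y) * (alpha_neg / 2)  # Eq. (2)
```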
In what follows, we validate RHLS's superiority over vanilla label smoothing in imbalanced situations and discuss the significant improvements it provides in AU detection.
## 3 Experiments
In this section, we first introduce the Action Unit detection datasets (Section 3.1), along with details about our architecture and its optimization (Section 3.2). Then, in Section 3.3, we both validate our method's hyperparameters and perform a comparative analysis between RHLS and vanilla methods for fighting noise and imbalance. Finally, in Section 3.4, we compare RHLS with state-of-the-art approaches.
### Datasets
**DISFA** [18] is a dataset for facial action unit detection. It consists of 27 single-subject videos for a total of \(\approx 100k\) face images. Each image is annotated with 12 AU intensity scores that range from 0 to 5. In detection, intensity scores higher than 2 are considered positive [19]. As far as evaluation is concerned, the 27 videos are split into 3 folds of 9 [4], and performance is measured by averaging the 3 mean F1-scores obtained from training on 2 folds and evaluating on the last. For stability concerns [4, 11], we run this evaluation protocol 5 times and report mean performance. For validation, we follow the protocol in [4], i.e., we perform 6-fold cross-validation on each of the 3 two-fold training sets and compute the validation scores by averaging the F1-scores over those 18 runs.
**BP4D** [2] is composed of approximately \(140k\) face images in which \(41\) people (\(23\) females, \(18\) males) of different ethnicities are represented. Each image is annotated with the presence of \(12\) AU. Similarly to DISFA, performance evaluation consists of measuring F1-scores on all \(12\) AU using a subject-exclusive 3-fold cross-validation with the same fold distribution as in [4].
### Implementation Details
For all our experiments, we draw inspiration from the M architectures in [20]. However, as no face-based pretrained ViT is publicly available, we replace this part with a ResNet50 pretrained on VGGFace [21]. To fit the ViT encoder outputs in [20], which are \(P\times d\) with \(P=196\) and \(d=768\), we replace the last convolutional block with a 1D convolution of size \(d=768\), on top of which a layer of self-attention is built. For the decoder part, we use as many class tokens in the cross-attention as there are AU in the dataset (\(T=8\) for DISFA and \(T=12\) for BP4D), and we feed each of the resulting \(T\) representations to an AU-specific dense layer with sigmoidal activation.
For optimization, we use AdamW [22] with exponential decay \(\beta=0.75\) for 2 epochs. In the convolutional part, we use an initial learning rate \(\lambda_{c}=5e-5\) for BP4D and \(\lambda_{c}=1e-5\) for DISFA. In the transformer part, we scale the initial learning rate w.r.t. the number of queries \(q\) (\(P\) in self-attention and \(T\) in cross-attention), model size \(d\), and batch size \(B=32\), so that \(\lambda_{t}=\lambda_{t}^{(0)}\frac{Bq}{\sqrt{d}}\) and \(\lambda_{t}^{(0)}=4e-8\).
### Ablation Study
In this section, we validate the smoothing intensity \(\beta\) and compare RHLS to existing vanilla methods for noise mitigation (label smoothing [16]) and imbalance mitigation (frequency-weighted cross-entropy [4, 11, 12]) on DISFA.
Figure 3 shows the evolution of the validation score with respect to \(\beta\). For low values of \(\beta\), RHLS significantly boosts the model's predictive performance by reducing over-confidence in the majority class and consequently lowering the negative influence of both imbalance and noisy negative examples. However, past a certain threshold for \(\beta\), RHLS introduces too many false positives into training, which hurts the learning process and results in performance drops. Therefore, we select \(\beta=0.25\) for evaluation.
Second, Table 1 compares the performance of the proposed RHLS with existing methods. First, it is noticeable that frequency-weighted BCE hurts performance. This may be caused by the differences in scale across AU frequencies (e.g., \(f_{\text{AU1}}\sim 1e-2\), while \(f_{\text{AU25}}\sim 1e-1\)), so that weighting each AU's loss contribution using \(w_{i}=\frac{1/f_{i}}{\sum_{j=1}^{T}1/f_{j}}\) may encourage the learning of extremely low-frequency AU at the expense of all the others.
Figure 3: RHLS \(\beta\) validation on DISFA.
Finally, Figure 4 shows the histogram of predictions for different smoothing methods on DISFA. We observe that the baseline results display majority-class over-confidence, as many positive examples are assigned a predicted positive probability of \(0\). Label smoothing mitigates that problem but worsens the imbalance-based low-confidence problem on the minority class, which in turn reduces overall performance (see Table 1). By smoothing only the majority class, RHLS reduces majority-class over-confidence without any influence on the minority class and consequently obtains the performance boost reported in Table 1.
### Comparison with state-of-the-art methods
In this section, we compare RHLS with state-of-the-art AU detection methods.
Table 2 provides RHLS results on BP4D. Interestingly, applying RHLS over a modern multi-task baseline is competitive with several recent methods [11, 6], including other uncertainty modelling strategies [13]. However, it is outperformed by the most recent ones, which involve either a more complex landmark-guided transformer architecture [12] or refined AU dependency modelling [8, 10]. Nonetheless, the improvement RHLS provides over the baseline shows that it is a simple yet efficient way to improve performance without any additional computational overhead. To that extent, plugging it on top of more complex methods could be a promising track toward better overall AU detection performance.
On DISFA, Table 2 shows that the cost-free improvement provided by RHLS allows the baseline architecture to surpass state-of-the-art performance. To explain these excellent results, it is worth noticing that most state-of-the-art methods [4, 11, 8, 12] use a frequency-weighted loss. Therefore, the superiority of RHLS over the frequency-weighted loss on DISFA (see Table 1) may explain the significant improvement that we observe.
Beyond that, it is also worth noticing that RHLS's simple and free label modification pushes a simple baseline above more complex methods with spatial prior guidance [6] or explicit AU dependency modelling [8]. On the one hand, this highlights that imbalance reduction and noise modelling are as critical to AU detection performance as input feature extraction or dependency modelling. On the other hand, it offers a potential improvement perspective, as integrating RHLS with more complex methods could further improve performance.
## 4 Conclusion
In this work, we investigated the use of label smoothing to fight the imbalance and noise problems of AU datasets. In particular, we showed that vanilla label smoothing is ill-adapted to imbalanced situations, as it may worsen pre-existing under-confidence problems and degrade performance. To alleviate this issue, we proposed Robin Hood Label Smoothing, which constrains label smoothing to the majority class by introducing a probability to steal examples from the rich (the majority class) and give them to the poor (the minority class). To that extent, RHLS reduces both imbalance issues and the negative impact of majority-class noise.
Experimentally, we showed that applying RHLS on top of a multi-task baseline provides competitive performance on BP4D and significantly outperforms the state of the art on DISFA. In particular, on DISFA, the excellent results obtained indicate that RHLS is a better option than the frequency-weighted loss. In future work, we will draw inspiration from the successes of AU dependency modelling methods [10, 8] and try structuring the label smoothing noise to prevent smoothed labels from displaying unrealistic dependencies that may hurt the training process.
\begin{table}
\begin{tabular}{|c|c|} \hline
Method & Mean F1-Score \\ \hline
Baseline & 63.0 \(\pm\) 1.9 \\
Label Smoothing (\(\alpha=0.1\)) & 62.0 \(\pm\) 1.8 \\
Frequency Weighted BCE & 61.7 \(\pm\) 2.1 \\
Robin Hood Label Smoothing & 65.8 \(\pm\) 1.4 \\ \hline
\end{tabular}
\end{table}
Table 1: Comparison between RHLS and prior label smoothing methods for noise and imbalance mitigation on the DISFA dataset.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Mean F1 Score** & **BP4D** & **DISFA** \\ \hline
JAANet [4] & 60.0 & 56.0 \\
ARL [11] & 61.1 & 58.7 \\
JAANET [6] & 62.4 & 63.5 \\
UGN-B [13] & 63.3 & 60.0 \\
FAUwT [12] & 64.2 & 61.5 \\
MONET [8] & **64.5** & 63.9 \\ \hline
Baseline & 61.9 & 63.1 \\
RHLS & 63.0 & **65.8** \\ \hline
\end{tabular}
\end{table}
Table 2: Comparison of RHLS with state-of-the-art deep-learning-based AU detection methods.
Figure 4: Histogram of predictions on positive and negative DISFA AU2 labels for different smoothing methods.
## 5 Acknowledgements
This work was granted access to the HPC resources of IDRIS under the allocation 2021-AD011013183 made by GENCI.
|
2304.09866 | Towards Designing a ChatGPT Conversational Companion for Elderly People | Loneliness and social isolation are serious and widespread problems among
older people, affecting their physical and mental health, quality of life, and
longevity. In this paper, we propose a ChatGPT-based conversational companion
system for elderly people. The system is designed to provide companionship and
help reduce feelings of loneliness and social isolation. The system was
evaluated with a preliminary study. The results showed that the system was able
to generate responses that were relevant to the created elderly personas.
However, it is essential to acknowledge the limitations of ChatGPT, such as
potential biases and misinformation, and to consider the ethical implications
of using AI-based companionship for the elderly, including privacy concerns. | Abeer Alessa, Hend Al-Khalifa | 2023-04-18T17:24:14Z | http://arxiv.org/abs/2304.09866v1 | # Towards Designing a ChatGPT Conversational Companion for Elderly People
###### Abstract
Loneliness and social isolation are serious and widespread problems among older people, affecting their physical and mental health, quality of life, and longevity. In this paper, we propose a ChatGPT-based conversational companion system for elderly people. The system is designed to provide companionship and help reduce feelings of loneliness and social isolation. The system was evaluated with a preliminary study. The results showed that the system was able to generate responses that were relevant to the created elderly personas. However, it is essential to acknowledge the limitations of ChatGPT, such as potential biases and misinformation, and to consider the ethical implications of using AI-based companionship for the elderly, including privacy concerns.
ChatGPT, conversational agent, elderly, loneliness, isolation, social interaction, mental health, physical health
## 1 Introduction
As the world's population continues to age, loneliness and social isolation among elderly people have become a major concern. According to the World Health Organization [2], loneliness and social isolation are risk factors for poor mental and physical health among the elderly, including depression, cognitive decline, and cardiovascular disease. Furthermore, studies have shown that loneliness and social isolation can lead to decreased quality of life and increased mortality.
In recent years, conversational agents or chatbots have emerged as a potential solution to this problem. Conversational agents can provide elderly people with companionship and help reduce their feelings of loneliness and social isolation. These agents can engage in meaningful conversations, share stories, provide reminders of
medication, and offer helpful advice on various topics. Furthermore, they are available 24/7, which is especially useful for elderly people who may not have access to social support systems.
ChatGPT [3] is a state-of-the-art language model developed by OpenAI that has shown success in natural language processing tasks, including text completion, language translation, and text summarization. One of the primary advantages of ChatGPT is that it can be fine-tuned for dialogue using human feedback, making it an ideal candidate for developing conversational agents for elderly people.
Our research motivation behind proposing a ChatGPT-based conversational companion is that it has the potential to reduce feelings of loneliness and social isolation among elderly people and improve their overall quality of life. In fact, ChatGPT is one of the most advanced natural language generation systems today. It has 175 billion parameters that can do different kinds of language generation tasks. What makes ChatGPT particularly remarkable is its ability to perform in few-shot and zero-shot settings with a simple hand-crafted task description. It can answer questions the elderly may have, such as health and well-being-related queries. Additionally, with the proper context, ChatGPT can generate responses tailored to older adults' interests and preferences. This can also help alleviate loneliness in the elderly, as ChatGPT can provide social interaction,
which has been shown to improve mental and physical health. With the right context and a few commands, ChatGPT can offer an elderly person the necessary companionship and assistance.
In this paper, we present our approach towards designing a ChatGPT-based conversational companion system for elderly people. We describe the various components of the system and discuss the ethical considerations and challenges that arise when developing such systems.
The rest of the paper is organized as follows: Section 2 reviews the related work on conversational agents for lonely elderly people. Section 3 describes the method we used to design and implement our system, including its inspiration, overview, and details of the prompt engineering process. Section 4 reports the results of a preliminary evaluation of our system's performance. Section 5 discusses the implications, limitations, and future directions of our work and concludes the paper.
## 2 Related work
Chatbots have the potential to enhance the health and well-being of the elderly by providing them with information, social support, and reminders. Several recent studies have demonstrated the effectiveness of chatbots in promoting medication adherence, managing mental health symptoms, and reducing social isolation among older adults.
One of the significant benefits of chatbots is that they can alleviate social isolation, which is a prevalent issue among the elderly. Elderly people are often vulnerable to loneliness, isolation, cognitive decline, and digital exclusion, and chatbots can help them overcome these challenges. By engaging in conversations with chatbots, elderly people can experience a sense of companionship and connection, which can improve their overall emotional and mental well-being. For example, a chatbot named Charlie has been designed to provide elderly people with companionship through innovative strategies based on gamification, active notifications, and the promotion of self-compassion, which can be explored for preventive mental healthcare [3]. Another entertainment chatbot for elderly people that aims to close the digital gap and improve their abstraction capabilities was designed and developed by [4]. The chatbot alternates newscasts with light dialogues about news items that are adapted to the user's mood, and descriptions of news items are automatically extracted from these dialogues for content recommendation purposes. The chatbot uses state-of-the-art natural language generation (NLG) and sentiment analysis (SA) technologies to engage the user and reduce the digital divide [4].
Similarly, the authors in [5] presented a project that uses IBM Watson Assistant to develop a chatbot that can have friendly conversations with elderly people living alone. The chatbot has three different personalities (girl, nurse, and senior) and can talk about seven different topics. The purpose of the chatbot is to help the elderly cope with loneliness and prevent mental decline. The authors also proposed combining the chatbot with a smart mirror that could monitor the health of the elderly in the future.
A recent project by Omdena France Chapter and Omdena Paris, France Chapter was to create a conversational AI chatbot for the elderly and disabled using natural language processing (NLP) [6]. The project aims to provide a virtual caregiver system that can support tailored treatment for elderly people and the disabled by extracting their mental and physical health states through dialogue-based human-computer interaction. The project involves data cleaning, data intent, data normalization, contextualization, goal setting, and reporting intents.
## 3 Method
In this section, we present the inspiration for the system, its overview and functionalities, and the prompt engineering process.
### System Inspiration
We believe that older adults and socially isolated seniors deserve emotional and relationship support in their lives. This is why we were inspired by the idea of HappyTalks [2], a service that connects them with friendly, trained professionals who call them regularly and listen to their stories. We aimed to create a similar service that is accessible, affordable, and automated. Therefore, we designed a ChatGPT Chatbot that mimics HappyTalks. Our chatbot can engage in empathetic conversations with older adults and socially isolated seniors, providing them with companionship, comfort, and care.
In our work, we explored the ability of ChatGPT to serve as a companion for elderly people. We included personal information in the prompts to create personalized content. We designed personalized prompts that included various types of personal information about the user, modeled on the HappyTalks phone-call service that provides companionship to older adults. Their service initially requests that the caregiver provide information about their loved ones, which we adapted into a questionnaire filled in during the sign-up process. In addition, the customer is matched with a person who provides them with companionship.
### System overview
In our work, we started by designing an architecture that is (1) easy to use and (2) provides companionship to older adults. We began by defining the system components, mainly automatic speech recognition (ASR), ChatGPT, and text-to-speech (TTS). We chose a voice-based chatbot because the keyboards of smartphones and tablets are too dense for many elderly users [8], leading to a high error rate for textual input [9]. In [10], the authors suggested the potential of using speech-to-text communication to reduce the need for keyboard text input. We thus incorporated ASR and TTS models into our system to enhance its usability. For the ASR and TTS tasks, we used a streaming version of Google's Cloud API [11, 12]. For the conversational component, we used GPT-3.5-turbo with default settings and examined different prompt settings. Figure 1 shows the flow of the system.
Figure 1: An overview of the proposed system.
As shown in Figure 1, the caregiver registers the elderly person and fills in a questionnaire that helps provide personalized messages. This information is then saved and incorporated into the prompt. Older adults can then interact with the chatbot through voice commands. As shown in Figure 2 (a), the agent's first function is to engage in personalized conversations with the user, who interacts with the agent through voice messages. Figure 2 (b) shows the interaction when the user chooses the conversation option: the chatbot greets and interacts with the user based on the details provided by the caregiver. Our proposed system further offers quizzes (Figure 2 (c)) to help older adults keep their minds sharp and engaged. The chatbot starts by asking what type of quiz the user wants, or provides a general knowledge quiz. This can also help older adults stay informed and up-to-date about the latest topics, as well as entertain them. Additionally, our system offers general health tips, or specific ones if the user has a particular issue (see Figure 2 (d)).
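A high-level sketch of one interaction turn of this pipeline is given below; `transcribe`, `chat`, and `synthesize` are hypothetical wrappers standing in for the streaming Google Cloud speech APIs [11, 12] and the GPT-3.5-turbo call (a concrete prompt-construction sketch follows in the next subsection), not actual library functions.

```python
def transcribe(audio_bytes):  # hypothetical wrapper around streaming Cloud STT [11]
    raise NotImplementedError

def synthesize(text):  # hypothetical wrapper around streaming Cloud TTS [12]
    raise NotImplementedError

def chat(profile, history):  # hypothetical wrapper around the GPT-3.5-turbo call
    raise NotImplementedError

def companion_turn(audio_in, profile, history):
    """One turn of Figure 1: speech in -> ASR -> ChatGPT -> TTS -> speech out."""
    user_text = transcribe(audio_in)
    history.append({"role": "user", "content": user_text})
    answer = chat(profile, history)  # conditioned on the caregiver questionnaire
    history.append({"role": "assistant", "content": answer})
    return synthesize(answer)
```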
### Prompt Engineering
In order to create the best prompt to be used in the chatbot, we conducted three experiments with different levels of detail in the prompts. In all the experiments, we included the explicit statement to the system, "_You are a conversational companion for an elderly person. You should be polite, helpful, empathetic, sociable, friendly, and factually correct._" We included the conversation history and the elderly utterance, similar to [14].
The first experiment used low-detail prompts that included only basic information, such as the user's name, age, interests, and physical and cognitive health. While the conversation was engaging under this approach, it lacked some personalization. The second experiment used medium-detail prompts that followed
Figure 2: Illustration of the system functionalities. We followed the design guidelines for older adults described in [13]. Image (a) presents three interaction options for the agent. Image (b) shows an example of the “conversation” option, (c) shows a quiz example, and (d) shows a health-tips example.
the questionnaire shown in Figure 3. We decided to proceed with this prompting approach and further evaluate it because it gave similar results to the highly detailed prompting approach while using fewer tokens.
The third experiment was similar to the HappyTalks application: we used high-detail prompts that contained additional information such as the user's favorite quote, religion, political views, admired person and the reason why, preferred vacation place and the reason why, and what they used to or still collect. We compared the quality and personalization of the conversations generated by the system under each prompt condition. Providing the system with some information regarding interests enhanced the flow of the conversation. For example, when adding some of the user's favorites, such as shows, books, and hobbies, the agent initiated the question, _"I remember you mentioned that you enjoy watching TV shows, do you have any favorites that you're currently watching?"_
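To illustrate how the questionnaire answers can be folded into the fixed system statement above, here is a sketch using the legacy (pre-1.0) `openai` chat-completions interface; the profile field names are our assumptions loosely based on the questionnaire in Figure 3, not the paper's exact schema.

```python
import openai  # legacy (<1.0) openai-python interface; assumes a configured API key

def build_system_prompt(profile: dict) -> str:
    return (
        "You are a conversational companion for an elderly person. You should be "
        "polite, helpful, empathetic, sociable, friendly, and factually correct. "
        f"The person's name is {profile['name']}, age {profile['age']}. "
        f"Interests: {', '.join(profile['interests'])}. "
        f"Favorite shows and books: {profile['favorites']}. "
        f"Health notes: {profile['health']}."
    )

def companion_reply(profile: dict, history: list, utterance: str) -> str:
    messages = [{"role": "system", "content": build_system_prompt(profile)}]
    messages += history  # prior turns, as in the medium-detail prompting setup
    messages.append({"role": "user", "content": utterance})
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return resp.choices[0].message["content"]
```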
## 4 Preliminary Evaluation
Since our system's interactions are open-ended and do not assume the existence of a correct response, we adapt the evaluation criteria of See et al. [15], i.e., human evaluation. We first created elderly _personas_ from different demographic backgrounds, covering race, ethnicity, gender, and political and religious beliefs. Then, we interacted with the conversational agent based on each persona's details. Finally, we conducted a human expert evaluation. The two experts are a computer science professor and a graduate student. They were provided with a consent form and a short introduction. The evaluation is based on See et
Figure 3: The questionnaire provided to the caregiver in the process of registering the elderly in the system.
al.'s criteria [15], which include: (1) engagingness, (2) interestingness, (3) inquisitiveness, (4) listening, (5) avoiding repetition, (6) fluency, and (7) making sense. Figure 4 in Appendix A.1 shows See et al.'s evaluation criteria with descriptions. The experts rated each criterion on a scale of 1-4, with higher scores indicating better results. We then computed the average score for each question. For each expert, there were 5 conversations covering the 5 personas, each containing around 10 turns. Table 1 in Appendix A.2 shows the 5 prompt use cases, and Figure 6 in Appendix A.3 shows conversation samples.
As shown in Table 1, the generated responses were rated highly across all metrics. Responses were particularly strong in the areas of fluency and making sense, with mean scores of 4.0 and 3.9, respectively. Engagingness and interestingness were also rated highly, with mean scores of \(3.8\pm 0.4\) for both metrics. Inquisitiveness received a mean score of \(3.0\pm 0.632\), indicating that the generated responses were somewhat less successful in piquing the curiosity of the human evaluators. The metric of avoiding repetition was rated on a scale of 1-3 and received a mean score of \(3.0\pm 0\). This suggests that the generated responses were successful in avoiding repetitive language.
Overall, these results suggest that the generated responses were of high quality, with particularly strong performance in the areas of fluency and making sense. The areas of engagingness, interestingness, inquisitiveness, and avoiding repetition could potentially be improved upon in future iterations of the chatbot.
## 5 Discussion and Conclusion
It is difficult for adults to always be present and provide emotional support to their elderly relatives while living their own lives. Our proposed system acts as a companion for older adults to provide such support. We explored the possibility of using ChatGPT as a conversational companion. Although our study shows promise, the system has some limitations. First, the system relies on the pretraining paradigm and on advances in large language models for English; such resources, and even unlabelled data, are scarce in many languages. Second, ASR models can perform worse for individuals with dementia [16], and the system assumes that users are able to use the technology. Finally, although we did not encounter harmful content in our experiments, LLMs are prone to generating problematic content [17]. Additionally, there is no guarantee that ChatGPT will understand complex topics or adequately respond to difficult problems. The system should be further tested and assessed for safety, accuracy, and user acceptance, and its impact on users' well-being should be studied, especially for such a vulnerable population. It is also important to consider the ethical implications of using ChatGPT as a conversational companion, as well as privacy and security issues, since the system collects personal data.
Overall, ChatGPT provides a promising foundation for creating a conversational companion for the elderly. However, more research is needed to address the limitations of this system to create an effective and supportive companion.
|
2303.13641 | No Love Among Haters: Negative Interactions Reduce Hate Community
Engagement | While online hate groups pose significant risks to the health of online
platforms and safety of marginalized groups, little is known about what causes
users to become active in hate groups and the effect of social interactions on
furthering their engagement. We address this gap by first developing tools to
find hate communities within Reddit, and then augment 11 subreddits extracted
with 14 known hateful subreddits (25 in total). Using causal inference methods,
we evaluate the effect of replies on engagement in hateful subreddits by
comparing users who receive replies to their first comment (the treatment) to
equivalent control users who do not. We find users who receive replies are less
likely to become engaged in hateful subreddits than users who do not, while the
opposite effect is observed for a matched sample of similar-sized non-hateful
subreddits. Using the Google Perspective API and VADER, we discover that
hateful community first-repliers are more toxic, negative, and attack the
posters more often than non-hateful first-repliers. In addition, we uncover a
negative correlation between engagement and attacks or toxicity of
first-repliers. We simulate the cumulative engagement of hateful and
non-hateful subreddits under the contra-positive scenario of friendly
first-replies, finding that attacks dramatically reduce engagement in hateful
subreddits. These results counter-intuitively imply that, although
under-moderated communities allow hate to fester, the resulting environment is
such that direct social interaction does not encourage further participation,
thus endogenously constraining the harmful role that these communities could
play as recruitment venues for antisocial beliefs. | Daniel Hickey, Matheus Schmitz, Daniel Fessler, Paul Smaldino, Goran Muric, Keith Burghardt | 2023-03-23T20:00:07Z | http://arxiv.org/abs/2303.13641v1 | # No Love Among Haters:
###### Abstract
While online hate groups pose significant risks to the health of online platforms and safety of marginalized groups, little is known about what causes users to become active in hate groups and the effect of social interactions on furthering their engagement. We address this gap by first developing tools to find hate communities within Reddit, and then augment 11 subreddits extracted with 14 known hateful subreddits (25 in total). Using causal inference methods, we evaluate the effect of replies on engagement in hateful subreddits by comparing users who receive replies to their first comment (the treatment) to equivalent control users who do not. We find users who receive replies are _less_ likely to become engaged in hateful subreddits than users who do not, while the opposite effect is observed for a matched sample of similar-sized non-hateful subreddits. Using the Google Perspective API and VADER, we discover that hateful community first-repliers are more toxic, negative, and attack the posters more often than non-hateful first-repliers. In addition, we uncover a negative correlation between engagement and attacks or toxicity of first-repliers. We simulate the cumulative engagement of hateful and non-hateful subreddits under the contra-positive scenario of friendly first-replies, finding that attacks dramatically reduce engagement in hateful subreddits. These results counter-intuitively imply that, although under-moderated communities allow hate to foster, the resulting environment is such that direct social interaction does not encourage further participation, thus endogenously constraining the harmful role that these communities could play as recruitment venues for antisocial beliefs.
## Introduction
Hate groups have been shown to cause harm in online environments [1], and hate that is spread online can influence offline events [2, 3]. With the rapid growth of the internet, social media platforms such as Reddit [1] make it easier than ever for hateful individuals to congregate [4]. The design of Reddit is especially conducive to this phenomenon, as communities, called "subreddits," can be formed to discuss virtually any topic, and participants can hide behind anonymous handles. Although there has been significant research into the spread of online hate [5, 6, 7, 1] and the spillover of hate between social media communities [6, 7], little is known about what causes users to become active members of hateful communities online. Given that many extremist organizations effectively employ the internet to recruit individuals for violent causes [8], it is vital to understand interactions between new contributors and existing participants in hateful subreddits, as this can reveal whether subreddits constitute fora in which indoctrination and incorporation are likely to occur. Previous research indicates that when users interact with newcomers in online communities, those newcomers are more likely to stay [9, 10, 11], especially if the interaction is a positive one [12]. However, it is not known whether this is true of online hate groups, nor whether such groups, being characterized by selective antagonism, are welcoming towards newcomers.
We address this knowledge gap with a causal model-based study to analyze the effect of interactions on hateful and non-hateful subreddits. After utilizing a novel method to extract hateful subreddits from data, we study how interactions on hateful and non-hateful subreddits affect the probability that a user continues to be an active member. We use replies to a
user's first post as a treatment, and on each subreddit compare users who received a reply to similar users who posted but never received a reply. We find that, on non-hateful subreddits, new users who received a reply to their first post show a greater likelihood of continuing to be active, yet the reverse is true on most hateful subreddits. We explore why this may be the case using Google's Perspective API [13] and VADER [14]. We find that, even after controlling for their greater hate speech usage, hateful subreddits show more attacks, toxicity, and negativity, features negatively correlated with the probability a user will continue to post. We use these findings to create a model simulating the contrapositive wherein first-repliers do not reply to first-posters with an attack, toxicity, or negativity, finding that removing hostile language significantly increases member activity in hateful subreddits; because non-hateful subreddits have lower levels of each, their activity is comparatively unchanged. We conclude that although under-moderated hateful subreddits allow hate speech to thrive, the hostility characteristic of these communities discourages newcomers from becoming active members, thereby conceivably limiting the potential impact of these groups.
In summary, our contributions are as follows:
* We develop a novel technique to detect hateful subreddits.
* We apply causal modeling techniques to determine how a user's first interactions affect their subsequent activity in hateful and non-hateful subreddits.
* We quantify how attacks, toxicity, and negative sentiment all contribute to reductions in user activity, and show that these features are more apparent in hateful subreddits.
* We simulate how users would behave with more positive interactions, and show that hateful subreddits lose a significant proportion of users to replier vitriol, whereas non-hateful subreddits lack vitriol and are therefore broadly unaffected.
## Related Work
### Engagement in Online Communities
Users have many motivations for joining online communities: exchanging information, acquiring social support, or merely alleviating boredom [15, 16, 17, 18, 19], and they may be recruited by friends or acquaintances (a more effective pathway than impersonal advertising) [9]. However, not all motivations for joining a community are positive. Users from one community may invade another community with negative posts [20], which can reduce the overall activity in the invaded community [21]. And some communities are, at their core, focused on discussions of antagonistic and hateful aims, including those that are racist, sexist, or xenophobic.
After a user initially joins a community, various factors contribute to whether they continue participating. When newcomers receive replies to their first posts, they are more likely to keep posting in that community [9, 10, 11, 12]. The positive impact of the reply is strengthened when it contains positive [12] or personalized [9] language. Similarly, new Wikipedia editors who have their first edits reverted are less likely to continue making edits [22, 23], and Wikipedia therefore encourages users to be gentle with newcomers [24]. However, not all newcomers may be desired by or beneficial for a given community [9, 25]. For example, members of pro-anorexia communities employ gatekeeping tactics to exclude newcomers labeled as "wannarexics" [26, 27]. Similar negative interactions with new users are also seen in Stack Exchange Q&A boards [27]. Beyond direct social interaction, users' willingness to adapt to the linguistic norms of a community is predictive of how long they will stay in that community [28]. Although many users of online health communities stay past their initial motivations for joining (e.g., being diagnosed with cancer), a commonly cited reason for leaving is negative emotions expressed by other users [18].
### Online Hate Groups
Members of online hate groups can coordinate in various ways to build stronger coalitions of hate. Users who post toxic material and peddle extremist sources have greater reach than users whose language is positive [29]. Because online hate communities occur on a variety of social media platforms having different moderation policies and design elements, research must situate each platform in the larger online hate ecosystem. Velasquez et al. [30] analyze six online platforms to characterize how hateful content can spread between platforms. Another study analyzing multiple platforms shows how moderating hateful content on one platform can cause hateful content to spread more quickly on other platforms, making a case for the cross-platform regulation of social media [31]. Facebook is apparently used more by hate groups for actively recruiting members, while Twitter is used to amplify the message of the group and reach a larger audience [32]. Research on Reddit reveals their upvote feature may have aided in platforming hateful content on r/The_Donald [33]. Other studies compare hate speech usage across different social media platforms [34, 35, 36], or evaluate the impact of time spent on social media platforms on hate speech [37]. Here, we provide greater insight into recruitment dynamics of hate groups on Reddit, further informing how Reddit may compare to other platforms.
Causal inference methods have been applied to further understand hate groups on social media [38]. Matching approaches are common, and have been used to examine the effects of moderation policies [39, 5, 40] and the effects of hateful behavior on
other platform members [6, 41]. However, these studies have either examined hate speech outside of hate groups or have been case studies of a small number (two to four) of hateful communities. We expand on such research by making causal inferences in a much larger sample of hateful communities.
[MISSING_PAGE_POST]
membership from each subreddit to get the proportion. As there is overlap between the top 1,000 from each hateful subreddit, this resulted in a total of 2,177 subreddits.
We determined whether each subreddit is banned, quarantined, or private by checking whether it is accessible through the official Reddit API. For each banned, quarantined, or private subreddit with more than 3,000 users, we obtained a list of 100 candidate hate words using SAGE[54], following Schmitz et al.[6]. SAGE compares a target corpus to a baseline corpus to find the words most characteristic of the former. For our baseline corpus, we used a sample of randomly selected Reddit posts obtained through the Reddit API. After the top 100 most characteristic words of each subreddit were obtained, three human annotators (who are also authors of this paper) rated each word as \(0=\) not hateful, \(1=\) sometimes hateful, or \(2=\) always hateful (the context of the subreddits was taken into account when annotating). The annotations resulted in a Fleiss' kappa score of 0.5, indicating moderate agreement among raters. Words with a total score of four or greater were classified as hateful. Subreddits with greater than five hate words were classified as hateful; 14 subreddits met this criterion. Three of the subreddits (r/TheRedPill, r/milliondollarextreme, and r/CringeAnarchy) were independently obtained from Wikipedia's (non-exhaustive) list of highly prominent hateful subreddits, supporting the validity of our method. Table 1 displays the full list of hateful subreddits. All these hateful subreddits contain data on both comments and submissions except for four: GreatApes, CoonTown, NeoFg, and FatPeopleHate. We do not believe this significantly affects our results. Automated accounts were removed both as newcomers and repliers by excluding accounts with usernames matching certain substrings, such as "bot", and manually analyzing the remaining accounts with the highest activity, following Schmitz et al.[6]. As automated accounts make up a small proportion of accounts in each subreddit, this step does not notably impact our overall results.
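For concreteness, a minimal Python sketch of this screening step follows. The log-odds scoring is a simple stand-in for SAGE [54], and all function names, thresholds wiring, and data structures here are illustrative assumptions, not the authors' code.

```python
from collections import Counter
import math

def characteristic_words(target_tokens, baseline_tokens, top_k=100):
    """Rank words by smoothed log-odds of appearing in the target corpus
    relative to a baseline corpus (a crude stand-in for SAGE)."""
    t, b = Counter(target_tokens), Counter(baseline_tokens)
    nt, nb = sum(t.values()), sum(b.values())
    scores = {
        w: math.log((t[w] + 1) / (nt + 1)) - math.log((b.get(w, 0) + 1) / (nb + 1))
        for w in t
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def is_hateful_subreddit(annotations, min_hate_words=6):
    """annotations: {word: [r1, r2, r3]} with ratings in {0, 1, 2} from three
    raters. A word counts as hateful if its summed rating is >= 4; a subreddit
    qualifies if it has more than five such words (i.e., at least six)."""
    n_hate = sum(1 for ratings in annotations.values() if sum(ratings) >= 4)
    return n_hate >= min_hate_words
```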
### Matching
To make causal inferences about hateful subreddits and their users, we employed a Mahalanobis distance matching approach[55] wherein we compare similar users in each community, one of whom receives a reply. We explore two related analyses: first, estimating the effect of a reply within each subreddit; second, estimating the overall effects of replies based on subreddit type. For each inference, we perform a separate user matching. Our matching process derives from methods used by Schmitz et al.[6]. Features were measured prior to the treatment.
| Subreddit | Num. Users | Type of Hate | Source | Similar size non-hateful subreddit |
| --- | --- | --- | --- | --- |
| AskTRP | 87,367 | Sexist | Banned list | AnimeSketch |
| Braincels | 45,817 | Sexist | Wikipedia | OraRPG |
| Incels | 37,619 | Sexist | Prior research | samuraijack |
| MGTOW | 125,481 | Sexist | Wikipedia | ApplyingToCollege |
| TheRedPill | 126,648 | Sexist | Wikipedia | ask |
| TruFemcels | 12,447 | Sexist | Wikipedia | Fallout76MarketPlace |
| CoonTown | 11,042 | Racist | Prior research | MCPE |
| GreatApes | 3,139 | Racist | Prior research | sneakerreps |
| WhiteRights | 9,191 | Racist | Banned list | ForeverAloneWomen |
| CCJ2 | 4,424 | Racist | Banned list | prozac |
| Honkler | 3,086 | Alt-right | Wikipedia | PixelGun |
| frenWorld | 17,604 | Alt-right | Wikipedia | Pikabu |
| SJWHate | 29,384 | Alt-right | Banned list | Paladins |
| milliondollarextreme | 26,563 | Alt-right | Wikipedia | RocketLeagueExchange |
| CringeAnarchy | 246,587 | Trolling/harassment | Wikipedia | NoFap |
| trolling | 4,800 | Trolling/harassment | Banned list | DannyGonzales |
| FuckYou | 12,803 | Trolling/harassment | Banned list | TimPool |
| NeoFg | 945 | Trolling/harassment | Wikipedia | dragonvale |
| TrollGC | 5,807 | Anti-LGBT | Banned list | ninjavoltage |
| GenderCritical | 55,004 | Anti-LGBT | Wikipedia | triuerateme |
| dolan | 11,679 | Anti-LGBT | Banned list | Soundbars |
| ImGoingToHellForThis | 450,915 | General hate | Banned list | Drugs |
| opieandanthony | 31,793 | General hate | Banned list | DragonballLegends |
| DelrayMisfits | 4,381 | General hate | Banned list | newsokunomoral |
| FatPeopleHate | 56,340 | Fat-shaming | Prior research | Planetside |

Table 1: Hateful subreddits used in analysis. The subreddits capture a variety of sizes and targeted groups.
In the matching process, a user in the treatment group is chosen, the most similar user in the control group is selected as its pair, and the process is repeated until there are no unmatched users in the treatment group.
To estimate the direct effect of replies within each subreddit, we matched users who received replies to similar users who did not receive replies, based on four potential confounders: the time elapsed between account creation and their first post in the studied subreddit; the nest level (with 1 indicating a top-level comment, 2 indicating a direct reply, 3 indicating a reply to that reply, etc.) of their first comment; the sentiment valence of their first post, as measured by VADER[14]; and the number of words in their first post. Features like these have previously been shown to impact users' likelihood of receiving replies in online discussions[12]. This process was applied to both hateful and non-hateful subreddits. We measure the effects of replies to submissions and replies to comments separately due to the qualitative differences between each type of post; therefore, users who join subreddits by making submissions are in a separate matching pool from users who join subreddits with comments. Samples of users in each matching pool were limited to 30,000 per subreddit.
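The following is a minimal sketch of greedy 1:1 Mahalanobis matching on the four confounders. It illustrates the general procedure only; the helper names and greedy ordering are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def mahalanobis_match(treated, control):
    """Greedy 1:1 matching without replacement.
    treated, control: (n, 4) arrays of confounders (account age at first post,
    nest level, VADER valence, word count); assumes len(control) >=
    len(treated). Returns (treated_idx, control_idx) pairs."""
    pooled = np.vstack([treated, control])
    cov_inv = np.linalg.pinv(np.cov(pooled, rowvar=False))
    dist = cdist(treated, control, metric="mahalanobis", VI=cov_inv)
    pairs, used = [], set()
    for i in np.argsort(dist.min(axis=1)):  # match easiest treated users first
        j = next(j for j in np.argsort(dist[i]) if j not in used)
        used.add(j)
        pairs.append((int(i), int(j)))
    return pairs
```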
To compare the overall effects of replies in hateful subreddits to the effects in non-hateful subreddits, we collected 324 random subreddits by obtaining random posts from the Reddit API, identifying the subreddit associated with each post, and using the Pushshift API[66] to crawl all subreddits with greater than 10,000 members and less than 2 million members. Larger subreddits were excluded due to the time required to crawl subreddits using Pushshift. We used Mahalanobis distance matching to pair each hateful subreddit with a non-hateful subreddit from this sample. Each matched non-hateful subreddit was manually inspected to ensure the posts were not primarily made by automated accounts.
Comparing the engagement in banned (hateful) subreddits and non-banned subreddits can bias the results of our analysis because posts that users would otherwise have made after the moment a subreddit was banned do not appear in the dataset. In this case, the engagement of users might appear lower simply because the posts they would have made could not be created. We account for this in the process of matching subreddits. Namely, for each hateful subreddit, we obtain the 90th percentile of the time taken for newcomers to post a second time. This value, alongside the subreddit size, is used for matching each hateful subreddit with a non-hateful one. Limiting to the 90th percentile prevents our results from being skewed by users who could not post a second time because the subreddit had been banned. Moving forward, a ban was simulated for each non-hateful subreddit by excluding posts made after their respective matched hateful subreddit's ban date. Additionally, users whose posts fell within the 90th-percentile return time before the ban were excluded, as they may have been unable to keep posting due to the (simulated) ban.
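A small pandas sketch of this simulated-ban adjustment; the column names and timestamp types are hypothetical.

```python
import pandas as pd

def apply_simulated_ban(posts: pd.DataFrame, ban_date, return_p90):
    """posts: one row per post with 'author' and 'created_utc' columns;
    ban_date and return_p90 are compatible timestamp/timedelta values."""
    posts = posts[posts["created_utc"] <= ban_date]           # simulate the ban
    first_post = posts.groupby("author")["created_utc"].transform("min")
    # drop users whose first post fell within the p90 return window of the ban
    return posts[first_post <= ban_date - return_p90]
```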
### Analyzing Content of Replies
To identify replies that may discourage users from continuing to engage within subreddits, we used the Perspective API, a collection of models used for the detection of toxic online posts[13]. The Perspective API has been widely utilized and validated in a variety of domains,[57, 58, 59], including Reddit[60, 47, 67]. We measured two attributes from the Perspective API to better understand the content of replies: "toxicity" and "attack on commenter." Although we analyze both subreddit comments and submissions, we employ the Perspective terms, labeling the metric "attack on commenter" to both comments and submissions for terminological consistency. Perspective API defines a toxic post as "a rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion." The attack on commenter attribute is a metric trained using New York Times comments. The attribute is meant to identify hurtful posts directed at other members of a given comment thread. The outputs from the toxicity and attack on commenter models are probabilities that the posts are toxic or attacks, respectively. The Pearson correlation between these attributes is relatively weak (\(r=0.012\), calculated from replies in all hateful subreddits), hence these constitute independent metrics. We also explored the severe toxicity metric from Perspective by replacing regular toxicity with severe toxicity in our analysis and found the overall results to be similar. We use regular toxicity only because it has been much more thoroughly validated across a range of datasets[57, 58, 59], including Reddit[60, 47, 67]. Perspective also recommends a probability threshold of 0.7 for their toxicity metric, which we do not perform in our main analyses, instead treating the metric as a continuous variable. However, we repeated our analyses using a threshold of 0.7 for both toxicity and attack on commenter metrics and found similar results. In addition to the attributes from the Perspective API, we measured the sentiment of each reply to a first post using VADER[14], a lexicon and rule-based sentiment analysis tool specifically geared towards social media text. Whereas the attributes from the Perspective API are intended to capture negative or hurtful replies, sentiment analysis allows us to capture the valence of replies.
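A sketch of this scoring step, using the public Perspective API client and VADER; the API key is a placeholder, and client-library details may differ by version.

```python
from googleapiclient import discovery
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

client = discovery.build(
    "commentanalyzer", "v1alpha1",
    developerKey="PERSPECTIVE_API_KEY",  # placeholder credential
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)
vader = SentimentIntensityAnalyzer()

def score_reply(text: str) -> dict:
    """Return toxicity / attack-on-commenter probabilities and VADER valence."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}, "ATTACK_ON_COMMENTER": {}},
    }
    resp = client.comments().analyze(body=body).execute()
    scores = {name: attr["summaryScore"]["value"]
              for name, attr in resp["attributeScores"].items()}
    scores["SENTIMENT"] = vader.polarity_scores(text)["compound"]
    return scores
```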
To understand how strongly these models are affected by hateful language, we tested how sensitive the outputs of each model are to hate words. We built a lexicon of 260 hate words from all 25 subreddits using the process of obtaining and annotating words described earlier. We then calculated the attack on commenter and toxicity probabilities and sentiment scores for each post with hate words, and for each such post with its hate words replaced with neutral synonyms (for example, the n-word was replaced with "black person"); see the hate speech lexicons and replaced words here (WARNING: contains offensive terms): [https://anonymous.4open.science/r/reddit_lexicons-8628/](https://anonymous.4open.science/r/reddit_lexicons-8628/). We ensured the neutral word replacements did not belong to the VADER lexicon, as that could bias our results. The average toxicity, attack on commenter, and sentiment
outputs for replies in each subreddit were then compared, including replies that did not originally contain hate words.
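A sketch of this lexicon-sensitivity check; the replacement map is a placeholder (the real lexicons are in the linked repository), and `score_fn` could be one attribute from the `score_reply` sketch above.

```python
import re

def neutralize(text: str, replacements: dict) -> str:
    """Swap each hate term for its neutral synonym, whole words only."""
    for term, neutral in replacements.items():
        text = re.sub(rf"\b{re.escape(term)}\b", neutral, text,
                      flags=re.IGNORECASE)
    return text

def sensitivity_gap(posts, replacements, score_fn):
    """Mean score shift attributable to hate terms alone."""
    orig = [score_fn(p) for p in posts]
    swapped = [score_fn(neutralize(p, replacements)) for p in posts]
    return sum(orig) / len(orig) - sum(swapped) / len(swapped)
```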
### Simulating Growth of Subreddits
To understand how replies and their content impact the growth of hateful subreddits, we created a mixed effect logistic regression model to measure the likelihood that a user continues to post after their first post (what we term an "engaged" user). We include the subreddit as a random effect and use the features of the first reply to a user's post in that subreddit (the reply's Perspective API attribute probabilities and VADER sentiment scores) as fixed effects. We fit separate models for comments and submissions, as well as separate models for hateful and non-hateful subreddits. We used these to measure the probability a user will be engaged for all posts (we used the same dataset for training and testing because our goal is modeling the dataset being studied, rather than building a system for out-of-sample prediction). To ensure our models are valid, we computed the variance inflation factor (VIF) for each one, finding that no coefficient reached a VIF greater than 2.5, which is consistent with non-multicollinear features [61]. We then measured the probability each first-poster would continue, given their first replies, and simulated a user as either continuing or not depending on whether a Bernoulli random variable with this probability was 1 or 0, respectively. We created two simulations from these models, shown in Fig. 1, to test how user engagement would change under "nicer" subreddits. In one, we used the true features from the replies the users received to predict whether each user is engaged. In the other, we set the attack on commenter and toxicity probabilities of all replies to zero and set the sentiment scores of negative sentiment replies to zero, then made the same predictions. For each subreddit, we added up the total number of engaged users in each scenario and calculated the percent increase in engaged users for the scenario with no toxicity, negativity, or attacks relative to the default scenario. These simulations include users who make comments as their first post as well as users who make submissions as their first post.
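The sketch below illustrates the counterfactual simulation. A plain logistic regression with one-hot subreddit indicators stands in for the paper's mixed-effects model, and the column names are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def simulate_friendlier_replies(df: pd.DataFrame, seed: int = 0) -> float:
    """df: one row per replied-to first-poster, with reply features
    ('attack', 'toxicity', 'sentiment'), 'subreddit', and binary 'engaged'.
    Returns the percent increase in engaged users under friendly replies."""
    X = pd.get_dummies(df[["attack", "toxicity", "sentiment", "subreddit"]],
                       columns=["subreddit"])
    model = LogisticRegression(max_iter=1000).fit(X, df["engaged"])

    nice = X.copy()
    nice[["attack", "toxicity"]] = 0.0                 # no attacks, no toxicity
    nice["sentiment"] = nice["sentiment"].clip(lower=0.0)  # zero out negativity

    rng = np.random.default_rng(seed)
    engaged_real = rng.binomial(1, model.predict_proba(X)[:, 1]).sum()
    engaged_nice = rng.binomial(1, model.predict_proba(nice)[:, 1]).sum()
    return 100.0 * (engaged_nice - engaged_real) / max(engaged_real, 1)
```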
## Results
**Replies to comments in hate groups lead to lower engagement.** The vast majority (78%) of all first-posts within our dataset are comments. Among these comments, hateful subreddits have a significantly lower \(ERR\) than non-hateful subreddits (Fig 2). We use a Wilcoxon signed-rank test to assess significance (\(T=44,p<0.001\)) [62]. The mean \(ERR\) for hateful subreddits is below one (0.98), while the mean \(ERR\) for non-hateful subreddits is above one (1.03), indicating replies have opposite effects, on average, for each subreddit type. For users whose first post is a submission, there is no significant difference in \(ERR\) between the two subreddit types. The mean \(ERR\) is above one for both types, meaning these first-posters are generally encouraged to keep posting after receiving replies.
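Since the formal definition of the engagement risk ratio (\(ERR\)) falls in a portion of the text lost in this extraction, the sketch below assumes the natural reading suggested by Fig. 2: a risk ratio of continued engagement between replied-to users and their matched controls, with per-subreddit \(ERR\)s compared across subreddit types by a paired Wilcoxon test.

```python
import numpy as np
from scipy.stats import wilcoxon

def engagement_risk_ratio(treated_engaged, control_engaged):
    """Boolean arrays over matched user pairs: did each user post again?"""
    return np.mean(treated_engaged) / np.mean(control_engaged)

def compare_subreddit_types(err_hateful, err_nonhateful):
    """Paired test over matched (hateful, non-hateful) subreddit ERRs."""
    return wilcoxon(err_hateful, err_nonhateful)  # (statistic, p-value)
```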
Figure 1: Schematic of hateful subreddit growth simulation. A mixed effect logistic regression model predicts whether a user continues posting or leaves the subreddit. Predictions are counted to calculate the cumulative number of engaged users in a subreddit.
**Attacks, toxicity, and negativity are prevalent in the replies of hateful subreddits.** Replies in hateful subreddits are generally more toxic and negative, and contain more attacks, than replies in non-hateful subreddits. Fig. 3 depicts differences in (A) mean attack on commenter probabilities, (B) mean toxicity probabilities, and (C) mean sentiment for replies to comments. These results are similar for replies to submissions. A Wilcoxon signed-rank test[62] reveals the mean attack on commenter probability for replies to comments is significantly higher in hateful subreddits than non-hateful subreddits (\(T=1\), \(p<0.001\)). The mean toxicity probability is also higher (\(T=0\), \(p<0.001\)), while mean sentiment is lower (\(T=0\), \(p<0.001\)). For replies to submissions, the mean attack on commenter probability is higher in hateful subreddits, though the difference is not significant. However, the mean toxicity remains significantly higher in hateful subreddits (\(T=1\), \(p<0.001\)) and the mean sentiment is significantly lower (\(T=25\), \(p<0.001\)). Additionally, for hateful subreddits, there are significant differences in toxicity between replies to submissions and replies to comments. The mean toxicity probability is greater in replies to comments compared to replies to submissions (\(T=55\), \(p=0.04\)). Other differences are not significant. When assessing the impact of hate words on attacks, toxicity, and sentiment, replacing hate words with neutral words does not alter their distributions nearly enough to account for the differences between hateful and non-hateful subreddits. This difference is small because posts containing hate words constitute a minority of posts in all subreddits, despite their impact on toxicity [46].
**The content of replies is predictive of whether users will leave a subreddit.** To understand how the content of replies relates to subreddit engagement, we built mixed effect logistic regression models, where the response variable was whether or not a user continued to post after making a submission or comment as their first post, and the independent variables were whether or not they received a reply and, if so, the sentiment, toxicity, and attack on commenter scores of that reply. Table 2 summarizes the results.
The models show that, for new users, each reply received increases their likelihood of remaining engaged in that subreddit, after controlling for covariates. The models also show a much stronger negative effect of attacks on posters than of toxicity or sentiment. The models suggest that the higher attack on commenter probabilities in hateful subreddits are a principal contributing factor to the lower \(ERR\)s in these subreddits. We find a non-significant relationship between engagement and toxicity or sentiment in non-hateful subreddits, and that, while toxicity appears to reduce the likelihood of engagement in comments within hateful subreddits, it increases engagement for submissions. Finally, while positive sentiment increases engagement, especially for commenters in hateful subreddits, this effect is not significant for submissions.
Figure 2: Replies to comments in hateful subreddits lead to significantly less engagement than replies to comments in non-hateful subreddits. Distributions of engagement risk ratios for different subreddit types, separated by users who make comments as their first post (A) and users who make submissions as their first post (B). Points represent engagement risk ratios of individual subreddits, while boxplots summarize the overall distributions for each subreddit type. Values of engagement risk ratios greater than one imply replies are associated with more active users, while values less than one imply replies are associated with less active users. The lines in the boxes represent medians, while boxes and outer lines represent the inter-quartile range and 95% quantiles, respectively.
This may be because the sentiment is directed at the content rather than the user, so users are less impacted.
**Engagement risk ratios of replies to comments in subreddits are correlated with reply attributes.** Figure 4 shows a negative correlation between subreddits' \(ERR\)s for newcomers whose first post is a comment and the mean toxicity of the replies they receive (Spearman correlation coefficient, \(r=-0.21,p=0.001\)), as well as the mean attack on commenter probabilities (\(r=-0.24,p<0.001\)). There is also a positive correlation between the mean sentiment of replies and \(ERR\)s (\(r=0.17\), \(p=0.001\)). This indicates that, generally speaking, subreddits where negative, toxic, or attacking language is prevalent see less engagement through direct social interactions.
**Simulations show negative impact of negativity, toxicity, and attacks on subreddit growth.** Through our simulations, for both hateful and non-hateful subreddits, we observe higher cumulative numbers of engaged users in the contrapositive scenario where there are no attacks, toxicity, or negativity in replies (Fig 5). The average relative increase from the simulated status quo is much higher for hateful subreddits, indicating that hateful subreddits grow more slowly than they potentially could due to the nature of their replies.
## Discussion
We analyzed replies to first posts in a matched sample of 25 hateful and 25 non-hateful subreddits to assess the overall effect that replies have on engagement. We find that replies are more likely to cause users to leave hateful subreddits than to continue engaging with them, while the opposite is true for non-hateful subreddits. It is only after controlling for the content of replies in hateful subreddits that the effect of replies on engagement becomes positive; in other words, it is the negative, attacking, or toxic nature of replies in hateful subreddits that causes receiving a reply to discourage, rather than encourage, new contributors to continue to participate. Below we discuss the implications of these findings for online policymakers, community moderators, and researchers.
**Hate groups hate themselves.** A principal finding is that attack on commenter scores are much higher in hateful subreddits than in non-hateful subreddits, especially when the first post is a comment rather than a submission (Fig 3A).
| | Comments: Hate | Comments: Non-hate | Submissions: Hate | Submissions: Non-hate |
| --- | --- | --- | --- | --- |
| **Fixed Effects (Coefficient ± Standard Error)** | | | | |
| Mean Intercept | **0.085 ± 0.07** | 0.13 ± 0.069 | **-0.435 ± 0.101** | -0.155 ± 0.126 |
| Reply | **0.09 ± 0.01** | **0.104 ± 0.012** | **0.366 ± 0.020** | **0.228 ± 0.021** |
| Reply × Sentiment | **0.11 ± 0.01** | **0.060 ± 0.013** | -0.015 ± 0.022 | **0.060 ± 0.025** |
| Reply × Toxicity | **-0.05 ± 0.02** | 0.018 ± 0.03 | **0.161 ± 0.037** | 0.038 ± 0.061 |
| Reply × AttackOnCommenter | **-0.35 ± 0.02** | **-0.168 ± 0.022** | **-0.255 ± 0.037** | **-0.206 ± 0.039** |
| **Random Effects (Variance)** | | | | |
| Subreddit | 0.16 | 0.15 | 0.24 | 0.38 |

Table 2: Logistic mixed effect model for the probability a user will continue to be active after their first post. Coefficients in bold indicate \(p<0.05\).
Figure 3: Attacks and toxicity are higher in hateful subreddits replies, while sentiment is lower. Distributions across all users for (A) attack on commenter probability, (B) toxicity probability, and (C) valence for replies to comments in hateful and non-hateful subreddits. Substitution of synonymous non-hate terms does not alter the ability of the models to distinguish between subreddit types. Results are similar when analyzing replies to submissions.
Unlike sentiment and toxicity, which can detect negative or toxic comments directed at a particular identity group, the attack on commenter metric is an indicator of rude or disrespectful comments directed at other members of the hate community. Members of hateful subreddits are thus more disrespectful towards their own communities than are members of non-hateful subreddits; they are _generally_ more antisocial, as their animosity is not limited to a single identity group but extends to others who hold the same prejudices. This is consistent with prior research revealing high levels of psychopathy among online haters[63].
**Hate groups are self-defeating.** Most social interactions in hateful subreddits decrease engagement within these communities. In hateful subreddits, we see a high prevalence of the types of replies to comments predictive of users leaving subreddits, indicating that users are not regulating their behavior in ways that would encourage newcomers to remain. It is therefore possible that, due to the nature of social interactions within their communities, hate groups on Reddit pose less of a threat in terms of recruitment and radicalization than is true of other online platforms. This finding also makes the case that it is in a platform's best interests to employ some level of moderation in order to maximize its adoption and growth. Quite simply, creating a lawless land is bad for business. That said, replies to users whose first post is a submission encourage _more_ activity compared to posts without a reply, possibly because such users are generally more eager to engage in the community regardless of reply toxicity.
Given the general vitriol of hateful subreddits, the rarity of attacks, toxicity, and negativity within a given online hate community may serve as a useful marker that said community poses a significant threat as a venue for recruitment and radicalization. However, caution is in order when extrapolating from our results. Given the psychological profile of online haters, the hostility characteristic of replies in hateful subreddits could potentially be attractive to other haters.
Figure 4: Engagement risk ratios of comments are correlated with attacks, toxicity, and sentiment. Relationships between engagement risk ratios and mean attack on commenter probability (A), toxicity (B), and sentiment (C) of subreddits.
Figure 5: Simulated growth of subreddits. (A) Empirical and simulated growth curve of engaged users in r/MGTOW. (B) Predicted increases in engaged users for hateful and non-hateful subreddits given attack on commenter probability, toxicity, and sentiment of negative replies are set to zero. The white dots represent medians, while the thick gray bars represent the inter-quartile range, and the thin gray lines represent 95% quantiles.
It is therefore possible that, while attacking, toxic, or negative replies reduce new user engagement, they may indirectly _increase_ the engagement of other users in the community. Moreover, in the minority of users whose first post is a submission, the toxicity of replies might be directed at whoever is mentioned in the submission rather than the submission's author, thus it could encourage user engagement. Additionally, previous research has found that members of hate communities on Facebook who post toxic and inflammatory content are effective at spreading their messaging [29], and that more toxic comments tend to be more popular on Facebook [64]. Furthermore, there are other ways to measure the success of online communities beyond user retention or number of users [65]. Social interactions may therefore affect hateful subreddits in ways not explored in this study.
The attributes of replies have stronger effects in hateful subreddits compared to non-hateful subreddits. That said, a lack of association between toxicity and engagement in non-hateful subreddits may be because the content of replies with high toxicity varies between the different subreddit types. For example, high-toxicity replies in hateful subreddits might be more likely to be insulting, while high-toxicity replies in non-hateful subreddits could be posts with a high prevalence of profanity but no insulting material.
### Limitations and Future Directions
While we find that replies make newcomers in hateful subreddits less likely to remain active in those communities, there remain additional challenges and questions about recruitment in online hate groups.
**Behavior outside of hateful subreddits.** We examined users' likelihood of being engaged in one self-contained community after their first post in said community. However, there are opportunities for them to engage in other hate communities, or exhibit hate behavior beyond the subreddit of interest. Given prior research on how toxicity begets toxicity in online environments [66, 59, 64], it is possible that users who leave hateful subreddits due to toxicity or attacks could be spreading that content elsewhere. Analyzing whether users who leave a hateful subreddit then become engaged in other hateful subreddits, or use hate material to antagonize users in non-hateful subreddits, can illuminate how social interactions in hateful subreddits influence the radicalization of newcomers.
**More specific content analysis of replies.** Our analysis focused on negative dimensions of text, such as the toxicity or attack on commenter metrics. However, previous research indicates that members of hate groups use particular types of arguments to recruit new members, such as appeals to fear [32]. Investigating the presence of such arguments in social interactions in Reddit hate groups and their internal dynamics could clarify the prevalence of these replies and their effects on recipients of such messages.
**Hate group recruitment outside Reddit.** Previous research suggests that hate groups use different online platforms for different purposes in growing their communities [32], and negative language is more prevalent on certain platforms [34]. Replicating our studies on other platforms, such as Facebook, may provide further insight into where online hate group recruitment occurs, as well as illuminate Reddit's role in the larger online hate ecosystem.
**Opinions of newcomers in hateful subreddits.** Our research shows how antagonism by members of hateful subreddits deters newcomers. However, we do not know between _whom_ these antagonistic interactions are occurring. How radicalized are the newcomers who receive antagonistic replies? While we attempted to control for this using causal inference methods, examining who receives antagonistic replies upon joining hateful subreddits could be informative, as it is possible that most newcomers who receive antagonistic replies are "outsiders" who enter the community in order to challenge the general views of the hateful subreddit.
**Validating experimental Perspective API attributes.** Several papers have validated the Toxicity metric of the Perspective API on a range of domains, including Reddit, and our work shows the efficacy of the metric. To further confirm the efficacy of the API, especially on the "attack on commenter" metric, which was underexplored in previous analyses, we collected 200 posts at random from hateful subreddits and labeled them as toxic or attacks on commenters. Three of the authors labeled these posts, and, where there were disagreements, the majority decision was used. The ROC-AUC scores for the attack on commenter and toxicity metrics are 0.85 and 0.86, respectively, thus demonstrating reasonable accuracy. Along with their toxicity model, Perspective provides a "severe toxicity" model designed to be less sensitive to hateful words than the toxicity model. Due to the elevated presence of hate terms in our subreddits of interest, we considered using this model, which has an ROC-AUC of 0.84. All results for regular toxicity are similar to those generated using the severe toxicity metric. In the future, a new set of models fine-tuned on Reddit data could create better metrics of toxicity, attacks, and sentiment within Reddit communities.
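A minimal sketch of this spot-check (majority vote over the three annotators, scored with ROC-AUC); the array shapes are assumptions about how the labels were stored.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def validate_metric(annotator_labels, metric_probs):
    """annotator_labels: (n, 3) binary votes; metric_probs: (n,) API scores."""
    majority = (np.asarray(annotator_labels).sum(axis=1) >= 2).astype(int)
    return roc_auc_score(majority, metric_probs)
```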
**Community-specific discourse analysis.** Content that is generally seen as negative or toxic on social media platforms may not be viewed as such by members of hate groups, as the interpreted meaning of online communication may differ greatly depending on its audience [67]. This makes the identification of antagonistic comments within hate groups challenging. While the attack on commenter attribute from the Perspective API appears to effectively identify antagonistic comments within hate groups, it was designed to be used in an entirely different domain. Community-specific tools that address hostile comments directed towards members of hate communities may prove even more accurate, and may provide greater insight into the presence of directed antagonism within hate groups.
### Broader perspective, ethics and competing interests
All data were collected from a public dataset, and all identifiable information was removed prior to analysis. Our findings can contribute to understanding how hate communities form, and why some remain comparatively small. We are cognizant that those who seek to promote prejudice and intolerance could exploit our findings. Nonetheless, we believe that, on balance, it is in society's interests that work such as this be conducted, as shedding light on the dynamics of interactions in contexts in which antisocial views are amplified can guide efforts to create more tolerant online communities, for example by providing tools that can assist platforms in moderating their content.
|
2307.10867 | FigCaps-HF: A Figure-to-Caption Generative Framework and Benchmark with
Human Feedback | Captions are crucial for understanding scientific visualizations and
documents. Existing captioning methods for scientific figures rely on
figure-caption pairs extracted from documents for training, many of which fall
short with respect to metrics like helpfulness, explainability, and
visual-descriptiveness [15] leading to generated captions being misaligned with
reader preferences. To enable the generation of high-quality figure captions,
we introduce FigCaps-HF a new framework for figure-caption generation that can
incorporate domain expert feedback in generating captions optimized for reader
preferences. Our framework comprises of 1) an automatic method for evaluating
quality of figure-caption pairs, 2) a novel reinforcement learning with human
feedback (RLHF) method to optimize a generative figure-to-caption model for
reader preferences. We demonstrate the effectiveness of our simple learning
framework by improving performance over standard fine-tuning across different
types of models. In particular, when using BLIP as the base model, our RLHF
framework achieves a mean gain of 35.7%, 16.9%, and 9% in ROUGE, BLEU, and
Meteor, respectively. Finally, we release a large-scale benchmark dataset with
human feedback on figure-caption pairs to enable further evaluation and
development of RLHF techniques for this problem. | Ashish Singh, Prateek Agarwal, Zixuan Huang, Arpita Singh, Tong Yu, Sungchul Kim, Victor Bursztyn, Nikos Vlassis, Ryan A. Rossi | 2023-07-20T13:40:22Z | http://arxiv.org/abs/2307.10867v1 | # FigCaps-HF: A Figure-to-Caption Generative Framework and Benchmark with Human Feedback
###### Abstract
Captions are crucial for understanding scientific visualizations and documents. Existing captioning methods for scientific figures rely on figure-caption pairs extracted from documents for training, many of which fall short with respect to metrics like helpfulness, explainability, and visual-descriptiveness [15] leading to generated captions being misaligned with reader preferences. To enable the generation of high-quality figure captions, we introduce **FigCaps-HF** a new framework for figure-caption generation that can incorporate domain expert feedback in generating captions optimized for reader preferences. Our framework comprises of 1) an automatic method for evaluating quality of figure-caption pairs, 2) a novel reinforcement learning with human feedback (RLHF) method to optimize a generative figure-to-caption model for reader preferences. We demonstrate the effectiveness of our simple learning framework by improving performance over standard fine-tuning across different types of models. In particular, when using BLIP as the base model, our RLHF framework achieves a mean gain of 35.7%, 16.9%, and 9% in ROUGE, BLEU, and Meteor, respectively. Finally, we release a large-scale benchmark dataset with human feedback on figure-caption pairs to enable further evaluation and development of RLHF techniques for this problem.
## 1 Introduction
For scientific articles, figures such as graphs, charts, and plots are key to effectively conveying the work's motivation, methodology, and results to readers. To better understand a given figure and, by extension, the research work itself, it is crucial that the corresponding captions are informative, i.e., that a given caption represents and complements the figure, situating it in the context of the article. While the importance of figure captions is universally acknowledged, writing a good caption is not trivial. In practice, many scholarly works contain generic figure captions that lack descriptiveness, rendering the figures unhelpful. This has motivated extensive research into developing methods that can automatically generate captions for figures to assist researchers in writing better captions and improve the accessibility of scientific figures for visually impaired readers.
Recent works in figure captioning formulate the problem as a vision-to-language task and have primarily focused on developing methods to encode the figure image and metadata and decode captions effectively. For model training, these methods use figure-caption pairs extracted from existing scientific articles [13]. While this method of data collection is appealing due to its easy access, this also leads to the problem of poor model learning and generalization when the captions
are not well written. As discussed in [15], more than 50% of the captions in arXiv cs.CL papers were classified as not helpful to domain expert readers. Figure captioning methods trained on such data are therefore not calibrated to reader preferences and generate captions that are uninformative.
Motivated by the above, we introduce **FigCaps-HF**, a new benchmark and learning framework for improving figure-to-caption generation by aligning model optimization to reader preferences. Figure 1 describes our proposed framework. Our proposed framework is designed around two key questions: **(1)** How can we incorporate feedback from domain experts in a computationally efficient manner without compromising on performance and usability? **(2)** How can we develop a scalable framework for feedback generation that minimizes human labeling efforts?
To address **(1)** we utilize offline Upside-Down RL (UDRL) to align the model's generated captions with expert feedback. Unlike previous applications of RLHF [35], which use on-policy algorithms [40] for reward maximization, our approach of using offline reward-conditioned behavioral cloning for model optimization is computationally efficient. Once our reward model is trained and we have predicted the reward scores for each sample, we do not need the reward model during figure-to-caption model training. Furthermore, offline UDRL-like methods are known to perform as well as their counterparts [11] while being efficient and simple.
For generating feedback for figure-caption pairs in a scalable manner, we introduce a general caption-scoring mechanism, guided by domain expert feedback, which allows us to evaluate the quality of figure-caption pairs with respect to reader preference. Specifically, we utilize a small human-annotated dataset of image-caption pairs, each rated on a variety of factors including helpfulness, OCR content, and takeaway, to train an auxiliary model to score a given caption on the basis of these quality measures. This step is integral because it allows us to infer caption scores for our larger training set. Additionally, we publicly release our benchmark dataset with feedback for future research on developing figure-to-caption models.
Our experimental results indicate an increase in performance from using our Upside-Down RL-guided approach. First, our empirical results indicate that our trained reward model is well calibrated: the statistics of our ground-truth annotations match those of our inferred annotations. Second, we evaluate the performance of our approach on a variety of image-to-text models and observe that models with RLHF achieve the best performance; specifically, our best-performing model has a 35.7% increase in BLEU, 16.9% increase in ROUGE-L, and 9% increase in METEOR score using RLHF. Our ablation studies motivate further investigation into parts of our setup, including the type and nature of feedback used.
**Summary of Main Contributions.** The key contributions of this work are as follows:
* We introduce a novel RLHF framework for figure-to-caption generation that leverages a _small amount_ of actual human feedback to learn an oracle model to infer human feedback on a larger scale for any unknown figure-caption pair encountered in the wild.
* We propose a technique that learns an oracle model from a small amount of human feedback, which can then be used for predicting the human feedback scores for any new unseen figure-caption pair.
* Extensive experiments demonstrate the effectiveness of our framework for figure-to-caption generation via human feedback.
* To facilitate further research on this important new problem, we release a comprehensive benchmark dataset for figure-to-caption generative models with human feedback. This new benchmark data will enable other researchers to develop even better RLHF models for figure-to-caption generation.
## 2 Background
**Figure Caption Generation.** Most prior work in scientific figure-captioning can be broadly divided into the following three categories based on their different input modalities: the figure-image alone, the underlying data chart of the figure, and relevant texts from the original article. In the vision-based approach, prior works have primarily utilized a vision-encoder to encode figure-features followed by a text-decoder to generate captions. [41; 38; 37] focus on explicitly extracting different features of the figure before combining their information for downstream tasks. [6; 7; 8] create and leverage
FigCAP, a synthetic figure-caption dataset, and adapt an LSTM model to produce captions. More recently, [13] collected a dataset, namely SciCAP, from published articles and used a CNN+LSTM pipeline to generate captions. Few prior works examine the use of SOTA image-captioning pipelines, which primarily rely on large pre-trained Transformer [46] modules, for figure captioning. A closely related task is Figure Question Answering, which formulates the more general problem of figure understanding as a visual question answering task; there have been a variety of works in this space on modeling [41; 19; 26; 42; 53; 17; 18] as well as on creating curated datasets, including DVQA [17], FigureQA [19], PlotQA [32], Leaf-QA [5], and ChartQA [31]. In the data-driven approach, research focuses on using only the tabular data, as well as some metadata, to generate a caption. Table-to-Text [49] focuses on generating captions for rows in arbitrary tables. Chart-to-Text [34] creates a new large-scale dataset focusing on figure captioning and adopts an encoder-decoder transformer model to process the underlying data table and generate summaries.
In the text-driven approach, [15] focuses on utilizing only the relevant text in an article to generate a figure-caption, for example, using text explicitly referencing the figure.
**Learning with Human Feedback.** Aligning model predictions with human preferences has been shown to improve task performance in various areas, including natural language processing tasks like language model pretraining [20], machine translation [2; 21], text summarization [45], and unlearning undesirable behaviors from language models [29]; computer vision tasks like text-to-image generation [22; 52]; and reinforcement learning tasks like training RL agents [30; 16; 23]. Like these prior works, ours aims at improving figure caption generation by optimizing model learning to align with domain expert feedback. However, unlike previous work that leverages an on-policy RL [40] algorithm to maximize the reward-weighted likelihood, our framework utilizes reward-conditioned behavioral cloning [11], an offline variant of the upside-down RL method [43], to optimize model learning for reader preference. This provides a simpler and more controllable framework for human preference alignment. Furthermore, our feedback scheme allows for incorporating multiple feedback signals at different granularities as reward during the model optimization step, thus improving model learning. We propose a general human-feedback model along with a new benchmark with feedback to enable further research in developing and evaluating methods that optimize for reader preference.
## 3 Framework
In this section, we explain our proposed framework for learning with expert feedback (Figure 1). We first describe a standard figure-captioning pipeline (Sec. 3.1). Next, we provide details of designing and training a generalizable human-feedback prediction model (Sec. 3.2). Finally, we discuss our feedback-aligned model training strategy, instantiated as a simple RLHF framework (Sec. 3.3).
### Preliminaries
In a figure-captioning problem, we are initially provided with a dataset \(D_{w}\) consisting of figure-caption pairs \(\{I_{w},T_{w}\}\). Given the dataset \(D_{w}\), we can then define a model \(f_{\theta}\) that takes in information corresponding to the figure and outputs a sequence of text. Typically, the input consists of only the figure image; however, other sources of information, such as figure-specific text from the corresponding document, OCR output, and figure metadata, can also be utilized as input.
Figure 1: RLHF Framework for Figure-Caption Generative Models
Assuming the general case of a figure image as input, the model \(f_{\theta}\) is constructed using a vision encoder module to obtain an image encoding and a language encoder-decoder module to encode and generate the corresponding text. The weights \(\theta\) can either be randomly initialized or initialized from large-scale pretrained model weights. Furthermore, the model weights corresponding to the vision encoder and text encoder-decoder can be initialized either with separately or jointly trained model weights. After initialization, the model \(f_{\theta}\) can then be trained for the task of caption generation.
Generally, for training such a model, the language modeling (LM) loss is used as a standard training objective. Let \(\{I_{i},T_{i}\}\in D_{w}\) be the input to the model \(f_{\theta}\), where \(I_{i}\in\mathbb{R}^{n}\) is the input figure and \(T_{i}\) is the corresponding text sequence. Additionally, \(T_{i}\) is represented as a sequence of \(K_{j}\) tokens from a fixed vocabulary \(\mathcal{V}\): \(T_{i}=(T_{i,1},...,T_{i,K_{j}})\), where \(K_{j}=|T_{i}|\). The training objective is then defined as:
\[\mathcal{L}_{\text{LM}}=\frac{1}{K_{j}+1}\sum_{j=0}^{K_{j}+1}H(T_{i,j}|I_{i},( T_{i,0},...,T_{i,j-1})), \tag{1}\]
where H denotes the cross-entropy loss and \((T_{i,0},...,T_{i,j-1})\) represents all the tokens in the caption prior to \(T_{i,j}\).
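A minimal PyTorch sketch of this objective follows; the `model` interface (figure in, next-token logits out) is a placeholder for the actual vision-encoder/text-decoder, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def lm_loss(model, image: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
    """token_ids: (1, K+1) caption tokens including a BOS token T_{i,0}.
    Teacher-forced next-token cross-entropy, averaged over positions."""
    logits = model(image, token_ids[:, :-1])  # (1, K, |V|) next-token logits
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           token_ids[:, 1:].reshape(-1))
```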
### Human Feedback Prediction Model
To improve figure-to-caption generation, we propose to incorporate domain expert feedback into our optimization step. To generate feedback for figure-caption pairs, we thus propose to learn a feedback prediction model to score individual datasample based on different metrics representing reader preferences. Our objective is to learn a model that can predict human feedback scores for unseen captions accurately given small set of training samples.
To this end, we first label a small control set \(D_{h}\) consisting of \(M\) figure-caption pairs \(\{I_{h},T_{h}\}\) with domain expert ratings. Here we assume that \(M\ll N\), i.e., the size of the control set is significantly smaller than the original noisy dataset (for example, if \(N=100{,}000\), then \(M=100\)). We can now train a model on \(D_{h}\) to predict the human expert ratings for the original dataset \(D_{w}\). Specifically, given the human feedback dataset \(D_{h}\) containing figure-caption pairs \(\{I_{h},T_{h}\}\in D_{h}\) and \(k\) human expert evaluation metrics \(y\in\{y_{1},...,y_{k}\}\) for each data sample, we want to train \(k\) models \(R(x_{i},\theta)_{k}\) to predict the \(k\) scores respectively. Here the output of a model \(R(x_{i},\theta)_{k}(T_{h})\) is a scalar quantity denoting a specific metric score for the given input caption. We thus formulate the scoring problem as a regression task. Specifically, we can define our human-feedback prediction model as follows:
\[R(x_{i},\theta)_{k}(T_{h})=g(l(\theta_{l},x_{i}),\theta_{g}), \tag{2}\]
where \(R(x_{i},\theta):\mathbb{R}^{N}\rightarrow\mathbb{R}\), \(l(x_{i},\theta_{l}):\mathbb{R}^{N}\rightarrow\mathbb{R}^{D}\) and \(g(u_{i},\theta_{g}):\mathbb{R}^{D}\rightarrow\mathbb{R}\). In the above, \(l(\cdot,\theta_{l})\) is an embedding function that takes in input data \(x_{i}\in\mathbb{R}^{N}\) and generates the corresponding representation \(u_{i}\in\mathbb{R}^{D}\), and \(g(\cdot,\theta_{g})\) is a regression function that generates the corresponding score. We only train the regression function while keeping the weights of the embedding function fixed. For training the regression function, we use the mean-squared error loss, written as: \(\mathcal{L}_{\text{R}}=\sum_{i=1}^{|D_{h}|}(\hat{y_{i}}-y_{i})^{2}\), where \(\hat{y_{i}}\) is the predicted score and \(y_{i}\) is the ground-truth evaluation score. After training the human-feedback prediction models, we compute scores for all the samples in the training dataset \(D_{w}\) to construct our new set, which will be used for training the figure-caption model.
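A sketch of Eq. (2) in PyTorch: a frozen embedding \(l(\cdot,\theta_{l})\) with a small trainable regression head \(g(\cdot,\theta_{g})\), trained with MSE. The encoder interface and head width are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeedbackScorer(nn.Module):
    """One scorer per quality metric (helpfulness, takeaway, visual, OCR)."""

    def __init__(self, encoder: nn.Module, dim: int):
        super().__init__()
        self.encoder = encoder.eval()          # frozen embedding l(., theta_l)
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.head = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                  nn.Linear(128, 1))  # head g(., theta_g)

    def forward(self, captions: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            u = self.encoder(captions)         # (B, dim) caption embeddings
        return self.head(u).squeeze(-1)        # one scalar score per caption

# Train with nn.MSELoss() against the expert ratings on the annotated subset.
```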
### Reinforcement Learning with Human Feedback
Given the human-feedback prediction model described above, we can now use it as a reward model to train an image-to-text model that generates higher-quality captions. We achieve this goal, by formulating the problem as a reinforcement learning task. Specifically, for the given training dataset \(D_{w}\) containing figure caption pairs \(\{I_{w},T_{w}\}\), we can consider figures \(I_{w}\) as the state of the environment, caption \(T_{w}\) as the actions and the corresponding predicted metric scores \(R(T_{w})\) for these captions as the rewards/outcomes. Then our objective is to learn a policy (which in this case would be the image-to-text model \(f(\theta)\) that we want to train) that maps from states(\(I_{w}\)) to actions(\(T_{w}\)) such that we maximize the reward for each action. In this way, we can generate output captions that better align with human judgment of a good figure-caption.
While there are many different approaches in the reinforcement learning literature [40] to achieve the above objective, we specifically focus on offline upside-down reinforcement learning (UDRL).
We select offline UDRL because it is computationally efficient and robustly performant without being algorithmically complex [11]. In UDRL, the motivation is to learn a policy (\(\pi_{\theta}\)) that maps states (\(S_{t}\)) to actions (\(a_{t}\)) conditioned on specific rewards (\(r_{t}\)). The learning problem can thus be formulated as a supervised learning problem, wherein we first sample triplets \((S_{t},a_{t},r_{t})\) from the environment to construct our dataset, which is then used to train \(\pi_{\theta}\) with a standard supervised learning objective. Specifically, we can write the optimization problem as:
\[\max_{\theta}\sum_{t\in D}\mathbb{E}[\log\pi_{\theta}(a_{t}|S_{t},r_{t})], \tag{3}\]
We follow the above UDRL framework for learning an image-to-text model \(f(\theta)\). For our setting, we consider our image-to-text model \(f(\theta)\) as our policy \(\pi_{\theta}\). For each caption \(T_{i}\in T_{w}\), we compute a reward score and quantize it to generate a control token \(c_{i}\). Specifically, we binarize the reward score to generate two control tokens: <|good|> and <|bad|>. In general, the level of quantization is a hyperparameter that can be selected according to the task or other factors. For each caption \(T_{i}\in T_{w}\), we compute the control token by thresholding the output of \(R\), i.e., if \(R(I_{i},T_{i})\geq t\) then \(c_{i}=\text{<|good|>}\), else \(c_{i}=\text{<|bad|>}\). Here \(t\) is a hyperparameter. Given this additional human feedback, we fine-tune \(f_{\theta}\) with the following new objective function:
\[\mathcal{L}_{\text{HF}}=\frac{1}{K_{j}+1}\sum_{j=0}^{K_{j}}H(T_{i,j}|I_{i},(c_{i},T_{i,0},...,T_{i,j-1})), \tag{4}\]
where \(c_{i}\) refers to the control token computed using the reward function \(R\) for a given caption \(T_{i}\).
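For concreteness, the quantization and conditioning step can be sketched as follows; the threshold value and the exact token strings are illustrative assumptions, and prepending follows the position ablation reported later in Table 6.

```python
# Sketch of the reward quantization and control-token conditioning described
# above; the threshold t and the token strings are illustrative.
def control_token(score, t=3.0):
    return "<|good|>" if score >= t else "<|bad|>"

def conditioned_caption(caption, score, t=3.0):
    # the control token is prepended to the caption (cf. Table 6)
    return f"{control_token(score, t)} {caption}"
```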
## 4 FigCaps-HF: Figure-Captioning with Human Feedback Benchmark
As noted before, captions from online scientific articles can be of 'low quality' with respect to domain expert quality metrics [15]. This can, in turn, lead to poor figure-captioning models, as these are trained to simply maximize the likelihood of the raw training data. Thus, our goal with the new benchmark is to provide additional training signals that improve figure-caption models without incurring the cost of creating a new dataset from scratch.
To this end, we propose our new benchmark for figure-captioning with feedback. Our benchmark consists of 133,543 figure-caption pairs [13] with feedback scores. Our dataset contains feedback based on different measures to evaluate the quality of the author-written captions for the corresponding figures. For each figure-caption pair, we evaluate the data sample based on four quality measures: **(1) Helpfulness, (2) Takeaway, (3) Visual-descriptiveness (visual)** and **(4) Image-text (OCR)**[15]. Each quality metric is selected to measure the ability of readers to comprehend and draw inferences based on the provided figure and the corresponding caption.
We compute the feedback scores for each data sample in a scalable manner by first annotating a small subset with domain-expert feedback and then predicting scores for the entire dataset using the human-feedback model described in Sec. 3.2. Specifically, we select 438 randomly sampled figure-caption pairs, each annotated by domain experts [15]. Each pair has been evaluated on a 5-point Likert scale for each of the above-mentioned quality metrics.
\begin{table}
\begin{tabular}{l l l c c c c c} \hline \hline \multicolumn{2}{c}{**\# Fig-Caption Pairs**} & \multicolumn{1}{c}{**Human Feedback**} & \multicolumn{1}{c}{**Median**} & \multicolumn{1}{c}{**Mean**} & \multicolumn{1}{c}{**Std**} & \multicolumn{1}{c}{**Q1**} & \multicolumn{1}{c}{**Q3**} \\ \hline \multirow{3}{*}{Actual Human Feedback} & \multirow{3}{*}{**400**} & **Helpfulness** & 3 & 3.01 & 1.19 & 2 & 3 \\ & & **Takeaway** & 2 & 2.16 & 1.22 & 1 & 2 \\ & & **Visual** & 2 & 2.11 & 1.08 & 1 & 2 \\ & & **OCR** & 4 & 3.83 & 0.80 & 4 & 4 \\ \hline \multirow{3}{*}{Predicted Human Feedback} & \multirow{3}{*}{**106,834**} & **Helpfulness** & 2.89 & 2.89 & 1.07 & 2.17 & 3.61 \\ & & **Takeaway** & 1.95 & 2.06 & 1.03 & 1.33 & 2.66 \\ \cline{1-1} & & **Visual** & 1.91 & 2.02 & 1.01 & 1.31 & 2.63 \\ \cline{1-1} & & **OCR** & 3.88 & 3.84 & 0.83 & 3.32 & 4.41 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of our benchmark dataset for figure-caption generative models with RLHF.
Using this labeled subset, we train a human-feedback prediction model to generate scores for the remainder of the dataset. Unlike for the annotated subset, we keep the scores for the rest of the dataset as continuous values. This allows users of the benchmark to decide their own scheme for labeling each figure-caption pair based on different thresholding criteria, thus providing flexibility for fine-grained feedback.
Table 1 presents an overview of the statistics of the actual and predicted human feedback for the captioning of scientific figures. The predicted human feedback values cover a broad range of ratings while maintaining a standard deviation of roughly \(1\pm 0.2\) and mean values consistent with the actual feedback. Additionally, the alignment of the median predicted scores with the actual human feedback values indicates that the model is not skewed towards any particular rating but provides an accurate assessment across the whole range. This suggests that the human-feedback prediction model used to infer the scores generalizes well and can accurately assess caption quality across ratings, including for captions that fall outside the typical range of scores.
For further implementation details, please refer to the section "Additional Dataset Details" in the appendix.
## 5 Experiments
### Setup
For our human-feedback prediction model, we use MCSE [51] as the embedding function and a 2-layer MLP as the regression function. For comparative evaluation, we select the following baselines, grouped by input: (1) OCR-only: Pegasus [50]; (2) Figure-only: TrOCR [25], BEiT+GPT2 [1], ViT+GPT2 [10], ViT+RoBERTa [10; 28]; and (3) Figure-Caption: PromptCap [14], Flamingo [1], GIT [47], BLIP [24] and CLIPCap [33]. We use the ROUGE-L [27], METEOR [3] and BLEU [36] metrics to compare model performance. For more details regarding the individual baselines, metrics and dataset, please refer to the Appendix.
### Results
We show our experimental results in Table 2. Specifically, we want to evaluate the performance of our RLHF framework for figure-caption generation. To this end, we compare our framework with the standard fine-tuning method and benchmark the performance on the test set of our proposed benchmark. We show fine-tuning results for all the above-mentioned baselines and use BLIP and ViT+GPT2 to evaluate our RLHF framework. As Table 2 shows, models trained using our proposed RLHF formulation perform better than those trained with simple fine-tuning. Specifically, for BLIP, RLHF provides
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline & Model & **\#Params** & **ROUGE-L** & **BLEU** & **METEOR** \\ \hline \hline OCR-only & **Pegasus**[50] & 0.27B & 2.6 & 4.78e-2 & 4.2 \\ \hline \multirow{4}{*}{Figure-Only} & **TrOCR**[25] & 0.23B & 2.5 & <0.1 & 1.8 \\ & **BEiT+GPT2** & 0.24B & 14.2 & 0.5 & 1.24 \\ & **ViT[10] + RoBERTa[28]** & 0.23B & 14.0 & 1.2 & 12.1 \\ & **ViT[10] + GPT2** & 0.24B & 14.2 & 1.8 & 12.6 \\ \hline \multirow{5}{*}{Figure-Caption} & **PromptCap**[14] & 0.47B & 13.0 & 0.9 & 8.2 \\ & **Flamingo**[1] & 1.14B & 8.7 & 0.1 & 4.6 \\ & **GIT**[47] & 0.17B & 11.9 & 0.2 & 9.1 \\ & **BLIP**[24] & 0.25B & 13.0 & 1.4 & 13.2 \\ & **CLIPCap**[33] & 0.15B & 10.3 & 1.2 & 13.1 \\ \hline \multirow{2}{*}{RLHF} & **Ours-BLIP-RLHF** & 0.25B & **15.2** & 1.9 & **14.5** \\ & **Ours-ViT+GPT2-RLHF** & 0.24B & 13.8 & **2.0** & 12.6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison with state-of-the-art methods. For all the metrics, higher values are better (\(\uparrow\)).
a 35.7% increase in BLEU, a 16.9% increase in ROUGE-L, and a 9% increase in METEOR score. For ViT+GPT2, RLHF provides an 11.1% increase in BLEU.
Aggregating the metrics, BLIP performs best, which is likely due to its aligned image encoder and text decoder which are pre-trained jointly. In contrast, ViT+GPT2's modules are not aligned/trained jointly and the text decoder learns to attend to the vision encoder only during fine-tuning. Hence, for our approach, the type of pre-training can have an impact on the amount of model improvement.
Overall, since the performance increase generalizes across models with different pre-training strategies and architectures, the results show the benefits of using this simple UDRL framework for fine-tuning. Utilizing only a small amount of human-annotated data, different scoring mechanisms and prompts can be further developed to take advantage of this limited supervision and increase performance further.
### Qualitative Results
To validate our framework's ability to generate better reader-aligned captions than standard approaches, we conduct an extensive qualitative study. We evaluate the results of the human feedback prediction model and the figure-captioning models trained with RLHF. We provide our analysis below:
**Human Feedback Prediction Model**: To evaluate the generalizability of our model, we first computed the score predictions on all the figure-caption pairs. Then we ordered the figure-caption pairs by the predicted scores and selected the top-3 figure-caption pairs with the largest scores along with the bottom-3 figure-caption pairs with the smallest scores. Results are provided in Figure 2. We observe that the figure-caption pairs with the largest scores are highly helpful to the reader as they mention specific takeaways from the figure (_e.g._, "as students make more applications, the number of students who get into their top-choice school decreases, while the number of overall acceptances increases."), as well as mentioning specific visual aspects that are important to its understanding (_e.g._, "... Vertical lines show the true p (blue) and \(\beta\) (orange)"). In contrast, the bottom-3 figure-caption pairs with the lowest scores (shown in red on the right in Figure 2) are vague, without any takeaways or references to visual elements in the figure.
Figure 2: Results of our Human Feedback Prediction Model. Here we show the three figure-caption pairs with the highest (left; green) and lowest (right; red) “helpfulness” human feedback score from our trained HF model. Notably, the figure-caption pairs rated highly by our human-feedback predictive model are those that are obviously better, as they mention specific takeaways, OCR text from the figure, and often visual aspects as well. In contrast, the figure-caption pairs with the lowest scores from our predictive model are those that are extremely vague, without actual takeaways, OCR mentions, or any reference to visual aspects of the figure.
**Figure-Caption Generative Model**: To evaluate the quality of captions, we compare the output of the BLIP-RLHF and BLIP (fine-tuned) models. We show some of the interesting results in Figure 3. In general, we see that BLIP-RLHF qualitatively produces better captions than fine-tuned BLIP. In most cases, captions produced by BLIP (fine-tuned) either explain the given figure incorrectly (Figure 3, leftmost sub-figure), are not relevant (Figure 3, middle sub-figure), or are completely uninformative (Figure 3, rightmost sub-figure). In contrast, captions produced by the BLIP-RLHF method are more faithful to the figure, capture semantic relations between text elements to summarize the phenomenon, and utilize visual attributes in explaining the figure. We provide more examples and analysis in the Appendix.
### Ablation Study
We perform the following ablation experiments to better understand different components of our framework. We provide the details of our findings below.
**Effect of Different Human Feedback Labels**: To understand how the level of quantization of our reward signals (binary vs. multi-level) affects model learning, we conduct a comparative study by modifying the feedback while training the BLIP-RLHF model; a sketch of the multi-level labeling appears below. First, we trained the model for 10 epochs using multi-level human feedback (Row 2); specifically, we used 5 levels of human feedback (very bad, bad, neutral, good, very good) with thresholds at the 20\({}^{\text{th}}\), 40\({}^{\text{th}}\), 60\({}^{\text{th}}\) and 80\({}^{\text{th}}\) percentiles to ensure an equal number of samples per level. We also experimented with varying label granularity during the course of training (Row 3); specifically, we trained the model with 5 epochs of binary-label feedback followed by 5 epochs of multi-level feedback. We show our results in Table 3. Both approaches with finer feedback outperform simple binary feedback and demonstrate, through our RL framework, the model's ability to leverage finer human feedback effectively. The experiment also indirectly validates the quality of our human-feedback prediction model, which is capable of providing useful labels at different levels of granularity that can be leveraged for increased performance on a downstream task like figure-captioning. The study also indicates the potential gains from investigating different feedback mechanisms further.
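A minimal sketch of the percentile-based 5-level labeling used in this ablation (token strings are illustrative assumptions):

```python
# Sketch of the 5-level feedback: thresholds at the 20th/40th/60th/80th
# percentiles of the predicted scores yield equally populated bins.
import numpy as np

LEVELS = ["<|very bad|>", "<|bad|>", "<|neutral|>", "<|good|>", "<|very good|>"]

def multilevel_tokens(scores):
    edges = np.percentile(scores, [20, 40, 60, 80])
    return [LEVELS[int(np.digitize(s, edges))] for s in scores]
```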
**Effect of different human feedback metrics** We also study the effect of using different metrics as feedback for training the figure-caption models. In particular, we compare the results of training the BLIP-RLHF model with the Helpfulness, Takeaway, Visual-descriptiveness (Visual) and Image-text (OCR) feedback scores provided in our benchmark. We provide the results in Table 4. We see that training BLIP-RLHF with Takeaway, Visual and OCR feedback performs better than with Helpfulness. This is understandable, as the Helpfulness rating is subjective while Visual and Takeaway are objective evaluation metrics. This shows that the type of feedback is important and that further gains can be made by modeling different aspects of the annotated human dataset.
**Effect of different figure-caption representations** To understand the effect of using different figure-caption representations, we use BERT, SciBERT and BLIP to encode our figure-caption pairs and use their final-layer representations of the [CLS] token to train our human-feedback prediction model (Table 5). All of these representations outperform our default MCSE implementation, indicating that our human-feedback prediction model, and the downstream figure-captioning performance, are sensitive to the quality of the representations used. Additionally, further performance gains can be made with different representations, for example, by encoding different modalities (text only vs. joint encoding of text and vision).
**Effect of human feedback position**: To understand the sensitivity of the model to the position of the human feedback token, we compare the performance of appending and prepending the human feedback labels in Table 6. Since our models generate text at test time without any human feedback label prompt, they can only rely on feedback during training. Additionally, due to the auto-regressive generation of our models, with prepending they observe the label before generation, while with appending they only observe the label after generation. Intuitively, prepending should work best since the generation is conditioned on the label. The results support this and show that both ViT+GPT2 and BLIP perform better when trained with prepended human feedback.
## 6 Discussion, Limitations & Conclusion
In this work, we contribute a new methodology to improve caption generation for scientific figures. We show that incorporating domain expert feedback in learning a model for figure-to-caption generation improves both model performance and caption quality. Our proposed framework is scalable (it requires limited manual human effort in labeling) and flexible (it allows for incorporating multiple reward signals at different granularities). We also propose a new benchmark of figure-caption pairs with caption quality scores to further the research efforts in reader-aligned figure-captioning tasks. We hope that this new dataset will allow researchers to benchmark their own methods for incorporating human feedback in figure-to-caption generation tasks and various other image-to-text generation tasks.
While we empirically show that our framework can generate better captions, it currently lacks the ability to incorporate multiple complementary feedback signals. Furthermore, we currently need to quantize
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline & Model & **\#Params** & **ROUGE-L** & **BLEU** & **METEOR** \\ \hline \multirow{2}{*}{RLHF-append} & **Ours-BLIP-RLHF** & 0.25B & 13.6 & 1.8 & 13.2 \\ & **Ours-ViT+GPT2-RLHF** & 0.24B & 13.8 & 1.6 & 11.9 \\ \hline \multirow{2}{*}{RLHF-prepend} & **Ours-BLIP-RLHF** & 0.25B & 15.2 & **1.9** & **14.5** \\ & **Ours-ViT+GPT2-RLHF** & 0.24B & 13.8 & **2.0** & 12.6 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparing RLHF prepend to append.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & **\#Params** & **ROUGE-L** & **BLEU** & **METEOR** \\ \hline
**Helpfulness** & 0.25B & 15.20 & 1.86 & 14.50 \\
**Takeaway** & 0.25B & 16.76 & 2.30 & 15.98 \\
**Visual** & 0.25B & 16.78 & 2.30 & 15.95 \\
**OCR** & 0.25B & 16.54 & 2.23 & 15.65 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results with different human feedback metrics (BLIP-RLHF).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & **\#Params** & **ROUGE-L** & **BLEU** & **METEOR** \\ \hline
**BERT** & 0.25B & 15.65 & 1.93 & 14.73 \\
**SciBERT** & 0.25B & 15.77 & 2.01 & 15.09 \\
**BLIP** & 0.25B & 15.73 & 1.98 & 14.94 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results with different embedding models for the human-feedback model.
the reward score to be able to utilize it as valid feedback when training the model. This limits the applicability of our framework in scenarios where a numerical score does not naturally correspond to a categorical label like 'good' or 'bad'.
As a future goal, we aim to improve our framework by addressing the above issues. We also aim to explore further properties and use cases of the human-feedback prediction model. For example, we would like to benchmark the generalizability of the human-feedback prediction model under various data and task distribution shifts. This can provide further insights into developing methods that are robust and adaptable.
## 7 Ethics Statement
Our work on improving figure caption generation is important for building accessible assistive tools for the scientific community and visually impaired people. However, like many works in the area of generative AI, our work and its general ideas also carry the risk of misuse, i.e. our proposed method could be advertised by a third party as a deployable product, when in fact we believe that it is a research endeavor and still has room for improvement. Another potential negative impact of our work could be careless collection of human feedback without due consideration of the human subjects involved. This is our key motivation for making our dataset with feedback labels public, allowing interested researchers to develop and benchmark their own methods that require feedback.
Finally, we comment on the dataset privacy considerations for the proposed benchmark. Our proposed dataset and other datasets considered in this work are licensed for academic/non-commercial research (Creative Commons Attribution-Non Commercial-Share Alike 4.0 International License). Our proposed dataset does not contain any personal information.
|
2310.13779 | Weibel-dominated quasi-perpendicular shock: hybrid simulations and in
situ observations | We directly compare hybrid kinetic simulations and in situ observations of a
high Mach number high-$\beta$ shock in the Solar wind. We launch virtual probes
to demonstrate that the model quantitatively reproduces the observations. The
observed wave properties are caused by the ion Weibel instability in the shock
foot. Parameters of reflected ions in the shock foot are extracted from
simulations, and their coordinate dependencies are linearly approximated. These
approximations could be used in analytical models. Due to strong magnetic
variations at the ramp, the reflected ion density can be locally very high
(nearly that of the incoming flow), which creates favourable conditions for the
instability. | J. A. Kropotina, A. A. Petrukovich, O. M. Chugunova, A. M. Bykov | 2023-10-20T19:25:03Z | http://arxiv.org/abs/2310.13779v1 | # Weibel-dominated quasi-perpendicular shock: hybrid simulations and in-situ observations
###### Abstract
We directly compare hybrid kinetic simulations and in-situ observations of a high Mach number high-beta shock in the Solar wind. We launch virtual probes to demonstrate that the model quantitatively reproduces the observations. The observed wave properties are caused by the ion Weibel instability in the shock foot. Parameters of reflected ions in the shock foot are extracted from simulations, and their coordinate dependencies are linearly approximated. These approximations could be used in analytical models. Due to strong magnetic variations at the ramp, the reflected ion density can be locally very high (nearly that of the incoming flow), which creates favourable conditions for the instability.
keywords: Physical Data and Processes: instabilities, Physical Data and Processes: shock waves, Software: simulations, Physical Data and Processes: plasmas
## 1 Introduction
Collisionless shocks propagating through low magnetized plasma appear in various astrophysical objects. The synchrotron radiation of gamma-ray burst afterglows is likely associated with energetic particles and magnetic fields produced by Weibel-instability-mediated shocks driven by relativistic outflows in the low-magnetized circumburst medium (see e.g. Medvedev & Loeb, 1999; Milosavljevic & Nakar, 2006; Lemoine et al., 2019). The spectacular merger and accretion events in clusters of galaxies are accompanied by observed large-scale shocks propagating through the hot intercluster plasma (see e.g. Churazov et al., 2021, 2023; Markevitch & Vikhlinin, 2007; Bykov et al., 2019). The shock Alfven Mach numbers and the ratio of thermal to magnetic pressure (i.e. the plasma parameter \(\beta\)) can be large in the intercluster medium. Also, the magnetic field in the cold expanding supernova ejecta is expected to be extremely low, if one assumes that it is the field of the progenitor star scaled according to the magnetic flux conservation law (Ellison et al., 2005; Telezhinsky et al., 2012). Hence, the reverse shocks observed in supernova remnants (see Raymond, 2018, for a review) are likely to be unmagnetized.
Numerical models and laser plasma experiments (Huntington et al., 2015; Park et al., 2015; Marcowith et al., 2016) indicate that in such environments initially negligible magnetic fields can grow substantially due to the ion Weibel instability (IWI), which develops during the interaction of incoming and reflected flows (Chang et al., 1990; Burgess et al., 2016). The electron Weibel instability arises first and quickly thermalizes the electrons. After that, the much slower IWI comes into play and forms a collisionless shock with strong electromagnetic turbulence. As a result, the magnetic energy density can reach about 1-2 % of the upstream bulk kinetic energy density (Kato & Takabe, 2008). These magnetic fields not only shape collisionless shocks but are also favorable for magnetic reconnection and subsequent particle acceleration (Bohdan et al., 2020). Thus low-magnetized Weibel-mediated shocks might be a site of electron pre-acceleration and injection into first-order Fermi acceleration. Meanwhile, strong magnetic fields near the shock transition increase the momentum which particles need to enter the Fermi process. So the net impact of the IWI on the Fermi acceleration efficiency is still an open question. Understanding the microstructure and properties of Weibel-mediated shocks is required to solve this problem.
Weibel-mediated shocks have been extensively studied by means of kinetic simulations (Kato & Takabe, 2008, 2010; Spitkovsky, 2008; Bohdan et al., 2020). Particle-in-cell (PIC) codes used for such simulations are highly resource-intensive because they operate on electron scales and must resolve the Debye length to avoid nonphysical heating. Since collisionless shocks form on much larger ion scales, some tricks are routinely used to artificially bring electron scales closer to the ion ones and reduce the computation time. These tricks include reducing the proton-to-electron mass ratio and increasing the upstream temperature, both of which increase the ratio of the Debye radius to the ion inertial length. The upstream flow velocity (in the shock front reference frame) \(V_{sh}\) must be increased correspondingly to keep the sonic Mach number. For this reason PIC simulations usually deal with relativistic or subrelativistic shocks (at least \(V_{sh}\sim 0.1c\), where \(c\) is the velocity of light).
In the nonrelativistic case the magnetisation parameter can be estimated as \(\sigma\equiv[B_{0}^{2}/8\pi]/[n_{0}(m_{i}+m_{e})V^{2}/2]\), where \(B_{0}\), \(n_{0}\) and \(V\) are the far upstream magnetic field, number density and flow velocity, and \(m_{i}\) and \(m_{e}\) are the proton and electron masses. It was proposed in Kato & Takabe (2008) that \(\sigma\) must be lower than \(10^{-4}\) for a shock to be Weibel-dominated. This casts doubt on the presence of Weibel-dominated
shocks in supernova remnants older than 1000 yr. Meanwhile, in the laser plasma experiment of Park et al. (2015), Weibel structures were found in a shock with \(\sigma\sim 10^{-3}\). In the solar wind \(\sigma\) is even higher (at least \(\sim 10^{-2}\)). Moreover, Burgess et al. (2016) obtained Weibel structures in hybrid simulations of low Mach number low-beta shocks with \(\sigma\) about 0.1. This indicates that shocks can be Weibel-dominated in a moderately magnetized regime. However, the authors pointed out that hybrid codes have some limitations, and thus their results should be confronted with observations or PIC simulations. They also proposed that the Cluster and MMS spacecraft are capable of resolving Weibel structures.
Near-Earth observations with spacecraft provide a unique possibility to study collisionless shock structure and dynamics in situ, measuring electric and magnetic fields as well as electron and ion distribution functions. However, low-magnetized (high-\(\beta\)) conditions are not frequent in solar wind plasmas. Among the very few early direct observations of low magnetized shocks in the solar wind (e.g. Formisano et al., 1975; Winterhalter & Kivelson, 1988), the ISEE 1 and ISEE 2 spacecraft revealed important details of the high-\(\beta\) terrestrial bow shock structure (Farris et al., 1992). The large separation between the spacecraft (over 2500 km) allowed simultaneous upstream and downstream solar wind plasma measurements for a period of about 8 min. Large-amplitude magnetic field and density fluctuations were measured, and a hot dense field-aligned ion beam escaping from the downstream region of the shock was detected. The beam was associated by Farris et al. (1992) with short periodic magnetic holes detected in front of the bow shock. Recent studies of high-\(\beta\) shocks with the MMS, Cluster and Geotail spacecraft were reviewed by Petrukovich & Chugunova (2021). Often the transition region of a high-beta shock contains quasi-periodic linearly polarised pulsations, most probably associated with the IWI (Sundberg et al., 2017; Petrukovich et al., 2019; Petrukovich & Chugunova, 2021). However, the relation between the observed quasi-periodic oscillations and the shock structure has never been studied in detail.
In this paper we build a hybrid kinetic model of a nonrelativistic high-beta shock observed by MMS and directly compare it with the observations. In our model we study the growth of the magnetic variance in the foot region and find it consistent with the predictions of the kinetic linear theory for the IWI. We also launch a virtual probe to study the nature of the observed quasiperiodic oscillations. We conclude that the observed nonrelativistic (\(V\sim 400\) km/s) quasiperpendicular shock is formed due to the IWI and has a typical structure with normal-aligned filaments of density and magnetic field. Weibel structures are non-propagating in the plasma reference frame, but they are convected supersonically along the shock surface. This happens because the mean flow velocity along the shock surface is substantial in the foot region occupied by reflected ions. Hence wave minima and maxima sweep across the slowly moving spacecraft and lead to the observed pulsations.
The quantitative agreement between our hybrid kinetic model and the in-situ observations, as well as the qualitative agreement with Burgess et al. (2016), allows us to verify that hybrid codes are capable of reproducing Weibel-dominated shocks. We also determine the properties of the reflected ion beam responsible for the development of the instability.
The paper is organized as follows: in Section 2 we provide the linear theory of the IWI; in Section 3 we describe an observed event in the Solar wind; in Section 4 we introduce our kinetic numerical model and in Section 5 we discuss the simulated shock properties and compare them with the observed ones. The discussion and conclusions are given in Sections 6 and 7 respectively.
## 2 Theory
The transverse electromagnetic Weibel (1959) instability has long been widely discussed in the modeling of collective processes in plasmas with anisotropic particle distributions, both in laboratory plasma installations (e.g. Morse and Nielson, 1971; Davidson et al., 1972, 2004) and in the space environment (see e.g. Balogh and Treumann, 2013; Bykov and Treumann, 2011; Sironi et al., 2015; Marcowith et al., 2016; Pelletier et al., 2017; Takabe, 2023). The ion beam Weibel instability of a cold unmagnetized cross-field ion beam moving relative to static cold magnetized electrons was considered by Chang et al. (1990). Besides the well-known modified two-stream and lower-hybrid drift instabilities, they found a purely growing electromagnetic mode which they called the IWI. Their approach was generalized in Park et al. (2015); Burgess et al. (2016) to the case of two opposite cold unmagnetized cross-field ion beams. In the center of mass reference frame the growth rate is given by
\[\Gamma^{2}=\frac{k^{2}n_{c}n_{b}(V_{c}-V_{b})^{2}}{(n_{c}+n_{b})^{2}(1+k^{2}c^{2}/\omega_{pi}^{2})}, \tag{1}\]
where \(V\) and \(n\) are the flow velocity and number density of the two ion populations, \(k\) is the wavenumber, and \(\omega_{pi}\) is the ion plasma frequency. Here the subscript \(c\) denotes the denser core and the subscript \(b\) the fainter beam (note, however, that the expression is symmetric, so the subscripts can be exchanged). The growth rate is independent of \(B_{0}\) and becomes asymptotic to \((\omega_{pi}/c)|V_{c}-V_{b}|\sqrt{n_{c}n_{b}}/(n_{c}+n_{b})\) for \(k\gg\omega_{pi}/c\). The wavevector is perpendicular to the beams.
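Eq. (1) is straightforward to evaluate numerically; a minimal sketch (in CGS units, with the inputs to be supplied by the user) is:

```python
# Sketch evaluating the hydrodynamic IWI growth rate of Eq. (1) in CGS units.
import numpy as np

def gamma_hydro(k, n_c, n_b, dV, omega_pi, c=2.9979e10):
    """Purely growing mode, wavevector perpendicular to the beams (Eq. 1)."""
    return np.sqrt(k**2 * n_c * n_b * dV**2 /
                   ((n_c + n_b)**2 * (1.0 + (k * c / omega_pi)**2)))

# For k >> omega_pi/c this tends to (omega_pi/c)*|dV|*sqrt(n_c*n_b)/(n_c+n_b).
```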
Kato and Takabe (2010) studied the IWI kinetically, taking the parameters from their PIC simulation. In the case when the magnetic field and the wavevector are along \(z\), and both beams lie in the \(x-y\) plane, the dispersion equation reads as
\[\det\Lambda=0, \tag{2}\]
where
\[\Lambda_{xx} = 1-\left(\frac{kc}{\omega}\right)^{2}+\frac{1}{2}\left(\frac{ \omega_{pe}}{\omega}\right)^{2}\xi_{0}[Z(\xi_{1})+Z(\xi_{-1})]+ \tag{3}\] \[\qquad+\sum_{s}\left(\alpha_{s}+2\left(\frac{V_{x,s}}{V_{T,s}} \right)^{2}(1+\alpha_{s})\right)\left(\frac{\omega_{ps}}{\omega}\right)^{2},\]
\[\Lambda_{yy} = 1-\left(\frac{kc}{\omega}\right)^{2}+\frac{1}{2}\left(\frac{ \omega_{pe}}{\omega}\right)^{2}\xi_{0}[Z(\xi_{1})+Z(\xi_{-1})]+ \tag{4}\] \[\qquad+\sum_{s}\left(\alpha_{s}+2\left(\frac{V_{y,s}}{V_{T,s}} \right)^{2}(1+\alpha_{s})\right)\left(\frac{\omega_{ps}}{\omega}\right)^{2},\]
\[\Lambda_{zz} = 1+2\left(\frac{\omega_{pe}}{kV_{T,e}}\right)^{2}[1+\xi_{0}Z(\xi_{0})]+2\sum_{s}\left(\frac{\omega_{ps}}{kV_{T,s}}\right)^{2}(1+\alpha_{s}), \tag{5}\]
\[\Lambda_{xy} = \frac{i}{2}\left(\frac{\omega_{pe}}{\omega}\right)^{2}\xi_{0}[Z(\xi_{1})-Z(\xi_{-1})]+2\sum_{s}\left(\frac{\omega_{ps}}{\omega}\right)^{2}\frac{V_{x,s}V_{y,s}}{V_{T,s}^{2}}(1+\alpha_{s}), \tag{6}\]
\[\Lambda_{yx} = -\frac{i}{2}\left(\frac{\omega_{pe}}{\omega}\right)^{2}\xi_{0}[Z(\xi_{1})-Z(\xi_{-1})]+2\sum_{s}\left(\frac{\omega_{ps}}{\omega}\right)^{2}\frac{V_{x,s}V_{y,s}}{V_{T,s}^{2}}(1+\alpha_{s}), \tag{7}\]
\[\Lambda_{xz} = \Lambda_{zx}=2\sum_{s}\left(\frac{\omega_{ps}}{\omega}\right)^{2}\frac{V_{x,s}}{V_{T,s}}\frac{\omega}{kV_{T,s}}(1+\alpha_{s}), \tag{8}\]
\[\Lambda_{yz}=\Lambda_{zy}=2\sum_{s}\left(\frac{\omega_{ps}}{\omega}\right)^{2}\frac{V_{y,s}}{V_{T,s}}\frac{\omega}{kV_{T,s}}(1+\alpha_{s}), \tag{9}\]
\[\xi_{n}=\frac{\omega-n\Omega_{e}}{kV_{T,e}},\quad\alpha_{s}=\left(\frac{\omega}{kV_{T,s}}\right)Z\left(\frac{\omega}{kV_{T,s}}\right), \tag{10}\]
\[\Omega_{e}=-\frac{eB}{m_{e}c},\quad\omega_{ps}=\sqrt{\frac{4\pi n_{s}e_{s}^{2}}{m_{s}}},\quad V_{T,s}=\sqrt{\frac{2k_{B}T_{s}}{m_{s}}}, \tag{11}\]
\[Z(\xi)\equiv\pi^{-\frac{1}{2}}\int_{-\infty}^{\infty}\frac{e^{-z^{2}}}{z-\xi}dz. \tag{12}\]
Here \(\omega\) is the complex frequency, and the summation runs only over the ion species, i.e. \(s=b,c\) (beam, core).
This dispersion equation will be solved numerically in section 5.5 with parameters taken from our simulations. The solution includes a purely growing mode which corresponds to the kinetic IWI. Its increment is typically much lower than that given by Eq. (1).
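A minimal sketch of such a numerical solution is given below (not the code used in this paper). The plasma dispersion function is expressed through the Faddeeva function \(w(z)\) as \(Z(z)=i\sqrt{\pi}\,w(z)\), and a complex root of \(\det\Lambda=0\) is searched near a purely growing guess; the sign convention of the electron gyrotropic terms and all input values are assumptions.

```python
# Hedged sketch of solving Eqs. (2)-(12) numerically.  Normalized units:
# frequencies in omega_pi, k in omega_pi/c, speeds in c.
import numpy as np
from scipy.optimize import root
from scipy.special import wofz

def Z(z):
    # plasma dispersion function via the Faddeeva function w(z)
    return 1j * np.sqrt(np.pi) * wofz(z)

def Lam(w, k, e, ions):
    """Dispersion tensor for k along z: magnetized electrons e = {wp, vt, Om}
    and a list of unmagnetized drifting ion populations {wp, vt, vx, vy}."""
    xi = lambda n: (w - n * e['Om']) / (k * e['vt'])
    L = np.zeros((3, 3), complex)
    el = 0.5 * (e['wp'] / w)**2 * xi(0) * (Z(xi(1)) + Z(xi(-1)))
    L[0, 0] = L[1, 1] = 1 - (k / w)**2 + el                    # c = 1
    L[2, 2] = 1 + 2 * (e['wp'] / (k * e['vt']))**2 * (1 + xi(0) * Z(xi(0)))
    g = 0.5j * (e['wp'] / w)**2 * xi(0) * (Z(xi(1)) - Z(xi(-1)))
    L[0, 1], L[1, 0] = g, -g              # assumed electron sign convention
    for s in ions:                        # unmagnetized ion terms of Eqs. (3)-(9)
        zeta = w / (k * s['vt'])
        a = zeta * Z(zeta)
        w2 = (s['wp'] / w)**2
        L[0, 0] += w2 * (a + 2 * (s['vx'] / s['vt'])**2 * (1 + a))
        L[1, 1] += w2 * (a + 2 * (s['vy'] / s['vt'])**2 * (1 + a))
        L[2, 2] += 2 * (s['wp'] / (k * s['vt']))**2 * (1 + a)
        L[0, 1] += 2 * w2 * s['vx'] * s['vy'] / s['vt']**2 * (1 + a)
        L[1, 0] += 2 * w2 * s['vx'] * s['vy'] / s['vt']**2 * (1 + a)
        L[0, 2] += 2 * w2 * (s['vx'] / s['vt']) * zeta * (1 + a)
        L[1, 2] += 2 * w2 * (s['vy'] / s['vt']) * zeta * (1 + a)
    L[2, 0], L[2, 1] = L[0, 2], L[1, 2]
    return L

def growth_rate(k, e, ions, g0=0.05):
    """Im(omega) of the root of det(Lambda) = 0 near a purely growing guess."""
    def f(v):
        d = np.linalg.det(Lam(v[0] + 1j * v[1], k, e, ions))
        return [d.real, d.imag]
    sol = root(f, [1e-6, g0])
    return sol.x[1] if sol.success else np.nan
```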
## 3 Observations
For the analysis we used measurements of the NASA Magnetospheric Multiscale (MMS) project from the magnetic field (FGM; Russell et al., 2016) and plasma (FPI; Pollock et al., 2016) experiments. In order to directly compare simulations and observations we chose the bow shock crossing by the MMS spacecraft on November 25, 2017 (see Petrukovich & Chugunova (2021)). This is a high-beta strong collisionless shock in a region with an ambient magnetic field \(B_{0}\) as low as 0.9 nT. The Alfven Mach number in the shock rest frame is \(M_{a}\approx 60\) and the shock inclination angle is \(\theta\approx 65^{\circ}\), with a total ion number density \(n_{i}\approx 9\) cm\({}^{-3}\). The proton temperature is \(T_{p}\approx 1.1\) eV, and the electron temperature is \(T_{e}\approx 13.4\) eV.
Shocks with such parameters are rare in the solar wind: only about 30 well-documented cases with \(\beta>30\) were found in the observations by modern spacecraft (Petrukovich & Chugunova, 2021). About half of these cases have a rather rich internal structure with extended variations, similar to the event presented here, while the others have an appearance closer to a more standard shock (a single magnetic field and density jump). It should be noted that this difference in appearance is not due to the angle between the shock normal and the upstream magnetic field (parallel shocks are known to have more extended variations than perpendicular ones). Most of the considered shocks have this angle larger than \(45^{\circ}\).
Though the shock crossing takes some minutes (Fig. 1), the physical width of the shock layer is only about one proton cyclotron radius in the (very low) upstream magnetic field. The spacecraft gradually crosses the shock from downstream to upstream and observes a relatively stable picture of periodically (\(\sim\)15 s) emerging activations, gradually thermalizing the solar wind ion flow. Each activation, in turn, consists of high-amplitude magnetic variations with a period of about 1 s, coupled with pulses of a downstream-like plasma flow. Sometimes these density and magnetic field peaks are higher than the downstream averaged plasma density and field values. Between the pulses a more upstream-like flow is observed, with a substantial fraction of reflected and accelerated ions.
Available observations with four closely separated spacecraft allow one to determine the wavelength of the 1-s pulsations, about 150 km, as well as the propagation velocity and direction. These waves have linear polarisation and are almost standing in the plasma rest frame, consistent with the expectation for the Weibel mode. Below we compare these values with those obtained in the simulations.
## 4 Simulations
We modeled a shock with parameters taken from the observations by means of the hybrid code "Maximus" (Kropotina et al., 2019, 2021). We used a 3d cartesian grid sized \(L_{x}\times L_{y}\times L_{z}=2500\times 150\times 150\) cells, with a cell size of \(0.1\,l_{i}\), where \(l_{i}\) is the proton inertial length. The shock was launched via the rigid piston method, in which a super-Alfvenic flow with a bulk velocity \(V_{x}=-45V_{a}\) is reflected from a conductive wall at \(x=0\). This resulted in the formation of a shock front moving in the positive \(x\) direction with \(V_{f}\approx 15.3V_{a}\). Thus in the shock front frame \(M_{a}=60.3\). The initial magnetic field lay in the \(x-z\) plane at an angle \(\theta=65^{\circ}\) to the shock normal. Average values for the Helium content (4 % He(+2) by number) and temperature (He(+2) is four times hotter than protons) were used in the model. Thus the total mass density was \(\rho_{0}\approx 1.7\cdot 10^{-23}\) g/cm\({}^{3}\). Plasma parameters for all particle species were \(\beta_{p}\equiv 8\pi\cdot 0.96n_{i}T_{p}/B_{0}^{2}=4.8\), \(\beta_{He}=8\pi\cdot 0.04n_{i}\cdot 4T_{p}/B_{0}^{2}=0.8\), \(\beta_{e}=8\pi\cdot 1.04n_{i}T_{e}/B_{0}^{2}=62.4\), and the total \(\beta=\beta_{p}+\beta_{He}+\beta_{e}=68\). Electrons were treated as a neutralizing massless fluid with an adiabatic equation of state and the standard adiabatic index \(\Gamma=5/3\).
It should be noted that hybrid codes cannot capture electron kinetics, thus the highest-frequency modes might be modeled incorrectly. However, our model is highly resource-intensive even within the hybrid approach, while the same simulation box size seems to be unreachable within full PIC modeling, especially with realistic electron-to-proton mass ratios (which in turn might affect the results). For this reason we chose the hybrid approach. The comparison with observations will validate our method at least in the sense of reproducing wave directions, amplitudes, spectra, and polarisation.
In the hybrid code all quantities are normalized: the magnetic field and the mass density are measured in \(B_{0}\) and \(\rho_{0}\), lengths are measured in \(l_{i}=c/\omega_{pi}\approx 2.3\cdot 10^{7}n_{0}^{-0.5}\) cm, times in inverse proton gyrofrequencies \(\Omega^{-1}=m_{p}c/eB_{0}\approx 11.6\) s, velocities in Alfven velocities \(V_{a}=B_{0}/\sqrt{4\pi\rho_{0}}\approx 5.8\) km/s, and temperatures are
Figure 1: MMS observations of the bow shock crossing at November 25, 2017. (a) Ion omnidirectional spectrogram; (b) ion number density; (c) magnetic field magnitude; (d) \(B_{y}\) magnetic field; (e) wavelet dynamic spectrum of magnetic field \(B_{y}\).
given in energy units \(m_{p}V_{a}^{2}\approx 0.4\) eV. To make the units self-consistent we took as \(n_{0}\) the number density of a pure proton plasma with the same \(\rho_{0}\) (see Matthews (1994)). Thus for the proton-helium plasma \(n_{0}\) is slightly greater than the electron number density \(n_{e}\). All simulation parameters are listed in Table 1.
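For orientation, these normalization constants can be reproduced from the observed \(B_{0}\) and \(n_{0}\) with a few lines (CGS); the sketch below gives values close to those in Table 1, with small offsets reflecting the exact \(\rho_{0}\) and composition used in the paper.

```python
# Order-of-magnitude sketch of the normalization units (CGS units assumed).
import numpy as np

m_p, e, c = 1.6726e-24, 4.8032e-10, 2.9979e10   # g, esu, cm/s
B0, n0 = 0.9e-5, 10.0                           # 0.9 nT in G; cm^-3

Omega_inv = m_p * c / (e * B0)                  # inverse gyrofrequency ~ 11.6 s
l_i = c / np.sqrt(4 * np.pi * n0 * e**2 / m_p)  # ion inertial length ~ 7e6 cm
V_a = B0 / np.sqrt(4 * np.pi * n0 * m_p)        # Alfven speed ~ 6e5 cm/s

# Front-frame Alfven Mach number of the rigid-piston run:
M_a = abs(-45.0 - 15.3)                         # flow -45 V_a, front +15.3 V_a -> 60.3
```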
## 5 Results
### Shock structure
The shock temporal evolution is color-coded in Fig. 2. The front is formed at \(t\approx 30\) s (\(\sim 3\Omega_{ci}^{-1}\)) and propagates with a nearly uniform velocity \(V_{f}\approx 15.3V_{a}\approx 89\) km/s (its trajectory is shown by dashed gray lines in all panels). To mimic the MMS shock crossing, four virtual probes were launched. They were located at the vertices of a regular tetrahedron with an edge of \(0.3\,l_{i}\approx 22\) km. These virtual spacecraft moved along the shock normal from the downstream to the upstream with \(V_{p,x}=16.3V_{a}\) (\(\approx 1V_{a}\) in the front rest frame) and measured the magnetic field and plasma parameters along their way. Their trajectories are shown in Fig. 2 by a black arrow (the separation between the probes is not resolved). The resulting temporal profiles are discussed in section 5.2. We checked that a transverse probe motion with \(V_{p,y}=V_{p,z}=0.3V_{a}\) did not introduce any differences.
The shock structure at \(t=10\Omega^{-1}\approx 2\) min after initialisation is shown in Figures 3 and 4. The shock front has a complicated filamentary structure, and its surface is highly corrugated (rippled). The ion phase spaces in the top panels of Fig. 3 show that the shock foot extends over several thousand kilometers. Specularly reflected ions gyrate in the upstream magnetic field and induce a cross-field current along \(x\) and \(y\), which is favourable for the IWI. A detailed inspection of the \(V_{x}\) and \(n\) maps revealed that high-density filaments correspond to higher negative velocities along the normal. This indicates that the density enhancements in the foot are not due to spatial variations of the number of shock-reflected ions. On the contrary, the density variations appear farther out in the foot and are convected towards the shock front by the plasma bulk flow.
In the shock coplanarity plane (Fig. 3, right column) thin structures (filaments) are seen in the foot region. They make a small angle with the shock normal and grow rapidly towards the front. The overall picture resembles that seen in the simulations of Kato & Takabe (2008, 2010), who concluded that such structures appear due to the IWI. The structures seen in both planes are also very similar to those discussed in Burgess et al. (2016), who identify narrow Weibel filaments with a width close to \(2l_{i}\) in the coplanarity plane and somewhat wider oblique 'tongues' in the perpendicular plane, which they call 'the AIC-like ripples'.
Magnetic fluctuations associated with these filaments are very strong near the front. In the downstream (yellow rectangle) and in the close foot region (cyan rectangle) their amplitude reaches \(10B_{0}\), and between them, at the ramp, it is twice as large. Thus the magnetic energy density reaches about 2 % of the bulk kinetic energy. Number density peaks reach about \(10\rho_{0}\).
### Virtual probes
Instantaneous shock profiles along the normal (Fig. 4) are much smoother than those seen by MMS (Fig. 1). Hence we cannot directly compare observations with simulations if we ignore the relative motion of the shock and the spacecraft. In Fig. 5 we show what one of the virtual probes observes, starting at \(t=3.5\Omega^{-1}\) at \(x=37.5l_{i}\). The picture is very similar to the observations (see Fig. 1) and differs markedly from the instantaneous profiles (Fig. 4). The reason is that the plasma moves across the probes much faster than the probes move across the shock. Moreover, the transverse plasma motions are highly oscillatory. We found that along the probes' paths \(V_{y}\) varies between \(-30V_{a}\) and \(15V_{a}\), and \(V_{z}\) between \(-15V_{a}\) and \(5V_{a}\).
To study how the observed oscillations change while the probe is moving across the shock, we performed the Morlet wavelet transform of the \(B_{y}\) projection and found that the picture qualitatively resembles the MMS observations (cf. the bottom panels of Fig. 1 and Fig. 5). The main difference between the model and the observations is the absence in the model of the clear-cut bunching of 1-Hz oscillations into the \(\sim\)15 s 'packets', though some enhancements are present in the frequency spectra at about 0.1 Hz (bottom panel of Fig. 5).
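Dynamic spectra of this kind can be reproduced from any uniformly sampled probe record with a continuous Morlet-family wavelet transform; a minimal sketch using the PyWavelets package (one possible tool, not necessarily the one used for the figures here) is:

```python
# Sketch of a Morlet dynamic spectrum for a uniformly sampled probe record.
import numpy as np
import pywt

def dynamic_spectrum(by, dt, freqs, wavelet='cmor1.5-1.0'):
    """|CWT|^2 of the signal 'by' (sampling step dt, s) at frequencies in Hz."""
    scales = pywt.central_frequency(wavelet) / (freqs * dt)
    coef, f = pywt.cwt(by, scales, wavelet, sampling_period=dt)
    return np.abs(coef)**2, f
```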
It should be noted that our virtual spacecraft crossed the shock much faster than the real ones (two minutes vs five). This was done in order to show the whole transition with reasonable computational effort. However, to check the impact of the probe velocity we also launched several probes starting at different points and moving with \(V_{p,x}=15.5V_{a}\) and \(V_{p,x}=15.7V_{a}\) (i.e. \(0.2V_{a}\) and \(0.4V_{a}\) relative to the front).
Fig. 6 shows the results of three probes starting at \(x=38l_{i}\), \(41l_{i}\) and \(42.5l_{i}\) and moving with \(V_{p,x}=15.7V_{a}\). They should cross the
\begin{table}
\begin{tabular}{c c c c} \hline \(B_{0}\) & \(0.9\) nT & \(T_{P}\) & \(1.1\) eV \\ \(n_{i}\) & \(9\) cm\({}^{-3}\) & \(T_{He}\) & \(4.4\) eV \\ \(n_{0}\) & \(10\) cm\({}^{-3}\) & \(T_{e}\) & \(13.4\) eV \\ \(l_{i}\) & \(68\) km & \(L_{x}\) & \(2500\) cells \\ \(\Omega_{ci}^{-1}\) & \(11.6\) s & \(L_{y}\) & \(150\) cells \\ \(V_{a}\) & \(5.8\) km / s & \(L_{z}\) & \(150\) cells \\ \(M_{a}\) & \(60.3\) & cell size & \(0.1\,l_{i}^{3}\) \\ \(\theta\) & \(65^{\circ}\) & & \\ \hline \end{tabular}
\end{table}
Table 1: Simulation parameters
Figure 2: Shock dynamics. From top to bottom: cross-section averaged transverse magnetic field \(B_{y}\), magnetic field magnitude and density. An approximate front position is marked by dashed gray lines. Black arrows represent a trajectory of a virtual probe. The insets represent the closer view of the front.
shock in \(\sim 5\) minutes, just as the real spacecraft. Black lines show the probes' measurements, and red ones correspond to the smoothed data.
The modeled data look noisier than the observed ones due to the limited number of particles per cell. The highest frequencies might also be affected by the grid resolution and the lack of electron kinetics. Hence the modeled curves do not perfectly reproduce the observed ones. However, rather prominent wave packets appear in the case of the slower probes, which stay longer in each region. More upstream-like regions with lower density and magnetic field alternate with more downstream-like ones. The wave packets are less clearly separated from each other than in the observations. This probably indicates that the observed shock is more variable. One of the reasons might be that the longest waves are restricted by the simulation box size.
Overall, it is possible that prolonged shock crossings like the presented one are observed due to an extremely low proper shock speed (of the order of km/s). A detailed discussion of this issue is beyond the scope of this paper. Here we concentrate on the comparison of the observed and simulated structures.
We compare the plasma wave properties observed at the near-Earth shock with those in the recordings of the virtual probes with \(V_{p,x}=16.3V_{a}\) (Table 2). Four probes in observations and in simulations make it possible to determine not only the temporal sequence of the measured parameters but also the spatial gradient (hence, the wavevector) on the scale of the probe separation. In both cases the magnetic oscillations were linearly polarised (the maximum variance eigenvalue is at least 4-5 times larger than the medium variance one) and the wavevector was close to the local magnetic field (\(\theta_{kB}<40^{\circ}\)).
Figure 3: Shock structure. Top panel: \(x-V_{x}\) and \(x-V_{y}\) protons phase spaces averaged over \(y\) and \(z\). Other panels (from top to bottom): velocity, magnetic field and number density maps in two projections. \(V_{x}\) is given in the front rest frame. Colored rectangles mark downstream and upstream zones where spectral analyses were made. Magnetic field lines are superimposed in black.
is larger: 1.25 Hz vs 0.38 and 0.8 Hz (two equivalent peaks are present in the simulation interval).
The Doppler shift in frequency is in both cases close to the observed frequency, which means that the waves are standing in the plasma rest frame and are purely convected with the flow. The observed and simulated wavelengths are both about 100-200 km. We therefore consider the principal characteristics of the magnetic oscillations in the observations and in the simulation to be very similar.
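The four-point estimate of the wave speed and direction reduces to solving a small linear system for the slowness vector; a sketch (with the probe positions and wavefront arrival times as inputs) is:

```python
# Sketch of a four-point timing analysis: for a planar wavefront with unit
# normal n and phase speed V, the delays obey (r_i - r_0) . m = t_i - t_0
# with slowness m = n / V; three differences give a 3x3 linear system.
import numpy as np

def timing_analysis(r, t):
    """r: (4, 3) probe positions [km]; t: (4,) arrival times [s].
    Returns the phase speed [km/s] and unit wavevector (probe frame)."""
    m = np.linalg.solve(r[1:] - r[0], t[1:] - t[0])
    V = 1.0 / np.linalg.norm(m)
    return V, m * V
```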
In both cases there are oscillations with frequencies between \(10^{-1}\) and 1 Hz, with quasiperiodic enhancements and a frequency growth towards the upstream.
### Shock dynamics
Supercritical collisionless shocks are known to be non-stationary. The transition appears due to the partial reflection of incoming ions and reforms quasi-periodically. Shock reformation is a topic of great interest, actively investigated by means of numerical models and in-situ observations (see, e.g. Turner et al., 2021; Yang et al., 2020; Johlander et al., 2022). There are two mechanisms of this process: (I) the accumulation of reflected ions in the foot until their density becomes comparable to that at the ramp, and (II) the interaction of the front with waves convected by the upstream flow (Marcowith et al., 2016). The insets of Fig. 2 show some signatures of the first-type reformation: the front velocity and the cross-section averaged magnetic field at the overshoot slightly vary with time. However, the field variations are relatively weak, and the density profile is nearly stationary. So the "classical" picture of shock reformation did not manifest itself in this case. A more thorough investigation of this problem is beyond the scope of this paper.
Meanwhile, waves generated by the IWI and convected by the flow substantially contribute to the shock nonstationarity as well. To demonstrate this we made a real-time movie of the recordings of the probe with \(V_{p,x}=15.7V_{a}\) together with its movement through the shock. The movie is available online in the supplementary materials. Fig. 7 shows one frame of this video. In the upper row color maps of \(B_{y}\), \(n\) and \(V_{x}\) are given. The velocity is in the front rest frame. The probe position is marked by a white triangle, and its recordings are shown in the bottom row. The red line corresponds to the data measured up to the current moment, and the blue one to the future recordings.
From the color maps we can see that the front is highly corrugated. In the movie all these structures move both towards the shock and
\begin{table}
\begin{tabular}{l c c} \hline \hline Parameter & observation & simulation \\ & 23:40:15-23:40:21.5 UT & 97.1-102.7 s \\ observed frequency, Hz & 1.25 & 0.37, 0.8 \\ eigenvalues & 4.72, 7.58, 46.2 & 11.9, 22.47, 63.99 \\ observed wave speed, km/s & 173 & 85 \\ wavelength, km & 138 & 230, 106 \\ Doppler shift, Hz & 1.3 & 0.37, 0.81 \\ \(\theta_{kB}\) & \(35^{\circ}\) & \(37^{\circ}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Wave analysis data for P#2 and P#3.
Figure 4: Flow velocity, magnetic field and density along the shock normal \(y=z=0\) at \(t=10\Omega_{cl}^{-1}\). Selected regions are the same as in Fig. 3
Figure 5: Overview of the Probe 1 shock crossing. From top to bottom: protons phase space \(x-E\); ion number density; magnetic fields; wavelet dynamic spectrum of \(B_{y}\).
across it, leading to a lively structure and oscillatory probe measurements. We chose a moment when the probe is near a density peak, which corresponds to a higher negative \(V_{x}\). Such regions appear where the transverse magnetic fields are low and the upstream plasma easily penetrates downstream. These "paths of least resistance" are surrounded by regions with higher \(B_{\perp}\), where hotter reflected ions lead to a pressure increase. This pressure compresses the colder "paths of least resistance" up to nearly the downstream density. Thus thin dense filaments appear, clearly visible in the upper middle panel of Fig. 7.
### Spectral analyses
To better understand the structure of the shock we performed a spectral analysis of the magnetic field fluctuations in the regions marked by the yellow and cyan rectangles in Figures 3 and 4. Figures 8 and 9 show the \(B_{x}\) spectral power density in the \(k_{x}-k_{z}\) and \(k_{x}-k_{y}\) planes in the close downstream region marked by the yellow rectangle, and in the close upstream region marked by the cyan rectangle. The spectral power density is \(|\bar{B}_{x}|^{2}\), where \(\bar{B}_{x}\) is the discrete Fourier transform of \(B_{x}\). From the 1d and 2d spectra it can be seen that in the close upstream region \(k_{z}\approx 3.5l_{i}^{-1}\) and \(k_{x}\approx k_{y}\approx 1l_{i}^{-1}\), which corresponds to \(\lambda\sim 2l_{i}\sim 140\) km. In the close downstream the wavelengths are larger. We also checked that in the close upstream the \(B_{x}\) fluctuations are stronger than those of \(B_{y,z}\), while in the close downstream the spectral power in all three magnetic field projections is comparable.
The upstream wavevector direction and the wave polarisation in the coplanarity plane \(x-z\) agree with those expected for the IWI, i.e. \(\mathbf{k}\) is along the mean magnetic field and, hence, is nearly perpendicular to the cross-field beam in the \(x-y\) plane (see top panels of Fig. 3).
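Spectral maps of this kind follow directly from a windowed 2d FFT of the field in the selected rectangle; a minimal numpy sketch is:

```python
# Sketch of the 2d spectral power of B_x over a selected slab; wavenumbers
# come out in units of 1/l_i if dx, dz are given in l_i.
import numpy as np

def power_2d(bx, dx, dz):
    """bx: 2d array B_x(x, z); returns (k_x, k_z, |FFT|^2)."""
    b = bx - bx.mean()                      # remove the mean field
    bk = np.fft.fftshift(np.fft.fft2(b))
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(b.shape[0], d=dx))
    kz = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(b.shape[1], d=dz))
    return kx, kz, np.abs(bk)**2
```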
### Growth rate analyses
To demonstrate that the shock transition is governed by the IWI we directly compare the growth rate and spectral properties with the linear predictions. We expect a zero-frequency mode; thus, if we move towards the shock front with the flow, the wave amplitude grows as
\[b_{0}\exp\left(\int\,\Gamma(x(t))dt\right), \tag{13}\]
where \(b_{0}\) is its initial value and \(\Gamma\) is the increment.
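Numerically, the integral in Eq. (13) can be accumulated along the flow; a sketch under the assumption of a steady one-dimensional mean flow profile is:

```python
# Sketch of Eq. (13): accumulate the local increment along a fluid element
# moving with the mean flow u(x) towards the front.
import numpy as np

def amplitude_growth(x, u, gamma, b0):
    """x: coordinate grid; u(x): flow speed; gamma(x): local increment.
    Returns b(x) = b0 * exp( integral gamma dt ), with dt = dx / |u|."""
    dt = np.abs(np.diff(x)) / np.abs(0.5 * (u[1:] + u[:-1]))
    g = 0.5 * (gamma[1:] + gamma[:-1])
    return b0 * np.exp(np.concatenate(([0.0], np.cumsum(g * dt))))
```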
To check this we studied the evolution of the magnetic field fluctuation amplitude towards the shock front. We calculated it as a standard deviation over a transverse slice embedded in the upstream flow. The result is shown in Fig. 10, a.
We integrated the \(x-V_{x}\) and \(x-V_{y}\) phase spaces (see the top panels of Fig. 3) to estimate the velocity, density and temperature of the core and the beam. We arbitrarily placed a boundary between them at \(V_{x}=-15V_{a}\approx-90\) km/s and \(V_{y}=-5V_{a}\approx-30\) km/s. We also checked that the result is nearly the same for boundaries at \(V_{x}=0\) and \(V_{y}=-10V_{a}\). The core and beam densities, as well as their thermal and flow velocities in the center of mass rest frame, are shown in panels (b) and (c) of Fig. 10.
Knowing the physical parameters at each point, we could find the hydrodynamic and kinetic increments and the maximal wavenumbers from (1) and (2), respectively. It should be noted that the kinetic approach suggests treating protons and helium ions separately. However, we considered unmagnetized ions, so only their plasma frequencies and thermal velocities matter. The latter are equal in our model because the ion temperatures are mass-proportional. The ratio of the He(+2) and proton plasma frequencies depends only on their number densities, as if they were both protons (for He(+2) the doubled charge and the quadrupled mass cancel in \(\omega_{ps}^{2}\propto e_{s}^{2}/m_{s}\)). So we solved (2) considering a pure proton plasma.
Knowing \(\Gamma\) and taking the initial \(b_{0}\) from the simulations, we directly compared the simulated and theoretical growth using (13). The theoretical amplitudes are shown in panel (a) of Fig. 10 by orange and blue curves, and the predicted wavenumber is superposed on the actual spectrum in panel (e). The curve color codes the corresponding increment. We also checked that the real frequency found from the dispersion equation was zero.
The linear kinetic theory can also predict the wave polarisation. Given that \(\Lambda_{ij}E_{j}=0\) and \(c\,\mathbf{k}\times\mathbf{E}=\omega\,\delta\mathbf{B}\), where \(\mathbf{E}\) is the electric field, \(\delta\mathbf{B}\) is the magnetic field variation, and \(\mathbf{k}\) is along \(z\), we find
Figure 6: Measurements of probes moving with \(0.4V_{a}\) relative to the shock front. Left column – a probe starting at \(x=38\,l_{i}\); middle column – at \(x=41\,l_{i}\); right column – at \(x=42.5\,l_{i}\). Red lines show the same data smoothed by a Savitzky-Golay filter.
Figure 7: A snapshot of the shock transition with a virtual probe marked by a white triangle. Upper row: color maps of \(B_{y}\), \(n\) and \(V_{x}\), lower row: the probe recordings of the same quantities. The current moment is at the conjunction of red (past) and blue (future) lines. The corresponding movie is available online.
Figure 8: Panel (d): \(k_{x}-k_{z}\) maps of the spectral power of \(B_{x}\) fluctuations in the downstream region (inside the yellow rectangle in Fig. 3); panel (e): the same in the close foot region (cyan rectangle in Fig. 3); panels (a) and (b): 1d spectral power of \(B_{x}(x)\) at \(z=0\) in the corresponding regions; panels (c) and (f): 1d spectral power of \(B_{x}(z)\) at the left edges of the corresponding regions.
Figure 9: The same as in Fig. 8, but for \(k_{x}-k_{y}\)
that
\[(\Lambda_{yx}\Lambda_{zz}-\Lambda_{yz}\Lambda_{zx})\,\delta B_{y}=(\Lambda_{yy}\Lambda_{zz}-\Lambda_{yz}\Lambda_{zy})\,\delta B_{x}.\]
In panel (d) of Fig. 10 we compare \(|\Lambda_{yx}\Lambda_{zz}-\Lambda_{yz}\Lambda_{zx}|^{2}\langle\delta B_{y}^{2}\rangle\) and \(|\Lambda_{yy}\Lambda_{zz}-\Lambda_{yz}\Lambda_{zy}|^{2}\langle\delta B_{x}^{2}\rangle\), where the magnetic variance is taken from the simulations and \(\Lambda_{ij}\) is calculated from the beam properties. The curves do not perfectly coincide, but they resemble each other even in the highly nonlinear regime.
It can be seen that the hydrodynamic increment is far too large, but the simulated growth rate is reasonably explained by the kinetic linear theory until the wave amplitude approaches about \(0.1B_{0}\). After that the system gradually enters the nonlinear regime, and the predicted growth rate overtakes the actual one. Note also that (2) was obtained for a uniform medium, and the actual increment may differ due to strong gradients. The predicted wavenumbers are slightly higher than the simulated spectral maxima, but the simulated spectrum is rather broad, so the agreement is satisfactory. The polarisation properties of the IWI are also well reproduced. We can therefore conclude that the IWI governs the shock transition.
It should be noted that the investigated shock has a sound Mach number \(M_{s}\) as low as 7. However, the linear analysis in Nishigai and Amano (2021) indicated that a shock must have both \(M_{a}\) and \(M_{s}\) as great as \(\sim 20-40\) to be Weibel-dominated. The authors argued that the instability behaves Weibel-like if the growth rate is much greater than the ion cyclotron frequency. For an Alfven Mach number of 60 and a sound Mach number of 7, Fig. 3 of Nishigai and Amano (2021) predicts a growth rate which is comparable to and slightly larger than the ion cyclotron frequency. In our simulations this ratio locally reaches \(\sim 10\).
In Nishigai and Amano (2021) the reflected ions in the foot were parameterised as a ring distribution with a number density of about \(0.2n_{0}\), the radius of the ring equal to the upstream flow velocity, and a thermal spread equal to that of the upstream flow. From Fig. 10 it can be seen that the actual quantities vary strongly along the shock normal. Closer to the shock, the reflected ion density nearly reaches that of the incoming flow. In the regions where the density is about \(0.2n_{0}\), the beam velocity is greater than the upstream one. That is why the IWI growth rate exceeds the one predicted by Nishigai and Amano (2021).
Simple analytical models like that in Nishigai and Amano (2021) are a powerful tool to scan a wide range of parameters with minimal computational effort. It is therefore useful to refine them with parameterisations of the ion distributions based on numerical models. As a first step in this direction we approximated the reflected-ion density and flow velocity in the simulated shock foot by simple linear functions of the coordinate:
\[n_{b}/n_{0}=2.0(1.0+(x-x_{sh})/R_{g}),\]
\[|V_{b,y}-V_{c,y}|/V_{a}=0.66M_{a}(1.0+(x-x_{sh})/R_{g}),\]
where \(R_{g}\) is the effective particle gyroradius in the foot. It appeared to be equal to \(0.4M_{a}l_{i}\) in our case because the transverse magnetic field is greater than that far upstream. The thermal velocity of the beam varied only slightly and was close to \(25V_{a}\) (an order of magnitude higher than that of the core). The beam velocity along \(x\) could not be approximated linearly, but it quickly reached a relatively stable value \(|V_{b,x}-V_{c,x}|/V_{a}\approx M_{a}\). The difference between \(V_{b,x}\) and \(V_{b,y}\) is due to shock drift acceleration by the motional electric field along the \(y\) axis (Sagdeev, 1966). As a result \(V_{b,y}\) eventually exceeds \(V_{b,x}\), and the \(B_{x}\) variation becomes stronger than that of \(B_{y}\).
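These fits are straightforward to evaluate; the following Python sketch (illustrative only: the function name and the normalizations to \(V_{a}\) and \(l_{i}\) are ours, while the coefficients are the fitted values quoted above) tabulates the foot profiles:

```python
import numpy as np

def reflected_beam_profile(x, x_sh, M_a):
    """Linear fits to the reflected-ion beam in the simulated shock foot.

    x, x_sh : coordinates in units of the ion inertial length l_i
    M_a     : Alfven Mach number
    Returns (n_b / n_0, |V_by - V_cy| / V_a) from the fits quoted above.
    """
    R_g = 0.4 * M_a              # effective foot gyroradius, in units of l_i
    xi = 1.0 + (x - x_sh) / R_g
    n_b = 2.0 * xi               # reflected-ion density n_b / n_0
    v_by = 0.66 * M_a * xi       # transverse beam-core velocity |V_by - V_cy| / V_a
    return n_b, v_by

# example: a profile through the foot of an M_a = 60 shock with the ramp at x_sh = 0
x = np.linspace(-20.0, 0.0, 5)
print(reflected_beam_profile(x, 0.0, 60.0))
```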
## 6 Discussion
Near-Earth spacecraft plasma observations afford a unique possibility to sample in situ such important astrophysical phenomena as collisionless shocks. A rich variety of shock structures was discovered, depending on the basic plasma constants and geometry (Mach number, plasma \(\beta\), magnetic field direction, etc). However, these experiments are essentially limited by the number of spacecraft simultaneously available: measurements can be performed only at a few points, while the spatial structure at large is only inferred. On the other hand, numerical modelling affords full access to the spatio-temporal structure of the shock transition. To allow calculations to complete in realistic time, simplified physical models are usually introduced, which may call the applicability of the results into question.
In our work, we were able to show that the rather typically observed high-\(\beta\), high-\(M_{a}\) shock structure with developed high-amplitude magnetic fluctuations is well reproduced by our hybrid model with helium and adiabatic electrons. Consistency is found in the general appearance of the shock transition (Fig. 1 and Fig. 5), as well as in the quantitative characteristics of the dominant plasma wave mode (Tab. 2).
It is shown that the temporal profiles of the shock crossing depend substantially on the relative velocity of the shock front and the spacecraft probes. Slowly flying probes (as is mostly the case in space experiments) are able to detect the strong temporal variability of the
Figure 10: Growth rate analyses: (a) the simulated growth of the magnetic fluctuation amplitude towards the shock front compared with the predictions of the hydrodynamic and kinetic linear theories; (b) beam and core number densities; (c) beam and core thermal velocities and flow velocities along \(y\) in the center-of-mass rest frame; (d) polarisation analyses (see text); (e) color-coded \(B_{x}\) Fourier spectrum with overlaid linear prediction for \(k_{max}\) (the curve color codes the maximal predicted growth rate).
shock front, while high-speed motion results in rather simple, almost instantaneous profile cuts (Fig. 4). It is not always possible to determine the spacecraft-shock relative velocity in orbit, and the possibility of such a strong dependence of the observations on the relative motion should be taken into account.
Of course, the observed differences in shock structure (see discussions in Petrukovich et al. (2019); Petrukovich and Chugunova (2021)) might be due to some differences in shock parameters such as the magnetic field angle and Mach number. Some of our simulation runs, not shown here, reveal significant variance of the shock structure across the parameter range and model details, even though all cases are high-\(\beta\) shocks. This parametric dependence of the shock structure is left for future studies.
Yet another advantage offered by simulations is the ability to access the 3D spatial structure of the transition region in full detail. Cuts of the simulation box (like Fig. 3) reveal a complicated breathing filament structure with varying scale in different directions. These filaments move rapidly along the shock front and create the magnetic and plasma variability observed by the probes. The amplitude of this variability is very large: magnetic amplitudes are an order of magnitude larger than the background magnetic field. Such variability might provide sites of magnetic reconnection and particle acceleration, though in our case we have seen it neither in the simulations nor in the observations, probably due to the high \(\beta\) value.
Plasma properties also change fundamentally across the filaments: more sheath-type, thermalized, high-density streams interchange with more upstream-type streams with low density and a high percentage of reflected ions. Close to the ramp the reflected-ion density nearly reaches that of the incoming flow. The detailed physics of such a complicated shock transition remains to be studied with point-by-point comparison of observations and simulations. It is important for such a study, as stated above, that our numerical model is closely compatible with the observations in all comparable properties.
Finally, our results represent one more proof that the high-\(\beta\) shock transition is dominated by the Weibel-like plasma wave mode. We determined the polarisation as well as the dispersion characteristics, which coincide in the observations and the modeling. The complicated spatial structure detected suggests that this mode needs to be considered in the deeply nonlinear regime, where it practically shapes the process of plasma flow thermalisation. Knowing the parameters of the magnetic variations allows one to analyse variants of shock-related particle acceleration and diffusion at such astrophysical objects. To improve analytical models of the shock transition we extracted the parameters of the reflected-ion distribution from our simulation and found that the beam density and flow velocity can be well approximated by linear functions.
## 7 Conclusions
We demonstrated that hybrid kinetic models can quantitatively reproduce the observed properties of strong Weibel-dominated high-beta quasiperpendicular shocks. Hybrid models are much less resource-intensive than PIC ones and do not need high upstream temperatures or subrelativistic flow velocities. This makes it possible to survey a wide range of shock parameters and to find the conditions under which shocks become Weibel-dominated. Strong magnetic variations at the ramps of such shocks could prevent particle injection into the first-order Fermi acceleration process. On the other hand, such variations might cause magnetic reconnection, which in turn produces nonthermal particles. So the net impact of the IWI on particle acceleration is still to be determined.
We also extracted from the simulations the distributions of reflected ions in the shock foot. This makes it possible to improve existing analytical models of such shocks.
## Acknowledgements
JK and AB acknowledge the Russian Science Fund grant 21-72-20020, which supported the plasma numerical modeling presented here. Some of the modeling was performed at the Joint Supercomputer Center JSCC RAS and at the "Tornado" subsystem of the St. Petersburg Polytechnic University supercomputing center. AP and OC acknowledge the Russian Science Fund grant 19-12-00313, which supported the observation analysis and comparison with simulations. The authors are grateful to the NASA MMS project team for the excellent space project and observations. We are very grateful to the reviewer Dr. Takanobu Amano, whose fruitful suggestions greatly improved this paper.
## Data Availability
MMS spacecraft data are open at the NASA CDAWeb data archive [https://cdaweb.gsfc.nasa.gov/](https://cdaweb.gsfc.nasa.gov/).
|
2302.04014 | Extension of Hodge norms at infinity | It is a long-standing problem in Hodge theory to generalize the
Satake--Baily--Borel (SBB) compactification of a locally Hermitian symmetric
space to arbitrary period maps. A proper topological SBB-type completion has
been constructed, and the problem of showing that the construction is algebraic
has been reduced to showing that the compact fibres A of the completion admit
neighborhoods X satisfying certain properties. All but one of those properties
has been established; the outstanding problem is to show that holomorphic
functions on certain divisors "at infinity" extend to $X$. Extension theorems
of this type require that the complex manifold X be pseudoconvex; that is,
admit a plurisubharmonic exhaustion function. The neighborhood X is stratified,
and the strata admit Hodge norms which may be used to produce
plurisubharmonic functions on the strata. One would like to extend these norms
to X so that they may be used to construct the desired plurisubharmonic
exhaustion of X. The purpose of this paper is to show that there exists a function
that simultaneously extends all the Hodge norms along the strata that intersect
the fibre A nontrivially. | Colleen Robles | 2023-02-08T12:05:03Z | http://arxiv.org/abs/2302.04014v1 | # Extension of Hodge norms at infinity
###### Abstract.
It is a long-standing problem in Hodge theory to generalize the Satake-Baily-Borel compactification of a locally Hermitian symmetric space to arbitrary period maps. A proper _topological_ Satake-Baily-Borel type completion has been constructed, and the problem of showing that the construction is _algebraic_ has been reduced to showing that the compact fibres \(A\) of the completion admit neighborhoods \(X\) satisfying certain properties. All but one of those properties has been established; the outstanding problem is to show that holomorphic functions on certain divisors \(Y\subset X\) "at infinity" extend to \(X\). Extension theorems of this type require that the complex manifold \(X\) be pseudoconvex; that is, admit a plurisubharmonic exhaustion function. The neighborhood \(X\) is stratified, and the strata admit Hodge norms which may be used to produce plurisubharmonic functions on the strata. One would like to extend these norms to \(X\) so that they may be used to construct the desired plurisubharmonic exhaustion of \(X\). The purpose of this paper is to show that there exists a function that _simultaneously_ extends all the Hodge norms along the strata that intersect the fibre \(A\) nontrivially.
Key words and phrases: period map, variation of (mixed) Hodge structure. 2010 Mathematics Subject Classification: 14D07, 32G20, 32S35, 58A14. _Robles is partially supported by NSF DMS 1611939, 1906352._
## 1. Introduction
Suppose that \(D\) is a Mumford-Tate domain parameterizing pure, effective, weight \(\mathsf{w}\), \(Q\)-polarized Hodge structures on a finite dimensional rational vector space \(V\). Fix a period map \(\Phi:B\to\Gamma\backslash D\) defined on a smooth quasi-projective \(B\) with smooth projective completion \(\overline{B}\supset B\) and simple normal crossing divisor \(Z=\overline{B}\backslash B\) at infinity.
Let \(\wp=\Phi(B)\subset\Gamma\backslash D\) denote the image. A proper _topological_ Satake-Baily-Borel (SBB) type completion \(\Phi^{\mathsf{S}}:\overline{B}\ \to\ \overline{\wp}\) of \(\Phi\) is constructed in [1]. Let \(Z_{1},\ldots,Z_{\nu}\) denote the smooth irreducible components of \(Z\), and \(Z_{I}=\cap_{i\in I}Z_{i}\) the closed strata. By the nilpotent orbit theorem [15], the period map \(\Phi\) asymptotically induces a period map \(\Phi_{I}:Z_{I}^{*}\to\Gamma_{I}\backslash D_{I}\) along the open strata \(Z_{I}^{*}=Z_{I}\backslash\cup_{j\not\in I}Z_{j}\). Set \(Z_{\emptyset}^{*}=B\), so that \(\Phi_{\emptyset}=\Phi\). The topological compactification \(\overline{\wp}\) of \(\wp\) is the disjoint union of the images \(\wp_{I}=\Phi_{I}(Z_{I}^{*})\) modulo a certain equivalence relation that accounts for the fact that the period map \(\Phi_{I}\) may extend to some \(Z_{J}^{*}\subset Z_{I}\). (When extension exists, it coincides with \(\Phi_{J}\). See [1] for details.) With these identifications, \(\wp_{I}\hookrightarrow\overline{\wp}\), the image \(\overline{\wp}\) is a finite union of quasi-projective varieties, and \(\Phi^{\mathsf{S}}\) is continuous and proper. The fibres of \(\Phi^{\mathsf{S}}\) are algebraic subvarieties of \(\overline{B}\).
One would like to assert that \(\overline{\wp}\) is itself projective algebraic. This is known to be the case when \(D\) is hermitian and \(\Gamma\) is arithmetic: \(\overline{\wp}\) is the closure of \(\wp\) in the Satake-Baily-Borel compactification of \(\Gamma\backslash D\). In general it is an open problem to show that \(\overline{\wp}\) is a complex analytic space. The latter would imply Conjecture 1.1 below.
The completion \(\Phi^{\mathsf{S}}\) admits a "Stein factorization"
\[\Phi^{\mathsf{S}}:\ \overline{B}\ \xrightarrow{\ \hat{\Phi}^{\mathsf{S}}\ }\ \hat{\wp}\ \longrightarrow\ \overline{\wp}\,.\]
The fibres of \(\hat{\wp}\to\overline{\wp}\) are finite, and the fibres of \(\hat{\Phi}^{\mathsf{S}}\) are connected, compact algebraic subvarieties of \(\overline{B}\).
**Conjecture 1.1** ([1]).: _The topological space \(\hat{\wp}\) is Moishezon, and the map \(\hat{\Phi}^{\mathsf{S}}:\overline{B}\to\hat{\wp}\) is a morphism._
The conjecture holds in the case that \(D\) is hermitian symmetric, and in the case that \(\dim\wp\leq 2\), [1]. The proof of Conjecture 1.1 has been reduced to showing that every (compact, connected) fibre \(A\) of \(\hat{\Phi}^{\mathsf{S}}\) admits a neighborhood \(X\subset\overline{B}\) with the following properties [1, Theorem 3.20]:
1. The restriction of \(\hat{\Phi}^{\mathsf{S}}\) to \(X\) is proper.
2. Holomorphic functions on \(Z_{I}\cap X\) extend to \(X\).
Neighborhoods satisfying the first property (I) exist by Theorem 1.2. Let \(\mathcal{F}^{\mathsf{w}}\subset\mathcal{F}^{\mathsf{w}-1}\subset\cdots\subset \mathcal{F}^{0}\) denote the Hodge vector bundles over \(B\). Assume that the local monodromy at infinity is unipotent, so that the \(\mathcal{F}^{p}\) extend to \(\overline{B}\).
**Theorem 1.2** ([1]).: _Every fibre \(A\) of \(\hat{\Phi}^{\mathsf{S}}\) admits a neighborhood \(X\subset\overline{B}\) with the following properties:_
1. _The restriction of_ \(\hat{\Phi}^{\mathsf{S}}\) _to_ \(X\) _is proper._
2. _For every_ \(p\)_, there exists a positive integer_ \(1\leq m_{p}\) _so that the line bundle_ \(\det(\mathcal{F}^{p})^{\otimes m_{p}}\) _is trivial over_ \(X\)_._
The second property (II) is an Ohsawa-Takegoshi type extension problem (although without the need for bounds on the \(L^{2}\) norms) [1, 1]. Such theorems usually impose the hypothesis that \(X\) is pseudoconvex.
### Pseudoconvexity
Recall that the neighborhood \(X\) is _pseudoconvex_ if it admits a plurisubharmonic exhaustion function \(\rho:X\to[-\infty,\infty)\). A continuous function \(\rho:X\to\mathbb{R}\) is an _exhaustion_ if \(\rho^{-1}[-\infty,r)\) is relatively compact for all \(r\in\mathbb{R}\). The function is _plurisubharmonic_ (psh) if for every holomorphic map \(\psi:\Delta\to X\), the composition \(\rho\circ\psi\) is subharmonic. If \(\rho\) is \(\mathcal{C}^{2}\), then it is psh if and only if \(\mathbf{i}\partial\overline{\partial}\rho\geq 0\). For example, if \(f\in\mathcal{O}(X)\), then \(\rho=|f|^{2}\) is psh. Likewise, a line bundle with metric \(h\) is positive if \(-\log h\) is psh. Oka's Theorem asserts that a complex manifold is Stein if and only if it admits a smooth strictly psh exhaustion function.
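As a sanity check (a standard one-line computation, added here for the reader), for holomorphic \(f\) one verifies directly that \(|f|^{2}\) is psh: since \(\overline{\partial}f=0\),

\[\mathbf{i}\,\partial\overline{\partial}\,|f|^{2}\ =\ \mathbf{i}\,\partial\big{(}f\,\overline{\partial f}\big{)}\ =\ \mathbf{i}\,\partial f\wedge\overline{\partial f}\ \geq\ 0\,,\]

because \(\mathbf{i}\,\alpha\wedge\overline{\alpha}\geq 0\) for any \((1,0)\)-form \(\alpha\).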
**Conjecture 1.3**.: _The neighborhood \(X\) in Theorem 1.2 may be chosen to be pseudoconvex. There is a continuous exhaustion function \(\rho:X\to[0,\infty)\) with the property that \(\partial\overline{\partial}\rho(v,\overline{v})\geq 0\), and equality holds if and only if \(v\) is tangent to a fibre of \(\Phi^{\mathsf{S}}\).1_
Footnote 1: This statement must be interpreted with some care as, in general, \(\rho\) will be \(\mathcal{C}^{1}\), but not \(\mathcal{C}^{2}\). The inequality \(\partial\overline{\partial}\rho(v,\overline{v})\geq 0\) of Conjecture 1.3 should then be understood to allow \(\partial\overline{\partial}\rho(v,\overline{v})=+\infty\).
In §1.2 we will show that the conjecture holds in three cases: when \(D\) is hermitian symmetric; when \(A\subset B\); and when \(A\) is a connected component of \(Z\). The purpose of this note is to discuss how the conjecture might be approached in general (§1.2.4), and to establish a key result in that direction (extension of Hodge norms, Theorem 1.7). This result is used elsewhere to prove the conjecture in the following simple, but nontrivial, case.
**Theorem 1.4** ([13]).: _Suppose that the Mumford-Tate domain \(D\) parameterizes weight \(\mathsf{w}=2\), effective, polarized Hodge structures with \(p_{g}=h^{2,0}=2\). Assume that the fibre \(A\) is contained in a codimension 1 strata \(Z^{*}_{i}\). Then Conjecture 1.3 holds._
_Remark 1.5_ (Strict psh).: The conjectural exhaustion function \(\rho:X\to\mathbb{R}\) will be the \(\hat{\Phi}^{\mathsf{S}}\)-pullback of a continuous function \(\varrho\) on \(\mathscr{X}=\hat{\Phi}^{\mathsf{S}}(X)\subset\hat{\wp}\). The assertion that \(\partial\overline{\partial}\rho(v,\overline{v})\geq 0\), with equality precisely when \(v\) is tangent to a fibre of \(\Phi^{\mathsf{S}}\), should be interpreted as saying that \(\varrho\) is a _strictly_ psh function on \(\mathscr{X}\). This is "interpretative" because the topological space \(\hat{\wp}\) is not yet shown to be complex analytic. However, the space \(\hat{\wp}\) is a finite union \(\cup\hat{\wp}_{\pi}\) of complex analytic spaces, and the restriction \(\varrho\big{|}_{\hat{\wp}_{\pi}}\) is strictly psh.
_Remark 1.6_ (Pseudoconvexity in Hodge theory).: Griffiths and Schmid showed that \(D\) admits a smooth exhaustion function whose Levi form, restricted to the horizontal subbundle of the holomorphic tangent bundle, is positive definite at every point [10, (8.1)]. In particular, the image of the lift \(\tilde{\Phi}:\tilde{B}\to D\) to the universal cover of \(B\) admits a strict psh exhaustion function.
### Discussion of Conjecture 1.3
Define
\[\Lambda\ =\ \det(\mathcal{F}^{\mathsf{w}})\,\otimes\,\det(\mathcal{F}^{\mathsf{ w}-1})\,\otimes\cdots\otimes\,\det(\mathcal{F}^{\lceil(\mathsf{w}+1)/2\rceil})\,.\]
Theorem 1.2(ii) implies \(\Lambda^{\otimes m}\) is trivial over \(X\) for some positive integer \(m\geq 1\).
#### 1.2.1. Proof of Conjecture 1.3 when \(D\) is hermitian
It follows from [1] and the triviality of \(\Lambda^{\otimes m}\big{|}_{X}\) that there exist holomorphic functions \(g_{1},\dots,g_{\mu}:X\to\mathbb{C}\) that separate the fibres of \(\Phi^{\mathsf{S}}\big{|}_{X}\) and with the property that \(V(g_{1},\dots,g_{\mu})=A\). Set \(f=\sum|g_{j}|^{2}\). Given a sufficiently small \(\varepsilon>0\), and shrinking \(X\) if necessary, we may assume that \(X=\{x\in X\ |\ f(x)<\varepsilon\}\). Then \(\rho=(\varepsilon-f)^{-1}\) is the desired psh function.
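For completeness, here is the elementary verification (spelled out by us, not in the source) that \(\rho=(\varepsilon-f)^{-1}\) is psh: on \(\{f<\varepsilon\}\) one computes

\[\mathbf{i}\,\partial\overline{\partial}\rho\ =\ \frac{\mathbf{i}\,\partial\overline{\partial}f}{(\varepsilon-f)^{2}}\ +\ \frac{2\,\mathbf{i}\,\partial f\wedge\overline{\partial}f}{(\varepsilon-f)^{3}}\ \geq\ 0\,,\]

both terms being nonnegative since \(f=\sum|g_{j}|^{2}\) is psh; and \(\rho\to\infty\) as \(f\nearrow\varepsilon\), i.e. towards the boundary of \(X\), which gives the exhaustion property.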
#### 1.2.2. Proof of Conjecture 1.3 when \(A\subset B\)
We may assume without loss of generality that \(X\subset B\). Then it follows from [1, Theorem 6.14] and the triviality of \(\Lambda^{\otimes m}\big{|}_{X}\) that there exist holomorphic functions \(g_{1},\dots,g_{\mu}:X\to\mathbb{C}\) that separate the fibres of \(\Phi^{\mathsf{S}}\big{|}_{X}\). Without loss of generality these functions have the property that \(V(g_{1},\dots,g_{\mu})=A\). Now the argument above goes through verbatim.
#### 1.2.3. Proof of Conjecture 1.3 when \(A\) is a connected component of \(Z\)
Again it follows from [1, Theorem 6.14] and the triviality of \(\Lambda^{\otimes m}\big{|}_{X}\) that there exist holomorphic functions \(g_{1},\dots,g_{\mu}:X\to\mathbb{C}\) that separate the fibres of \(\Phi\big{|}_{B\cap X}\) and with the property that \(V(g_{1},\dots,g_{\mu})=Z\).
_A second proof in this case_. Let \(h_{0}\) be the Hodge norm-squared of a trivialization of \(\Lambda^{\otimes m}\big{|}_{B\cap X}\). Then \(\rho=1/h_{0}\) is a psh function, and the restriction \(\rho\big{|}_{B\cap X}\) satisfies Conjecture 1.3: we have \(\partial\overline{\partial}\rho(v,\overline{v})\geq 0\), with equality if and only if \(v\) is tangent to a fibre of \(\Phi\), [10].
The restriction \(\rho\big{|}_{Z\cap X}\) vanishes identically. So \(\rho:X\to[0,\infty)\) is an exhaustion function if and only if \(X\cap Z\) is compact. Finally, \(\rho\big{|}_{Z\cap X}\) will satisfy Conjecture 1.3 if and only if \(A=X\cap Z\).
_A third proof in this case._ P. Griffiths has pointed out that, if we allow \(\rho\) to take value in \([-\infty,\infty)\), then \(-\log h_{0}\) also yields a psh exhaustion with the desired properties.
#### 1.2.4. An approach to Conjecture 1.3 in the general case
There are (at least) two possibilities for a continuous psh function \(\rho_{0}:X\to[0,\infty)\) with the property that the restriction \(\rho_{0}\big{|}_{B\cap X}\) satisfies Conjecture 1.3: we have \(\partial\overline{\partial}\rho_{0}(v,\overline{v})\geq 0\), with equality
if and only if \(v\in T(B\cap X)\) is tangent to a fibre of \(\Phi\), cf. [Rob23]. In both cases \(\rho_{0}\) vanishes along \(Z\cap X\), and so in general will not be an exhaustion function. We need a second function \(\rho_{1}:X\to\mathbb{R}\) with the following properties:
1. The restriction \(\rho_{1}\big{|}_{Z\cap X}\) is psh.
2. The sum \(\rho_{0}+\rho_{1}:X\to\mathbb{R}\) is psh. In fact, \(\partial\overline{\partial}(\rho_{0}+\rho_{1})(v,\overline{v})\geq 0\), with equality precisely when \(v\) is tangent to a fibre of \(\Phi^{\mathsf{S}}\).
3. We have \(\rho_{0}+\rho_{1}\geq 0\), and the fibre is characterized by \[A\ =\ \{\rho_{0}+\rho_{1}\ =\ 0\}\,.\]
Then for sufficiently small \(\varepsilon>0\) we may take \(X=\{\rho_{0}+\rho_{1}<\varepsilon\}\) and \(\rho=1/(\varepsilon-\rho_{0}-\rho_{1})\).
A natural source of psh functions on \(Z_{I}^{*}\cap X\) are the \(-\log h_{I}\) with \(h_{I}\) the Hodge norm-squared of a trivialization of \(\Lambda^{\otimes m}\big{|}_{X}\). The main result (Theorem 1.7) of this paper is the simultaneous extension to \(X\) of all the Hodge norms \(h_{I}\) with \(Z_{I}^{*}\cap A\) non-empty. And this extension does indeed yield a psh exhaustion of \(X\), as outlined above, at least in two cases: (i) If \(D\) is hermitian, then the extension is psh on \(X\) (Theorem 4.1). (ii) The non-classical (non-hermitian) example of Theorem 1.4. It is work in progress to fully generalize these two examples.
### Extension of Hodge norms
There is a Hodge metric associated to each \(\Lambda\big{|}_{Z_{I}^{*}}\) that is canonically defined up to a positive multiple (§3.4). Fix a trivialization of \(\Lambda\big{|}_{X}\) and let \(h_{I}:Z_{I}^{*}\cap X\to\mathbb{R}_{>0}\) be the Hodge norm-squared of the trivialization. Then \(-\log h_{I}\) is a smooth psh function on \(Z_{I}^{*}\cap X\).
**Theorem 1.7**.: _The neighborhood \(X\) of Theorem 1.2 may be chosen so that it admits a continuous function \(h:X\to\mathbb{R}\) that is smooth on the strata \(Z_{I}^{*}\cap X\) (including \(B\cap X\)), constant on \(\hat{\Phi}^{\mathsf{S}}\)-fibres, and has the following property: if \(Z_{I}^{*}\cap A\) is nonempty, then the restriction of \(h\) to \(Z_{I}^{*}\) is a multiple of the Hodge norm-squared \(h_{I}\). In particular the restriction of \(-\log h\) to \(Z_{I}^{*}\) is plurisubharmonic._
The theorem is proved in §3.
_Remark 1.8_.: If the Mumford-Tate domain \(D\) is hermitian, then \(h\) is smooth and \(-\log h\) is psh (Theorem 4.1). In general, \(-\log h\) need not be psh [10]. And smoothness is expected to fail when the hypotheses of Theorem 5.1 do not hold.
## 2. Preliminaries and review
The construction of \(h\) will utilize the period matrix representation of an induced variation of Hodge structure; the latter is introduced in §2.1, and the former is reviewed in §2.2. The fibre \(A\) is characterized in §2.4.
Set
\[G_{\mathbb{R}}\,=\,\operatorname{Aut}(D)\,\subset\,\operatorname{Aut}(V_{ \mathbb{R}},Q)\quad\text{and}\quad G_{\mathbb{C}}\,=\,\operatorname{Aut}( \check{D})\,\subset\,\operatorname{Aut}(V_{\mathbb{C}},Q)\,.\]
### Induced Hodge structure
Let \(\mathsf{d}_{p}=\dim_{\mathbb{C}}F^{p}V_{\mathbb{C}}\) denote the dimensions of the Hodge filtration \(F\in\check{D}\). Any (pure, effective, polarized) Hodge structure \(F\in D\) on \(V\) naturally induces one on
\[H\ =\ (\bigwedge^{\mathsf{d}_{\mathsf{w}}}V)\,\otimes\,(\bigwedge^{\mathsf{d}_ {\mathsf{w}-1}}V)\,\otimes\cdots\otimes\,(\bigwedge^{\mathsf{d}_{\lceil( \mathsf{w}+1)/2\rceil}}V)\,.\]
We continue to denote the polarization by \(Q\). Let \(\mathsf{n}\) denote the weight of the induced Hodge structure. While the Hodge structure on \(H\) is effective, if \(\mathsf{w}\geq 3\), then it will be the case that \(h^{\mathsf{n},0}=0\). Let \(\mathsf{k}\geq 0\) be the smallest integer such that \(h^{\mathsf{n}-\mathsf{k},\mathsf{k}}\) is nonzero. Replacing \(H\) with \(H\otimes\mathbb{Q}(\mathsf{k})\), and \(\mathsf{n}\) with \(\mathsf{n}-2\mathsf{k}\), we may assume that the pure, effective, polarized Hodge structure on \(H\) satisfies \(h^{\mathsf{n},0}=1\).
From this point forward we will view the period map \(\Phi\) as parameterizing pure, effective, weight \(\mathsf{n}\), \(Q\)-polarized Hodge structures on \(H\).
_Remark 2.1_ (Relationship to \(\Lambda\)).: There is a tautological filtration
\[0\,\neq\,\mathcal{F}^{\mathsf{n}}(H_{\mathbb{C}})\,\subset\,\mathcal{F}^{ \mathsf{n}-1}(H_{\mathbb{C}})\,\subset\cdots\subset\,\mathcal{F}^{0}(H_{ \mathbb{C}})\ =\ \check{D}\times H_{\mathbb{C}} \tag{2.2}\]
of the trivial bundle. The bundle \(\mathcal{F}^{\mathsf{n}}(H_{\mathbb{C}})\) has rank one, and the fibre over \(F\in\check{D}\) is
\[\det(F^{\mathsf{w}}V_{\mathbb{C}})\,\otimes\,\det(F^{\mathsf{w}-1}V_{ \mathbb{C}})\,\otimes\cdots\otimes\,\det(F^{\lceil(\mathsf{w}+1)/2\rceil}V_{ \mathbb{C}})\,.\]
The bundles (2.2) are \(G_{\mathbb{C}}\)-homogeneous, and so all descend to \(\Gamma\backslash D\), and
\[\Lambda\big{|}_{B}\ =\ \Phi^{*}{\mathcal{F}}^{n}(H_{\mathbb{C}})\,.\]
### Period matrix representation
The period map admits a period matrix representation over the neighborhood \(X\) of Theorem 1.2. This means we have the following structure. (See [1, §§3.1-3.2] for details.)
#### 2.2.1. Monodromy about the fibre \(A\)
Let \(\pi_{1}(B\cap X)\twoheadrightarrow\Gamma_{X}\subset\Gamma\) be the monodromy of the variation of Hodge structure over \(B\cap X\). Let \((W,F,\sigma)\) be any one of the limiting mixed Hodge structures arising along the fibre \(A\). The weight filtration \(W\) is independent of this choice. In general, both the Hodge filtration \(F\in\check{D}\) and the nilpotent cone \(\sigma\) of local monodromy logarithms depend on our choice, and are defined only up to the action of \(\Gamma_{X}\). (See Remark 2.12 for what we can say about other choices of \((W,F^{\prime},\sigma^{\prime})\) along \(A\).) Nonetheless, the limit
\[F_{\infty}\ =\ \lim_{y\to\infty}\exp({\bf i}yN)\cdot F\ \in\ \partial D\]
is independent of our choice of both \((F,W,\sigma)\) and \(N\in\sigma\). (By convention the nilpotent cones here are nonzero and _open_, so that \(N\in\sigma\) is necessarily nonzero.) Both \(W\) and \(F_{\infty}\) are invariant under the monodromy:
\[\Gamma_{X}\ \subset\ \text{Stab}_{G_{\mathbb{R}}}(W)\,\cap\,\text{Stab}_{G_{ \mathbb{C}}}(F_{\infty})\,, \tag{2.3}\]
where \(G_{\mathbb{R}}=\text{Aut}(D)\) and \(G_{\mathbb{C}}=\text{Aut}(\check{D})\).
#### 2.2.2. Framing of the Hodge bundles
Fix once and for all a limiting mixed Hodge structure \((W,F,\sigma)\) arising along the fibre \(A\). Choose a basis \(\{e_{0},e_{1},\dots,e_{\mathsf{d}}\}\) of \(H_{\mathbb{C}}\) so that \(e_{0}\) spans \(F^{n}(H_{\mathbb{C}})\), \(\{e_{0},\dots,e_{\mathsf{d}_{n-1}}\}\) span \(F^{n-1}(H_{\mathbb{C}})\), and so on. Since \(F\in\check{D}\) is \(Q\)-isotropic (that is, satisfies the first Hodge-Riemann bilinear relation)
\[Q(F^{p},F^{q})\ =\ 0\,,\qquad\forall\quad p+q<\mathsf{n}\,,\]
we may assume that the basis also satisfies
\[Q(e_{i},e_{j})\ =\ \delta^{\mathsf{d}}_{i+j}\,,\qquad\forall\quad i\leq\mathsf{d}/2\,. \tag{2.4}\]
Let \(\mathcal{U}\to B\cap X\) be the universal cover, and \(\tilde{\Phi}:\mathcal{U}\to D\) the lift of the restricted period map \(\Phi:B\cap X\to\Gamma_{X}\backslash D\). Then there exist holomorphic functions \(z_{j},z_{a,j},\ldots:\mathcal{U}\to\mathbb{C}\), so that
\[\eta_{0}\ =\ e_{0}\ +\ \sum_{j=1}^{\mathsf{d}}z_{j}\,e_{j}\]
frames \(F^{\mathsf{n}}(\tilde{\Phi})\); \(\eta_{0}\) and
\[\eta_{a}\ =\ e_{a}\ +\ \sum_{j=\mathsf{d}_{\mathsf{n-1}}+1}^{\mathsf{d}}z_{a,j }\,e_{j}\,,\quad 1\leq a\leq\mathsf{d}_{\mathsf{n-1}}\]
frame \(F^{\mathsf{n-1}}(\tilde{\Phi})\); \(\eta_{0},\ldots\eta_{\mathsf{d}_{\mathsf{n-1}}}\) and
\[\eta_{a}\ =\ e_{a}\ +\ \sum_{j=\mathsf{d}_{\mathsf{n-2}}+1}^{\mathsf{d}}z_{a,j }\,e_{j}\,,\quad\mathsf{d}_{\mathsf{n-1}}+1\leq a\leq\mathsf{d}_{\mathsf{n-2}}\]
frame \(F^{\mathsf{n-2}}(\tilde{\Phi})\); and so on. The framing \(\{\eta_{0},\ldots,\eta_{\mathsf{d}}\}\) is the _period matrix representation_ of \(\Phi\).
We will sometimes treat the \(z_{j},z_{a,j},\ldots\) as holomorphic functions \(B\cap X\to\mathbb{C}\) that are defined up to an action of the monodromy \(\Gamma_{X}\).
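As an illustration (a toy example of ours, not taken from the source): for weight \(\mathsf{w}=\mathsf{n}=1\) and \(\mathsf{d}=1\), e.g. a family of elliptic curves, the framing reduces to the classical period. With \(Q(e_{0},e_{1})=1\) as in (2.4), the single section

\[\eta_{0}\ =\ e_{0}\ +\ z\,e_{1}\]

frames \(F^{1}(\tilde{\Phi})\), and \(\mathbf{i}\,Q(\eta_{0},\overline{\eta_{0}})=2\operatorname{Im}z\), so that (with the usual sign conventions) the period domain \(D\subset\check{D}\) is cut out by \(\operatorname{Im}z>0\): the function \(z\) is the familiar period \(\tau\) on the upper half-plane.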
#### 2.2.3. Schubert cell
Let \(\mathcal{S}\subset\check{D}\) be the open Schubert cell of filtrations \(\tilde{F}\in\check{D}\) having generic intersection with \(\overline{F_{\infty}}\). The existence of a period matrix representation over \(X\) is equivalent to the properties that (i) the action of \(\Gamma_{X}\) on \(\check{D}\) preserves \(\mathcal{S}\), and (ii) the lift \(\tilde{\Phi}\) of the period map takes value in \(\mathcal{S}\cap D\). Then the \(z_{j},z_{a,j},\ldots\) are realized as the pullback of coordinates on \(\mathcal{S}\).
It will be convenient to have a description of this Schubert cell in terms of Deligne splittings, which we now review. (See [14] for details.) The mixed Hodge structure
\((W,F)\) determines a Deligne splitting
\[H_{\mathbb{C}}\ =\ \bigoplus_{p,q\geq 0}H^{p,q}_{W,F}\]
that satisfies
\[F^{k}\ =\ \bigoplus_{p\geq k}H^{p,q}_{W,F}\,,\quad W_{\ell}\ =\ \bigoplus_{p+q\geq\ell}H^{p,q}_{W,F}\quad\text{and}\quad F ^{k}_{\infty}\ =\ \bigoplus_{q\leq n-k}H^{p,q}_{W,F}\,, \tag{2.5}\]
and
\[\overline{H^{p,q}_{W,F}}\ =\ H^{q,p}_{W,F}\quad\text{modulo}\quad\bigoplus_{ \begin{subarray}{c}r\,<\,q\\ s\,<\,p\end{subarray}}H^{r,s}_{W,F}\,.\]
Let \(\mathfrak{g}_{\mathbb{R}}\) and \(\mathfrak{g}_{\mathbb{C}}\) be the Lie algebras of \(G_{\mathbb{R}}\) and \(G_{\mathbb{C}}\) respectively. The mixed Hodge structure \((W,F)\) on \(H\) induces one on \(\mathfrak{g}\). Let
\[\mathfrak{g}_{\mathbb{C}}\ =\ \bigoplus_{p,q}\mathfrak{g}^{p,q}_{W,F} \tag{2.6}\]
be the Deligne splitting. The splitting satisfies
\[\mathfrak{g}^{r,s}_{W,F}(H^{p,q}_{W,F})\ \subset\ H^{p+r,q+s}_{W,F}\,, \tag{2.7}\]
and respects the Lie bracket in the sense that
\[[\mathfrak{g}^{p,q}_{W,F}\,,\,\mathfrak{g}^{r,s}_{W,F}]\ \subset\ \mathfrak{g}^{p+r,q+s}_{W,F}\,. \tag{2.8}\]
The Lie algebra of \(\operatorname{Stab}_{G_{\mathbb{C}}}(F)\) is
\[\mathfrak{s}_{F}\ =\ \bigoplus_{p\geq 0}\mathfrak{g}^{p,q}_{W,F}\,;\]
the Lie algebra of \(\operatorname{Stab}_{G_{\mathbb{C}}}(W)\) is
\[\mathfrak{s}_{W}\ =\ \bigoplus_{p+q\leq 0}\mathfrak{g}^{p,q}_{W,F}\,;\]
and the Lie algebra of \(\operatorname{Stab}_{G_{\mathbb{C}}}(F_{\infty})\) is
\[\mathfrak{s}_{\infty}\ =\ \bigoplus_{q\leq 0}\mathfrak{g}^{p,q}_{W,F}\,.\]
It follows from (2.3) that the monodromy is contained in the Lie group
\[\Gamma_{X}\ \subset\ M_{X}\ =\ \mathrm{Stab}_{G_{\mathbb{C}}}(W)\,\cap\,\mathrm{ Stab}_{G_{\mathbb{C}}}(F_{\infty})\,\cap\,\mathrm{Stab}_{G_{\mathbb{C}}}(\overline{F_{ \infty}})\] with Lie algebra (2.9b) \[\mathfrak{m}_{X}\ =\ \bigoplus_{p,q\leq 0}\mathfrak{g}_{W,F}^{p,q}\,. \tag{2.9a}\]
Note that \(M_{X}\) is defined over \(\mathbb{R}\).
The nilpotent algebra
\[\mathfrak{s}_{F}^{\perp}\ =\ \bigoplus_{p<0}\mathfrak{g}_{W,F}^{p,q} \tag{2.10}\]
satisfies
\[\mathfrak{g}_{\mathbb{C}}\ =\ \mathfrak{s}_{F}\ \oplus\ \mathfrak{s}_{F}^{ \perp}\,.\]
The exponential map \(\mathfrak{s}_{F}^{\perp}\to\exp(\mathfrak{s}_{F}^{\perp})\) is a biholomorphism. The Schubert cell is the Zariski open
\[\mathcal{S}\ =\ \exp(\mathfrak{s}_{F}^{\perp})\cdot F\ \subset\ \check{D}\,;\]
it is precisely the set of filtrations \(\tilde{F}\in\check{D}\) having generic intersection with \(\overline{F_{\infty}}\). The maps \(\mathfrak{s}_{F}^{\perp}\to\exp(\mathfrak{s}_{F}^{\perp})\to\mathcal{S}\) are biholomorphisms. There is a holomorphic map
\[\eta:\mathcal{U}\to\exp(\mathfrak{s}_{F}^{\perp}) \tag{2.11}\]
such that
\[\eta_{j}\ =\ \eta\cdot e_{j}\,;\]
equivalently,
\[\tilde{\Phi}\ =\ \eta\cdot F\,.\]
Both \(\eta\) and \(\log\eta\) are equivalent to the framing \(\{\eta_{j}\}_{j=0}^{\mathsf{d}}\), and we will sometimes refer to both as the _period matrix representation_ of \(\Phi\).
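Continuing the illustrative weight-one toy example from §2.2.2 (ours, not from the source): there \(\check{D}=\mathbb{P}^{1}\), \(F=\operatorname{span}_{\mathbb{C}}\{e_{0}\}\), \(F_{\infty}=\operatorname{span}_{\mathbb{C}}\{e_{1}\}\) and \(\mathfrak{s}_{F}^{\perp}=\mathbb{C}\,N\) with \(Ne_{0}=e_{1}\), \(Ne_{1}=0\), so

\[\mathcal{S}\ =\ \exp(\mathbb{C}N)\cdot F\ =\ \big{\{}\operatorname{span}_{\mathbb{C}}\{e_{0}+z\,e_{1}\}\ :\ z\in\mathbb{C}\big{\}}\ \simeq\ \mathbb{C}\ \subset\ \mathbb{P}^{1}\,,\]

the complement of the single non-generic point \(\operatorname{span}_{\mathbb{C}}\{e_{1}\}=\overline{F_{\infty}}\); here \(\eta=\exp(zN)\) recovers the framing \(\eta_{0}=e_{0}+z\,e_{1}\) above.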
_Remark 2.12_.: We have \(\sigma\subset\mathfrak{g}_{W,F}^{-1,-1}\). If \((W,F^{\prime},\sigma^{\prime})\) is any other limiting mixed Hodge structure arising along \(A\), then
\[F^{\prime}\ \in\ \exp(\mathfrak{m}_{X}\cap\mathfrak{s}_{F}^{\perp})\]
and
\[\sigma^{\prime}\ \subset\ \bigoplus_{p,q\leq-1}\mathfrak{g}_{W,F}^{p,q}\,.\]
### Definition of \(e_{\infty}\) and \(\mathfrak{m}\)
Since \(\dim F^{\mathfrak{n}}(H_{\mathbb{C}})=1\), there exists a unique \(\mathfrak{n}\leq\mathfrak{m}\leq 2\mathfrak{n}\) so that
\[F^{\mathfrak{n}}(H_{\mathbb{C}})\subset W_{\mathfrak{m}}(H_{\mathbb{C}})\quad \text{and}\quad F^{\mathfrak{n}}(H_{\mathbb{C}})\cap W_{\mathfrak{m}-1}(H_{ \mathbb{C}})=0\,.\]
Symmetries in the limit mixed Hodge structure imply
\[\dim W_{2\mathfrak{n}-\mathfrak{m}}(H_{\mathbb{C}})\cap F^{2 \mathfrak{n}-\mathfrak{m}}(H_{\mathbb{C}}) = 1\,,\] \[W_{2\mathfrak{n}-\mathfrak{m}-1}(H_{\mathbb{C}})\cap F^{2 \mathfrak{n}-\mathfrak{m}}(H_{\mathbb{C}}) = 0\,,\] \[W_{2\mathfrak{n}-\mathfrak{m}}(H_{\mathbb{C}})\cap F^{2 \mathfrak{n}-\mathfrak{m}+1}(H_{\mathbb{C}}) = 0\,,\]
and also that
\[H_{\mathbb{C}}\ =\ F^{1}(H_{\mathbb{C}})\ \oplus\ \overline{W_{2\mathfrak{n}- \mathfrak{m}}(H_{\mathbb{C}})\cap F^{2\mathfrak{n}-\mathfrak{m}}(H_{\mathbb{ C}})}\,.\]
So we may assume that
\[e_{\mathfrak{d}}\ \in\ \overline{W_{2\mathfrak{n}-\mathfrak{m}}(H_{\mathbb{C}}) \cap F^{2\mathfrak{n}-\mathfrak{m}}(H_{\mathbb{C}})}\,,\]
and \(W_{2\mathfrak{n}-\mathfrak{m}}(H_{\mathbb{C}})\cap F^{2\mathfrak{n}- \mathfrak{m}}(H_{\mathbb{C}})\) is spanned by some \(e_{\infty}\in\{e_{0},e_{1},\ldots,e_{\mathfrak{d}}\}\). (The reason for the subscript \(\infty\) is discussed in Remark 3.21.)
In order to get a feel for the vectors \(e_{0}\) and \(e_{\infty}\), and the integer \(\mathfrak{m}\), it may be helpful to visualize them in the Hodge diamond for the mixed Hodge structure \((W,F)\) on \(H\); see §A for two interesting examples.
### The fibre \(A\)
Without loss of generality, we may assume that there exist subsets \(B_{\ell}\subset\{e_{0},e_{1},\ldots,e_{\mathsf{d}}\}\) so that \(W_{\ell}(H_{\mathbb{C}})\) is framed by \(\{e_{j}\mid j\in B_{\ell}\}\). Let \(\ell(j)=\min\{\ell\mid e_{j}\in W_{\ell}(H_{\mathbb{C}})\}\). Then the fibre \(A\subset X\) is cut out by the equations
\[A\ =\ \{\eta_{j}\ \equiv\ e_{j}\mod W_{\ell(j)}(H_{\mathbb{C}})\ |\ 0\leq j\leq\mathsf{d}\}\,. \tag{2.13a}\]
Equivalently,
\[A\ =\ \{z_{j}=0\ |\ \ell(j)\geq\mathsf{m}\}\,\cup\,\{z_{a,j}=0\ |\ \ell(j)\geq\ell(a)\}\,. \tag{2.13b}\]
Although the \(z_{j}\) and \(z_{a,j}\) are defined only up to the action of monodromy as functions on \(B\cap X\), the fact that the monodromy preserves the weight filtration (2.3) implies that the vanishing (2.13) _is_ well-defined on \(B\cap X\).
## 3. Extension of Hodge norms
The purpose of this section is to prove Theorem 1.7. Take the neighborhood \(X\) given by Theorem 1.2. The problem is to construct a function \(h:X\to\mathbb{R}\) satisfying the properties of Theorem 1.7. The function \(h\) is defined over \(B\cap X\) in §3.1. It is shown that this smooth function extends to a continuous function on \(X\) in §3.2. We will see that \(h\) is constant on \(\hat{\Phi}^{\mathsf{S}}\)-fibres in §3.3. Finally we show that the restriction of \(h\) to \(Z_{I}^{*}\cap X\) coincides with a positive multiple of \(h_{I}\) whenever \(Z_{I}^{*}\cap A\) is nonempty in §3.4.
### Construction of \(h\) over \(B\cap X\)
Let \(\eta_{\infty}=\eta\cdot e_{\infty}:\mathcal{U}\to F^{2\mathsf{n}-\mathsf{m}}(\tilde{\Phi})\) denote the corresponding section (§§2.2.2-2.2.3). Define \(0\neq\lambda\in\mathbb{C}\) by
\[\overline{\lambda\,e_{\infty}}\ =\ e_{\mathsf{d}}\,, \tag{3.1}\]
and set
\[\tilde{h}\ =\ \tfrac{1}{2}\,Q(\eta_{0}\,,\,\overline{\lambda\,\eta_{\infty}}) \,+\,\tfrac{1}{2}\,Q(\overline{\eta_{0}}\,,\,\lambda\,\eta_{\infty})\ =\ \operatorname{Re}Q(\eta_{0}\,,\,\overline{\lambda\,\eta_{\infty}})\,. \tag{3.2}\]
(It may be helpful to visualize the placement of \(e_{0},e_{\infty},e_{\mathsf{d}}\) in the Hodge diamond of \((W,F)\). See §A for some examples.)
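To make the definition concrete, consider the one-parameter degeneration of elliptic curves (our toy example, with \(e_{0},e_{1}\) real): \(\mathsf{n}=1\), \(\mathsf{d}=1\), \(Q(e_{0},e_{1})=1\), and limiting mixed Hodge structure satisfying \(Ne_{0}=e_{1}\), \(Ne_{1}=0\). Here \(e_{0}\in H^{1,1}_{W,F}\), so \(\mathsf{m}=2\), and \(e_{\infty}=e_{1}=e_{\mathsf{d}}\) with \(\lambda=1\). Since \(Ne_{\infty}=0\) the section \(\eta_{\infty}=e_{1}\) is constant, and

\[\tilde{h}\ =\ \operatorname{Re}Q\big{(}e_{0}+z\,e_{1}\,,\,\overline{e_{1}}\big{)}\ =\ Q(e_{0},e_{1})\ =\ 1\,;\]

in particular \(\tilde{h}\) is single-valued and extends continuously across the puncture, as Theorems 3.3 and 3.10 assert in general (the boundary stratum here is a point, so the extension is a constant).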
**Theorem 3.3**.: _The function \(\tilde{h}:\mathcal{U}\ \to\ \mathbb{R}\) descends to \(h:B\cap X\to\mathbb{R}\)._
Proof.: Note that \(\pi_{1}(B\cap X)\) acts on both \(\mathcal{U}\) and \(H_{\mathbb{C}}\). Given \(u\in\mathcal{U}\) and \(\gamma\in\pi_{1}(B\cap X)\), we have
\[\gamma\cdot F^{p}(\tilde{\Phi}(u))\ =\ F^{p}(\tilde{\Phi}(\gamma\cdot u))\,.\]
If \(\eta_{j}(u)\in F^{p}(\tilde{\Phi}(u))\), then it is likewise the case that both
\[\gamma\cdot\eta_{j}(u)\,,\,\eta_{j}(\gamma\cdot u)\ \in\ F^{p}(\tilde{\Phi}( \gamma\cdot u))\,. \tag{3.4}\]
However in general it need not be the case that \(\gamma\cdot\eta_{j}(u)=\eta_{j}(\gamma\cdot u)\), or even that they be linearly dependent. Nonetheless, Lemma 3.5 below does hold, and implies that \(Q(\eta_{0},\overline{\eta_{\infty}}):\mathcal{U}\to\mathbb{C}\) is invariant under monodromy and so descends to \(B\cap X\). The theorem follows.
**Lemma 3.5**.: _There exists a character \(\chi(\gamma)\in S^{1}\subset\mathbb{C}\) so that_
\[\gamma\cdot\eta_{0}(u)\ =\ \chi(\gamma)\,\eta_{0}(\gamma\cdot u)\quad\text{and} \quad\gamma\cdot\eta_{\infty}(u)\ =\ \chi(\gamma)\,\eta_{\infty}(\gamma\cdot u)\,.\]
Proof.: First we observe that (3.4) and \(\dim F^{\mathfrak{n}}(\tilde{\Phi}(\gamma\cdot u))=1\) imply the first equality in the lemma: \(\gamma\cdot\eta_{0}(u)=\chi(\gamma)\,\eta_{0}(\gamma\cdot u)\) for some character \(\chi:\Gamma_{X}\to\mathbf{G}_{m,\mathbb{C}}\).
Next we observe that (2.7), (2.9a) and
\[e_{\infty}\ \in\ W_{2\mathfrak{n}-\mathfrak{m}}(H_{\mathbb{C}})\cap F^{2 \mathfrak{n}-\mathfrak{m}}(H_{\mathbb{C}})\ =\ H_{W,F}^{2\mathfrak{n}-\mathfrak{m},0}\]
imply that
\[\gamma\cdot\eta_{\infty}(u)\ =\ \chi_{\infty}(\gamma)\,\eta_{\infty}(\gamma \cdot u) \tag{3.6}\]
for some character \(\chi_{\infty}:\Gamma_{X}\to\mathbf{G}_{m,\mathbb{C}}\).
Note that
\[\begin{array}{rcl}\operatorname{span}_{\mathbb{C}}\left\{e_{0}\right\}&=&H_{W,F}^{\mathfrak{n},\mathfrak{m}-\mathfrak{n}}\ =\ F^{\mathfrak{n}}(H_{\mathbb{C}})\,,\\ \operatorname{span}_{\mathbb{C}}\left\{e_{\infty}\right\}&=&H_{W,F}^{2\mathfrak{n}-\mathfrak{m},0}\ =\ W_{2\mathfrak{n}-\mathfrak{m}}(H_{\mathbb{C}})\cap F^{2\mathfrak{n}-\mathfrak{m}}(H_{\mathbb{C}})\,,\\ \operatorname{span}_{\mathbb{C}}\left\{e_{\mathsf{d}}\right\}&=&H_{W,F}^{0,2\mathfrak{n}-\mathfrak{m}}\,.\end{array} \tag{3.7}\]
Let
\[h^{p,q}_{W,F}\ =\ \dim_{\mathbb{C}}H^{p,q}_{W,F}\,.\]
Then
\[\begin{array}{rcl}1&=&h^{\mathsf{n},\mathsf{m}-\mathsf{n}}_{W,F}\,\ h^{2\mathsf{n}-\mathsf{m},0}_{W,F}\,,\\ 0&=&h^{\mathsf{n},q}_{W,F}\,,\ h^{p,0}_{W,F}\,,\quad\forall\ q\neq\mathsf{m}- \mathsf{n}\,,\ p\neq 2\mathsf{n}-\mathsf{m}\,.\end{array} \tag{3.8}\]
We call the \((h^{p,q}_{W,F})\) the _Hodge diamond of the mixed Hodge structure_. They are conveniently visualized in the \((p,q)\)-plane (§A). They satisfy the _symmetries_
\[h^{p,q}_{W,F}\ =\ h^{q,p}_{W,F}\quad\text{and}\quad h^{p,q}_{W,F}\ =\ h^{p-k,q-k}_{W,F}\,, \tag{3.9}\]
where \(k=p+q-\mathsf{n}\). (The first equality holds for arbitrary mixed Hodge structures, the second holds for _limiting/polarized_ mixed Hodge structures.) These symmetries imply
\[1\ =\ h^{\mathsf{m}-\mathsf{n},\mathsf{n}}_{W,F}\,,\ h^{0,2\mathsf{n}- \mathsf{m}}_{W,F}\,,\]
and all other \(h^{p,\mathsf{n}}_{W,F}\), \(h^{0,q}_{W,F}\) are zero.
The desired (3.6) now follows from (2.7), (2.9), (3.4), (3.7) and (3.8).
It remains to show that \(\chi(\gamma)=\chi_{\infty}(\gamma)\), and that this character is a root of unity. Both are established in [12, §3.4]: the \(\chi(\gamma)\) and \(\chi_{\infty}(\gamma)\) here are the \(\chi(\beta^{-1})=\chi(\beta)^{-1}\) and \(\chi_{\infty}(\gamma)\) there, respectively.
### Continuous extension of \(h\) to \(X\)
Since both \(\eta_{0}\) and \(\eta_{\infty}\) are holomorphic, the function \(h\) is smooth on \(B\cap X\).
**Theorem 3.10**.: _The smooth function \(h:B\cap X\to\mathbb{R}\) extends to a continuous_
\[h:X\ \to\ \mathbb{R}\,.\]
_The restriction to \(Z^{*}_{I}\cap X\) is smooth._
#### 3.2.1. Outline of the proof of Theorem 3.10
It suffices to work locally: given a local coordinate chart \(U\subset X\), we will show that the restriction \(h\big{|}_{B\cap U}\) extends to a continuous function on \(U\), that the extension is constant on \(\hat{\Phi}^{\mathcal{S}}\)-fibres, and restricts to a smooth function on the strata \(Z_{I}^{*}\cap U\). Sections 3.2.2-3.2.7 are occupied with studying the local coordinate expression of \(Q(\eta_{0},\overline{\eta_{\infty}})\) that is given by the nilpotent orbit theorem. Following a comment on the limits to be analyzed in §3.2.8, the meat of the argument is §3.2.9, where properties of weight filtrations are used to establish the desired results.
#### 3.2.2. Local structure at infinity
Let \(\Delta=\{\tau\in\mathbb{C}\ :\ |\tau|<1\}\) be the unit disc, and let \(\Delta^{*}=\{\tau\in\Delta\ :\ \tau\neq 0\}\) be the punctured unit disc. Set
\[\ell(\tau)\ =\ \frac{\log\tau}{2\pi\mathbf{i}}\,.\]
Fix a point \(b\in X\). There exists a local coordinate chart \(t=(t_{1},\ldots,t_{r}):U\stackrel{{\simeq}}{{\longrightarrow}} \Delta^{r}\), centered at \(b\), so that \(B\cap U\simeq(\Delta^{*})^{k}\times\Delta^{r-k}\). Without loss of generality \(U\subset X\). By the nilpotent orbit theorem [10] there exists a holomorphic function \(\mathcal{F}:U\to\check{D}\), and nilpotent operators \(N_{1},\ldots,N_{k}\in\mathfrak{g}_{\mathbb{Q}}\) so that the local coordinate representation of \(\Phi\) is
\[\Phi(t)\ =\ \exp\Big{[}\sum_{j=1}^{k}\ell(t_{j})N_{j}\Big{]}\cdot\mathcal{F}( t)\,,\]
modulo the local monodromy group \(\Gamma_{\mathrm{loc}}\subset\Gamma_{X}\) generated by the \(\exp(N_{1}),\ldots,\exp(N_{k})\).
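To spell out the multivaluedness (an elementary remark added for readability): continuing \(t_{j}\) once around the puncture replaces \(\ell(t_{j})\) by \(\ell(t_{j})+1\), so that

\[\exp\Big{[}\big{(}\ell(t_{j})+1\big{)}N_{j}\Big{]}\ =\ \exp(N_{j})\,\exp\big{[}\ell(t_{j})N_{j}\big{]}\,;\]

this is precisely why \(\Phi(t)\) is well defined only modulo \(\Gamma_{\mathrm{loc}}\).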
Suppose that \(b\in Z_{J}^{*}\cap X\). Then \(|J|=k\), and without loss of generality we may suppose that \(J=\{1,\ldots,k\}\). Given \(I\subset J\), we have
\[Z_{I}^{*}\cap U\ =\ \{t_{i}=0\,,\ \forall\ i\in I\,;\ t_{j}\neq 0\,,\ \forall\ j \in J\backslash I\}\,.\]
Shrinking \(X\) if necessary, there exists \(K\supset J\) so that \(Z_{K}^{*}\cap A\) is nonempty. This implies that the \(N_{1},\ldots,N_{k}\) generate a face of a nilpotent cone \(\sigma^{\prime}\) arising along \(A\). In particular,
\[N_{1},\ldots,N_{k}\ \in\ \bigoplus_{p,q\leq-1}\mathfrak{g}_{W,F}^{p,q}\ \subset\ \mathfrak{s}_{F}^{\perp}\,\cap\,W_{-2}(\mathfrak{g}_{\mathbb{C}})\,, \tag{3.11}\]
by Remark 2.12. Since \(\sigma^{\prime}\subset\mathfrak{s}_{F}^{\perp}\), and \(\Phi(t)\) takes value in \(\Gamma_{\rm loc}\backslash(\mathcal{S}\cap D)\), it follows that \(\mathcal{F}\) takes value in \(\mathcal{S}\). So \(\mathcal{F}(t)=\zeta(t)\cdot F\) for some holomorphic map \(\zeta:U\to\exp(\mathfrak{s}_{F}^{\perp})\). In particular, the local coordinate representation of the function \(\eta\) in (2.11) is
\[\eta(t)\ =\ \exp\Big{[}\sum_{j=1}^{k}\ell(t_{j})N_{j}\Big{]}\cdot\zeta(t)\,. \tag{3.12}\]
#### 3.2.3. Local characterization of the fibre
Suppose that \(Z_{J}^{*}\cap A\) is nonempty. Horizontality implies that the restriction of \(\zeta\) to \(Z_{J}^{*}\cap U=\{t_{1}=\cdots=t_{k}=0\}\) centralizes the \(N_{1},\ldots,N_{k}\), and therefore stabilizes the weight filtration \(W\). Equivalently,
\[\log\zeta\big{|}_{Z_{J}^{*}\cap U}\ \equiv\ 0\quad\text{modulo}\quad W_{0}( \mathfrak{g}_{\mathbb{C}})\,. \tag{3.13}\]
It follows from the global characterization (2.13) of the fibre, that it is locally characterized by
\[A\,\cap\,Z_{J}^{*}\,\cap\,U\ =\ \Big{\{}\log\zeta\big{|}_{Z_{J}^{*}\cap U}\ \equiv\ 0\quad\text{modulo}\quad W_{-1}(\mathfrak{g}_{\mathbb{C}})\Big{\}}. \tag{3.14}\]
Horizontality then implies that
\[\log\zeta\big{|}_{A}\ \equiv\ 0\quad\text{modulo}\quad\bigoplus_{p<-1,q\leq 0 }\mathfrak{g}_{W,F}^{p,q}\ \subset\ W_{-1}(\mathfrak{g}_{\mathbb{C}})\,.\]
#### 3.2.4. A second local coordinate representation
It will be helpful to re-write (3.12) as (3.16). Shrinking the coordinate neighborhood \(U\) if necessary, we may assume that \(Z_{I}^{*}\cap U\) is nonempty if and only if \(I\subset J\). Set
\[\hat{t}_{I}\ =\ \prod_{j\not\in I}t_{j}\,.\]
Then horizontality of the period map implies that \(\zeta\big{|}_{Z_{I}^{*}\cap U}\) takes value in the centralizer \(\mathfrak{z}_{I}\) of \(\{N_{j}\ |\ j\in I\}\). So the holomorphic map \(\log\zeta:U\to\mathfrak{s}_{F}^{\perp}\) may be expressed as
\[\log\zeta\ =\ \sum_{I\subset J}\hat{t}_{I}\,f_{I}\]
with \(f_{I}:U\to\mathfrak{z}_{I}\cap\mathfrak{s}_{F}^{\perp}\) holomorphic. Set \(f=f_{\emptyset}\).
Define
\[\hat{\theta}_{I}(t)\ =\ \exp\Big{[}\sum_{j\not\in I}\ell(t_{j})N_{j}\Big{]}\,.\]
Set
\[\hat{\zeta}(t)\ =\ \exp\Big{[}\sum_{I\subset J}\hat{t}_{I}\,\mathrm{Ad}_{\hat{ \theta}_{I}(t)}(f_{I})\Big{]} \tag{3.15}\]
and
\[\theta(t)\ =\ \hat{\theta}_{\emptyset}(t)\ =\ \exp\Big{[}\sum_{j=1}^{k}\ell(t_{j})N _{j}\Big{]}\,.\]
Then the local coordinate expression for \(\eta\) in (3.12) may be re-expressed as
\[\eta(t)\ =\ \theta(t)\cdot\zeta(t)\ =\ \hat{\zeta}(t)\cdot\theta(t)\,. \tag{3.16}\]
_Example 3.17_.: If \(J=\{1,2\}\), then
\[\log\zeta\ =\ f_{12}\,+\,t_{1}\,f_{2}\,+\,t_{2}\,f_{1}\,+\,t_{1}t_{2}\,f\,,\]
with \(f_{12}\) centralizing \(N_{1}\) and \(N_{2}\), and \(f_{j}\) centralizing \(N_{j}\). We have
\[\sum_{I\subset J}\hat{t}_{I}\,\mathrm{Ad}_{\hat{\theta}_{I}(t)}( f_{I}) = f_{12}\,+\,t_{1}\mathrm{Ad}_{\exp(\ell(t_{1})N_{1})}(f_{2})\,+\,t _{2}\mathrm{Ad}_{\exp(\ell(t_{2})N_{2})}(f_{1})\] \[+\,t_{1}t_{2}\,\mathrm{Ad}_{\exp(\ell(t_{1})N_{1}+\ell(t_{2})N_{ 2})}(f)\,.\]
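The identity \(\theta\,\zeta=\hat{\zeta}\,\theta\) underlying (3.15)-(3.16) is the group-theoretic fact \(\exp(A)\exp(B)=\exp\big{(}\mathrm{Ad}_{\exp A}B\big{)}\exp(A)\), together with \(\mathrm{Ad}_{\theta}(f_{I})=\mathrm{Ad}_{\hat{\theta}_{I}}(f_{I})\), which holds because \(f_{I}\) centralizes the \(N_{j}\), \(j\in I\). A quick numerical sanity check of the group-theoretic fact with random nilpotent matrices (our own illustration, not part of the construction):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = np.triu(rng.normal(size=(4, 4)), k=1)   # strictly upper triangular: plays log(theta)
B = np.triu(rng.normal(size=(4, 4)), k=1)   # strictly upper triangular: plays log(zeta)

theta, zeta = expm(A), expm(B)
# Ad_theta(B) = theta B theta^{-1}, so zeta_hat = exp(Ad_theta(B))
zeta_hat = expm(theta @ B @ np.linalg.inv(theta))

assert np.allclose(theta @ zeta, zeta_hat @ theta)
print("theta @ zeta == zeta_hat @ theta: verified")
```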
#### 3.2.5. Properties of \(\hat{\zeta}(t)\)
Since the \(N_{j}\) are nilpotent elements of \(\mathfrak{g}_{\mathbb{C}}\), the \(\mathrm{Ad}_{\hat{\theta}_{I}(t)}(f_{I})\) are polynomial in the \(\ell(t_{j})\), \(j\not\in I\). This implies that the \(\hat{t}_{I}\mathrm{Ad}_{\hat{\theta}_{I}(t)}(f_{I})\) are continuous, albeit multivalued, functions on all of \(U\) that are holomorphic on the strata \(Z^{*}_{I}\cap U\). By extension, \(\hat{\zeta}(t)\) is continuous on all of \(U\) and holomorphic on the strata \(Z^{*}_{I}\cap U\). We have limits
\[\lim_{t_{1},\ldots,t_{k}\to 0}\ \sum_{I\subset J}\hat{t}_{I}\,\mathrm{Ad}_{\hat{\theta}_{I}(t)}(f_{I}) = f_{J}\,,\qquad\lim_{\begin{subarray}{c}t_{i}\,\rightarrow\,0\\ i\,\in\,I^{\prime}\end{subarray}}\ \sum_{I\subset J}\hat{t}_{I}\,\mathrm{Ad}_{\hat{\theta}_{I}(t)}(f_{I}) = \sum_{I^{\prime}\subset I\subset J}\hat{t}_{I}\,\mathrm{Ad}_{\hat{\theta}_{I}(t)}(f_{I})\,.\]
The first limit implies
\[\lim_{t_{1},\ldots,t_{k}\to 0}\hat{\zeta}(t)\ =\ \exp(f_{J}) \tag{3.18}\]
#### 3.2.6. Continuity of \(\eta_{\infty}\)
It follows from Remark 2.12, (2.7) and (3.7) that \(N_{j}(e_{\infty})=0\). So
\[\theta(t)\cdot e_{\infty}\ =\ e_{\infty}\,,\]
and
\[\eta_{\infty}(t)\ =\ \eta(t)\cdot e_{\infty}\ =\ \hat{\zeta}(t)\cdot e_{\infty} \tag{3.19}\]
is continuous (albeit defined only up to \(\Gamma_{\rm loc}\)). In particular, the limit
\[\lim_{t_{1},\ldots,t_{k}\to 0}\eta_{\infty}\ =\ \exp(f_{J})\cdot e_{\infty} \tag{3.20}\]
exists, and is well-defined (independent of \(\Gamma_{\rm loc}\)).
_Remark 3.21_.: In the case that \(b\in A\), the limit (3.20) is a nonzero element of
\[\det(F_{\infty}^{\sf w}V_{\mathbb{C}})\,\otimes\,\det(F_{\infty}^{\sf w-1}V_{ \mathbb{C}})\,\otimes\cdots\otimes\,\det(F_{\infty}^{\lceil({\sf w}+1)/2 \rceil}V_{\mathbb{C}})\,.\]
#### 3.2.7. Local coordinate representations of \(\eta_{0}\) and \(Q(\eta_{0},\overline{\eta_{\infty}})\)
The nilpotency of the \(N_{j}\in W_{-2}({\mathfrak{g}}_{\mathbb{C}})\) implies that \(\theta(t)\cdot e_{0}\) is polynomial in the \(\ell(t_{j})\). From (2.7) and (3.11) we see that this polynomial has degree at most \({\sf m}-{\sf n}\). Write
\[\theta(t)\cdot e_{0}\ =\ \sum_{|a|\leq{\sf m}-{\sf n}}c_{a}\,\ell(t_{1})^{a_{1} }\cdots\ell(t_{k})^{a_{k}}\,N_{1}^{a_{1}}\cdots N_{k}^{a_{k}}(e_{0})\,,\]
where \(a=(a_{1},\ldots,a_{k})\) is a \(k\)-tuple of non-negative integers, and \(|a|=a_{1}+\cdots+a_{k}\). By (3.16) and (3.19), we have
\[Q\left(\eta_{0},\overline{\eta_{\infty}}\right)\ =\ \sum_{|a|\leq{\sf m}-{\sf n}}c_{a}\, \ell(t_{1})^{a_{1}}\cdots\ell(t_{k})^{a_{k}}\,Q\left(\hat{\zeta}(t)\cdot N_{1} ^{a_{1}}\cdots N_{k}^{a_{k}}(e_{0})\,,\,\overline{\hat{\zeta}(t)\cdot e_{ \infty}}\right)\,.\]
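The polynomial nature of \(\theta(t)\cdot e_{0}\) is easy to check symbolically. The following sketch (a hypothetical three-dimensional toy model with a single nilpotent \(N\), \(N^{3}=0\); not data from the paper) confirms that \(\exp(\ell N)\cdot e_{0}\) has degree \(\leq 2\) in \(\ell\):

```python
import sympy as sp

ell = sp.symbols('ell')          # stands for ell(t) = log(t) / (2*pi*i)
# toy nilpotent monodromy logarithm: N e0 = e1, N e1 = e2, N e2 = 0
N = sp.Matrix([[0, 0, 0],
               [1, 0, 0],
               [0, 1, 0]])
e0 = sp.Matrix([1, 0, 0])

# exp(ell*N) computed exactly from the finite exponential series, since N**3 = 0
theta = sp.eye(3) + ell * N + (ell * N)**2 / 2
print(theta * e0)                # Matrix([[1], [ell], [ell**2/2]]): degree 2
```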
#### 3.2.8. Comment on computation of limits
In order to prove Theorem 3.10 we need to show that
\[\lim_{\begin{subarray}{c}t_{i}\to 0\\ i\in I\end{subarray}}h\quad\text{exists and defines a smooth function on $Z_{I}^{*}\cap U$;} \tag{3.22}\]
and that the resulting function \(h:U\to\mathbb{R}\) is continuous. We will prove (3.22) in the case that \(I=J=\{1,\dots,k\}\). The general case \(I\subset J\) is a straightforward generalization of the argument here, as is the exercise to verify that the resulting function is continuous on all of \(U\); details are left to the reader.
Returning to SS3.2.7, we are going to show that
\[\lim_{t_{1},\dots,t_{k}\to 0}\ell(t_{1})^{a_{1}}\cdots\ell(t_{k})^{a_{k}}\,Q \left(\hat{\zeta}(t)\cdot N_{1}^{a_{1}}\cdots N_{k}^{a_{k}}(e_{0})\,,\,\overline {\hat{\zeta}(t)\cdot e_{\infty}}\right)\ =\ 0\,, \tag{3.23}\]
whenever \(|a|>0\). Assume for the moment that (3.23) holds. Then
\[\lim_{t_{1},\dots,t_{k}\to 0}Q\left(\eta_{0},\overline{\eta_{\infty}}\right)\ =\ Q\left(\hat{\zeta}(t)\cdot e_{0}\,,\,\overline{\hat{\zeta}(t)\cdot e_{ \infty}}\right)\,. \tag{3.24}\]
Since \(\hat{\zeta}(t)\) is continuous on \(U\), and holomorphic on the strata \(Z_{I}^{*}\cap U\) (§3.2.5), the desired (3.22) now follows (in the case \(I=J\)) by the definition (3.2) of \(\tilde{h}\): we have
\[h|_{Z_{J}^{*}\cap U}\ =\ \operatorname{Re}Q\left(\hat{\zeta}(t)\cdot e_{0}\,, \,\overline{\hat{\zeta}(t)\cdot e_{\infty}}\right) \tag{3.25}\]
This completes the proof of Theorem 3.10 (modulo the reader's exercise).
#### 3.2.9. Proof of (3.23)
The key lemma is the following. Let \(W^{j}=W(N_{j})\) be the weight filtration along \(Z_{j}^{*}\cap U\).
**Lemma 3.26**.: _If \(\hat{\zeta}(t)\) stabilizes any one of the \(W^{j}\) with \(a_{j}>0\), then_
\[Q\left(\hat{\zeta}(t)\cdot N_{1}^{a_{1}}\cdots N_{k}^{a_{k}}(e_{0})\,,\, \overline{\hat{\zeta}(t)\cdot e_{\infty}}\right)\ =\ 0 \tag{3.27}\]
Proof.: The lemma is a consequence of properties of weight filtrations. By Lemma B.1, there exist \(\mathfrak{n}\leq\mathfrak{m}_{j}\leq\mathfrak{m}\) so that
\[e_{0}\ \in\ W^{j}_{\mathfrak{m}_{j}}(H_{\mathbb{C}})\,,\quad e_{0}\ \not\in\ W^{j}_{\mathfrak{m}_{j}-1}(H_{\mathbb{C}})\,, \tag{3.28}\]
and
\[e_{\infty}\ \in\ W^{j}_{2{\mathfrak{n}}-{\mathfrak{m}}_{j}}(H_{\mathbb{C}})\,, \quad e_{\infty}\ \not\in\ W^{j}_{2{\mathfrak{n}}-{\mathfrak{m}}_{j}-1}(H_{\mathbb{C}})\,. \tag{3.29}\]
We have
\[N^{a_{1}}_{1}\cdots N^{a_{k}}_{k}(e_{0})\ \in\ W^{1}_{{\mathfrak{m}}_{1}-2a_{1}}(H_ {\mathbb{C}})\,\cap\cdots\cap\,W^{k}_{{\mathfrak{m}}_{k}-2a_{k}}(H_{\mathbb{C}} )\,. \tag{3.30}\]
The essential property of the weight filtrations that we will utilize is that they are \(Q\)-isotropic
\[Q(W^{j}_{\ell}\,,\,W^{j}_{m})\ =\ 0\qquad\forall\quad\ell+m<2\,{\mathfrak{n}}\,. \tag{3.31}\]
The lemma now follows from (3.29), (3.30) and (3.31).
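In detail (an elementary degree count, spelled out here for readability): since \(\hat{\zeta}(t)\) stabilizes \(W^{j}\), and \(W^{j}\) is defined over \(\mathbb{R}\), (3.29) and (3.30) give

\[\hat{\zeta}(t)\cdot N_{1}^{a_{1}}\cdots N_{k}^{a_{k}}(e_{0})\ \in\ W^{j}_{\mathfrak{m}_{j}-2a_{j}}(H_{\mathbb{C}})\qquad\text{and}\qquad\overline{\hat{\zeta}(t)\cdot e_{\infty}}\ \in\ W^{j}_{2\mathfrak{n}-\mathfrak{m}_{j}}(H_{\mathbb{C}})\,,\]

and \((\mathfrak{m}_{j}-2a_{j})+(2\mathfrak{n}-\mathfrak{m}_{j})=2\mathfrak{n}-2a_{j}<2\,\mathfrak{n}\) when \(a_{j}>0\), so the pairing vanishes by (3.31).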
**Corollary 3.32**.: _If \(\hat{\zeta}(t)\) centralizes any one of the \(N_{j}\) with \(a_{j}>0\), then (3.27) holds._
Proof.: The centralizer of \(N_{j}\) preserves the weight filtration \(W^{j}\).
#### 3.2.10. Completing the proof of Theorem 3.10
By definition \(f_{I}\) centralizes every \(N_{j}\) with \(j\in I\). And since the \(N_{j}\) all commute, the \(\hat{\theta}_{I}\) centralizes every \(N_{j}\). So \(\operatorname{Ad}_{\hat{\theta}_{I}(t)}(f_{I})\) also centralizes every \(N_{j}\) with \(j\in I\). Now, \(\hat{\zeta}(t)\) will centralize \(N_{j}\) if \(f_{I}(t)=0\) for every \(I\not\ni j\). So (3.27) will hold unless, for every \(j\) such that \(a_{j}>0\), there exists \(I\not\ni j\) with \(f_{I}(t)\neq 0\). Regard
\[\hat{\zeta}(t)\ =\ \sum A_{b}\,t_{1}^{b_{1}}\cdots t_{k}^{b_{k}}\]
as polynomial in the \(t_{j}\) with coefficients taking value in \(\operatorname{End}(H)\). The coefficient \(A_{b}\) will centralize \(N_{j}\) if \(b_{j}=0\). Rewrite the left-hand side of (3.27) as
\[\sum_{b,c}t_{1}^{b_{1}}\overline{t}_{1}^{c_{1}}\cdots t_{k}^{b_{k}}\overline{ t}_{k}^{c_{k}}\,Q\left(A_{b}\cdot N_{1}^{a_{1}}\cdots N_{k}^{a_{k}}(e_{0})\,,\, \overline{A_{c}\cdot e_{\infty}}\right)\,,\]
and note that \(Q\left(A_{b}\cdot N_{1}^{a_{1}}\cdots N_{k}^{a_{k}}(e_{0})\,,\,\overline{A_{c }\cdot e_{\infty}}\right)=0\) if we have \(a_{j}>0\) and \(b_{j}+c_{j}=0\) for some \(j\). Returning to (3.23) we see that if \(a_{j}>0\), then
\[Q\left(\hat{\zeta}(t)\cdot N_{1}^{a_{1}}\cdots N_{k}^{a_{k}}(e_{0})\,,\, \overline{\hat{\zeta}(t)\cdot e_{\infty}}\right)\]
is a multiple of either \(t_{j}\) or \(\overline{t}_{j}\). This establishes the desired (3.23), and completes the proof of Theorem 3.10.
### Constancy on fibres
**Theorem 3.33**.: _The function \(h:X\to\mathbb{R}\) is constant on \(\hat{\Phi}^{\mathsf{S}}\)-fibres._
Proof.: Because \(h\) is defined by the period matrix representation, it is immediate that \(h\) is locally constant on the fibres of \(\Phi|_{B\cap X}=\Phi^{\mathsf{S}}|_{B\cap X}\), and therefore constant on the fibres of \(\hat{\Phi}^{\mathsf{S}}|_{B\cap X}\).
Suppose that \(Z_{J}^{*}\cap A\) is nonempty. Since the restriction of \(h\) to \(Z_{J}^{*}\cap X\) is a positive multiple of the Hodge norm-squared \(h_{J}\) (Theorem 3.35), it follows that \(h\) is constant on the fibres of \(\hat{\Phi}^{\mathsf{S}}|_{Z_{J}^{*}\cap X}\).
For the general case, recall the discussion of §§3.2.2-3.2.4. The coordinate chart is centered at a point \(b\in Z_{J}^{*}\cap X\). The nilpotent operator \(N=N_{1}+\cdots+N_{k}\) determines a weight filtration \(W^{J}=W(N)\). The fact that the restriction of \(\zeta\) to \(Z_{J}^{*}\cap U\) centralizes the nilpotent operators \(N_{j}\) (§3.2.3) implies that the restrictions of both \(\zeta\) and \(\hat{\zeta}\) to \(Z_{J}^{*}\cap U\) preserve the weight filtration \(W^{J}\). By Lemma B.1 there exists \(\mathsf{n}\leq\mathsf{m}_{J}\leq\mathsf{m}\) so that \(e_{0}\in W^{J}_{\mathsf{m}_{J}}(H_{\mathbb{C}})\) and \(e_{\infty}\in W^{J}_{2\mathsf{n}-\mathsf{m}_{J}}(H_{\mathbb{C}})\). The fact that \(W^{J}\) is \(Q\)-isotropic, \(Q(W^{J}_{\ell},W^{J}_{m})=0\) for all \(\ell+m<2\,\mathsf{n}\), implies that the map \(W^{J}_{0}(\mathfrak{g}_{\mathbb{C}})\to\mathbb{C}\) given by \(w\mapsto Q(\exp(w)\cdot e_{0},\overline{\exp(w)\cdot e_{\infty}})\) descends to a well-defined \(W^{J}_{0}(\mathfrak{g}_{\mathbb{C}})/W^{J}_{-1}(\mathfrak{g}_{\mathbb{C}})\to\mathbb{C}\). It now follows from (3.25) and the definition of \(\Phi_{J}\) in [10] that \(h\) is constant on the fibres of \(\hat{\Phi}^{\mathsf{S}}\).
**Corollary 3.34**.: _The function \(h:X\to\mathbb{R}\) descends to a continuous function \(h:\mathcal{X}\to\mathbb{R}\) on \(\mathcal{X}=\hat{\Phi}^{\mathsf{S}}(X)\subset\hat{\wp}\)._
### Relationship to Hodge norms
**Theorem 3.35**.: _Assume \(Z_{J}^{*}\cap A\) is nonempty. The restriction of \(h\) to \(Z_{J}^{*}\cap X\) is a positive multiple of the Hodge norm-squared \(h_{J}\) of \(\Lambda\big{|}_{Z_{J}^{*}\cap X}\)._
Since \(h_{J}\) is the pull-back of a metric with negative curvature, cf. [10, (4.66)], we immediately obtain
**Corollary 3.36**.: _Assume \(Z_{J}^{*}\cap A\) is nonempty. The restriction of \(-\log h\) to \(Z_{J}^{*}\cap X\) is psh. Given \(v\in T(Z_{J}^{*}\cap X)\), we have \(-\partial\overline{\partial}\log h(v,\overline{v})=0\) if and only if \(v\) is tangent to a \(\Phi^{\mathsf{S}}\)-fibre (equivalently, a \(\Phi_{J}\)-fibre)._
Proof of Theorem 3.35.: Recall the discussion of §§3.2.2-3.2.4. The coordinate chart is centered at a point \(b\in Z_{J}^{*}\cap X\). Assume that \(Z_{J}^{*}\cap A\) is nonempty. Set \(N=N_{1}+\cdots+N_{k}\). Then \(W=W(N)\). We claim that the Hodge norm-squared of \(\Lambda\big{|}_{Z_{J}^{*}\cap U}\) is given by
\[h_{J}\ =\ \mathbf{i}^{2n-m}Q\left(\exp(f_{J})\cdot e_{0}\,,\,N^{m-n}\,\overline{ \exp(f_{J})\cdot e_{0}}\right)\,. \tag{3.37}\]
To see this, first recall that \(\mathfrak{z}_{J}\subset W_{0}(\mathfrak{g}_{\mathbb{C}})\). By construction \(e_{0}\in W_{m}(H_{\mathbb{C}})\) and \(e_{\infty}\in W_{2n-m}(H_{\mathbb{C}})\). Then the fact that \(W\) is \(Q\)-isotropic
\[Q\left(W_{\ell}(H)\,,\,W_{m}(H)\right)\ =\ 0\qquad\forall\quad\ell+m<2\,\mathsf{n} \tag{3.38}\]
implies that the map \(\mathfrak{z}_{J}\to\mathbb{C}\) given by
\[w\ \mapsto\ Q\left(\exp(w)\cdot e_{0}\,,\,N^{m-n}\,\overline{\exp(w)\cdot e_{0 }}\right)\]
descends to a well-defined map \(\mathfrak{z}_{J}/W_{-1}(\mathfrak{z}_{J})\to\mathbb{C}\). Now to establish (3.37) it suffices to point out that \(\{f_{J}\bmod W_{-1}(\mathfrak{z}_{J})\}\) is the local period matrix representation of the period map \(\Phi_{J}:Z_{J}^{*}\to\Gamma_{J}\backslash D_{J}\).
Suppose for the moment that \(b\in A\). Then \(f_{J}\equiv 0\) modulo \(W_{-1}(\mathfrak{z}_{J})\) along \(Z_{J}^{*}\cap A\cap U\), and (3.37) implies
\[\mathbf{i}^{2n-m}Q\left(e_{0}\,,\,N^{m-n}\,\overline{e_{0}}\right)\ >\ 0\,. \tag{3.39}\]
Since \(f_{J}\) takes value in the centralizer \(\mathfrak{z}_{J}\) of the \(\{N_{j}\}_{j=1}^{k}\), it follows from Lemma 3.41 that
\[Q\left(\exp(f_{J})\cdot e_{0}\,,\,\overline{\lambda\,\exp(f_{J})\cdot e_{ \infty}}\right)\quad\text{is a positive multiple of}\quad h_{J} \tag{3.40}\]
By (3.2), (3.18) and (3.24)
\[Q\left(\exp(f_{J})\cdot e_{0}\,,\,\overline{\lambda\,\exp(f_{J})\cdot e_{\infty}} \right)\ =\ h\big{|}_{Z^{*}_{J}\cap U}\,.\]
This establishes the theorem.
**Lemma 3.41**.: _Recall the scalar \(\lambda\) defined by (3.1). Suppose that \(N\in W_{-2}(\mathfrak{g}_{\mathbb{R}})\) polarizes some mixed Hodge structure \((W,F^{\prime})\) arising along \(A\). Then \(\overline{\lambda\,e_{\infty}}\) is a positive multiple of \(\mathbf{i}^{2n-m}\,N^{m-n}\,\overline{e_{0}}\)._
Proof.: By Remark 2.12,
\[N\ \in\ \bigoplus_{p,q\leq-1}\mathfrak{g}_{W,F}^{p,q}\,.\]
Recall that \(e_{0}\in H^{\mathfrak{n},\mathfrak{m}-\mathfrak{n}}_{W,F}\). Then (2.7) and (3.8) imply \(N^{\mathfrak{m}-\mathfrak{n}}e_{0}\in H^{2n-m,0}_{W,F}\). It follows from \(H^{2n-m,0}_{W,F}=\operatorname{span}_{\mathbb{C}}\{e_{\infty}\}\) that \(N^{\mathfrak{m}-\mathfrak{n}}e_{0}\) is a multiple of \(e_{\infty}\). By (2.4) and (3.1) we have
\[Q(e_{0}\,,\,\overline{\lambda\,e_{\infty}})\ =\ 1\,.\]
Taken with (3.39), this implies the lemma.
## 4. The hermitian symmetric case
**Theorem 4.1**.: _If \(D\) is hermitian, then the function \(h:X\to\mathbb{R}\) is smooth, and \(-\log h\) is plurisubharmonic._
The essential point in the proof of Theorem 4.1 is the following lemma.
**Lemma 4.2**.: _If \(D\) is hermitian, then the subspace \(\mathfrak{s}_{F}^{\perp}\) of (2.10) centralizes the nilpotent elements \(N_{1},\dots,N_{k}\) of §3.2.2._
Proof of Theorem 4.1: smoothness.: By Lemma 4.2, the function \(\log\zeta:U\to\mathfrak{s}_{F}^{\perp}\) of §3.2.4 takes value in the centralizer of the \(N_{1},\dots,N_{k}\). This implies that \(\hat{\zeta}(t)=\zeta(t)\)
is smooth. And since \(\hat{\zeta}(t)\) centralizes the \(N_{1},\ldots,N_{k}\) it follows from Corollary 3.32 and §3.2.7 that
\[Q(\eta_{0},\overline{\eta_{\infty}})\ =\ Q\left(\hat{\zeta}(t)\cdot e_{0}\,,\, \overline{\hat{\zeta}(t)\cdot e_{\infty}}\right) \tag{4.3}\]
is smooth. It now follows from the definition (3.2) that \(h\) is smooth.
Proof of Theorem 4.1: plurisubharmonicity.: By Lemma 3.41 and (4.3) we have
\[Q(\eta_{0},\overline{\lambda\eta_{\infty}})\ =\ \mathfrak{i}^{2n-\mathfrak{m}}\,Q \left(\hat{\zeta}(t)\cdot e_{0}\,,\,N^{\mathfrak{m}-\mathfrak{n}}\,\overline{ \hat{\zeta}(t)\cdot e_{0}}\right)\,.\]
By (3.2) and Theorem 3.3
\[h\ =\ \mathfrak{i}^{2n-\mathfrak{m}}\,Q\left(\hat{\zeta}(t)\cdot e_{0}\,,\,N^{ \mathfrak{m}-\mathfrak{n}}\,\overline{\hat{\zeta}(t)\cdot e_{0}}\right)\,,\]
modulo rescaling by a positive constant. Since \(N\) polarizes the mixed Hodge structure \((W,F)\), and \(\hat{\zeta}(t)\) centralizes \(N\), it follows that \(N\) also polarizes the mixed Hodge structure \((W,\hat{\zeta}(t)\cdot F)\), [10]. This yields a strengthening of (3.40): \(h\) is a Hodge metric, for the polarized Hodge structures \(F^{p}(\mathrm{Gr}^{W}_{\ell})\), on all of \(X\). Since this metric has nonpositive curvature [11, 12], it follows that \(-\log h\) is plurisubharmonic on all of \(X\).
Proof of Lemma 4.2.: Suppose that \(D\) is hermitian. Then the Deligne splitting \(\mathfrak{g}_{\mathbb{C}}=\oplus\,\mathfrak{g}^{p,q}_{W,F}\) of (2.6) has the property that
\[\mathfrak{g}^{p,q}_{W,F}\ =\ 0\quad\text{ if either }|p|>1\text{ or }|q|>1.\]
By (2.5) we have
\[W_{-2}(\mathfrak{g}_{\mathbb{C}})\ =\ \mathfrak{g}^{-1,-1}_{W,F}\,,\]
and by (2.10) we have
\[\mathfrak{s}^{\perp}_{F}\ =\ \mathfrak{g}^{-1,1}_{W,F}\,\oplus\,\mathfrak{g}^{-1, 0}_{W,F}\,\oplus\,\mathfrak{g}^{-1,-1}_{W,F}\,.\]
It follows from (2.8) that
\[[\mathfrak{s}^{\perp}_{F}\,,\,W_{-2}(\mathfrak{g}_{\mathbb{C}})]\ =\ 0\,.\]
By (3.11), the nilpotent operators \(N_{j}\) of §3.2.2 lie in \(W_{-2}(\mathfrak{g}_{\mathbb{C}})\).
## 5. Criterion for smoothness
We have seen that \(h:X\to\mathbb{R}\) is smooth when \(D\) is hermitian (Theorem 4.1). Smoothness is actually a consequence of the weaker condition that \(\mathfrak{s}_{F}^{\perp}\subset W_{0}(\mathfrak{g}_{\mathbb{C}})\).
**Theorem 5.1**.: _If \(\mathfrak{s}_{F}^{\perp}\subset W_{0}(\mathfrak{g}_{\mathbb{C}})\), then \(h:X\to\mathbb{R}\) is smooth._
_Remark 5.2_.: The hypothesis that \(\mathfrak{s}_{F}^{\perp}\subset W_{0}(\mathfrak{g}_{\mathbb{C}})\) cannot be dropped, see [10].
Proof.: As in the proof of Theorem 4.1, it follows from Lemma 3.26 and §3.2.7 that
\[Q(\eta_{0},\overline{\eta_{\infty}})\ =\ Q\left(\hat{\zeta}(t)\cdot e_{0}\,,\, \overline{\hat{\zeta}(t)\cdot e_{\infty}}\right)\,.\]
In general, \(\hat{\zeta}(t)\) is not smooth. However, \(\log\zeta\in W_{0}(\mathfrak{g}_{\mathbb{C}})\) and \(N_{j}\in W_{-2}(\mathfrak{g}_{\mathbb{C}})\) imply that \(\log\hat{\zeta}(t)\) is smooth modulo \(W_{-2}(\mathfrak{g}_{\mathbb{C}})\).
As in the proof of Theorem 3.35, (3.28), (3.29) and the fact that \(W\) is \(Q\)-isotropic (3.38) imply that the map \(W_{0}(\mathfrak{g}_{\mathbb{C}})\to\mathbb{C}\) given by
\[w\ \mapsto\ Q\left(\exp(w)\cdot e_{0}\,,\,\overline{\exp(w)\cdot e_{\infty}}\right)\]
descends to a well-defined map \(W_{0}(\mathfrak{g}_{\mathbb{C}})/W_{-1}(\mathfrak{g}_{\mathbb{C}})\to\mathbb{C}\). It follows that \(Q(\eta_{0},\overline{\eta_{\infty}})\) is smooth. Smoothness of \(h\) follows from the definition (3.2).
## Appendix A Hodge diamonds
Given a mixed Hodge structure \((W,F)\) on a vector space \(V\), the _Hodge diamond_ \(\Diamond_{W,F}(V)\) is a visual representation of the Deligne splitting \(V_{\mathbb{C}}=\oplus\,V_{W,F}^{p,q}\) (§2.2.3): a configuration of points in the \((p,q)\)-plane, each labeled with \(\dim_{\mathbb{C}}V_{W,F}^{p,q}\). This device encodes much of the discrete data in \((W,F)\), and may illuminate some of the constructions here that utilize limiting mixed Hodge structures. In this appendix we consider two period domains, and list all possible Hodge diamonds coming from limiting mixed Hodge structures on those period domains. We include the diamonds \(\Diamond_{\mathbb{C}}(\mathfrak{g})\) and \(\Diamond_{\mathbb{C}}(H)\) of the induced mixed Hodge structures. In the first example
the domain is hermitian (§A.1); in the second example the domain is non-hermitian (§A.2).
### Weight \(\mathsf{w}=1\) and \(g=3\)
Suppose that \(D\) is the hermitian symmetric period domain parameterizing pure, effective, weight \(\mathsf{w}=1\) polarized Hodge structures on \(V\simeq\mathbb{Q}^{6}\). There are four possible Hodge diamonds, indexed by nonnegative integers \(0\leq a,b\in\mathbb{Z}\) satisfying \(a+b=3\). The diamonds for \(V\) and \(\mathfrak{g}\) are given by
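The diamond diagrams themselves did not survive extraction. As a hedged reconstruction (an assumption based on the standard shape of a limiting mixed Hodge structure of weight \(1\), not recovered from the source), the diamond \(\Diamond(V)\) has nodes at \((p,q)\in\{0,1\}^{2}\), with
\[\dim_{\mathbb{C}}V^{1,1}_{W,F}\,=\,a\,,\qquad\dim_{\mathbb{C}}V^{1,0}_{W,F}\,=\,\dim_{\mathbb{C}}V^{0,1}_{W,F}\,=\,b\,,\qquad\dim_{\mathbb{C}}V^{0,0}_{W,F}\,=\,a\,;\]
the corresponding diamond for \(\mathfrak{g}\) is not reconstructed here.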
The underlying limiting mixed Hodge structure is pure (\(\sigma=0\)) if and only if \(a=0\). The Hodge diamonds for \(H=\bigwedge^{3}V\) (of weight \(\mathsf{n}=3\)) are
### Weight \(\mathsf{w}=2\) and \(\mathbf{h}=(2,\mathsf{h},2)\)
Suppose that \(D\) is the (non-hermitian) period domain parameterizing pure, effective, weight \(\mathsf{w}=2\) polarized Hodge structures on \(V\) with Hodge numbers \(\mathbf{h}=(2,\mathsf{h},2)\). There are six possible Hodge diamonds. We have \(H=\bigwedge^{2}V=\mathfrak{g}\otimes\mathbb{Q}(-2)\) and \(\mathsf{n}=4\). In the diamonds below, some of the nodes are left unmarked; those missing dimensions may be determined by (2.5) and (3.9).
(A.1)–(A.6): the six pairs of Hodge diamonds \(\Diamond(V)\) and \(\Diamond(H)\). [Only the equation labels are recoverable; the diamond diagrams did not survive extraction.]
The second item implies \(W=W(N_{1}+\hat{N}_{2})[-\mathfrak{n}]\). The third item implies \(g\) preserves both weight filtrations \(W\) and \(W^{1}\). So without loss of generality, we may assume that \(F=\tilde{F}\). Then (ii) implies \(e_{\infty}\) is a nonzero multiple of \(\hat{N}^{\mathfrak{m}-\mathfrak{n}}e_{0}\).
Since \(N_{1},\hat{N}_{2}\in\mathfrak{g}_{W,F}^{-1,-1}\), \(e_{0}\in F^{\mathfrak{n}}(H_{\mathbb{C}})\) is necessarily a highest weight vector of both \(\mathfrak{sl}_{2}\)'s. Now it follows from standard \(\mathfrak{sl}_{2}\)-representation theory that there exists \(\mathfrak{n}\leq\mathfrak{m}_{1}\leq\mathfrak{m}\) such that \(e_{0}\in W^{1}_{\mathfrak{m}_{1}}(H_{\mathbb{C}})\) and \(e_{0}\not\in W^{1}_{\mathfrak{m}_{1}-1}(H_{\mathbb{C}})\); and \(\hat{N}^{\mathfrak{m}-\mathfrak{n}}e_{0}=\hat{N}_{2}^{\mathfrak{m}-\mathfrak{ m}_{1}}\,N_{1}^{\mathfrak{m}_{1}-\mathfrak{n}}e_{0}\). The latter is an element of \(W^{1}_{2\mathfrak{n}-\mathfrak{m}_{1}}(H_{\mathbb{C}})\), and so implies \(e_{\infty}\in W^{1}_{2\mathfrak{n}-\mathfrak{m}_{1}}(H_{\mathbb{C}})\).
Finally, \(Q(e_{0},\overline{e_{\infty}})\neq 0\) and the fact that the weight filtration \(W^{1}\) is \(Q\)-isotropic
\[Q(W^{1}_{k}(H),W^{1}_{\ell}(H))\ =\ 0\quad\text{ for all }\quad k+\ell<2\,\mathsf{n}\,,\]
implies \(e_{\infty}\not\in W^{1}_{2\mathfrak{n}-\mathfrak{m}_{1}-1}(H_{\mathbb{C}})\).
|
2302.12676 | Towards Computationally Efficient Responsibility Attribution in
Decentralized Partially Observable MDPs | Responsibility attribution is a key concept of accountable multi-agent
decision making. Given a sequence of actions, responsibility attribution
mechanisms quantify the impact of each participating agent to the final
outcome. One such popular mechanism is based on actual causality, and it
assigns (causal) responsibility based on the actions that were found to be
pivotal for the considered outcome. However, the inherent problem of
pinpointing actual causes and consequently determining the exact responsibility
assignment has been shown to be computationally intractable. In this paper, we aim
to provide a practical algorithmic solution to the problem of responsibility
attribution under a computational budget. We first formalize the problem in the
framework of Decentralized Partially Observable Markov Decision Processes
(Dec-POMDPs) augmented by a specific class of Structural Causal Models (SCMs).
Under this framework, we introduce a Monte Carlo Tree Search (MCTS) type of
method which efficiently approximates the agents' degrees of responsibility.
This method utilizes the structure of a novel search tree and a pruning
technique, both tailored to the problem of responsibility attribution. Other
novel components of our method are (a) a child selection policy based on linear
scalarization and (b) a backpropagation procedure that accounts for a
minimality condition that is typically used to define actual causality. We
experimentally evaluate the efficacy of our algorithm through a
simulation-based test-bed, which includes three team-based card games. | Stelios Triantafyllou, Goran Radanovic | 2023-02-24T14:56:25Z | http://arxiv.org/abs/2302.12676v1 | Towards Computationally Efficient Responsibility Attribution in Decentralized Partially Observable MDPs
###### Abstract.
Responsibility attribution is a key concept of accountable multi-agent decision making. Given a sequence of actions, responsibility attribution mechanisms quantify the impact of each participating agent on the final outcome. One such popular mechanism is based on actual causality, and it assigns (causal) responsibility based on the actions that were found to be pivotal for the considered outcome. However, the inherent problem of pinpointing actual causes and consequently determining the exact responsibility assignment has been shown to be computationally intractable. In this paper, we aim to provide a practical algorithmic solution to the problem of responsibility attribution under a computational budget. We first formalize the problem in the framework of Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) augmented by a specific class of Structural Causal Models (SCMs). Under this framework, we introduce a Monte Carlo Tree Search (MCTS) type of method which efficiently approximates the agents' degrees of responsibility. This method utilizes the structure of a novel search tree and a pruning technique, both tailored to the problem of responsibility attribution. Other novel components of our method are (a) a _child selection policy_ based on _linear scalarization_ and (b) a _backpropagation procedure_ that accounts for a minimality condition that is typically used to define actual causality. We experimentally evaluate the efficacy of our algorithm through a simulation-based test-bed, which includes three team-based card games.
Responsibility Attribution; Actual Causality; Monte Carlo Tree Search

Footnote †: journal: Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023). A. Ricci, W. York, N. Agmon, R. An (eds.), May 29 - June 2, 2023.
## 1. Introduction
One of the well-known _Gedankenexperimente_ in the AI literature on actual causality and responsibility attribution is the story of _Suzy and Billy_. As J. Y. Halpern describes it in his book on _Actual Causality_ (Haj
such as traffic light controllers (TLC), to multi-agent cooperative systems, such as warehouse robots. For an extended discussion on the application scenario of TLC see Appendix B.
Fig. 1 provides an overview of our algorithmic approach. We recognize two main challenges that this approach has to overcome. The first one is a statistical challenge and is related to the fact that in practice the _context_ under which an outcome of interest is generated cannot be always inferred. As explained in Fig. 1, to make responsibility attribution feasible in such cases, we use posterior inference over the possible values of the underlying context. The second challenge is a computational one and it is related to the computational complexity of identifying actual causes. We tackle this challenge by applying a Monte Carlo Tree Search (MCTS) type of method tailored to the problem of finding actual causes. As we show in this paper, our approach significantly outperforms baselines in terms of approximating the "ideal" responsibility assignments, obtained under no uncertainty and unlimited computational budget. Our contributions are primarily related to the design and experimental evaluation of this algorithmic framework, and they include:
* A novel **search tree** tailored to the tasks of pinpointing actual causes and attributing responsibility.
* A novel **pruning technique** that utilizes the structural properties of both the actual causality definition of Triantafyllou et al. (2017) and the responsibility attribution mechanism of Chockler and Halpern (2017).
* **Responsibility Attribution-MCTS (RA-MCTS)**, a new **search method** for efficiently finding actual causes under a given _causal setting_. Compared to standard MCTS, the main novel components of RA-MCTS are in its _simulation phase, evaluation function, child selection policy_, and _backpropagation phase_.
* **Experimental test-bed** for evaluating the efficacy of RA-MCTS. The test-bed is based on three card games, _Euchre_(Rue et al., 2019), _Spades_(Rue et al., 2019), and a team variation of the game _Goofspiel_(Rue et al., 2019). We deem the test-bed to be generally useful for studying actual causality and responsibility attribution in multi-agent sequential decision making. Our experimental results show that RA-MCTS almost always outperforms baselines, such as random search, brute-force search, or modifications of RA-MCTS. Our results also show that in cases where the underlying context cannot be exactly inferred, computing a good approximation of the "ideal" responsibility assignment might not be possible, even under unlimited computational budget. This can happen when the posterior distribution over the possible contexts is not informative enough. 1
Footnote 1: Code to reproduce the experiments is available at [https://github.com/stelios30/aamaz3-responsibility-attribution-mCTS.git](https://github.com/stelios30/aamaz3-responsibility-attribution-mCTS.git).
### Additional Related Work
This paper is related to works on responsibility and blame attribution in multi-agent decision making (Blei et al., 2017; Blei et al., 2018; Blei et al., 2019; Rue et al., 2019; Rue et al., 2019).
To the best of our knowledge, there is no prior work on developing general algorithmic approaches on efficiently computing degrees of causal responsibility. The closest we could find, are domain-specific applications of the Chockler and Halpern responsibility approach (2017) in program verification (Hirsch et al., 2017; Blei et al., 2019). Chapter 8 of (Kalpern and Kalpern, 2017) provides a general overview of such applications. Additionally, to our knowledge, the only general algorithmic approach on determining causality, and subsequently responsibility attribution, is that of (Rue et al., 2019). Their approach on checking actual causality utilizes SAT solvers and thus is significantly different than ours. They also restrict their focus to binary models, as opposed to ours which considers categorical variables.
The only other work that has used the same framework as the one used in this paper is that of Triantafyllou et al. (2017). Close to our work in this aspect, Buesing et al. (Buesing et al., 2019) and Oberst and Sontag (Sontag, 2019) have considered a combination of SCMs with POMDPs. Tsirtsis et al. (Taritsis et al., 2019) utilize a connection between SCMs and MDPs to generate counterfactual explanations for sequential decision making.
This paper is also related to a line of work which introduces variants of MCTS that apply to specific domains. For instance, Schadd et al. (Schadd et al., 2017) and Bjornsson and Finsson (Bjorsson and Finsson, 2019) propose modifications to MCTS in order to adapt it to single-player games. We refer the interested reader to Browne et al. (2019) for more such examples.
Figure 1. The figure provides an overview of our approach to responsibility attribution in Dec-POMDPs. Based on received execution traces, that is, the agents’ trajectories, our approach first aims to infer the underlying context under which these decisions were made. To do so we utilize Dec-POMDP SCMs – a framework introduced in (Taritsis et al., 2019) that combines Dec-POMDPs and SCMs. The context in our case models randomness/noise, and together with the underlying causal model forms the causal setting. The next step is to apply an MCTS type of search to infer actual causes that are consistent with the definition of actual causality introduced in (Taritsis et al., 2019). This search method is formally specified in a language that is an extension of propositional logic. In order to determine the agents’ degrees of responsibility, we apply the responsibility attribution method of (Kalpern and Kalpern, 2017) over the actual causes found by the search. In cases where the underlying context cannot be exactly inferred, we use posterior inference over possible contexts. More specifically, we first draw samples from the posterior over possible contexts, and then repeat the process described above for each sampled context. Agents are assigned the average degrees of responsibility over all samples.
## 2. Framework and Background
In this section, we first give an overview of a formal framework which allows us to study responsibility attribution in the context of multi-agent sequential decision making. This framework is adopted from Triantafyllou et al. (2017), and relies on decentralized partially observable Markov decision processes (Dec-POMDPs) (Bartos et al., 2016; Bartos et al., 2017) and structural causal models (SCMs) (Sutton et al., 2017; Sutton and Sutton, 2018). Next, we provide the necessary background on actual causality and responsibility attribution. Finally, we state the responsibility attribution problem and highlight its main algorithmic challenges.
### Dec-POMDPs
The first component of this framework are Dec-POMDPs with \(n\) agents; state space \(\mathcal{S}\); joint action space \(\mathcal{A}=\times_{i=1}^{n}\mathcal{A}_{i}\), where \(\mathcal{A}_{i}\) is the action space of agent \(i\); transition probability function \(P:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\); joint observation space \(\mathcal{O}=\times_{i=1}^{n}\mathcal{O}_{i}\), where \(\mathcal{O}_{i}\) is the observation space of agent \(i\); observation probability function \(\Omega:\mathcal{S}\rightarrow\Delta(\mathcal{O})\); finite time horizon \(T\); initial state distribution \(\sigma\). Here \(\Delta\) denotes the probability simplex. For ease of notation, rewards are considered to be part of observations.
Each agent \(i\) is modeled with an information state space \(I_{i}\); decision making policy \(\pi_{i}:I_{i}\rightarrow\Delta(\mathcal{A}_{i})\); information probability function \(Z_{i}:I_{i}\times\mathcal{A}_{i}\times\mathcal{O}_{i}\to I_{i}\); initial information probability function \(Z_{i,0}:\mathcal{O}_{i}\to I_{i}\). We denote with \(\pi_{i}(a_{i}\,|\,i_{i})\) agent \(i\)'s probability of taking action \(a_{i}\) given information state \(i_{i}\), and with \(\pi\) the collection of all agents' policies, i.e., the agents' joint policy.
We assume spaces \(\mathcal{S}\), \(\mathcal{A}\), \(\mathcal{O}\) and \(\mathcal{I}_{i}\) to be finite and discrete.
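For concreteness, the sketch below mirrors this tuple as plain containers. The field names follow the notation above; everything else (concrete spaces and functions) would be supplied by the environment at hand, and this is an illustrative assumption rather than code from the paper's repository.

```python
# Minimal sketch of the Dec-POMDP tuple and a per-agent model as plain
# containers. Field names mirror the notation in the text.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DecPOMDP:
    states: List                # S
    joint_actions: List         # A = A_1 x ... x A_n
    transition: Callable        # P: S x A -> Delta(S)
    joint_observations: List    # O = O_1 x ... x O_n
    observe: Callable           # Omega: S -> Delta(O)
    horizon: int                # T
    initial: Callable           # sigma: Delta(S)

@dataclass
class AgentModel:
    info_states: List           # I_i
    policy: Callable            # pi_i: I_i -> Delta(A_i)
    update: Callable            # Z_i: I_i x A_i x O_i -> I_i
    init: Callable              # Z_{i,0}: O_i -> I_i
```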
### Dec-POMDPs and Structural Causal Models
In order to reason about actual causality and responsibility attribution in multi-agent sequential decision making, Triantafyllou et al. (2017) view Dec-POMDPs as SCMs.2 More specifically, given a Dec-POMDP \(\mathcal{M}=(\mathcal{S},\{1,...,n\},\mathcal{A},P,\mathcal{O},\Omega,T,\sigma)\) and a model \(m_{i}=(I_{i},\pi_{i},Z_{i},Z_{i,0})\) for each agent \(i\), they construct a SCM \(\mathcal{C}\), which they refer to as Dec-POMDP SCM. Under \(\mathcal{C}\), functions \(P\), \(\Omega\), \(\{Z_{i}\}_{i\in\{1,...,n\}}\) and \(\{\pi_{i}\}_{i\in\{1,...,n\}}\) are parameterized as follows
Footnote 2: They establish a connection between the two by building on prior work from Buesing et al. (2017).
\[S_{t}=g_{S_{t}}(S_{t-1},A_{t-1},U_{S_{t}}),\quad O_{t}=g_{O_{t}}(S_{t},U_{O_{t}}),\] \[I_{i,t}=g_{I_{i,t}}(I_{i,t-1},A_{i,t-1},O_{i,t},U_{I_{i,t}}),\quad A_{i,t}=g_{A_{i,t}}(I_{i,t},U_{A_{i,t}}), \tag{1}\]
where \(g_{S_{t}}\), \(g_{O_{t}}\), \(g_{I_{i,t}}\) and \(g_{A_{i,t}}\) are deterministic functions, and \(U_{S_{t}}\), \(U_{O_{t}}\), \(U_{I_{i,t}}\) and \(U_{A_{i,t}}\) are independent noise variables with dimensions \(|\mathcal{S}|\), \(|\mathcal{O}|\), \(|I_{i}|\) and \(|\mathcal{A}_{i}|\), respectively.3
Footnote 3: Such a parameterization is always possible (Bartos et al., 2017).
Following SCM terminology (Sutton et al., 2017), we refer to state variables \(S_{t}\), observation variables \(O_{t}\), information variables \(I_{i,t}\) and action variables \(A_{i,t}\) as the endogenous variables of \(\mathcal{C}\). Furthermore, we call noise variables \(U\) the exogenous variables of \(\mathcal{C}\), and a setting \(\vec{u}\) of \(U\) a context. Note that given a context \(\vec{u}\) one can compute the value of any endogenous variable in \(\mathcal{C}\) by consecutively solving equations in (1), also called structural equations. Therefore, a Dec-POMDP SCM-context pair \((\mathcal{C},\vec{u})\), also called causal setting, specifies a unique trajectory \(\tau=\{(s_{t},a_{t})\}_{t=0}^{T-1}\).
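To make this determinism concrete, here is a minimal sketch of unrolling a causal setting \((\mathcal{C},\vec{u})\). The toy single-agent chain-world and all function names are illustrative assumptions, not the implementation behind the experiments; the point is that once the noise \(\vec{u}\) is fixed, the structural equations in Eq. (1) generate one and only one trajectory.

```python
# Sketch: unrolling a causal setting (C, u) into a unique trajectory.
# With the noise u drawn once and fixed, every structural equation is
# deterministic, so repeated rollouts reproduce the same trajectory.
import random

T = 3                                     # time horizon
rng = random.Random(0)
# Context: one noise value per (variable, time-step), drawn up front.
u = {("S", t): rng.random() for t in range(T + 1)}
u.update({("A", t): rng.random() for t in range(T)})

def g_state(s_prev, a_prev, noise):       # S_t = g_S(S_{t-1}, A_{t-1}, U_S)
    return (s_prev + a_prev + (1 if noise > 0.5 else 0)) % 4

def g_action(info, noise):                # A_t = g_A(I_t, U_A); here I_t = S_t
    return 0 if noise < 0.7 else 1        # the noise indexes the policy draw

def rollout(intervention=None):           # intervention: {t: forced action}
    s, traj = 0, []
    for t in range(T):
        a = g_action(s, u[("A", t)])
        if intervention and t in intervention:
            a = intervention[t]           # A_t <- a' replaces g_A at step t
        traj.append((s, a))
        s = g_state(s, a, u[("S", t + 1)])
    return traj, s

print(rollout())                          # factual trajectory
print(rollout({1: 1}))                    # counterfactual under A_1 <- 1
```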
Another well-known notion in causality that is important for our analysis is that of interventions (Sutton et al., 2017).4 An intervention \(A_{i,t}\gets a^{\prime}_{i}\) on SCM \(\mathcal{C}\) is performed by replacing \(g_{A_{i,t}}(I_{i,t},U_{A_{i,t}})\) in Eq. (1) with \(a^{\prime}_{i}\), also called the counterfactual action of the intervention. We denote the resulting SCM by \(C^{A_{i,t}\leftarrow a^{\prime}_{i}}\). If one has knowledge over \(\mathcal{C}\) as well as the context \(\vec{u}\) under which a trajectory \(\tau\) was generated, they can efficiently compute the counterfactual outcome of that trajectory under some intervention \(A_{i,t}\leftarrow a^{\prime}_{i}\) on \(\mathcal{C}\). This can be done by simply generating the counterfactual trajectory \(\tau^{cf}\) that corresponds to the causal setting \((C^{A_{i,t}\leftarrow a^{\prime}_{i}},\vec{u})\). In other words, they can predict exactly what would have happened in that scenario had agent \(i\) taken action \(a^{\prime}_{i}\) instead of \(a_{i,t}\). However, the true underlying SCM \(\mathcal{C}\) or context \(\vec{u}\) are not always available in practice. Following a standard modeling approach (Gundlach et al., 2017; Bartos et al., 2017; Bartos et al., 2017), we restrict our focus to a specific class of SCMs, the Gumbel-Max SCMs, introduced by Oberst and Sontag (2017). More details on Gumbel-Max SCMs and how they can be integrated in the Dec-POMDP SCM framework can be found in Appendix C.5
Footnote 4: In this paper, we consider interventions on action variables only.
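The Gumbel-Max construction admits a compact illustration. The sketch below is a toy example of ours, not the contents of Appendix C: it represents a categorical action draw as an argmax over Gumbel-perturbed log-probabilities, and shows that fixing the Gumbel noise while swapping in a post-intervention distribution yields the counterfactual draw consistent with the factual one.

```python
# Sketch of the Gumbel-max mechanism behind Gumbel-Max SCMs: a categorical
# draw A ~ pi is written as argmax_a(log pi(a) + g_a) with i.i.d. Gumbel
# noise g. Reusing the same g under a different distribution answers the
# counterfactual query consistently with the factual draw.
import math, random

rng = random.Random(7)

def gumbel():
    return -math.log(-math.log(rng.random()))

pi_factual = [0.7, 0.2, 0.1]          # factual action distribution pi(.|i_i)
pi_counter = [0.1, 0.2, 0.7]          # distribution after an intervention

g = [gumbel() for _ in pi_factual]    # the context: one Gumbel per action
factual = max(range(3), key=lambda a: math.log(pi_factual[a]) + g[a])
counter = max(range(3), key=lambda a: math.log(pi_counter[a]) + g[a])
print(factual, counter)
```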
### Actual Causality
Next, we present a language for reasoning about actual causality in (Dec-POMDP) SCMs (Sutton et al., 2017). Let \(\mathcal{C}\) be a Dec-POMDP SCM. A primitive event in \(\mathcal{C}\) is any formula of the form \(V=v\), where \(V\) is an endogenous variable of \(\mathcal{C}\) and \(v\) is a valid value of \(V\). We say that a Boolean combination of primitive events constitutes an event. Given a context \(\vec{u}\) and an event \(\phi\), we write \((C,\vec{u})\models\phi\) to denote that \(\phi\) takes place in the causal setting \((C,\vec{u})\). Furthermore, for a set of interventions \(\vec{A}\leftarrow\vec{a}^{\prime}\) on \(\mathcal{C}\), we write \((C,\vec{u})\models[\vec{A}\leftarrow\vec{a}^{\prime}]\phi\), if \((C^{\vec{A}\leftarrow\vec{a}^{\prime}},\vec{u})\models\phi\). For example, let \(\tau=\{(s_{t},a_{t})\}_{t=0}^{T-1}\) be the trajectory that corresponds to \((C,\vec{u})\). Consider the counterfactual scenario in which agent \(i\) takes action \(a^{\prime}_{i}\) instead of \(a_{i,t}\) in \(\tau\), and the process transitions to state \(s\) at \(t+1\). This can be expressed by
\[(\mathcal{C},\vec{u})\models[A_{i,t}\leftarrow a^{\prime}_{i}](S_{t+1}=s).\]
In the context of Dec-POMDP SCMs, actual causality is related to the process of pinpointing agents' actions that were critical for \(\phi\) to happen in \((\mathcal{C},\vec{u})\). In this paper, we adopt the actual cause definition proposed by Triantafyllou et al. (2017). Their definition utilizes the agents' information states in order to explicitly account for the temporal dependencies between agents' actions.
Definition 2.1 (Actual Cause).: \(\vec{A}=\vec{a}\) is an actual cause of the event \(\phi\) in \((\mathcal{C},\vec{u})\) under the contingency \(\vec{W}=\vec{w}^{\prime}\) if the following conditions hold:
* _AC1._ \((C,\vec{u})\models(\vec{A}=\vec{a})\) and \((C,\vec{u})\models\phi\).
* _AC2._ There is a setting \(\vec{a}^{\prime}\) of the variables in \(\vec{A}\), such that \[(C,\vec{u})\models[\vec{A}\leftarrow\vec{a}^{\prime},\vec{W}\leftarrow\vec{w}^{\prime}]\neg\phi\]
* _AC3._ \(\vec{A}\cup\vec{W}\) is minimal w.r.t. conditions _AC1_ and _AC2_.
* _AC4._ For every agent \(i\) and time-step \(t\) such that \(A_{i,t}\in\vec{A}\) and \((C,\vec{u})\models(I_{i,t}=\imath_{i,t})\), it holds that \[(C,\vec{u})\models[\vec{A}\leftarrow\vec{a}^{\prime},\vec{W}\leftarrow\vec{w}^{\prime}](I_{i,t}=\imath_{i,t})\]
* _AC5._ For every agent \(i\) and time-step \(t\) such that \(A_{i,t}\in\vec{W}\) and \((C,\vec{u})\models(I_{i,t}=\imath_{i,t})\), it holds that \[(C,\vec{u})\models[\vec{A}\leftarrow\vec{a}^{\prime},\vec{W}\leftarrow\vec{w}^{\prime}]\neg(I_{i,t}=\imath_{i,t})\]
We say that the tuple \((\vec{W},\vec{w}^{\prime},\vec{a}^{\prime})\) is a witness of \(\vec{A}=\vec{a}\) being an actual cause of \(\phi\) in \((C,\vec{u})\).
_AC1_ requires that both \(\vec{A}=\vec{a}\) and \(\phi\) happened in \((C,\vec{u})\). _AC2_ implies that \(\phi\) would not have occurred under the interventions \(\vec{A}\leftarrow\vec{a}^{\prime}\) and \(\vec{W}\leftarrow\vec{w}^{\prime}\) on \((C,\vec{u})\). _AC3_ is a minimality condition, which ensures that there are no subsets \(\vec{A}^{\prime}\) and \(\vec{W}^{\prime}\) of \(\vec{A}\) and \(\vec{W}\), and setting \(\vec{w}^{\prime\prime}\) of \(\vec{W}^{\prime}\), such that \(\vec{A}^{\prime}=\vec{a}^{\prime}\) and \(\vec{W}^{\prime}=\vec{w}^{\prime\prime}\) satisfy _AC1_ and _AC2_, where \(\vec{a}^{\prime}\) is the restriction of \(\vec{a}\) to the variables of \(\vec{A}^{\prime}\). _AC4_ (resp. _AC5_) requires that the information states which correspond to the action variables in \(\vec{A}\) (resp. \(\vec{W}\)) have the same (resp. different) values in the counterfactual scenario \((C^{\vec{A}\leftarrow\vec{a}^{\prime},\vec{W}\leftarrow\vec{w}^{\prime}},\vec{u})\) and the actual scenario \((C,\vec{u})\). We say that a conjunct of an actual cause \(\vec{A}=\vec{a}\) constitutes a part of that cause. If for some \(\vec{A}=\vec{a}\) and \(\vec{W}=\vec{w}^{\prime}\) conditions _AC1_, _AC2_, _AC4_ and _AC5_ hold, we say that \(\vec{A}=\vec{a}\) is a candidate actual cause of \(\phi\) in \((C,\vec{u})\) under the contingency \(\vec{W}=\vec{w}^{\prime}\). We also say that a set of interventions \(\vec{X}\leftarrow\vec{x}^{\prime}\) constitutes a (candidate) actual cause-witness pair according to Definition 2.1 if there exists such a pair \((\vec{A}=\vec{a},(\vec{W},\vec{w}^{\prime},\vec{a}^{\prime}))\), where \(\vec{X}=\vec{A}\cup\vec{W}\), and \(\vec{a}^{\prime}\) and \(\vec{w}^{\prime}\) are the projections of \(\vec{x}^{\prime}\) onto \(\vec{A}\) and \(\vec{W}\), respectively.
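To ground these conditions, the sketch below checks whether a given intervention set constitutes a candidate actual cause-witness pair. The toy additive game is our own assumption, not the paper's test-bed: _AC1_ and _AC2_ are tested directly, while _AC4_ and _AC5_ sort the intervened variables into the cause part (information state preserved) and the contingency (information state changed). _AC3_ is deliberately left out, since, as discussed in Section 2.5, minimality can only be certified by also checking subsets.

```python
# Sketch: classifying a set of interventions X <- x' as a *candidate*
# actual cause-witness pair. Toy game: two agents each play 1 at every
# step, an agent's information state is the running total it observes,
# and the event phi is "the final total is too low".
def rollout(interventions):              # interventions: {(agent, t): action}
    total, infos = 0, {}
    for t in range(2):                   # horizon T = 2
        for i in range(2):               # agents 0 and 1 act in turn
            infos[(i, t)] = total        # I_{i,t}: what i knows before acting
            total += interventions.get((i, t), 1)  # default policy plays 1
    return total, infos

def phi(total):
    return total < 6                     # the undesired outcome

f_total, f_infos = rollout({})
assert phi(f_total)                      # AC1: phi holds in the actual run

X = {(0, 0): 2, (1, 1): 2}               # candidate intervention set X <- x'
c_total, c_infos = rollout(X)
if not phi(c_total):                     # AC2: the interventions flip phi
    # AC4/AC5: variables with a preserved information state form the cause
    # part; variables with a changed information state form the witness.
    cause = [v for v in X if c_infos[v] == f_infos[v]]
    witness = [v for v in X if c_infos[v] != f_infos[v]]
    print("cause part:", cause, "witness part:", witness)
    # AC3 (minimality) is NOT verified here: it requires testing subsets.
```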
### Responsibility Attribution
Responsibility attribution is a concept closely related to actual causality, which aims to determine the extent to which agents' actions were pivotal for some outcome. In this paper, we adopt a responsibility attribution approach which was first introduced by Chockler and Halpern (Chockler and Halpern, 1976), and then adapted by Triantafyllou et al. (Triantafyllou et al., 2007) to the setting of Dec-POMDP SCMs. Given a causal setting \((C,\vec{u})\) and an event \(\phi\), the Chockler and Halpern approach (henceforth CH) uses the following function to determine an agent \(i\)'s degree of responsibility for \(\phi\) in \((C,\vec{u})\) relative to a set of interventions \(\vec{X}\leftarrow\vec{x}^{\prime}\) on \(C\) and an actual causality definition \(\mathcal{D}\)
\[dr_{i}((C,\vec{u}),\phi,\vec{X}\leftarrow\vec{x}^{\prime},\mathcal{D})=\frac{m_{i}}{|\vec{X}|}, \tag{2}\]
where \(m_{i}\) is computed as follows. If \(\vec{X}\leftarrow\vec{x}^{\prime}\) constitutes an actual cause-witness pair \((\vec{A}=\vec{a},(\vec{W},\vec{w}^{\prime},\vec{a}^{\prime}))\) of \(\phi\) in \((C,\vec{u})\) according to \(\mathcal{D}\), then \(m_{i}\) denotes the number of \(i\)'s action variables in \(\vec{A}\). Otherwise, \(m_{i}\) is \(0\). In this paper, an agent's degree of responsibility according to the CH approach is computed as follows.
Definition 2.2.: Consider a causal setting \((C,\vec{u})\) and an event \(\phi\) such that \((C,\vec{u})\models\phi\). With \(\mathcal{D}\) being Definition 2.1, an agent \(i\)'s degree of responsibility for \(\phi\) in \((C,\vec{u})\) is equal to the maximum value \(dr_{i}((C,\vec{u}),\phi,\vec{X}\leftarrow\vec{x}^{\prime},\mathcal{D})\) over all possible sets of interventions \(\vec{X}\leftarrow\vec{x}^{\prime}\) on \(C\).
The CH definition captures some key ideas of responsibility attribution. First, an agent's degree of responsibility depends on the size of an actual cause \(\vec{A}=\vec{a}\) the agent participates in. Second, it depends on the amount of participation the agent has in that cause. Finally, it depends on the size of the smallest contingency of \(\vec{A}=\vec{a}\), i.e., the minimum number of interventions that need to be performed on \(C\) in order to make \(\phi\) counterfactually depend on \(\vec{A}\).
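A minimal sketch of Eq. (2) and Definition 2.2 follows. The hard-coded example pairs are our own placeholders; in RA-MCTS they would be the cause-witness pairs found by the search.

```python
# Sketch of the Chockler-Halpern degree: each agent's responsibility is the
# maximum over found pairs of (its actions in the cause part) / |X|, where
# |X| is the size of the cause plus its contingency.
found_pairs = [
    # (cause part A, contingency W) as lists of (agent, time) variables
    ([(0, 1)], []),                      # singleton cause, empty witness
    ([(0, 2), (1, 0)], [(1, 3)]),        # joint cause under one contingency
]

def degree(agent):
    best = 0.0
    for cause, witness in found_pairs:
        m_i = sum(1 for (i, _) in cause if i == agent)
        best = max(best, m_i / (len(cause) + len(witness)))
    return best

print([degree(i) for i in (0, 1)])       # -> [1.0, 0.333...]
```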
### Problem Statement and Challenges
Given a trajectory \(\tau\) generated by causal setting \((C,\vec{u})\), the general problem we are interested in is computing the agents' degrees of responsibility for the final outcome \(\phi^{\tau}\) of \(\tau\). In this paper, we focus on two main challenges of this problem. The first one is related to the computational complexity of the problem. The second one has to do with the fact that in practice context \(\vec{u}\) might not be known.
In order to address the first challenge, we view responsibility attribution as a **multi-objective search problem** with limited computational resources. An algorithmic solution to this problem should find a set of interventions, for each agent, that maximizes the function in Eq. (2). The pipeline we consider for such algorithms can be summarized as follows. First, the algorithm searches for sets of interventions that constitute actual cause-witness pairs of the outcome \(\phi^{\tau}\). Next, based on the found actual cause-witness pairs the algorithm computes the responsibility assignment. A natural question that arises is how to choose which intervention sets to evaluate before the computational budget is exhausted. We believe that the answer to this question lies in the structural properties of Definitions 2.1 and 2.2 (Sections 3.1-3.3). Another question related to this problem is how to recognize if a set of interventions is in fact an actual cause-witness pair. Even though it is easy to infer whether a set of interventions constitutes a candidate actual cause-witness pair of \(\phi^{\tau}\) when \((C,\vec{u})\) is known, it is impossible to know if it is minimal, i.e., if it satisfies condition _AC3_, unless all of its subsets are first checked for _AC1_ and _AC2_. Despite that, there are countermeasures that one can implement to reduce the negative impact that _AC3_ might have on the search process (Section 3.3).
To address the second challenge, we view responsibility attribution as an **inference problem**. Our approach is to build on the above mentioned search algorithm, and by using posterior inference design a mechanism that can efficiently estimate responsibility assignments under context uncertainty (Section 3.4).
## 3. Algorithmic Solution
In this section, we analyze our algorithmic solutions to the search and inference problems described in Section 2.5. First, we propose a novel search tree tailored to the tasks of pinpointing actual causes and attributing responsibility. Next, we propose a pruning technique that utilizes the structural properties of Definitions 2.1 and 2.2. We then propose RA-MCTS, a novel Monte Carlo Tree Search (MCTS) type of method for finding approximate responsibility assignments under limited computational budget. Finally, we propose an extension of RA-MCTS to the unknown context regime.
### Search Tree
Fig. 2 illustrates an instantiation of our proposed search tree. Note that the tree is defined relative to the causal setting \((C,\vec{u})\), that is, every state, observation, information state and action is deterministically computed by the structural equations of \(C\) together with context \(\vec{u}\) (see Eq. (1)). Nodes in this tree fall into one of 5 categories. At the top of the tree, we have the _Root_ node, where the time-step of the first intervention is selected. Nodes \(t=0\), \(t=1\) and \(t=2\) in
Fig. 2 correspond to their respective time-steps, and we call them _TimeStep_ nodes. From a _TimeStep_ node, the agent of the next intervention is picked. Nodes \(ag=0\) and \(ag=1\) correspond to agents \(0\) and \(1\), and they are categorized as _Agent_ nodes. From an _Agent_ node, the counterfactual action of the next intervention is chosen from the available options. More specifically, let \(ag=i\) be the node where the search is currently on, and \(t=t^{\prime}\) be that node's parent. Let also \(\vec{X}\leftarrow\vec{x}^{\prime}\) denote the current set of interventions encoded in \(ag=i\). The available options from node \(ag=i\) then include all the valid actions that \(i\) could have taken at time-step \(t^{\prime}\), except the action that it would have normally taken given the current set of interventions, i.e., the action determined by the causal setting \((\vec{C}^{\vec{X}\leftarrow\vec{x}^{\prime}},\vec{u})\). Nodes \(a^{\prime}=3\), \(a^{\prime}=5\) and \(a^{\prime}=6\) in Fig. 2 correspond to such counterfactual actions and they are characterized as _Action_ nodes. From an _Action_ node, search can either stop growing the intervention set, and hence transition to a _Leaf_ node \(L\) or continue by transitioning to the next _TimeStep_ node. If search transitions to \(L\), then the current set of interventions is evaluated. In case this set of interventions is found to change the final outcome \(\phi^{\tau}\), it is added to the set of found candidate actual cause-witness pairs.
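One possible encoding of this tree in code is sketched below; the class layout and names are our assumptions, not the released implementation. Each node stores only its kind and value, and the intervention set encoded in a node is read off the path back to the root.

```python
# Sketch of the search-tree layout from Fig. 2. A path
# Root -> TimeStep -> Agent -> Action -> ... encodes a growing intervention
# set; an Action node may close the set (Leaf) or open the next TimeStep.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                  # "root" | "timestep" | "agent" | "action" | "leaf"
    value: object = None       # time-step index, agent id, or action
    parent: "Node" = None
    children: list = field(default_factory=list)

    def interventions(self):
        """Read the encoded set {(agent, t): a'} off the path to the root."""
        out, node = {}, self
        while node.parent is not None:
            if node.kind == "action":
                ag, t = node.parent.value, node.parent.parent.value
                out[(ag, t)] = node.value
            node = node.parent
        return out

root = Node("root")
ts = Node("timestep", 1, root); root.children.append(ts)
ag = Node("agent", 0, ts); ts.children.append(ag)
ac = Node("action", 3, ag); ag.children.append(ac)
leaf = Node("leaf", parent=ac); ac.children.append(leaf)
print(leaf.interventions())    # -> {(0, 1): 3}, i.e. A_{0,1} <- 3 (Fig. 2)
```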
### Pruning
Apart from its intuitive nature and computational efficiency (Section 4.3), the search tree of Fig. 2 also allows us to apply a number of effective pruning techniques. Pruning can take place at any point during the search and it is basically the process of removing branches from a tree that cannot possibly improve the output of the algorithm. In our setting, this means that a node need not be further visited if it becomes apparent that the evaluation of any leaf node reachable from that node cannot in any way influence the final responsibility assignment. Our method prunes away a node (and all of its descendants) if any of the following conditions hold (a minimal sketch of the central minimality check follows the list):
* It is a _Leaf_ node that has already been evaluated.
* It is the closest ancestor _Agent_ node of a _Leaf_ node \(L\), such that \(L\)'s encoded set of interventions \(\vec{X}\leftarrow\vec{x}^{\prime}\) constitutes a candidate actual cause-witness pair. Note that the set of interventions encoded in any descendant of the pruned _Agent_ node is either identical to \(\vec{X}\leftarrow\vec{x}^{\prime}\) apart from its last counterfactual action, or its variable set is a superset of \(\vec{X}\), and hence it is non-minimal according to Definition 2.1.
* It is an _Agent_ node whose encoded set of interventions is non-minimal w.r.t. the current set of found candidate actual cause-witness pairs.
* It is a fully-expanded node with all of its children already pruned.
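The third condition is the workhorse of this list. A minimal sketch of that check, under the assumption that found pairs are stored by their variable sets, is:

```python
# Sketch of the minimality-based pruning rule: a node whose encoded variable
# set contains an already-found candidate cause-witness pair can never yield
# a minimal actual cause (AC3), so its subtree is discarded.
found_variable_sets = [frozenset({(0, 1)}), frozenset({(0, 2), (1, 0)})]

def prunable(encoded_vars):
    encoded = frozenset(encoded_vars)
    return any(found <= encoded for found in found_variable_sets)

print(prunable({(0, 1), (1, 3)}))   # True: extends a found pair, non-minimal
print(prunable({(1, 3)}))           # False: still worth exploring
```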
### Responsibility Attribution Using Monte Carlo Tree Search (RA-MCTS)
The search algorithm we propose is based on the well-known Monte Carlo Tree Search (MCTS) method (Han et al., 2017). We refer to our algorithm as RA-MCTS because it is specific to the task of responsibility attribution. The main differences between RA-MCTS and standard MCTS (Han et al., 2017; Chen et al., 2017) are in their _simulation phases_, _evaluation functions_, _child selection policies_ and _backpropagation phases_.
_Simulation Phase._ At each iteration, the entire simulation path is added to the search tree. Although in applications of MCTS the tree is usually expanded by one node per iteration, this would not be optimal in our setting. Namely, under a fixed causal setting, the state transitions, observations generation and other such functions are deterministic.6 Hence, computing their values more than once is a waste of computational resources.
Footnote 6: Similar MCTS modifications have been used in other deterministic tasks, such as guiding symbolic execution in generating useful visual programming tasks (Han et al., 2017).
_Evaluation Function._ Whenever a _Leaf_ node \(L\) is visited during an iteration of (RA-)MCTS, a score is assigned to it and then backpropagated to all of its ancestors. Properly defining the function that determines that score, i.e., the _evaluation function_, is considered to be a critical ingredient of successfully applying MCTS methods. Considering the idiosyncrasy of our task, we design an evaluation function that returns a multi-dimensional score, as opposed to a single numerical value which is typically the case. More precisely, this evaluation function takes as input the set of interventions \(\vec{X}\leftarrow\vec{x}^{\prime}\) encoded in \(L\), and outputs a score vector \(\vec{r}\), of size \(n+1\), which is defined as follows. For each agent \(i\in\{1,...,n\}\), \(r_{i}=dr_{i}((C,\vec{u}),\phi^{\tau},\vec{X}\leftarrow\vec{x}^{\prime},\mathcal{D})\), where \(\mathcal{D}\) denotes Definition 2.1. The \((n+1)\)th value of \(\vec{r}\) is equal to the output of an environment-specific function \(q_{\text{env}}((C,\vec{u}),\vec{X}\leftarrow\vec{x}^{\prime})\) which provides some additional information about the final outcome that corresponds to the causal setting \((C^{\vec{X}\leftarrow\vec{x}^{\prime}},\vec{u})\).7 For example, in a card game scenario we typically want to attribute responsibility to the members of the team that lost (outcome). Additional information that our search algorithm could benefit from in this scenario is how much closer to or further from winning the losing team would get, had we intervened on some actions taken by its members. The purpose of \(\vec{r}\)'s first \(n\) values is to guide the search towards optimizing its main objective, i.e., approximating the agents' degrees of responsibility. The role of the last value of \(\vec{r}\) is complementary, as it helps to identify areas which are promising for discovering new actual causes.
Footnote 7: If such a function is not available then this part can be omitted.
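A sketch of this evaluation function follows; the candidate-pair check and \(q_{\text{env}}\) are stubbed out as inputs, since both depend on the environment and the causal setting.

```python
# Sketch of the multi-dimensional leaf score r (length n+1): the first n
# entries are simulated responsibility degrees from Eq. (2), the last is
# the environment-specific progress signal q_env.
n = 2                                           # number of agents

def evaluate_leaf(cause, witness, outcome_flipped, q_env_value):
    r = [0.0] * (n + 1)
    if outcome_flipped:                         # candidate cause-witness pair
        size = len(cause) + len(witness)        # |X|
        for agent in range(n):
            m_i = sum(1 for (i, _) in cause if i == agent)
            r[agent] = m_i / size
    r[n] = q_env_value                          # e.g. margin to winning
    return r

print(evaluate_leaf([(0, 1)], [(1, 2)], True, 0.5))   # -> [0.5, 0.0, 0.5]
```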
Figure 2. The red edges denote the path in our search tree, which corresponds to the intervention on the action of agent \(0\) at time-step \(1\) with counterfactual action \(a^{\prime}=3\), \(A_{0,1}\gets 3\).

_Child Selection Policy._ Similar to standard MCTS, in RA-MCTS each node \(v\) keeps track of two statistics: the number \(N(v)\) of times it has been visited and the vector \(\vec{Q}(v)\), where \(Q_{j}(v)\), with \(j\in\{1,..,n+1\}\), is equal to the total score \(r_{j}\) of all simulations that passed through that node. In order to transform \(\vec{Q}(v)\) into a scalar score value, the child selection policy of RA-MCTS follows a _linear scalarization_ approach inspired by the Multi-Objective Multi-Armed Bandits (MO-MAB) literature (Mikolov et al., 2016; Zhang et al., 2017). At iteration \(k\), a pre-defined weight \(b_{k,j}\) is assigned to each value of \(\vec{Q}(v)\), where \(\sum_{j\in\{1,\ldots,n+1\}}b_{k,j}=1\). The linear scalarized value of \(\vec{Q}(v)\) at iteration \(k\) is then defined as
\[f_{LS}(\vec{Q}(v))=\sum_{j\in\{1,\ldots,n+1\}}b_{k,j}\cdot Q_{j}(v). \tag{3}\]
After computing \(f_{LS}(\vec{Q}(v))\) for each child node \(v\), our policy employs the UCB1 formula (Mikolov et al., 2016; Zhang et al., 2017) to select the next node in the path
\[v^{next}\coloneqq\operatorname*{arg\,max}_{v\in\operatorname{children}}\frac{f_{LS}(\vec{Q}(v))}{N(v)}+C\cdot\sqrt{\frac{\ln N}{N(v)}}, \tag{4}\]
where \(N\) is the parent node's number of visitations and \(C\) is the _exploration_ parameter of RA-MCTS. Ties are broken randomly.
In our experiments, at every iteration \(k\), we set \(b_{k,n+1}\coloneqq B\), where \(B\in[0,1)\) is a constant. Additionally, for \(i=(k\bmod n)\) we set \(b_{k,i}=1-B\), while every other weight \(b_{k,j}\), with \(j\notin\{i,n+1\}\), is set to \(0\). As a result, the only simulated responsibility degrees that guide the search at iteration \(k\) are those of agent \(i\).
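Putting Eqs. (3) and (4) and this weight schedule together gives the sketch below. Agents are indexed from \(0\) for brevity, the node statistics are stubbed, and \(C=2\), \(B=0.5\) match the experimental settings; the rest is an illustrative assumption.

```python
# Sketch of the child selection policy: linear scalarization of Q(v)
# (Eq. (3)) plugged into UCB1 (Eq. (4)), with the round-robin weight
# schedule: weight B on the q_env entry, 1 - B on one agent per iteration.
import math, random

n, C, B = 2, 2.0, 0.5

def weights(k):
    b = [0.0] * (n + 1)
    b[n] = B                       # constant weight on the q_env entry
    b[k % n] = 1 - B               # the focused agent gets the remainder
    return b

def select_child(children, k, N_parent):
    def ucb(child):                # Eq. (4) for one child node
        f_ls = sum(b * q for b, q in zip(weights(k), child["Q"]))
        return f_ls / child["N"] + C * math.sqrt(math.log(N_parent) / child["N"])
    best = max(ucb(c) for c in children)
    return random.choice([c for c in children if ucb(c) == best])  # tie-break

children = [{"Q": [0.9, 0.1, 0.4], "N": 3}, {"Q": [0.2, 0.8, 0.6], "N": 5}]
print(select_child(children, k=0, N_parent=8))
```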
_Backpropagation Phase._ Note that as the set of found candidate actual cause-witness pairs grows during search, the intervention set encoded in a previously expanded _Agent_ node \(v\) might be evaluated as non-minimal when \(v\) gets visited again. Whenever this happens, in addition to pruning \(v\), we also backpropagate the values \((-\vec{Q}(v),-N(v))\) to its ancestors. This way, we completely erase the footprints of the pruned node from the rest of the tree. Therefore, by taking this measure, our search method is no longer guided by scores of simulations that passed through \(v\).
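A sketch of this correction, with node statistics kept as plain dictionaries (our own representation):

```python
# Sketch of the negative backpropagation: when an expanded node v is later
# found non-minimal and pruned, (-Q(v), -N(v)) is pushed to its ancestors
# so that past simulations through v stop influencing child selection.
def backpropagate(path, delta_Q, delta_N):
    for node in path:                        # ancestors of the pruned node
        node["N"] += delta_N
        node["Q"] = [q + d for q, d in zip(node["Q"], delta_Q)]

pruned = {"Q": [0.6, 0.2, 0.3], "N": 2}
ancestors = [{"Q": [1.1, 0.5, 0.9], "N": 6}, {"Q": [2.0, 1.2, 1.5], "N": 10}]
backpropagate(ancestors, [-q for q in pruned["Q"]], -pruned["N"])
print(ancestors)                             # v's footprint is removed
```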
### Estimating Responsibility Assignments under Context Uncertainty
Our analysis so far in this section assumes context \(\vec{u}\) to be known. We now lift this assumption and propose our solution to the inference problem described in Section 2.5. We extend RA-MCTS in the following way. First, we draw \(M\) Monte Carlo samples from the posterior \(Pr(\vec{u}\,|\,\tau)\), utilizing the procedure described in Section 3.4 of (Zhu et al., 2017). Next, we compute for each agent \(i\) its average degree of responsibility over all samples
\[\overline{d}_{i}\coloneqq\frac{1}{M}\cdot\sum_{m\in\{1,\ldots,M\}}d_{i}^{m}, \tag{5}\]
where \(d_{i}^{m}\) is \(i\)'s degree of responsibility in \((C,\vec{u}_{m})\) according to RA-MCTS, and \(\vec{u}_{m}\) is the \(m\)th sample.
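A sketch of this sampling loop follows; both the posterior sampler and the per-context solver are stubs standing in for the real components (the inference procedure cited above and RA-MCTS itself).

```python
# Sketch of Eq. (5): under context uncertainty, responsibility is averaged
# over M contexts sampled from the posterior Pr(u | tau).
import random

rng = random.Random(1)
M, n = 10, 2

def sample_context():                 # stub for a draw from Pr(u | tau)
    return rng.random()

def ra_mcts_degrees(context):         # stub for running RA-MCTS on (C, u_m)
    return [round(context), 1.0]      # toy per-context degrees

samples = [ra_mcts_degrees(sample_context()) for _ in range(M)]
avg = [sum(d[i] for d in samples) / M for i in range(n)]
print(avg)                            # the reported assignment d-bar
```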
## 4. Experiments
In this section, we experimentally test the efficacy of RA-MCTS, for known and unknown context, using a simulation-based test-bed, which contains three card games. In our experiments, we restrict the maximum size of actual cause-witness pairs to \(4\), for reasons explained in (Zhu et al., 2017). We also fix RA-MCTS parameters to \(C=2\) and \(B=0.5\). Additional results can be found in Appendices G and H.
### Environments and Policies
We consider three card games played by two teams of two players. The members of one team are referred to as opponents, and they are treated as part of the environment. The members of the other team are treated as agents, and they are denoted by Ag0 and Ag1. All players have the same (initial) information probability function, but different decision making policies.
_TeamGoofspiel(\(H\))_. The first game is a team variation of the card game Goofspiel, introduced in (Zhu et al., 2017). In this game, the initial hand of each player consists of \(H\) cards. Typically, \(H=13\). At each round, all players simultaneously discard one of their cards, after observing the round's prize. The team which played the cards with highest total value collects the prize. After \(H\) rounds, the team that accumulated the biggest prize wins the game. Agent Ag0 tries to always play the card whose value matches the round's prize. If that card is not in Ag0's hand, then it chooses a card based on which team is currently leading the game. Agent Ag1 chooses its card based on a comparison between the average value of its hand and the current round's prize. Opponents follow the same stochastic policy which assigns a distribution on their hand based on the round's prize and the current leading team. For more details on the rules of the game and the players' policies see (Zhu et al., 2017).
_Euchre(\(H\))_. Second, we consider a turn-based trick-taking game. Each player is initially dealt \(H\) cards from a standard deck, with \(H\) typically being \(5\). Next follows the _calling_ phase, where the _trump suit_ and the player who starts first are chosen. For simplicity, we omit this phase and make the aforementioned choices randomly. At each round, the first player discards one card. This card's suit becomes the _leading suit_ of the current round. The rest of the players (in clockwise order) have to follow the _leading suit_ if possible, otherwise they are allowed to play any card from their hand. The winner of the round is determined by a game-specific _card ranking_ which takes into account the trump and the lead suits. The player who won the previous round starts next. After \(H\) rounds, the team with the most wins takes the game. The policies of agents Ag0 and Ag1 are based on the _HIGH!_ policy (Zhu et al., 2017). The main idea of _HIGH!_ is that "_if your teammate leads the round then let them win_". We implement the policy of Ag0 to be slightly more aggressive than that of Ag1. Opponents' policies follow the _HIGH!_ principle only when they play last in a round, otherwise they follow a stochastic greedy policy which assigns higher probabilities to cards that have potential to win the round. For more information see Appendix D.
_Spades(\(H\))_. Our third card game is yet another trick-taking game which is similar to _Euchre(\(H\))_, but with some key differences. For example, there is no calling phase and the trump suit is always spades. Before they start playing, the players must _bid_ on the number of tricks they believe they will win over the \(H\) rounds, where typically \(H=13\). _Spades_(\(H\)) has a different card ranking than _Euchre_(\(H\)), and also some additional rules on which cards are allowed to be discarded by the players at each time. At the end of the game, the score of each team is calculated based on the number of tricks it won and its initial bids. If a team bids more tricks than it wins, it receives a penalty based on a _sandbagging rule_. The players' policies in _Spades_(\(H\)) are very similar to the ones in _Euchre_(\(H\)). For more information see Appendix E.
Note that all three games are standard benchmarks for AI research (Beng et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019), and have also received extensive mathematical analysis (Han et al., 2017; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019). Moreover, _Goofspiel_ and _Euchre_ are parts of a well known framework for RL in games (Zhou et al., 2019).
### Experimental Setup
We evaluate the efficacy of several search algorithms on estimating a responsibility assignment under a computational budget. Computational budget in our experiments is defined as the total number of environment steps that an algorithm is allowed to take.
**Baselines.** Apart from RA-MCTS we also implement RANDOM, which repeatedly samples a set of interventions and checks whether it constitutes a candidate actual cause-witness pair or not. When computational budget is reached, RANDOM determines the agents' degrees of responsibility based on the found solutions. Other baselines are BF-DT and BF-ST, which perform a brute force search over all possible sets of interventions. BF-DT is the algorithm of choice in (Zhou et al., 2019), and it utilizes the standard decision (game) tree. On the other hand, BF-ST utilizes the search tree from Section 3.1.
**Performance profiles.** We generate multiple configurations of our environments by changing parameter \(H\). For each of these configurations, our methods are evaluated on 50 different trajectories, in which the agents fail to beat the opponents. For each such trajectory, we perform 10 independent runs of each method. 8 Following (Beng et al., 2017), we report performance profiles based on _run-score distributions_, and show for each method the fraction of runs in which it performs better than a certain threshold \(D\). We measure the performance of a method in terms of its accuracy w.r.t. some target responsibility assignment. More specifically, when it is computationally feasible to find the exact responsibility assignment, we report the maximum absolute difference \(e_{max}\in[0,1]\). For instance, if for some trajectory the agents' degrees of responsibility according to a method are 0.25 and 0.75, but the exact degrees are 0.33 and 1, then \(e_{max}=0.25\). If instead of the exact values, we can only compute lower bounds of the agents' responsibilities, we report the maximum absolute lower difference \(e_{max}^{lo}\in[0,1]\). Going back to our previous example, if the known lower bounds are 0.33 and 0.5, then \(e_{max}^{lo}=0.08\).
Footnote 8: We change the initial seed of the method.
For \(Euchre(H)\) and \(Spades(H)\), we are able to compute the exact responsibility assignments for values of \(H\) up to 10.9 For \(TeamGoofspiel(H)\), the upper limit is 9. In order to evaluate our methods on environments with larger \(H\), we follow a procedure described in
Figure 3. Performance profiles on _TeamGoofspiel_, _Euchre_ and _Spades_ with known context. Shaded regions show standard deviation. We add a marker if at the current number of steps the fraction of runs with \(e_{max}\leq 0\) or \(e_{max}^{lo}\leq 0\) is 1.
Appendix F, and generate trajectories for which we can retrieve non-trivial lower bounds of the agents' degrees of responsibility.
### Results
#### 4.3.1. Results with Known Context
Plots 3a-3l display performance profiles for the implemented methods and for different computational budgets. Note that threshold \(D\) in these experiments is set to \(0\). This means that a method performs better than \(D\) iff it manages to find the exact responsibility assignment.
We observe that RA-MCTS almost always converges to the optimal solution, i.e., achieves \(e_{max}=0\) or \(e_{max}^{lo}=0\), under a reasonable computational budget. The only exception is _TeamGoofspiel_(13), for which it converges for \(96\%\) of the runs. To better understand how budget-efficient RA-MCTS is, consider configurations _Euchre_(10) and _Spades_(10). By looking at Plots 3g and 3k, we observe that RA-MCTS needs at most \(4.2\cdot 10^{5}\) and \(3.5\cdot 10^{5}\) environment steps in order to converge to the exact responsibility assignment in these two configurations. In comparison, performing an exhaustive search, i.e., fully executing BF-DT or BF-ST, on one trajectory of _Euchre_(10) and one of _Spades_(10) can take more than \(10\cdot 10^{9}\) and \(6.2\cdot 10^{9}\) steps, respectively. It is also worth noting that RA-MCTS achieves \(e_{max}=0\) for more than \(90\%\) of the runs, in the above-mentioned configurations, within at most \(2\cdot 10^{5}\) and \(0.5\cdot 10^{5}\) steps.
By comparing RA-MCTS to RANDOM in Plots 3a-3l, we can see that the former always _stochastically dominates_ the latter (Srivastava et al., 2017).10 As part of our ablation study, we also compare RA-MCTS to BF-ST. In Plots 3a-3c and 3e-3k, RA-MCTS _stochastically dominates_ BF-ST. In Plot 3d, performance profiles of the two methods are almost identical, while in Plot 3l, BF-ST outperforms RA-MCTS only for a small number of steps. Moreover, it can be seen that almost always the maximum number of environment steps that RA-MCTS might need in order to find the exact responsibility assignment is considerably less than that of BF-ST. For instance, in Plot 3b RA-MCTS needs almost \(5\) times fewer environment steps compared to BF-ST, while in Plots 3a, 3e, 3j, 3k we witness a drop of at least \(30\%\). These results show that components from Sections 3.2 and 3.3 are important for RA-MCTS. In Appendix G, we include a similar ablation study, where we compare RA-MCTS to the BF-ST method enhanced with the pruning technique described in Section 3.2.
Footnote 10: The curve of the dominant method is strictly above the other method’s curve (Brandt et al., 2016).
Finally, Plots 3a-3l showcase that BF-ST _stochastically dominates_ BF-DT. This result demonstrates that a brute force algorithm that uses the structure of the tree we propose in Section 3.2 converges faster to the exact solution than a brute force algorithm that uses the standard decision tree of the underlying Dec-POMDP.
#### 4.3.2. Results with Unknown Context
In this section, we present the results of our experiments under context uncertainty, where we make use of the sampling procedure introduced in Section 3.4, with total number of samples \(M=10\). Each sample corresponds to a context generated by posterior inference. Plots 4a-4c display performance profiles for methods RA-MCTS and RANDOM for different computational budgets and for different values of threshold \(D\). First, we observe that RA-MCTS almost always stochastically dominates RANDOM. Moreover, all plots show that RA-MCTS achieves \(e_{max}\leq 0.25\) in more than \(75\%\) of the runs, within at most \(0.5\cdot 10^{5}\) steps per sample, or \(5\cdot 10^{5}\) steps in total. Finally, we can also see that our method performs best for _TeamGoofspiel_(9), where it achieves \(e_{max}\leq 0.15\) in \(86\%\) of the runs.
Our results highlight an inherent problem of responsibility attribution under context uncertainty. We observe that even for a number of steps that suffices for RA-MCTS to find the exact agents' degrees of responsibility for most of the sampled trajectories, the responsibility assignments for many of the actual trajectories are not fully recovered. We conclude that, even under an unbounded computational budget, if the posterior distribution over the underlying context of a trajectory is not informative enough, then failing to exactly estimate the agents' degrees of responsibility for that trajectory is unavoidable. We believe, however, that one potential way to alleviate this issue is to design responsibility attribution mechanisms that incorporate domain knowledge, which could compensate for the non-informativeness of the posterior distribution.
## 5. Conclusion
We initiate the study of developing efficient algorithmic approaches for responsibility attribution in Dec-POMDPs. To that end, we propose and experimentally evaluate RA-MCTS, an MCTS type of method which efficiently approximates responsibility assignments. Looking forward, we plan to apply and test the efficiency of RA-MCTS on a real-world domain. Extending our approach to continuous models is another research direction that we deem particularly interesting.
Figure 4. Performance profiles on _TeamGoofspiel_(9), _Euchre_(10) and _Spades_(10) with unknown context. Shaded regions show standard deviation and number of steps are per sample. The full set of plots can be found in Appendix H.
## Acknowledgements
This research was, in part, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project number 467367360.
|
2310.12467 | Contrastive Learning for Inference in Dialogue | Inference, especially those derived from inductive processes, is a crucial
component in our conversation to complement the information implicitly or
explicitly conveyed by a speaker. While recent large language models show
remarkable advances in inference tasks, their performance in inductive
reasoning, where not all information is present in the context, is far behind
deductive reasoning. In this paper, we analyze the behavior of the models based
on the task difficulty defined by the semantic information gap -- which
distinguishes inductive and deductive reasoning (Johnson-Laird, 1988, 1993).
Our analysis reveals that the disparity in information between dialogue
contexts and desired inferences poses a significant challenge to the inductive
inference process. To mitigate this information gap, we investigate a
contrastive learning approach by feeding negative samples. Our experiments
suggest negative samples help models understand what is wrong and improve their
inference generations. | Etsuko Ishii, Yan Xu, Bryan Wilie, Ziwei Ji, Holy Lovenia, Willy Chung, Pascale Fung | 2023-10-19T04:49:36Z | http://arxiv.org/abs/2310.12467v2 | # Contrastive Learning for Inference in Dialogue
###### Abstract
Inference, especially those derived from inductive processes, is a crucial component in our conversation to complement the information implicitly or explicitly conveyed by a speaker. While recent large language models show remarkable advances in inference tasks, their performance in inductive reasoning, where not all information is present in the context, is far behind deductive reasoning. In this paper, we analyze the behavior of the models based on the task difficulty defined by the semantic information gap - which distinguishes inductive and deductive reasoning Johnson-Laird (1988, 1993). Our analysis reveals that the disparity in information between dialogue contexts and desired inferences poses a significant challenge to the inductive inference process. To mitigate this information gap, we investigate a contrastive learning approach by feeding negative samples. Our experiments suggest negative samples help models understand what is wrong and improve their inference generations.1
Footnote 1: The code and annotated data is available at [https://github.com/HLTCHKUST/contrastive_inference_dialogue](https://github.com/HLTCHKUST/contrastive_inference_dialogue).
## 1 Introduction
In conversations, inference is essential to uncover what the speaker intended to deliver, which often goes beyond the information explicitly expressed Rieger (1974); Thorndyke (1976). Inferences can be made by explicit or implicit logical reasoning based on utterances and the common ground among speakers Clark (1975). By reading between the lines, these inferences enable appropriate responses in dialogues. This inference process was intensely discussed in the early age of research on dialogue (e.g., Thorndyke, 1976). However, research on dialogue systems nowadays often overlooks this aspect and instead relies solely on the capabilities of large language models (LLMs) to understand and comprehend dialogues.
Current LLMs, such as ChatGPT OpenAI (2022), lack the so-called "inductive reasoning" ability and instead tend to accomplish reasoning tasks deductively Bang et al. (2023). It might be due to the fundamental difference between inductive and deductive processes. According to Johnson-Laird (1988, 1993), inductive reasoning involves an increase in semantic information from input to output, while it remains the same in deductive reasoning. In the context of dialogue inference processes, especially when reading implicit messages, there are information gaps that need to be filled. For instance, somebody's invitation for a "quick lunch as always" might be enough to specify the location and time without further interaction.

\begin{table}
\begin{tabular}{l|l} \hline \hline
Dial. & User A: I'm hungry, let's order up something to eat. \\
 & User B: Ok, maybe we can order a soup and a salad from the restaurant down the street. \\
 & User A: I was thinking of getting a hamburger, fries, and a chocolate sundae. \\
 & User B: You eat too much junk food. That sort of stuff clogs up your arteries and is very high in cholesterol. \\
 & User A: Well, I never seem to gain weight, so I don't mind. \\
 & User B: It's not only about getting fat or not, it's about being healthy. You could really have some health problems later on. _[Target]_ \\
 & User A: How about pizza or maybe some fried chicken? Better yet, let's order some hot dogs! \\
 & User B: You are a lost cause. \\ \hline
Ques. & What is or could be the prerequisite of the target? \\ \hline
Gold & The speaker is a fitness freak and keeps track of his daily diet. \\
T5-base & The speaker eats too much junk food as it **clogs up** his arteries and is very high in cholesterol. \\
Ours & The speaker is a health-conscious person. \\ \hline \hline
\end{tabular}
\end{table}
Table 1: One example in the "Conceivable" difficulty level, comparing the generated inferences from our method, T5-base, and the gold inference. _Dial._ and _Ques._ are short for _Dialogue_ and _Question_. The snippets of inferences highlighted in pink are not explicitly stated in the dialogue and require the model to conduct inference inductively. We refer to this phenomenon as the _"information gap"_ to accomplish this task.
In this paper, we inspect the semantic information gap between dialogue contexts and intended inferences using a recently introduced dataset designed for generating inferences in dialogue Ghosal et al. (2022). We hypothesize that the difficulty of the task can be associated with the size of the information gap that must be bridged. We manually annotate a randomly sampled subset of the dataset with respect to its information gap, and assess the performance of the models. The analysis shows a decline in model performance as the information gap increases.
Furthermore, we propose to apply a contrastive learning approach to improve inference performance. One limitation of current sequence-to-sequence training, especially for reasoning tasks, is that models are never exposed to negative samples Lee et al. (2021). In deductive reasoning, all the information required to generate an output is provided in the input, and there is no information gap. However, inductive reasoning requires producing content that may not be explicitly stated in the input, and that is not learnable simply by exposing the model only to gold samples. Thus, we need to teach the model with more guidance on the reasoning path. In our preliminary experiment using the same dataset and a multiple-choice framework with Roberta-large Liu et al. (2019), we observed a significant improvement from an F1 score of 83.91 to 96.6 simply by feeding negative samples together with the other candidates, which indicates that feeding negative samples helps the model learn how to fill the information gap. Building on this initial experiment, our experimental results in the generative setting show that contrastive learning helps improve both overall and breakdown performance at each task difficulty level, especially for fully deductive and inductive cases. Additionally, we explore various sampling methods for generating negative samples for contrastive learning.
Our contributions are three-fold: (1) we provide data annotation based on the information gap and the assessment; (2) we suggest that the information gap accounts for the difficulty of the inference generation in dialogue; and (3) our experimental results show that the contrastive learning approach helps to fill the information gap.
## 2 Related Work
### Inference in Conversation
As one of the most fundamental forms of the use of natural language Jurafsky and Martin (2023), advances in inference in conversation have been inseparable from the flourishing of the field of natural language processing (NLP) (e.g., Mann, 1979; Phillips, 1975). Initially, the research focus of inference in conversation was to uncover the underlying rules of human conversations (e.g., Grosz, 1978; Carbonell Jr, 1978; Morgan, 1978). While it remains a core research question, recent works tend to be framed in question answering (QA) style so that we can test models in a handier way. Thanks to powerful deep learning models, we can perform inference tasks sufficiently well, yet the underlying rules remain unclear. Recently, a number of QA datasets in conversational formats have been introduced Choi et al. (2018); Reddy et al. (2019); Ma et al. (2018), and their main focus tends to be comprehension of non-conversational texts. To evaluate the comprehension of dialogues, various tasks have been proposed in different task formulations such as span extraction Li et al. (2020); Yang and Choi (2019); Wu et al. (2022), multiple choice Sun et al. (2019), next utterance prediction Cui et al. (2020), or natural language inference (NLI) Welleck et al. (2019). Some tasks focus on a specific aspect of conversational inference, such as speaker guessing Sang et al. (2022) and temporal reasoning Qin et al. (2021). In natural language generation format, Ghosal et al. (2021, 2022) present datasets for generating inferences based on dialogue, where Ghosal et al. (2021) only contains overt inferences and Ghosal et al. (2022) contains implicit guesses as well.
### Task Difficulty and Information Gap
Controlling the difficulty of tasks requires delicate tuning, as it is crucial for further advances in NLP; tasks that are too challenging or too easy cannot facilitate the growth of the technology. A task becomes more challenging if we impose additional conditions, such as limiting the amount of data and computational power, or adding modalities or other languages. Recently, some work has investigated specific tasks with controlled or annotated data. For example, Williams et al. (2022) annotates inference types such as numerical or reference to see which type is the most challenging in NLI. Cui et al. (2023) limit the data to assess the models'
capability to properly understand what the word "respectively" refers to in NLI.
Discussing task difficulty independently of the models' performance is non-trivial. Current assessments of task difficulty tend to be inseparable from performance comparisons of the models (e.g., Bang et al., 2023). In this way, we can observe the models' strengths and weaknesses across different tasks, but there is still a lack of absolute difficulty rankings of the tasks. One possible way to discuss the difficulty in a model- or task-agnostic way might be based on the information gap, which is the core challenge in inductive reasoning Johnson-Laird (1988, 1993). It has been discussed as "given and new information" in Clark and Haviland (1974); Clark (1975) as the foundation of conversations, but this concept can be extended to any task McKeown (1979). In this line of work, Rudinger et al. (2020) propose an NLI task in which an inference can be shifted when new information is offered. These days, not many works explicitly mention the "information gap" (Hayashi, 2022). However, the concept still underlies current research. For example, QA datasets commonly contain some portion of unanswerable questions (e.g., Rajpurkar et al., 2018; Bajaj et al., 2016) with the context provided.
### Contrastive learning in NLG
Contrastive learning teaches a model to embed similar data sample pairs closer together while keeping disparate sample pairs apart Chopra et al. (2005); Smith and Eisner (2005). Beyond obtaining better representations of words Mikolov et al. (2013) or sentences Fang et al. (2020); Gao et al. (2021); Liu et al. (2021), contrastive learning is reported to improve a wide range of NLP tasks (e.g., Li et al., 2022; Klein and Nabi, 2020), including text generation tasks (e.g., Cai et al., 2020; Li et al., 2021; Liu et al., 2021; Paranjape et al., 2021; Li et al., 2022; Shu et al., 2021). The main motivation for applying contrastive learning to sequence-to-sequence text generation tasks is that it allows the model to be exposed to negative samples during training Lee et al. (2021). Indeed, negative samples generated by rule-based perturbations Shu et al. (2021) or machine-generated texts Cao and Wang (2021) such as entity swaps Tang et al. (2022) are reported to be effective for faithful, less hallucinatory text generation.
## 3 Information Gap in Inference
While existing work focuses on improving the model performance on inference tasks with various methods, there is still a lack of in-depth investigation on the task itself and how the model behavior is changed with the improved results. To fill this gap, we first propose to connect task difficulty with the "information gap" between contexts and target inferences and classify the inference task difficulty into three levels. Then, we focus on the generative inference in dialogues with the CICERO dataset Ghosal et al. (2022). We collect additional annotations to assess the task difficulty of a subset of samples for further analysis.
### Preliminaries of the CICERO Dataset
We denote a dialogue dataset as \(\{\mathcal{D}^{n}\}_{n=1}^{N}\), and a dialogue as \(\mathcal{D}_{I}=\{U_{i}\}_{i=1}^{I}\), where \(U_{i}\) is an utterance at turn \(i\). Given an input \(X=(\mathcal{D}_{I},Q,U_{t})\) where \(Q\) is a question and \(U_{t}\in\mathcal{D}_{I}\) is a target utterance, we aim to learn a model \(f_{\theta}\) to generate a plausible inference \(\tilde{A}=f_{\theta}(X)\).
CICERO dataset comes with five types of questions:
1. **Cause**: What is or could be the cause of the target utterance?
2. **Prerequisite**: What is or could be the prerequisite of target?
3. **Subsequent Event (SE)**: What subsequent event happens or could happen following the target?
4. **Motivation**: What is or could be the motivation of target?
5. **Reaction**: What is the possible emotional reaction of the listener in response to target?
For the subsequent event category, the dataset also offers a more challenging setting called **Subsequent Event Clipped (SE_Clipped)**, where the dialogue is clipped at the target utterance: \(\mathcal{D}_{t}=\{U_{i}\}_{i=1}^{t}\).
### Task Difficulty of the CICERO dataset
The CICERO dataset provides commonsense inferences made by human annotators. According to the annotation instructions, generated answers must be grammatically correct and consistent with the dialogue, yet they can be overt or speculative depending on contextual scenarios Ghosal et al. (2022). While treated equally, some question types seem significantly more challenging than others according to the results breakdown reported in Ghosal et al. (2022). For example, Motivation scores the
highest even though it only accounts for 14% of the training set.
Although the surface format of the task is unified, and the question types thus cannot be distinguished at a glance, we can sense that they challenge different things. For example, SE can be answered simply by summarizing the utterances after turn \(t\), while SE_Clipped requires predicting future utterances from the dialogue. The difficulty differs even among questions of the same question type. Some inferences can be derived simply by paraphrasing the utterances, while others require logical guessing to read between the lines. These differences boil down to the information gap between the answer \(A\) and the dialogue \(\mathcal{D}_{I}\). Here, we take an initial step to investigate the task difficulties systematically and define three levels of difficulty based on the amount of information in the answer covered by the dialogue: Sufficient, Likely, and Conceivable.
**Level 1: Sufficient.** All the information in the answer is available in the given dialogue. Since there is no information gap between inputs and outputs, questions at this level are the easiest to answer. For example, from the given dialogue context below, it is overt that User A will be available on Saturday morning for delivery.
**Level 2: Likely.** Some pieces of information in the answer are not available or directly stated, but it is possible to guess them by combining the clues in the dialogue. Questions at this level can be compared to multi-hop question answering tasks Yang et al. (2018); Welbl et al. (2018); Inoue et al. (2020). There are arguably different degrees of hops needed to derive an answer depending on the context Kumar et al. (2019); Cheng et al. (2021); however, here we classify all questions that require some sort of "hop" over, e.g., a knowledge graph Speer et al. (2017); Sap et al. (2019); Hwang et al. (2021), regardless of the degree. For example, in the dialogue below, we can guess that User B will check the car as per User A's request. To check the car, User B will likely try to turn on the engine.
**Level 3: Conceivable.** The answer contains some pieces of information that are not stated in the dialogue, and there is no clear guidance for a "hop". The answer is plausible but hardly verifiable. Questions at this level are not easy even with certain knowledge sources provided, and can be compared to checking hallucination in open-domain text generation Ji et al. (2023). For example, in the dialogue below, Bob may be a brother of User B, and his occupation could be radio journalist, which would be a plausible reason to call Bob to ask about the fire at the factory. However, we cannot verify the answer, as the dialogue lacks the evidence to guess the relationship between the speakers and Bob, or his occupation.
### Human Assessment of the Difficulty
To the best of our knowledge, there is no absolute automatic metric to compare two pieces of text in terms of the amount of semantic information they contain. Here, we assess the difficulty of the task defined in Section 3.2 by human annotation. We randomly select 75 samples per question type (450 samples in total) from the CICERO test set. In our annotation scheme, we assign two well-trained annotators per sample to give a difficulty-level label, and one expert to double-check and finalize the label. In the few cases where the three annotators disagreed on the label, an additional expert was assigned for confirmation.
In Table 2, we summarize the annotated results together with the performance, on the same subset, of a T5-base Raffel et al. (2020) model fine-tuned on the CICERO training set. The CICERO dataset has a balanced mixture of the three levels (sufficient: 34.2%, likely: 33.6%, conceivable: 32.2%), and the performance of T5-base uniformly degrades with the decrease in the amount of available information. As reported in Table 3, different question types have different proportions of difficulty levels, as anticipated. Although the proportion of likely and conceivable questions can explain the difference in T5-base performance to a certain extent, there is no simple correlation. This may be due to differences in which kind of information is required to bridge the gap between the dialogue and the answer. For example, speakers' emotional reactions might be easily guessed from the sentiment of the utterances, while identifying the cause of an utterance may involve a more complicated understanding of background knowledge.
## 4 Methodology
We primarily train our model \(f_{\theta}\) by minimizing the negative log-likelihood:
\[\mathcal{L}_{\mathrm{NLL}}=-\sum_{1\leq n\leq N}\sum_{1\leq j\leq k}\log p(a_{ j}^{n}|a_{<j}^{n},X^{n}),\]
where a generated inference is denoted as \(\tilde{A}^{n}=\{a_{j}^{n}\}_{j=1}^{k}\). The contrastive learning objective is defined by:
\[\mathcal{L}_{\mathrm{CL}}=-\sum_{1\leq n\leq N}\log\frac{\exp(\mathrm{sim}(\mathbf{h}_{X},\mathbf{h}_{\tilde{A}^{n}})/\tau)}{\sum_{A^{\prime}\in\mathcal{A}}\exp(\mathrm{sim}(\mathbf{h}_{X},\mathbf{h}_{A^{\prime}})/\tau)},\]
where \(\mathrm{sim}\) is a cosine similarity function, \(\mathcal{A}\) is a set of negative inference samples, \(\mathbf{h}_{X}\), \(\mathbf{h}_{\tilde{A}^{n}}\), and \(\mathbf{h}_{A^{\prime}}\) are the hidden representations of \(X\), \(\tilde{A}^{n}\), and \(A^{\prime}\), respectively, and \(\tau\) is a temperature. Following Cao and Wang (2021); Lee et al. (2021), the final training objective is \(\mathcal{L}=\mathcal{L}_{\mathrm{NLL}}+\lambda\mathcal{L}_{\mathrm{CL}}\), where \(\lambda\) is a coefficient.
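To make the objective concrete, a minimal PyTorch sketch of the contrastive term is given below. This is our illustration rather than the authors' released code: the \(\mathbf{h}\) vectors are assumed to be pooled hidden representations, the names are ours, and, as in common InfoNCE-style implementations, the positive candidate is included in the denominator.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(h_x, h_pos, h_negs, tau=0.1):
    """h_x, h_pos: (B, D) pooled representations of input and positive
    inference; h_negs: (B, M, D) representations of M negative inferences."""
    pos = F.cosine_similarity(h_x, h_pos, dim=-1) / tau                  # (B,)
    negs = F.cosine_similarity(h_x.unsqueeze(1), h_negs, dim=-1) / tau   # (B, M)
    logits = torch.cat([pos.unsqueeze(1), negs], dim=1)                  # (B, 1+M)
    # The positive candidate always sits at index 0.
    target = torch.zeros(h_x.size(0), dtype=torch.long, device=h_x.device)
    return F.cross_entropy(logits, target)

def total_loss(nll, h_x, h_pos, h_negs, lam=0.5, tau=0.1):
    # L = L_NLL + lambda * L_CL
    return nll + lam * contrastive_loss(h_x, h_pos, h_negs, tau)
```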
### Selection of Negative Samples
Automatically generating a set of negative samples \(\mathcal{A}\) for contrastive learning is a non-trivial task. The easiest way to obtain negative samples is to randomly sample other inferences in the dataset (usually within the same batch), though the supervision from such negatives may be weak because the sentences are too dissimilar. We denote the contrastive loss for in-batch negative samples as \(\lambda_{\mathrm{b}}\mathcal{L}_{\mathrm{CL_{b}}}\). Besides, we aim to feed more informative negative samples per gold inference, which we denote as \(\lambda_{\mathrm{s}}\mathcal{L}_{\mathrm{CL_{a}}}\). The training objective can then be formed as \(\mathcal{L}=\mathcal{L}_{\mathrm{NLL}}+\lambda_{\mathrm{b}}\mathcal{L}_{\mathrm{CL_{b}}}+\lambda_{\mathrm{s}}\mathcal{L}_{\mathrm{CL_{a}}}\). Since the CICERO dataset also serves as an MCQ task, each inference has four high-quality, plausible-looking yet inappropriate candidates. These counterfactual candidates are machine-generated and then filtered by human annotators. In our experiments, we explore the following fully automatic ways of generating negative samples:
**Non-Optimal Generation.** Since simple fine-tuning with \(\mathcal{L}_{\mathrm{NLL}}\) does not yield the optimal \(f_{\theta}\), as reported in Table 3, we directly use inferences generated by the fine-tuned model as negatives. We use top-\(k\) sampling with \(k=10\) for diverse generation (see the sketch below).
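A minimal sketch of this sampling step, assuming a Hugging Face T5 checkpoint fine-tuned with \(\mathcal{L}_{\mathrm{NLL}}\) (the checkpoint path below is a placeholder):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")  # fine-tuned ckpt

def sample_negatives(prompt, num_negatives=4):
    """Sample diverse, imperfect inferences from the non-optimal model."""
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_k=10,                        # top-k sampling with k = 10
        num_return_sequences=num_negatives,
        max_new_tokens=64,
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)
```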
**Replacement of Tokens.** Inspired by Park et al. (2021), we manipulate tokens of the gold inference using the predictions of a masked language model. More specifically, we compute the probability of each token in the gold inference \(A\) when the whole context \(X\) and \(A\) are given, and when only \(A\) is given. In this way, we can estimate which tokens in \(A\) are more affected by the context \(X\). We directly compare the log-likelihood scores of each token and select tokens that differ by more than a threshold. The selected tokens are replaced by randomly selected tokens from the top-\(k\) predictions of a masked language model (see the sketch below). We apply the pre-trained Roberta-large model (Replace\({}_{\mathrm{ZS}}\)) and the Roberta-large trained on the CICERO dataset for MCQ (Replace\({}_{\mathrm{MCQ}}\)), set \(k=10\), and use a threshold of 0.75.
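A possible realization of this procedure is sketched below; it is our reconstruction under the stated hyper-parameters, the helper names are hypothetical, the sign convention on the score difference is an assumption, and scoring tokens in an unmasked sequence is a simplification of full masked-LM scoring. Swapping in the CICERO-MCQ-fine-tuned checkpoint would give the Replace\({}_{\mathrm{MCQ}}\) variant.

```python
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

tok = RobertaTokenizer.from_pretrained("roberta-large")
mlm = RobertaForMaskedLM.from_pretrained("roberta-large")
mlm.eval()

def token_logprobs(ids):
    # Log-probability of each observed token in an (unmasked) sequence.
    with torch.no_grad():
        logits = mlm(ids).logits
    logp = torch.log_softmax(logits, dim=-1)
    return logp[0, torch.arange(ids.size(1)), ids[0]]

def make_negative(context, answer, threshold=0.75, k=10):
    special = lambda t: torch.tensor([[t]])
    ans = tok(" " + answer, add_special_tokens=False, return_tensors="pt").input_ids
    ctx = tok(context, add_special_tokens=False, return_tensors="pt").input_ids
    n = ans.size(1)
    with_ctx = torch.cat([special(tok.bos_token_id), ctx, ans,
                          special(tok.eos_token_id)], dim=1)
    alone = torch.cat([special(tok.bos_token_id), ans,
                       special(tok.eos_token_id)], dim=1)
    # Answer tokens sit just before the final </s> in both sequences.
    diff = token_logprobs(with_ctx)[-n - 1:-1] - token_logprobs(alone)[1:-1]
    new = ans.clone()
    for i in torch.nonzero(diff > threshold).flatten():
        masked = alone.clone()
        masked[0, i + 1] = tok.mask_token_id          # +1 skips the <s> token
        with torch.no_grad():
            scores = mlm(masked).logits[0, i + 1]
        j = int(torch.randint(k, (1,)))               # random pick among top-k
        new[0, i] = scores.topk(k).indices[j]
    return tok.decode(new[0]).strip()
```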
## 5 Experiments
### Baselines
We evaluate our proposed method across multiple Transformer-based models: T5-small/base/large Raffel et al. (2020) and GPT2-base Radford et al. (2019). For a fair comparison, these baselines are fine-tuned on the CICERO training set with \(\mathcal{L}_{\mathrm{NLL}}\) only. In addition, we compare our results with the performance of GPT-J Wang and Komatsuzaki (2021) and LLaMA-7B Touvron et al. (2023) in a 3-shot setting. We report an average over three trials of randomly sampled, manually crafted prompts, and a strategic prompt using tf-idf to retrieve the 3 most similar in-context examples.

\begin{table}
\begin{tabular}{l r r r r} \hline \hline
Difficulty & BLEU-2 & METEOR & ROUGE\_L & CIDEr \\ \hline
Sufficient (34.2\%) & 18.78 & 16.80 & 29.37 & 46.07 \\
Likely (33.6\%) & 16.38 & 15.89 & 26.76 & 32.27 \\
Conceivable (32.2\%) & 11.92 & 12.72 & 21.87 & 22.23 \\ \hline
\end{tabular}
\end{table}
Table 2: The performance of the fine-tuned T5-base gets worse along with the decrease in the amount of information available in the dialogue.
### Evaluation Metrics
**Automatic Metrics.** In line with the CICERO paper, we assess the generated answers using \(n\)-gram overlap-based evaluation metrics: BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE-L (Lin, 2004), and CIDEr (Vedantam et al., 2015). Notably, CIDEr is calculated on stemmed forms.
**Human Evaluation.** For a comprehensive evaluation, we also conduct a human evaluation on the _Plausibility_ aspect, which focuses on whether the answers are rational or not. We evaluate the same data samples as those used for the task difficulty analysis; more specifically, we compare against both the inferences generated by the T5-base model and the gold inferences. A/B testing is utilized to compare our proposed method and the corresponding baseline on the CICERO test set. Each comparison requires three judgments. The human evaluation is conducted on a crowd-sourcing platform offered by Appen.2 More details about the human evaluation, such as annotator instructions and how the results are calculated, are included in Appendix A.2.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & BLEU1 & BLEU2 & BLEU3 & BLEU4 & METEOR & ROUGE-L & CIDEr \\ \hline GPT-J-3shot & 24.70 & 13.04 & 6.11 & 3.04 & 13.74 & 25.03 & 19.87 \\ + tf-idf & 22.83 & 11.64 & 5.52 & 2.83 & 12.19 & 22.45 & 17.99 \\ LLaMA-3shot & 28.34 & 15.18 & 7.25 & 3.73 & 15.26 & 27.43 & 26.47 \\ + tf-idf & 25.36 & 13.33 & 6.56 & 3.49 & 13.72 & 24.91 & 24.06 \\ \hline T5-small & 29.20 & 15.66 & 8.19 & 4.67 & **15.88** & 27.34 & **33.58** \\ + CL & **29.46** & **15.83** & **8.29** & **4.71** & **15.88** & **27.63** & 33.44 \\ T5-base & 29.77 & 16.38 & 8.87 & 5.26 & 16.40 & 28.32 & 38.91 \\ + CL & **30.67** & **17.09** & **9.45** & **5.65** & **16.62** & **28.50** & **40.53** \\ T5-large & 29.57 & 16.79 & 9.45 & 5.81 & 16.60 & **29.06** & 43.38 \\ + CL & **30.07** & **17.02** & **9.56** & **5.83** & **16.67** & 28.90 & **43.80** \\ GPT2-base & 25.09 & 13.65 & 6.92 & 3.89 & 14.45 & 26.48 & 25.73 \\ + CL & **27.55** & **14.91** & **7.56** & **4.22** & **15.13** & **27.94** & **26.59** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Automatic results on CICERO test set. _CL_ is short for _contrastive learning_. We bold the better results between our method and the corresponding baseline model. We also highlight the best results across different models with underline.
\begin{table}
\begin{tabular}{l c c c|c c c c} \hline \hline
 & \multicolumn{3}{c|}{Difficulty} & \multicolumn{4}{c}{Automatic Metrics} \\ \cline{2-8}
 & Sufficient & Likely & Conceivable & BLEU-2 & METEOR & ROUGE\_L & CIDEr \\ \hline
Cause & 46.7\% & 33.3\% & 20.0\% & 11.93 & 13.78 & 21.88 & 34.65 \\
SE & 41.3\% & 20.0\% & 38.7\% & 14.83 & 15.19 & 26.39 & 29.70 \\
SE\_Clipped & 4.0\% & 38.7\% & 57.3\% & 13.76 & 15.58 & 26.00 & 35.28 \\
Prerequisite & 32.0\% & 28.0\% & 40.0\% & 6.77 & 10.19 & 15.82 & 12.31 \\
Motivation & 58.7\% & 24.0\% & 17.3\% & 21.33 & 17.02 & 32.40 & 42.32 \\
Reaction & 22.7\% & 57.3\% & 20.0\% & 23.30 & 18.72 & 33.96 & 35.75 \\ \hline
Total & 34.2\% & 33.6\% & 32.2\% & 15.62 & 15.08 & 26.08 & 32.51 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: The difficulty of the inferences varies with the type of question, and so does the performance of the fine-tuned T5-base. The corresponding performance is calculated on the same subset of the CICERO test set.
\begin{table}
\begin{tabular}{l c c c c|c c c|c c c|c c c} \hline \hline
\multirow{2}{*}{Plausibility} & \multirow{2}{*}{Win} & \multirow{2}{*}{Tie} & \multirow{2}{*}{Lose} & \multirow{2}{*}{\(\kappa\)} & \multicolumn{3}{c|}{Sufficient} & \multicolumn{3}{c|}{Likely} & \multicolumn{3}{c}{Conceivable} \\ \cline{6-14}
 & & & & & Win & Tie & Lose & Win & Tie & Lose & Win & Tie & Lose \\ \hline
Ours vs T5-base & **38.7\%**\({}^{*}\) & 35.8\% & 25.5\% & 0.73 & **45.7\%**\({}^{*}\) & 34.6\% & 19.7\% & 34.7\% & **40.4\%** & 24.9\% & **35.4\%** & 32.4\% & 32.2\% \\
Ours vs Gold & 24.7\% & **52.4\%** & 22.9\% & 0.21 & 22.3\% & **53.0\%** & 24.7\% & 23.4\% & **53.0\%** & 23.6\% & 28.5\% & **51.3\%** & 20.2\% \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Human evaluation results on Plausibility, together with breakdown performance on each difficulty level. \({}^{*}\)Our model achieves a significant advantage over T5-base or Gold with a pair-wise individual \(t\)-test (\(p<0.05\)).
### Training Details
The models are trained using a batch size of 64 after gradient accumulation, with a learning rate set at \(1\mathrm{e}{-}4\) for T5 models and \(1\mathrm{e}{-}5\) for GPT-2 models. We limit the training to a maximum of 10 epochs, employing a linear learning rate scheduler. The checkpoint exhibiting the lowest perplexity on the validation set is chosen as the optimal model for each trial. For contrastive learning, the temperature \(\tau\) for \(\mathcal{L}_{\mathrm{CL_{b}}}\) and \(\mathcal{L}_{\mathrm{CL_{a}}}\) is set to 0.1 and 2.5, respectively, each contributing equally to the total loss with coefficients \(\lambda_{\mathrm{b}}=\lambda_{\mathrm{s}}=0.5\). All the experiments are executed on a single RTX 3090 Ti GPU.
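For reference, the hyper-parameters of this section collected into one (hypothetical) configuration; the key names are ours:

```python
config = {
    "batch_size": 64,                       # after gradient accumulation
    "learning_rate": {"t5": 1e-4, "gpt2": 1e-5},
    "max_epochs": 10,
    "lr_scheduler": "linear",
    "model_selection": "lowest validation perplexity",
    "tau": {"cl_b": 0.1, "cl_s": 2.5},      # contrastive temperatures
    "lambda": {"cl_b": 0.5, "cl_s": 0.5},   # loss coefficients
}
```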
### Results
We report the automatic results of both our method and the baselines in Table 4. Automatic metrics based on \(n\)-gram overlap are mostly improved thanks to contrastive learning. Moreover, our proposed method is model-architecture-agnostic, given that it shows consistent improvement for both the encoder-decoder T5 models and the decoder-only GPT2. For GPT-J and LLaMA, we could not see any improvement introduced by tf-idf. We suspect that, even though lexically similar, these retrieved examples may mislead the model into making wrong predictions.
Overlap-based metrics can reflect the general quality of the generated inferences with respect to the gold answers. However, they do not reflect the inference ability behind the generations, not to mention the inductive inference ability. In this work, we also explore the feasibility of NLI metrics for evaluating inference ability. More discussion is included in Section 6.6.
**Human Evaluation.** For a more comprehensive evaluation of inference ability, we conduct a human evaluation of the plausibility of the generated inferences and report the results in Table 5. We leverage pair-wise individual \(t\)-tests to validate the significance of the improvements. Inter-annotator agreements are computed using Fleiss' kappa (\(\kappa\))3 to assess the reliability of the evaluation. As shown in Table 5, contrastive learning significantly improves the plausibility of the generated inferences over T5-base, with substantial agreement. The inferences generated by T5-base with contrastive learning show comparable plausibility to the gold ones in the CICERO test set, with fair inter-annotator agreement. The human evaluation further proves the effectiveness of our proposed method in improving inference ability. We analyze the improvement breakdown across difficulty levels to further examine the effect of contrastive learning in Section 6.5.
Footnote 3: [https://www.statsmodels.org/stable/generated/statsmodels.stats.inter_rater.fleiss_kappa.html](https://www.statsmodels.org/stable/generated/statsmodels.stats.inter_rater.fleiss_kappa.html)
## 6 Discussion
### Case Study
Table 1 illustrates one example at the "Conceivable" level, comparing the generated inferences from our method, T5-base, and the gold inference. While T5-base tends to copy from the dialogue (highlighted in blue), contrastive learning encourages the model to infer more rational information that is not stated in the context (highlighted in pink). We include more examples in Appendix B.2.
### Ablation Study
We perform an ablation study on our proposed method using T5-base as the foundational model. The effectiveness of our model is compared against variants trained without \(\mathcal{L}_{\mathrm{CL_{a}}}\), without \(\mathcal{L}_{\mathrm{CL_{b}}}\), or without both, i.e., without \(\mathcal{L}_{\mathrm{CL}}=\lambda_{\mathrm{b}}\mathcal{L}_{\mathrm{CL_{b}}}+\lambda_{\mathrm{s}}\mathcal{L}_{\mathrm{CL_{a}}}\). As shown in Table 6, our proposed method, employing both contrastive losses, improves the performance. The model without \(\mathcal{L}_{\mathrm{CL_{a}}}\) surpasses ours in terms of CIDEr, yet our method achieves superior results across all other metrics. Furthermore, the impact of the two contrastive losses varies across the automatic metrics. While \(\mathcal{L}_{\mathrm{CL_{b}}}\) exhibits minimal impact on ROUGE-L, it proves more effective for CIDEr. The most significant contribution to the ROUGE-L improvement is derived from \(\mathcal{L}_{\mathrm{CL_{a}}}\).
### Comparison of Sampling Methods
In addition to the negative samples provided by the CICERO dataset, three different fully automated methods of generating negative samples are explored, as stated in Section 4.1. We train the model with the negative samples obtained from each method and present the performance of the models in Table 7 for comparison.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & BLEU-2 & METEOR & ROUGE\_L & CIDEr \\ \hline Ours & **17.09** & **16.62** & **28.50** & 40.53 \\ \(-\mathcal{L}_{\mathrm{CL_{a}}}\) & 16.97 & 16.53 & 28.34 & **40.71** \\ \(-\mathcal{L}_{\mathrm{CL_{b}}}\) & 16.95 & 16.53 & 28.49 & 40.18 \\ \(-\mathcal{L}_{\mathrm{CL}}\) & 16.38 & 16.40 & 28.32 & 38.91 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study with the base model as T5-base.
While the different generation methods yield different improvements on the automatic metrics, in general, feeding negative samples does not hurt training the models to perform dialogue inference. The "contradiction" negative samples from the dataset provide the largest improvement to model performance, which suggests that higher-quality negative samples can guide the models better even in smaller quantities. Another effective method is to replace the words that most strongly affect the predictions of the RoBERTa-large model trained to differentiate positive from negative samples, Replace\({}_{\text{MCQ}}\), while replacement guided by the RoBERTa model in a zero-shot way (Replace\({}_{\text{ZS}}\)) is less helpful. This indicates that the fine-tuned RoBERTa assigns token probabilities more informatively for inference. Our exploration of using a non-optimal T5-base model to generate negative samples was expected to improve performance through iterative self-contrasting. However, self-improvement may not be effective without further human filtering, since we might include rational answers as negative samples, which introduces noise during training.
### Effect of the Amount of Negative Samples
In our main experiments, we feed all four counterfactual candidates provided by the CICERO dataset as negative samples to compute \(\mathcal{L}_{\mathrm{CL_{a}}}\). As the effective number of negative samples for contrastive learning is still under discussion (e.g., Awasthi et al., 2022; Nozawa and Sato, 2021), we conduct a control experiment by feeding randomly sampled counterfactual candidates (\(m=1,2,3\)) to observe the effect of the number of negatives. We report the results in Table 8; note that we report the average of three trials with different random seeds for \(m=1,2,3\). The performance generally improves with the number of negative samples, implying that high-quality negative samples contribute to teaching the model to perform inference. Encouraged by these results, it would be interesting to quantify how much guidance is necessary for each level. For example, the "Sufficient" level may need fewer negative samples than the "Conceivable" level to achieve similar performance. It would also be beneficial to investigate the possibility of dynamically controlling the number of negative samples to feed.
### Analysis of Improvements based on Task Difficulty
We further investigate how contrastive learning improves model performance across task difficulties. Table 9 reports the automatic score breakdown based on the annotated difficulty. Compared to the performance of the T5-base model reported in Table 2, our method yields improvements for all levels, especially on "Sufficient" and "Conceivable". Similarly, we list the breakdown of human evaluation results for each task difficulty level in Table 5. T5-base with contrastive learning outperforms T5-base on plausibility at all difficulty levels, especially for "Sufficient" and "Conceivable", which is consistent with the trend of the automatic metrics. At the "Sufficient" level, the advantage of our model over T5-base is significant. This proves that contrastive learning can effectively improve the model's inference ability. Moreover, our method even significantly wins over gold at the "Conceivable" level in the human evaluation with \(p<0.05\).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
 & BLEU-2 & METEOR & ROUGE\_L & CIDEr \\ \hline
T5-base & 16.38 & 16.40 & 28.32 & 38.91 \\ \hline
Contradiction & **17.09** & **16.62** & **28.50** & **40.53** \\
Non-optimal & 16.40 & 16.40 & 28.19 & 39.17 \\
Replace\({}_{\text{ZS}}\) & 16.39 & 16.36 & 28.44 & 40.09 \\
Replace\({}_{\text{MCQ}}\) & 16.48 & 16.42 & 28.45 & 39.42 \\ \hline \hline
\end{tabular}
\end{table}
Table 7: Comparison of different sampling methods for generating negative samples. All the models are implemented based on T5-base.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & BLEU-2 & METEOR & ROUGE\_L & CIDEr \\ \hline T5-base & 16.38 & 16.40 & 28.32 & 38.91 \\ \hline + \(m=1\) & 16.67 & 16.43 & 28.31 & 40.43 \\ + \(m=2\) & 16.81 & 16.53 & 28.54 & 40.69 \\ + \(m=3\) & 16.82 & 16.52 & 28.45 & **40.94** \\ + \(m=4\) & **17.09** & **16.62** & **28.50** & 40.53 \\ \hline \hline \end{tabular}
\end{table}
Table 8: The effect of the amount of negative samples. We report the average of three trials for \(m=1,2,3\).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Difficulty & BLEU-2 & METEOR & ROUGE\_L & CIDEr \\ \hline Sufficient & 25.24 (**+6.46**) & 20.54 (**+3.73**) & 36.58 (**+7.21**) & 82.07 (**+36.00**) \\ Likely & 19.32 (**+2.94**) & 17.98 (**+2.09**) & 30.80 (**+4.04**) & 46.14 (**+13.87**) \\ Conceiv. & 17.09 (**+5.17**) & 15.39 (**+2.67**) & 28.62 (**+6.75**) & 38.99 (**+16.76**) \\ \hline \hline \end{tabular}
\end{table}
Table 9: The performance is improved thanks to the contrastive learning across all the difficulty levels. _Conceiv._ is short for _Conceivable_. The performance is calculated on the same subset of the CICERO test set in Table 2.
Conceivable-level gold answers tend to include something that is not stated or verifiable in the dialogue contexts provided, while ours tend to be more supported by the dialogue context (see Table 1). We believe this resulted in ours being more favored by annotators.
### Challenges in Evaluation of Inductive Reasoning
As discussed in the previous sections, it is extremely challenging to evaluate inductive processes because, by nature, outputs contain new information that is not stated in the inputs Johnson-Laird (1988, 1993). While the field has been aware of the fundamental difference between induction and deduction for more than 60 years Watanabe (1960), there is still no way to directly compare two pieces of text in terms of the amount of "semantic information" they carry. Recently, with the rising demand for faithful and factual text generation, several metrics have been applied, mainly computing the overlap in named entities or extracted keywords Mao et al. (2021). Although overlap-based metrics can be a decent starting point for many tasks such as summarization, they are not appropriate for inference in dialogue, as non-overlap is desired rather than avoided.
Another common choice to measure the plausibility today would be adopting NLI-based metrics Honovich et al. (2022). In Table 10, we report model-based NLI metrics of UNLI Chen et al. (2020) and AlignScore Zha et al. (2023). We measure entailment between generated inferences and the gold references (\(\mathrm{UNLI_{gold}/AS_{gold}}\)), or between generated inferences and the corresponding dialogue context (\(\mathrm{UNLI_{con}/AS_{con}}\)) on a scale of \([0,1]\). The training specifics of the NLI models, as well as their performance, can be found in Appendix A.1.
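A hedged sketch of this NLI-style scoring is given below. We illustrate it with an off-the-shelf MNLI model; the UNLI and AlignScore models used in the paper are trained separately (see Appendix A.1), so absolute numbers will differ.

```python
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def entailment_score(premise, hypothesis):
    """Probability mass assigned to the ENTAILMENT label."""
    scores = nli({"text": premise, "text_pair": hypothesis}, top_k=None)
    return next(s["score"] for s in scores if s["label"] == "ENTAILMENT")

# UNLI_con / AS_con analogue: premise = dialogue context,
# hypothesis = generated inference. For the *_gold variants, the
# premise is the gold reference instead.
```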
Despite being promising, the NLI scores are hardly interpretable: they show a consistent trend of degradation with contrastive learning, except for GPT2-base. Even gold answers are labeled as "neutral" and undeterminable, and it is difficult to associate the numbers with the quality of the generated inferences. Although NLI metrics are an effective method to quantify factuality Zha et al. (2023), this result suggests that they are not suitable for inference in dialogue. Future work is needed to investigate possible evaluation metrics for the information gap, since such metrics could also benefit a wide range of NLP tasks.
## 7 Conclusion
In this paper, we conduct an analysis of inference in dialogue, focusing on the availability of semantic information between inputs and outputs. As expected, the models perform worse on samples with larger information gaps. We investigate a contrastive learning approach to teach models what is wrong in an inference. Our experimental results suggest the effectiveness of our approach, showing a promising direction for bridging the information gap, especially for smaller models with <1B parameters.
### Limitations
The main drawback of the proposed method is that it requires more computational resources and longer training time, as we increase the amount of training data in order to yield improvement with contrastive learning over the baselines. Although our method is model-, dataset-, and language-agnostic, our exploration is limited to the popular Transformer-based architectures and a single dataset in English.
The other significant aspect we have not covered in this paper (and, to the best of our knowledge, in most of the literature) is the stopping rule of the inference process in dialogue. As suggested in Clark (1975), there is a clear boundary between what portion of the untold information should be guessed and what can be left unknown in a speaker's intention. However, even in dataset construction phases, this aspect has been neglected (e.g., Bhagavatula et al., 2020; Ghosal et al., 2022). The stopping rule is essential, since it can be one factor separating "Likely" questions from "Conceivable" questions. An important question for future studies is how to deal with the stopping rule, as it can also be associated with the boundary between hallucination and acceptable freedom in open-domain dialogue systems.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \(\mathrm{UNLI_{gold}}\) & \(\mathrm{UNLI_{con}}\) & \(\mathrm{AS_{gold}}\) & \(\mathrm{AS_{con}}\) \\ \hline GOLD & 1.0000 & 0.5220 & 0.9995 & 0.3670 \\ T5-small & 0.2283 & 0.6584 & 0.0577 & 0.6224 \\ + CL & 0.2271 & 0.6422 & 0.0565 & 0.5940 \\ T5-base & 0.2607 & 0.6894 & 0.0765 & 0.6355 \\ + CL & 0.2572 & 0.6586 & 0.0760 & 0.6009 \\ T5-large & 0.2947 & 0.7015 & 0.0973 & 0.6282 \\ + CL & 0.2940 & 0.6993 & 0.0931 & 0.6106 \\ GPT2-base & 0.2783 & 0.6460 & 0.0783 & 0.5723 \\ + CL & 0.3066 & 0.6610 & 0.0964 & 0.5561 \\ \hline \hline \end{tabular}
\end{table}
Table 10: NLI-based metric results on the CICERO test set. AS is short for _AlignScore_.
## Ethics Statement
In this work, we collect additional human annotations on task difficulty in the CICERO dataset. The CICERO dataset is publicly available, and we will also release our task difficulty annotations upon acceptance. We consider the target inference provided by the dataset as the gold inference and analyze the task difficulty based entirely on the relationship between the target inferences and the dialogues. We conduct the human evaluation on a public crowd-sourcing platform, Appen. Each judgment takes four seconds on average, and we pay annotators 15 cents per judgment. No personal information except country-level location is used, to ensure the English proficiency of the annotators and guarantee the annotation quality, and all annotations were anonymized before the analysis.
## Acknowledgement
The authors thank all the anonymous reviewers for their valuable comments and constructive feedback. This work has been partially supported by China NSFC Project (No. NSFC21EG14), the Hong Kong Jockey Club (RG192/HKJCCT21EG01), and the Hong Kong PhD Fellowship Scheme, Research Grant Council, Hong Kong (PF18-25016).
|
2305.17971 | Automatic Evaluation of Turn-taking Cues in Conversational Speech
Synthesis | Turn-taking is a fundamental aspect of human communication where speakers
convey their intention to either hold, or yield, their turn through prosodic
cues. Using the recently proposed Voice Activity Projection model, we propose
an automatic evaluation approach to measure these aspects for conversational
speech synthesis. We investigate the ability of three commercial, and two
open-source, Text-To-Speech (TTS) systems to generate turn-taking cues
over simulated turns. By varying the stimuli, or controlling the prosody, we
analyze the models' performances. We show that while commercial TTS largely
provide appropriate cues, they often produce ambiguous signals, and that
further improvements are possible. TTS, trained on read or spontaneous speech,
produce strong turn-hold but weak turn-yield cues. We argue that this approach,
which focuses on functional aspects of interaction, provides a useful addition to
other important speech metrics, such as intelligibility and naturalness. | Erik Ekstedt, Siyang Wang, Éva Székely, Joakim Gustafson, Gabriel Skantze | 2023-05-29T09:29:11Z | http://arxiv.org/abs/2305.17971v1 | # Automatic Evaluation of Turn-taking Cues in Conversational Speech Synthesis
###### Abstract
Turn-taking is a fundamental aspect of human communication where speakers convey their intention to either hold, or yield, their turn through prosodic cues. Using the recently proposed Voice Activity Projection model, we propose an automatic evaluation approach to measure these aspects for conversational speech synthesis. We investigate the ability of three commercial, and two open-source, Text-To-Speech (TTS) systems to generate turn-taking cues over simulated turns. By varying the stimuli, or controlling the prosody, we analyze the models' performances. We show that while commercial TTS largely provide appropriate cues, they often produce ambiguous signals, and that further improvements are possible. TTS trained on read or spontaneous speech produce strong turn-hold but weak turn-yield cues. We argue that this approach, which focuses on functional aspects of interaction, provides a useful addition to other important speech metrics, such as intelligibility and naturalness.
Erik Ekstedt, Siyang Wang, Éva Székely, Joakim Gustafson, Gabriel Skantze
KTH, Royal Institute of Technology, Stockholm, Sweden
[email protected], [email protected], [email protected], [email protected], [email protected]
**Index Terms**: text-to-speech, turn-taking, human-computer interaction
## 1 Introduction
In recent years, there has been an increased interest in developing conversational Text-to-Speech (TTS) [1, 2, 3, 4]. Whereas earlier conversational systems had to rely on TTS built for read speech, conversational TTS will allow systems to interact in a more natural and fluid way, more closely resembling a human-human conversation. Many commercial vendors are now also offering conversational variants of their voices.
Until recently, the evaluation of conversational TTS has been primarily centered around the voice's perceived naturalness and intelligibility, but this approach is being progressively challenged [5]. While these aspects are important, we think there are other, more functional, aspects that also should be considered, especially in a conversational setting. One such aspect is the way in which the voice may help to coordinate the interaction between the participants, more specifically turn-taking. It is important to model turn-taking accurately in spoken dialog systems (SDS), in order to avoid long response delays or inadvertent interruptions (from both the user and the system). The failure of appropriate timing can substantially deteriorate the quality of the interaction [6]. It is well known that humans send and receive various turn-yielding and turn-holding cues to coordinate turn-taking [7, 8, 6]. For example, a syntactically or semantically incomplete phrase may signal a turn-hold, whereas a complete phrase may be turn-yielding [9]. A filled pause is a strong cue to turn-hold [10, 11]. When syntax and semantics is ambiguous, prosody, gestures, or gaze can also be informative [8]. With regards to prosody, flat pitch, higher intensity and longer duration are associated with turn-hold, whereas a rising or falling pitch, lower intensity and shorter duration are associated with yielding the turn. These cues allow humans to take turns with very small gaps (around 200ms), while avoiding large overlaps [12].
The modeling of turn-taking in conversational systems has so far mostly focused on detecting turn-taking cues in the user's voice, in order to allow the system to take turns or give backchannels at appropriate places [6]. However, it is equally important that the system's voice also exhibits accurate cues, so that the user knows when to take the turn and when to allow the system to finish its turn. This is problematic in current TTS systems, as there is typically no control over these cues. Thus, there is a risk that the user might interrupt the system at the wrong places (unintended barge-in), or that the user in other ways gets confused over the allocation of the floor.
To evaluate turn-taking cues in TTS, one option could be to ask human raters to listen to TTS samples and ask them to press a button when they expect a turn-shift, similar to psycholinguistic experiments on understanding turn-taking cues in human speech [13, 14]. However, this is clearly a costly method and not feasible for large-scale evaluations. Thus, an automatic method would be desirable, that could complement other automatic evaluation metrics, such as MOSNet [15] and ASR [16].
In this paper, we introduce an automatic method for evaluating turn-taking cues in conversational TTS, based on Voice Activity Projection (VAP), a turn-taking model proposed by [17]. The model has been shown to outperform prior work [18], which demonstrated that computational turn-taking models trained in a self-supervised fashion perform better than humans at predicting the next speaker on recorded data. Prior work has also shown that the VAP model is sensitive to prosodic cues in synthesized speech [19], as well as filled pauses [20]. Here, we use the model to assess the likelihood that a user would take the turn at each frame in a synthesized spoken utterance. This way, we can make sure that this likelihood is as low as possible while the system is supposed to have the turn (i.e., it has more things to say before yielding), while it should be as high as possible towards the end of the turn. While we only focus on offline evaluation here, such a model could potentially also be used as a training objective when developing the TTS. The VAP model1 and listening samples2 are publicly available.
Footnote 1: [https://github.com/ErikEkstedt/VoiceActivityProjection](https://github.com/ErikEkstedt/VoiceActivityProjection)
Footnote 2: [https://erikekstedt.github.io/vap_tts](https://erikekstedt.github.io/vap_tts)
## 2 Automatic Evaluation Method
Voice Activity Projection [17] (VAP) is a training objective where the voice activity (VA) of two speakers is predicted
incrementally (left-to-right) over the course of a dialog. The VA is defined in binary terms (speech/no-speech), and the two speakers' future activities are jointly encoded into a discrete state that represents the upcoming 2s of dialog. The states are defined by discretizing the 2s window of activity into eight smaller sub-state-bins, four for each speaker, of increasing duration (0.2s, 0.4s, 0.6s, 0.8s); a bin is considered active if it contains a majority of VA. This discretization step results in \(2^{8}=256\) possible discrete states (labels) to predict during training, see Figure 1.
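A minimal sketch of this label construction (our reconstruction, not the released implementation; the frame rate is an assumption for illustration):

```python
BIN_SIZES_S = [0.2, 0.4, 0.6, 0.8]   # per speaker, summing to the 2s window

def window_to_label(va_a, va_b, frame_hz=50):
    """va_a, va_b: binary voice-activity frames covering 2s per speaker."""
    bins = []
    for va in (va_a, va_b):
        start = 0
        for size in BIN_SIZES_S:
            n = int(size * frame_hz)
            chunk = va[start:start + n]
            bins.append(int(sum(chunk) > len(chunk) / 2))  # majority vote
            start += n
    # Eight binary sub-state-bins -> one of 2**8 = 256 discrete states.
    return sum(b << i for i, b in enumerate(bins))
```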
The VAP model used in this experiment is a stereo version of the original VAP model that operates on two separate waveforms (one for each speaker) and is trained on the Fisher part 1 and the Switchboard corpus [21, 22]. The model consists of a CPC-encoder [23, 24] that extracts framewise representations from the raw audio followed by a 4-layer transformer [25] decoder with cross-attention between the two speaker channels (5.79M parameters).
During inference, we scale each sub-state-bin with its associated label probability and combine all contributions into a single aggregate state representation. We define two separate probabilities representing the next-speaker predictions inside the most immediate region, \(P_{now}\) (0-600 \(\mathrm{ms}\)), and the more distant future, \(P_{fut}\) (600-2000 \(\mathrm{ms}\)), i.e., the first and last four sub-state-bins, respectively. The probabilities are normalized across speakers to produce a final value between 0 and 1 that represents the prediction probability of speaker A being active in the corresponding region.
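Our reading of this aggregation, as a sketch (the bit layout and the uniform bin weighting are assumptions; the released code may differ):

```python
import torch

def speaker_probs(label_probs):
    """label_probs: (256,) distribution over VAP states at one frame.
    Assumed layout: speaker A in bits 0-3, speaker B in bits 4-7, bins
    ordered by time, so bins (0, 1) cover 0-600ms and (2, 3) 600-2000ms."""
    states = torch.arange(256)

    def mass(bins):  # probability-weighted activity over the given bins
        return sum(((states >> b) & 1).float() @ label_probs for b in bins)

    out = {}
    for name, a_bins in {"now": (0, 1), "fut": (2, 3)}.items():
        a = mass(a_bins)
        b = mass(tuple(x + 4 for x in a_bins))
        out[name] = float(a / (a + b))   # normalized across speakers
    return out
```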
For our automatic evaluation method, we use the VAP model to assess these predictions during pauses, when the system should hold the turn, and towards the end, when the system should yield the turn. In this way, the VAP model works as a user model. In the more specific scenario we investigate in this paper, we focus on turns containing two sentences - a statement and a question - which will naturally include a pause that can be mistaken for a turn-shift and can cause the user to "interrupt" the system if the TTS generates ambiguous turn-yielding signals. Furthermore, we are interested in whether the user would not just be able to detect that the turn has been yielded, but also _predict_ that the turn is about to be yielded (see Figure 2 for an example of this).
Given the \(P_{now/fut}\) values over the generated turns we define four metrics covering different regions. First, we focus on the pause between the sentences (red vertical lines) and consider it a **Weak Hold** if the long-term prediction \(P_{fut}\) favors the agent. It is considered weak because it allows for the user to be the most likely speaker in the short term (\(P_{now}\)), corresponding to an invitation of a quick back-and-forth, like a user backchannel or acknowledgment [26]. Second, we define a **Strong Hold** to be the subset where both the \(P_{now}\) and \(P_{fut}\) values favor the agent. Third, we focus on the last 600 \(\mathrm{ms}\) of speech (between the orange and the first green line) and define it as an **Early Yield** if \(P_{fut}\) favors the user. Lastly, the silence after the turn (green lines) is a **Late Yield** if both \(P_{now}\) and \(P_{fut}\) predict user activity. Arguably, the desired outcome for a conversational system planning to say two sentences would be to send clear turn-holding signals before the pause, while signaling that the turn is about to be yielded towards the end.
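The four metrics can then be read directly off the \(P_{now}\) and \(P_{fut}\) trajectories. The sketch below is our interpretation of these definitions: each region is summarized by its mean value, a choice the text does not prescribe, and the probabilities are oriented so that values above 0.5 favor the user.

```python
import numpy as np

def turn_taking_metrics(p_now, p_fut, pause, last_600ms, post_silence):
    """Classify one synthesized turn with the four metrics.

    p_now, p_fut: 1-D numpy arrays of per-frame probabilities that the
                  *user* (not the agent) is the next speaker.
    pause, last_600ms, post_silence: (start, end) frame-index pairs for
    the inter-sentence pause, the final 600 ms of speech, and the
    silence after the turn.
    """
    def mean(p, region):
        start, end = region
        return float(np.mean(p[start:end]))

    user_now, user_fut = mean(p_now, pause), mean(p_fut, pause)
    return {
        # Weak Hold: the long-term prediction favors the agent during the pause.
        "weak_hold": user_fut < 0.5,
        # Strong Hold: both short- and long-term predictions favor the agent.
        "strong_hold": user_fut < 0.5 and user_now < 0.5,
        # Early Yield: P_fut already favors the user in the last 600 ms of speech.
        "early_yield": mean(p_fut, last_600ms) > 0.5,
        # Late Yield: both predictions favor the user after the turn has ended.
        "late_yield": mean(p_now, post_silence) > 0.5
                      and mean(p_fut, post_silence) > 0.5,
    }
```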
## 3 Experiment
We consider a hypothetical spoken dialog system (SDS) setup where a TTS model is used to generate speech in a task-oriented setting. To simulate this setup we extract text data from the MultiWoZ corpus [27]. The corpus contains over 10,000 annotated written dialogs covering 8 different domains, in which two humans have a fictional interaction: one has the role of a "user" and the other that of a "clerk", here referred to as the agent. The users have goals and several sub-goals to complete over the course of a dialog, such as booking a hotel, a taxi, a train or a restaurant, among others.
We extract agent turns that consist of a sentence pair where a statement, **SD**, (ending with a period) is followed by a question, **Q**, (ending with a question mark). For simplicity, and to avoid unexpected behavior, we omit sentence pairs that include commas or digits, contain less than 5 words per sentence, or where the total number of characters is less than 50 or more than 250. Finally, we only keep sentence pairs where both sentences end with a word containing a single syllable, in order to simplify the prosody manipulation in section 3.2. In total, we extract 1482 sentence pairs, such as the one shown in Figure 2.
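A sketch of this filtering step is shown below; the syllable check is delegated to a placeholder callable (e.g., a lookup in a pronunciation lexicon such as CMUdict), and the exact counting conventions are our assumptions.

```python
import re

def keep_pair(statement, question, is_monosyllabic):
    """Return True if a statement+question pair passes the filters
    described above. `is_monosyllabic` is a placeholder predicate."""
    turn = f"{statement} {question}"
    if not (statement.endswith(".") and question.endswith("?")):
        return False
    if "," in turn or re.search(r"\d", turn):            # no commas or digits
        return False
    if min(len(statement.split()), len(question.split())) < 5:
        return False
    if not 50 <= len(turn) <= 250:                       # character budget
        return False
    # Both sentences must end in a one-syllable word, which simplifies
    # the later prosody manipulation of the final syllable.
    def last_word(sentence):
        return sentence.rstrip(".?").split()[-1]
    return is_monosyllabic(last_word(statement)) and \
           is_monosyllabic(last_word(question))
```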
### Text-To-Speech
We utilize three popular commercial TTS services, namely Amazon3, Google4 and Microsoft5, and use their most natural and conversational American English neural TTS engines: "Joanna
Figure 1: The VAP model receives a stereo channel audio input and predicts a discrete label, y, at frame, n, corresponding to time, t. Each label represents a state that consists of 8 binary bins, 4 for each speaker, spanning the next 2s of dialog.
Figure 2: Example (Amazon TTS): “Yes that time will work. Would you like me to book it for you?”. From the top, we show the mel-spectrogram, the \(P_{now}\), and \(P_{fut}\) values. The dashed black line signifies equal speaker probability, with blue areas (hold) above and yellow or green (yield) below. Vertical lines mark the pause (red), Early Yield (orange, green), and Late Yield (green) zones. The model labels the pause as a Weak and Strong Hold and the end as an Early and Late Yield.
neural-en-US" (Amazon), "en-US-Neural2-C" (Google) and "en-US-JennyNeural" (Microsoft). Additionally, we include the open-source FastPitch (**FP**) model6[28] (46.27M parameters), trained on the LJSpeech7 corpus, using the pre-trained checkpoints available to the speech community at large. We also trained a conversational TTS on the widely known Tacotron 2 (**TT2**) architecture [29] (28.21M parameters), with modifications from [30] that allow for control of duration and pitch at the phoneme and word level. The voice is trained on the ThinkComputers Corpus [4], a corpus created from the recordings of a podcast which is made available in the public domain8. For the FastPitch and modified Tacotron 2 systems, the speech signal is decoded using the neural vocoder HiFi-GAN [31] (13.93M parameters).
Footnote 6: [https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/FastPitch](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/FastPitch)
Footnote 7: [https://keithito.com/LJ-Speech-Dataset/](https://keithito.com/LJ-Speech-Dataset/)
Footnote 8: [https://archive.org/details/podcasts_miscellaneous/CreatorThinkComputers](https://archive.org/details/podcasts_miscellaneous/CreatorThinkComputers)
### Results
We generate speech for the extracted turns and normalize the pause (and the Late Yield) duration to be \(400\,\mathrm{ms}\), using forced alignment [32], to focus on the prosodic signals of the speech rather than varying lengths of the pauses. After the silence normalization we apply the VAP model and extract the proposed metrics over the regions of interest. Additionally, we approximate the intelligibility and perceived naturalness of the generated speech using the open-source ASR model Whisper9 (large) to extract the word error rate, WER, and the automatic MOS predictor [33], which outputs a score between 1 and 5.
Footnote 9: [https://github.com/openai/whisper](https://github.com/openai/whisper)
All systems signal a Weak Hold over the vast majority of pauses, where TT2 achieves the highest score of 97% and Google the lowest score of 79%, see Table 1. A lower Weak Hold score raises the risk that a system prematurely signals the end of its turn. For the Strong Hold metric, TT2 outperforms the other systems, producing 93% Strong Holds, while Amazon and FP produce around 30%, and Google and Microsoft achieve around 20%. All systems except TT2 have a high risk of inviting user activity inside the pause, which, if not handled correctly by the SDS, could result in a dialog breakdown. As discussed earlier, a Weak Hold might be acceptable (or even desirable, if the user is invited to give a backchannel) for an interactive conversational system. This means that the context of the SDS interaction has to be taken into consideration [5] and factored into the analysis of the importance of the Strong Hold metric in general.
Towards the end of the turns, Amazon outperforms all other systems and provides Early Yield signals for 44% of the samples. During an interaction this could allow for better conversational flow by preparing the users for their expected upcoming turn. After the end of the generated speech, all commercial systems signal a Late Yield for 90-95% of the samples. This means that while the systems may not always provide early prosodic turn-yield signals, they do convey their intention to yield the turn upon its completion. However, both TT2 and FP heavily under-perform on this metric and provide only 14% and 39% Late Yields and only 4% and 5% Early Yields, respectively. This may be explained by the fact that FP is trained on read speech, and while TT2 is trained on conversational speech, the content is largely monologic, where a single speaker talks at length about a topic with minimal input from the interlocutor.
According to the naturalness scores, Microsoft outperforms all systems, averaging a MOS of 4.8, while the least natural voice was TT2 with a score of 3.9. Microsoft also provides the most intelligible speech with a WER of 2.3%; Amazon and Google are slightly worse with 2.7% WER, while FP and TT2 are less intelligible, averaging 5.6% and 5.2%.
**Text Manipulation**: Because TTS models are commonly trained on text that includes punctuation, we provide two different input permutations of each agent turn that can induce different prosodic realizations at the end of the SD sentence. Whereas a period denotes the end of a sentence, commas are used to separate parts within it, and can potentially condition the TTS systems to generate stronger hold cues. Filled pauses (or _fillers_ for short), such as "um", are also well known to be a strong cue to turn hold [10, 11]. To study if such manipulations would indeed have a turn-holding effect, we experiment by replacing the ending period with either a comma or with the filler "um," (also ending with a comma).
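The manipulation itself amounts to a simple string substitution on the statement; a minimal sketch:

```python
def prompt_permutations(statement, question):
    """Build the three input permutations: the original period-terminated
    statement, the comma variant, and the filler ("um,") variant."""
    base = statement.rstrip(". ")
    return {
        "period": f"{base}. {question}",
        "comma": f"{base}, {question}",
        "filler": f"{base} um, {question}",
    }
```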
The comma prompt produced an increase in Weak Holds for all models (except TT2) and raised the Weak Hold frequency to over 90% for each system except Google (83%), as seen in Table 2. The FP model showed the largest relative improvement, classifying 56% of the pauses as a Strong Hold. Microsoft had the second largest gain, improving its Strong Hold classification from 20% to 33%, outperforming Google (29%), but falling short of Amazon with 48%. The large improvement for FP could be due to it being trained on read speech, where the source text is grammatical and contains a lot of punctuation, making the prompt conditioning prominent. However, TT2 was largely unaffected by the change of prompt, which, similarly, could be because it is trained on spontaneous speech that naturally contains less punctuation, making it less sensitive to punctuation in general. The results indicate that while changing periods to commas provides stronger hold signals, the strength varies depending on the system. Furthermore, given the simplicity and the straightforward meaning of punctuation control, it could be beneficial for spontaneous TTS to include such examples in the training data.
Interestingly, while the manipulation only changes the punctuation prior to the pause, the later yield signals are affected as well. This effect may be caused by differences in the generated speech at the end of the questions or in how the entire turn is perceived. Because the VAP model is trained on \(20\,\mathrm{s}\) segments of continuous dialog, it can learn to utilize context over multiple sentences to infer the yield probabilities. However, the effect is small across all systems and does not change the general outcome in a significant way. Lastly, we note that both the intelligibility and naturalness metrics are roughly unaffected by the manipulation.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline  **Metric** & **AMZN** & **GGL** & **MSFT** & **FP** & **TT2** \\ \hline Weak Hold\(\uparrow\) & 87 & 79 & 86 & 84 & **97** \\ Strong Hold\(\uparrow\) & 31 & 21 & 20 & 30 & **93** \\ Early Yield\(\uparrow\) & **44** & 28 & 32 & 5 & 4 \\ Late Yield\(\uparrow\) & **95** & **95** & 90 & 39 & 14 \\ \hline MOS\(\uparrow\) & 4.2 & 4.3 & **4.8** & 4.4 & 3.9 \\ WER \(\downarrow\) & 2.7 & 2.7 & **2.3** & 5.6 & 5.2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The aggregate metrics for the original prompts. All values are percentages (%) except for the MOS.
While changing the punctuation does not introduce any additional words or letters to the prompt, inserting a filler does. Whereas a filler can be considered to function as a word [10], it is not guaranteed that all TTS systems include them in their training data, and the realization of their characteristic hesitation-like properties [34] can vary between systems. Table 3 shows that both Microsoft and TT2 achieve 100% Strong Hold when a filler is included. While TT2 shows strong hold cues in both the original and comma permutations, the filler drastically strengthens the hold for the Microsoft voice. From listening to the generated filler samples we note that both Microsoft and TT2 have learned to generate fillers in a prosodically relevant way. This is reasonable considering the spontaneous nature of the TT2 training data and distinguishes Microsoft from the other commercial services. These results show that if a TTS has the ability to produce fillers, it can be a very efficient way to avoid user barge-ins.
**Prosody Manipulation**: As an alternative to text manipulation, we experiment with direct manipulation of the prosody prior to the pause, to achieve prosodic signals that are associated with turn-holds, according to the literature [6, 35]. We exclusively target the last syllable of the SD sentence, corresponding to the last word given the text extraction step, and raise the intensity, lengthen the duration and flatten the intonation. We consider two approaches where we (1) apply post-processing on the original generated speech, using Praat [36] and torchaudio, and (2) use the innate ability of FP and TT2 to control these properties explicitly in the generation step.
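As a rough illustration of the post-processing route, the sketch below raises the intensity and lengthens the duration of a target segment using torchaudio's SoX effects; the effect values are illustrative rather than the settings used in the experiments, and the intonation flattening (done here with Praat) is omitted, as it is not expressible as a simple SoX chain.

```python
import torch
import torchaudio

def emphasize_segment(wav, sr, start_s, end_s, gain_db=3.0, tempo=0.85):
    """Raise intensity and stretch duration of wav[:, start_s:end_s].

    wav: (channels, samples) tensor; start_s/end_s: segment bounds in
    seconds (here, the last syllable before the pause).
    """
    s, e = int(start_s * sr), int(end_s * sr)
    head, seg, tail = wav[:, :s], wav[:, s:e], wav[:, e:]
    # SoX "tempo" < 1 lengthens the segment without shifting its pitch;
    # "gain" raises the intensity by gain_db decibels.
    effects = [["tempo", str(tempo)], ["gain", str(gain_db)]]
    seg, _ = torchaudio.sox_effects.apply_effects_tensor(seg, sr, effects)
    return torch.cat([head, seg, tail], dim=1)
```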
The post-processing produces a substantial increase in Strong Holds for both Microsoft (81%) and Amazon (70%), while Google (53%) is less affected, see Table 4. This indicates that Google provides more ambiguous signals earlier on in the statement and requires greater intervention to produce consistent turn-holding cues over the course of the turn. While TT2 consistently produces strong hold cues, FP shows a relative improvement similar to Microsoft and increases its Strong Hold performance to 85%. Overall, the results highlight the possibility for TTS systems to improve their generative capabilities w.r.t. the functional turn-taking aspects of conversational speech.
Post-processing inadvertently introduces artifacts that affect the perceived naturalness, reflected by the consistent decrease of the MOS across all TTS systems. The alternative approach, controlling the prosody directly in the generation process, alleviates this negative impact, where FP produces a MOS of 4.3 instead of 4.0 and TT2 3.7 instead of 3.6, as compared to the post-processing. However, the hold signals are less prominent and the Strong Hold score for TT2 (93%) is unaffected (as compared to the original), while FP produces the same score as for the simpler comma prompt (54%).
## 4 Conclusion
We introduced a new automatic evaluation method that can measure a TTS model's ability to produce turn-taking cues. We show that although commercial TTS systems often do provide appropriate turn-taking cues, they still produce ambiguous, or opposite, cues for up to 21% of pauses (Google) and up to 10% of yields (Microsoft). Furthermore, we show that TTS models trained on read or spontaneous speech are generally good at producing turn-holding cues but show low performance w.r.t. turn-yielding cues. Replacing a period with a comma is a simple approach to condition TTS models to provide stronger turn-holding cues. If a TTS model has the ability to correctly generate fillers, they can be added to a prompt to strongly convey the agent's intention to continue its turn. By directly manipulating the prosody, we show the possibility of improvement for conversational TTS to produce turn-taking cues independent of the provided input prompt.
While controllable TTS has mainly focused on prosodic control [28], or style control (such as emotion or speaker) [37], we argue that controlling for turn-taking cues could provide added benefits for conversational speech. A VAP model could even provide turn-taking signals to be optimized during training, which could enable the generation of appropriate turn-taking cues without relying on additional datasets.
This work breaks down barriers by providing a free, open-source model that can be used to evaluate the generative turn-taking capabilities of conversational TTS systems. It enables researchers with limited resources to conduct fast model iteration, without relying on expensive human evaluations, and is a step towards making speech research more equal in general.
## 5 Acknowledgements
This work was supported by the Riksbankens Jubileumsfond (RJ) project (P20-0484), the Swedish Research Council projects (2020-03812) and (VR-2019-05003), and finally the Digital Futures project, _AAIS_.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Metric** & **AMZN** & **GGL** & **MSFT** & **FP** & **TT2** \\ \hline Weak Hold\(\uparrow\) & **97** & 92 & 96 & 97 & **99** \\ Strong Hold\(\uparrow\) & 70 & 53 & 81 & 85 & **98** \\ Early Yield\(\uparrow\) & **46** & 26 & 29 & 6 & 4 \\ Late Yield\(\uparrow\) & 95 & **97** & 95 & 42 & 18 \\ \hline MOS\(\uparrow\) & 3.8 & 3.9 & **4.6** & 4.0 & 3.6 \\ WER \(\downarrow\) & 2.7 & 2.5 & **2.4** & 3.9 & 5.2 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The aggregate post-processing metrics. All values are percentages (%) except for the MOS.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Metric** & **AMZN** & **GGL** & **MSFT** & **FP** & **TT2** \\ \hline Weak Hold\(\uparrow\) & 92 & 83 & 90 & 90 & **97** \\ Strong Hold\(\uparrow\) & 48 & 29 & 33 & 54 & **92** \\ Early Yield\(\uparrow\) & **39** & 28 & 28 & 6 & 5 \\ Late Yield\(\uparrow\) & 92 & **94** & 89 & 43 & 18 \\ \hline MOS\(\uparrow\) & 4.2 & 4.3 & **4.8** & 4.4 & 3.8 \\ WER \(\downarrow\) & 2.8 & 2.8 & **2.2** & 5.4 & 6.1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The aggregate metrics for the COMMA permutation. All values are percentages (%) except for the MOS.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Metric** & **AMZN** & **GGL** & **MSFT** & **FP** & **TT2** \\ \hline Weak Hold\(\uparrow\) & 93 & 77 & **100** & 92 & **100** \\ Strong Hold\(\uparrow\) & 57 & 26 & **100** & 69 & **100** \\ Early Yield\(\uparrow\) & **38** & 27 & 23 & 6 & 5 \\ Late Yield\(\uparrow\) & 92 & **95** & 88 & 38 & 21 \\ \hline MOS\(\uparrow\) & 4.2 & 4.2 & **4.7** & 4.3 & 3.8 \\ WER \(\downarrow\) & 6.5 & 4.9 & **3.1** & 7.5 & 7.2 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The aggregate metrics for the FILLER permutation. All values are percentages (%) except for the MOS. |
2301.05882 | Design and Development of Wall Climbing Robot | Climbing Robots are being developed for applications ranging from cleaning to
the inspection of difficult to reach constructions. Climbing robots should be
capable of carrying a light payload and climbing on vertical surfaces with
ability to cope with obstacles. Regarding adhesion to the surface, they should
be able to operate on different surfaces with different adhesion methods to
produce strong gripping force using light weight mechanism consuming minimum
power. Bearing these facts in mind this paper presents a 4-legged Wall Climbing
Robot in which suction power using on board suction pumps is used as an
adhesion technique. A Walking gait was developed to provide the robot with a
capability for climbing up the wall. The robot's kinematics and motion can be
considered as mimicking a technique commonly used in rock-climbing using four
limbs to climb. It uses four legs, each with four-degrees-of-freedom (4-DOF)
and specially designed suction cups attached to the end of each leg that enable
it to manoeuvre itself up the wall and to move in any direction. The end
effector can also be replaced with other end effectors designed for different
adhesion methods to climb on variety of surfaces. | Hafiz Muhammad Bilal | 2023-01-14T10:33:24Z | http://arxiv.org/abs/2301.05882v1 | # Design and Development of Wall Climbing Robot
###### Abstract
Climbing Robots are being developed for applications ranging from cleaning to the inspection of difficult to reach constructions. Climbing robots should be capable of carrying a light payload and climbing on vertical surfaces with ability to cope with obstacles. Regarding adhesion to the surface, they should be able to operate on different surfaces with different adhesion methods to produce strong gripping force using light weight mechanism consuming minimum power. Bearing these facts in mind this paper presents a 4-legged Wall Climbing Robot in which suction power using on board suction pumps is used as an adhesion technique. A Walking gait was developed to provide the robot with a capability for climbing up the wall. The robot's kinematics and motion can be considered as mimicking a technique commonly used in rock-climbing using four limbs to climb. It uses four legs, each with four-degrees-of-freedom (4-DOF) and specially designed suction cups attached to the end of each leg that enable it to manoeuvre itself up the wall and to move in any direction. The end effector can also be replaced with other end effectors designed for different adhesion methods to climb on variety of surfaces.
## I Introduction
Climbing robots are very useful devices that can be used in different applications like maintenance and building inspection in the construction industry. These systems are usually used in areas where direct access by a human operator is very expensive or very dangerous, due to the presence of an unfriendly environment. Robots that can climb autonomously along a vertical surface provide considerable military and civilian advantages. When positioned on a high building, the robot can serve as an observation platform to provide valuable military intelligence, and it can assist in search and rescue operations as well. Such a robot can also be used for unmanned clearance of hostile places and can serve as a platform for carrying firearms and explosives. In terms of civilian use, the robot can be used in construction to signal back the status of various operations being implemented at dangerously high levels.
Considerable research has been devoted to these climbing robots and various experimental models have been proposed. The two major issues in the design of wall climbing robots are their locomotion and the adhesion methods. With respect to the locomotion type, the following types are often considered: the crawler [1], the tracked or wheeled [2, 3], and the legged [4] types, as well as omnidirectional wheeled climbing robots [5]. According to the adhesion method, these robots are generally classified into four groups [6]: vacuum or suction cups [7], electrostatic or magnetic [8], dry adhesion [9] and claws for gripping rough surfaces [10]. Recently, new methods for assuring adhesion, based on biological findings [11, 12], have also been proposed.
There are some limitations attached to almost every adhesion technique and locomotion type. Although the crawler and wheeled types are fast, they are not suitable for rough environments and have a limited ability to cross cracks and obstacles. Legged climbing robots have the advantage of easily coping with obstacles or cracks found in the environment where they are moving. The adoption of a larger number of limbs can supply redundant support and raises the payload capacity and safety. The disadvantages are a relatively low speed and the need for complex control.
As far as adhesion techniques are concerned, the effectiveness of dry adhesion decreases with continuous operation and the pads need to be replaced. Magnetic adhesion is limited to metallic surfaces, whereas claw-like adhesion techniques are effective only on very rough surfaces. Passive suction cups can produce adhesion for climbing robots, but they are applicable only on very smooth surfaces, and their suction power is compromised on irregular surfaces.
Most climbing robots can be used with only a single type of adhesion, and little work has been done to develop a robot to which different adhesion methods can be applied, enabling it to climb on a variety of surfaces.
The proposed design of the climbing robot in this paper is a 4-legged robot with the ability to climb and cling to vertical surfaces. Although active suction through on-board suction pumps is used as the adhesion technique in the robot presented in this paper, it was posited as a design goal that the robot structure should be usable with different adhesion techniques. Moreover, the proposed robot is small, compact and easy to carry, and it is able to operate on high walls without relying on the ground for movement or adhesion. To conduct its missions, the robot can remain statically attached to the wall.
The paper is organized as follows: Section II presents different aspects of the robot design, Section III presents testing results, and finally, conclusions and possible future work are discussed.
## II Robot Design and Analysis
In this section the mechanical design, pneumatic design, inverse kinematics, and walking and climbing gait design of the robot are discussed.
## 1. Mechanical Design
The robot consists of four legs which are arranged symmetrically around the robot's central body. Each leg has five degrees of freedom (DOF). Four of the DOFs are motorized and the fifth, which is in the gripping device, i.e. the suction cup assembly mounted on the tip of the leg, is a passive DOF. The first two DOFs, whose axes are perpendicular to the wall, enable the robot to move forward. The two remaining motorized DOFs, whose axes are parallel to the wall's plane, are designed for determining the distance of the robot from the wall and the angular constraint for the end effector. Using four joints in a leg gives the benefit of reduced complexity in controlling the leg movement.
The robot's mechanical design is such that other end effector designs using different adhesion methods can be used. For example, a claw end effector (figure [6]) can also be attached at the tip of each leg to climb on extremely rough surfaces like a tree or a concrete dam wall.
The brackets shown in figures [1-3] are specifically designed for the servo motors used to move the joints. Four actuators per leg were assembled, with an end effector gripping device at the tip of each leg, which is a suction cup (shown in figures [4-5]) specially designed for the robot.
Figure 4: Bracket for motors M1 and M4 in figure [7].
Figure 5: Bracket for motor M3 in figure [7].
The design of the leg provides the robot with good gait capability. Furthermore, the robot can change its distance from the wall by extending its legs, to lower or raise itself in relation to the wall's surface according to the surface conditions. After the attachment of the suction cup, and once the distance from the wall has been determined by motors 3 and 4 of every leg, the first two motors in each leg drive the robot's movement. Consequently, this leg design has the advantage of decoupling motion in the plane (parallel to the wall) and normal to the plane. The complete robot model is shown in figure 8.
## 2. Inverse Kinematics
Inverse kinematics is used to compute the joint angles for a required end effector position in xyz space. As one leg has four motorized joints, four angles are required to reach a desired position in xyz space. As mentioned earlier, the robot's leg is designed such that it has the advantage of decoupling the motion in the planes parallel and normal to the wall. The joints denoted by M1 and M2 [figure 7], which are closer to the central base, are responsible for the location of the contact point in the xy-plane, whereas the other two joints, denoted by M3 and M4 [figure 7], are responsible for the distance from the wall and the approach angle of the cup in the zy-plane. The distance of the central body from the wall is constrained to a defined value Z. With these constraints and assumptions, we need to compute four joint angles to achieve a desired position of the suction cup at the end of a leg.
The two joint angles responsible for the contact point of the suction cup in the xy-plane can be considered as a two-link manipulator in the xy-plane, shown in [figure 9].
From the inverse kinematics, the joint angles for the in-plane motion are given by
\[\theta_{1}=\tan^{-1}\left(\frac{y}{x}\right)-\tan^{-1}\left(\frac{a_{2}\sin\theta_{2}}{a_{1}+a_{2}\cos\theta_{2}}\right)\rightarrow(1)\]
From the law of cosines:
\[\left(\sqrt{x^{2}+y^{2}}\right)^{2}=a_{1}^{2}+a_{2}^{2}-2a_{1}a_{2}\cos\varphi_{2}\]
\[-\cos\varphi_{2}=\frac{x^{2}+y^{2}-a_{1}^{2}-a_{2}^{2}}{2a_{1}a_{2}}=D\]
Since \(\theta_{2}=180^{\circ}-\varphi_{2}\), it follows that \(\cos\theta_{2}=-\cos\varphi_{2}=D\), and therefore
\[\theta_{2}=\cos^{-1}(D)\rightarrow(2)\]
The two joints M3 & M4, responsible for the distance of the robot from the wall, can also be considered as a two-link manipulator in the zy-plane, shown in [figure 10]. When the leg is attached to the wall, the distance Z remains constant.
\[Z=a_{3}\sin\theta_{3}+a_{4}\sin(k)\]
\[\theta_{3}=\sin^{-1}\left(\frac{Z-a_{4}\sin(k)}{a_{3}}\right)\rightarrow(3)\]
\[k=\theta_{3}+\theta_{4}\]
\[\theta_{4}=k-\theta_{3}\rightarrow(4)\]
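Putting equations (1)-(4) together, the offline joint-angle computation might look as follows; this is our illustrative sketch (angles in radians, elbow-down solution), not the controller code itself.

```python
from math import acos, asin, atan2, cos, sin

def leg_ik(x, y, Z, k, a1, a2, a3, a4):
    """Solve the decoupled inverse kinematics of one leg.

    (x, y): desired suction-cup position in the wall plane (eqs. 1-2);
    Z: fixed body-to-wall distance and k: cup approach angle (eqs. 3-4);
    a1..a4: link lengths.
    """
    # In-plane joints M1 and M2 (equations 1 and 2).
    D = (x**2 + y**2 - a1**2 - a2**2) / (2 * a1 * a2)
    theta2 = acos(max(-1.0, min(1.0, D)))                       # eq. (2)
    theta1 = atan2(y, x) - atan2(a2 * sin(theta2),
                                 a1 + a2 * cos(theta2))         # eq. (1)
    # Out-of-plane joints M3 and M4 (equations 3 and 4).
    theta3 = asin((Z - a4 * sin(k)) / a3)                       # eq. (3)
    theta4 = k - theta3                                         # eq. (4)
    return theta1, theta2, theta3, theta4
```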
## 3. Pneumatic Design
Two lightweight suction pumps are used to produce suction power in the suction cups at the end of each leg. To produce the suction power required to adhere to the wall, the on-board suction pumps use a diaphragm mechanism driven by a 12 volt DC motor.
The suction power in the suction cup at each leg is controlled with solenoid valves operated by a relay board, which is actuated by a microcontroller. Figure [11] shows the pneumatic circuit diagram designed for the robot.
## 4. Gait Design
Many walking gaits have been designed for legged robots, such as the diagonal gait, crawl gait and creep gait. The gait designed for this robot mimics a human mountain or wall climber. A human climbing a wall with four limbs climbs in such a way that, at any time, three limbs remain attached to the wall and push the body in the upward direction, while the fourth limb moves through the air to the next possible hold.
Similarly, in this robot three legs remain firmly attached to the wall with the help of the suction power in the suction cups and move the robot body in the upward direction, while in the meantime the fourth leg, not attached to the wall, moves forward to reach a new, higher position.
The gait designed for the robot can be explained step by step. When the robot is powered up, leg 1 is at position p\({}_{1}\), leg 2 is at position p\({}_{2}\), leg 3 is at position p\({}_{3}\) and leg 4 is at position p\({}_{4}\), and all legs are attached to the wall with the help of the suction cups, as shown in figure [12-b]. All four legs move in the following four steps repeatedly in order to move the robot upward on the wall, as sketched below.
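A minimal sketch of one such gait cycle is given below; the controller interface (valve and joint commands) is a placeholder, since the actual gait runs on the microcontroller described in Section III.

```python
import time

LEG_SEQUENCE = [1, 2, 3, 4]   # order in which the legs are swung forward
STEP_LENGTH = 0.03            # forward reach per swing in metres (illustrative)

def gait_cycle(robot):
    """One cycle of the climbing gait: three legs stay attached and carry
    the body while the fourth swings to a new hold, mimicking a
    four-limbed human climber. `robot` is a placeholder controller
    exposing suction-valve and joint commands."""
    for leg in LEG_SEQUENCE:
        robot.release_suction(leg)                 # vent this cup's solenoid valve
        robot.move_leg_forward(leg, STEP_LENGTH)   # swing to the next hold
        robot.apply_suction(leg)                   # re-engage the pump line
        time.sleep(0.2)                            # let the vacuum build before loading
        # With all four cups attached again, the in-plane joints (M1, M2)
        # of every leg push the body a fraction of a step up the wall.
        robot.shift_body(STEP_LENGTH / len(LEG_SEQUENCE))
```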
Figure 11: Pneumatic circuit diagram of the robot for adhesion
Figure 12: Gait design of the robot.
## III Testing
The four joint angles required for leg movement were computed offline using equations 1-4 for each leg for positions p1, p2, p3 and p4. The designed gait was implemented on an Arduino microcontroller. A testing platform was developed to test the robot's performance. Different inclination angles can be given to the testing platform to test the robot's performance at different climb angles. The speed of the robot and the power consumption at different climb angles are shown in figures [13] and [14]. Due to the on-board power supply and suction pumps, the robot is relatively heavy; therefore, the speed of the robot decreases as the climb angle increases due to slippage of the suction cups. Similarly, the power consumption increases as the climbing angle increases due to the increased load on the suction pumps and servo motors to lift the robot. Figures [15] and [16] show the actual robot in action on the testing platform.
## IV Conclusion
The design and fabrication of the wall climbing robot have been successfully achieved, as shown in figures [15-16]. The components used are easily available and the programming required is simple. The designed robot has a stable structure and has shown a capability of climbing on vertical surfaces with a stable climbing gait. The robot has the ability to be used with other adhesion techniques as well by simply replacing the end effectors. However, the robot structure is made of aluminium, and the on-board battery pack and suction pumps make it quite heavy. Lightweight batteries and a lightweight fibre structure should be used to reduce the weight for better climbing performance. End effectors using different adhesion methods should be designed, manufactured and tested. Different path planning algorithms can be implemented and tested. Finally, this robot should be further developed by incorporating environment awareness using different sensors, so that autonomous climbing capability can be achieved to perform tasks at difficult-to-reach places where human access is difficult.
## Acknowledgements
This work was done as a final year project of a bachelor's degree at the University of Engineering and Technology (UET), Lahore. Thanks to the Robotics Lab staff for their support and to the Mechatronics Department, UET Lahore, for funding.
|
2308.00229 | Prompts Matter: Insights and Strategies for Prompt Engineering in
Automated Software Traceability | Large Language Models (LLMs) have the potential to revolutionize automated
traceability by overcoming the challenges faced by previous methods and
introducing new possibilities. However, the optimal utilization of LLMs for
automated traceability remains unclear. This paper explores the process of
prompt engineering to extract link predictions from an LLM. We provide detailed
insights into our approach for constructing effective prompts, offering our
lessons learned. Additionally, we propose multiple strategies for leveraging
LLMs to generate traceability links, improving upon previous zero-shot methods
on the ranking of candidate links after prompt refinement. The primary
objective of this paper is to inspire and assist future researchers and
engineers by highlighting the process of constructing traceability prompts to
effectively harness LLMs for advancing automatic traceability. | Alberto D. Rodriguez, Katherine R. Dearstyne, Jane Cleland-Huang | 2023-08-01T01:56:22Z | http://arxiv.org/abs/2308.00229v1 | # Prompts Matter: Insights and Strategies for Prompt Engineering in Automated Software Traceability
###### Abstract
Large Language Models (LLMs) have the potential to revolutionize automated traceability by overcoming the challenges faced by previous methods and introducing new possibilities. However, the optimal utilization of LLMs for automated traceability remains unclear. This paper explores the process of prompt engineering to extract link predictions from an LLM. We provide detailed insights into our approach for constructing effective prompts, offering our lessons learned. Additionally, we propose multiple strategies for leveraging LLMs to generate traceability links, improving upon previous zero-shot methods on the ranking of candidate links after prompt refinement. The primary objective of this paper is to inspire and assist future researchers and engineers by highlighting the process of constructing traceability prompts to effectively harness LLMs for advancing automatic traceability.
automated software traceability, large language models, prompt engineering
## I Introduction
The challenges of automating traceability have been well documented over the past two decades [1, 15, 18, 25]; however, achieving satisfactory degrees of accuracy across diverse datasets has been an ongoing challenge [9, 13] that has inhibited its adoption in industry. The release of Google's BERT model [11] in 2018 introduced new possibilities for the field, transforming the once far-off dream of automatic traceability into a reality for projects in common domains [16, 17]. However, despite these improvements, challenges such as highly technical domain-specific terminology, low data availability for training, and lack of interpretability meant that automated tracing continued to under-perform in many projects and domains, where trace links were still delivered at low degrees of accuracy [8, 19]. In the present day, large language models (LLMs), such as GPT-3 and Claude [2, 4], offer the promise of further transformation in automated traceability, eliminating many of these problems and introducing new possibilities for the field. However, as of yet, there is no clear direction on how best to utilize LLMs for automated traceability.
When we began the work for this paper, our initial aspiration was to discover the "silver bullet" prompt for automated traceability. Similar to previous approaches [1, 16, 17], the "silver bullet" would discern true candidate links from false ones across all projects and circumstances. While we identified a prompting approach that performed well across multiple projects, we concluded that the optimal prompting strategy depends on factors like available resources, the model being used, and the targeted usage scenario. Different LLMs exhibit distinct strengths and weaknesses and may require different prompts to achieve desired outcomes on the same data sets; compounding this, variance across versions of the same base model can alter performance on the same task [5]. Moreover, top-performing models can be cost-prohibitive to many engineers and researchers. Despite LLMs' capabilities, high variability persists across projects, prompts, and parameters.
Therefore, by bringing attention to some of the obstacles we encountered while crafting our prompts, we hope to make researchers and practitioners aware of potential pitfalls when employing the models for traceability-related tasks. Rather than merely showcase top results, we have chosen to elaborate on the process we followed to construct our prompts, with the goal of inspiring other engineers who may wish to identify a prompt that best suits their needs.
In this paper, we seek to shed light on the following questions:
1. Do LLMs possess knowledge necessary for tracing projects with technical domain-specific vocabulary?
2. Can LLMs provide reasonable explanations for their decisions?
3. If so, can these explanations be utilized to improve prompts?
4. Can reasoning be used to improve responses?
5. How can LLMs be leveraged to generate software traceability links?
While much future work is needed in this area, we hope to aid future researchers and engineers by highlighting the process of constructing traceability prompts for leveraging LLMs effectively to advance automatic traceability.
## II Related Work
Effective automated software traceability has many benefits for software engineering, and several approaches have therefore been proposed to address its challenges. In recent years, the emergence of LLMs, such as GPT-3 and Claude, has shown promise for automating software traceability and mitigating the limitations of previous methods. In this section, we discuss
the relevant works that have explored the use of large language models and the subjectivity of trace establishment in the context of software traceability.
Early work in automated traceability relied on classical natural language processing (NLP) techniques such as the vector space model (VSM) and latent semantic indexing (LSI) to establish traceability links between software artifacts based on their textual similarity [1, 3]. In the 2010s, deep learning techniques such as long short-term memory networks (LSTMs) and gated recurrent units (GRUs) were applied to improve traceability performance. Researchers used these neural networks to learn distributed representations of software artifacts and match them based on semantic similarity [13]. Around 2018, pretrained language models and transformers revolutionized the field. Models like Google's BERT allowed researchers to generate contextualized embeddings of software artifacts and achieve state-of-the-art results in automated traceability tasks [16, 17]. Transformer language models then grew exponentially larger and more powerful, culminating in GPT-3 and models with hundreds of billions of parameters. GPT-3 demonstrated human-level language understanding with 175 billion parameters, achieving startling fluency and few-shot learning capabilities [4, 7, 24]. GPT-4 continues to push the limits of LLMs, scoring in the top 10% on the bar exam [22].
In the domain of software engineering, efforts have been made to leverage large language models for various software engineering tasks including code generation, summarization, and enhancement [6, 27]. Although prompt-engineering is a relatively new area of exploration, some prior work has been done on how best to instruct models for various tasks. Researchers have identified different prompt patterns and techniques that tend to produce the best results - many of which are employed in this paper [12, 29]. Additionally, prompt engineers have crafted prompts for a variety of tasks, including classification [14, 20] and ranking [23], both of which we utilize in this paper.
However, there has not been extensive evaluation of the potential of large language models for automated software traceability. To address this gap, we conducted a preliminary investigation using Claude, an LLM developed by Anthropic, to predict trace links between software artifacts. We outline our two approaches for trace link prediction: classification and ranking. The evaluation of our approaches is discussed in the following section.
## III Experimental Setup
For the preliminary investigation reported in this paper, we analyzed three software engineering datasets: CM1, iTrust, and Dronology. We selected these datasets to span natural language and programming language artifacts as well as diverse application domains (embedded systems, healthcare, UAVs).
For each dataset, we selected only a subset of its data to use in our study in order to increase the depth of our analysis, reduce run-time, and decrease cost. To select the links, we first calculated the number of child artifacts traced to each parent and then identified the minimum, maximum and median number of links. Using these categories, we identified five parent artifacts: one with the fewest child links, three with the median number of child links, and one with the maximum number of child links. In cases where multiple parent artifacts tied for the minimum, median, or maximum, we randomly sampled from those tied parents. This allowed us to create a set of trace queries that were representative of the project's link distribution. Table I describes the selected queries for each system, noting the parent and child types, the number of potential trace links (candidates), and the number of those links that were actually true.
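A sketch of this sampling procedure is shown below; the tie-breaking and median conventions are our assumptions.

```python
import random
from collections import Counter

def select_trace_queries(true_links, seed=0):
    """Pick five representative parents: one with the fewest child links,
    three with the median count, and one with the most.

    true_links: list of (parent_id, child_id) pairs from the ground truth.
    """
    rng = random.Random(seed)
    counts = Counter(parent for parent, _ in true_links)
    ordered = sorted(counts.values())
    lo, med, hi = ordered[0], ordered[len(ordered) // 2], ordered[-1]

    def sample(target, n):
        # Randomly sample among parents tied at the target link count.
        tied = [p for p, c in counts.items() if c == target]
        return rng.sample(tied, min(n, len(tied)))

    return sample(lo, 1) + sample(med, 3) + sample(hi, 1)
```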
Prior to the start of our experiments, we tested OpenAI's _text-davinci-003_ model for predicting trace links, and found that, while it required slightly different prompts, it had comparable capabilities to Anthropic's Claude Instant model (_claude-instant-v1_). Due to its lower cost and increased speed, we selected Claude for the remainder of our experiments. We also explored utilizing embeddings to compute similarity scores between artifacts, similar to the original Vector Space Model (VSM) approaches [1]. We examined the ada embedding model developed by OpenAI (_text-embedding-ada-002_); however, the results obtained from this investigation did not show a significant advantage over VSM. Therefore, we decided to leverage the generative capabilities of the models
for trace link predictions within this paper. Nevertheless, we acknowledge the need for future endeavors to conduct a more comprehensive analysis of the advantages and disadvantages associated with utilizing embeddings for generating trace links.
Additionally, we obtained summaries of all code artifacts to use in our experiments. We accomplished this by prompting the model to provide several sentences focusing on the high-level functionality of the code. Although this removed some information, the resulting summaries contained most of the relevant details and reduced the number of tokens required for each tracing prompt.
For our first approach, we prompted the model to classify every source and target artifact pair. Each prompt followed a similar format, consisting primarily of a question and instructions to answer 'yes' or 'no', followed by the content of the source artifact numbered as '1' and the target artifact numbered as '2'. When a prompt directly referenced the source or target in the question, it used (1) to indicate the source or (2) to indicate the target, corresponding to the numbers of the artifact content (e.g., "Is (1) related to (2)?"). Each question was posed such that an answer of 'yes' was indicative of a link between the artifacts, while 'no' indicated that the artifacts were not linked. The resulting candidate links are then evaluated against the ground truth links using common classification metrics such as precision and recall.
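A sketch of how such a prompt can be assembled and scored is shown below; the instruction wording evolved over the study (Section IV), so the strings here are placeholders rather than the exact prompts used.

```python
def classification_prompt(question, source_text, target_text):
    """Assemble one yes/no classification prompt in the format above:
    the question with yes/no instructions, followed by the numbered
    artifact bodies."""
    return (
        f"{question} Answer only with 'yes' or 'no'.\n\n"
        f"(1) {source_text}\n\n"
        f"(2) {target_text}"
    )

def predict_link(complete, question, source_text, target_text):
    """Treat a 'yes' completion as a predicted trace link; `complete` is
    a placeholder callable wrapping the LLM API."""
    answer = complete(classification_prompt(question, source_text, target_text))
    return answer.strip().lower().startswith("yes")

# e.g. predict_link(complete, "Is (1) related to (2)?", req_text, code_summary)
```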
Precision is the ratio of the number of correctly identified relevant trace links to the total number of trace links identified by the system. Recall, on the other hand, measures the ratio of the correctly identified relevant trace links to the total number of relevant trace links in the system. This is shown below where TP is the true positives, FP is false positives, and FN is false negatives.
\[\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}}\qquad\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}}\]
For our ranking approach, we prompted the model to rank all targets for each source artifact. In this case, the model was given the content of the source artifact and the ID and body of each target, separated by newlines. The model was instructed to return the artifact IDs in order of relevance (from most to least) in a comma-delimited list. Given the non-deterministic nature of the responses from each model, there were times when the model neglected to include some artifact IDs. This problem was unique to the ranking task, as the model correctly output 'yes' or 'no' each time for the classification task. In these cases we randomly appended the missing IDs to the end of the list for our evaluation. We calculate the Mean Average Precision (MAP) of these rankings to showcase their performance. It provides a measure of the effectiveness of the ranking algorithm in identifying relevant trace links between software artifacts. To calculate MAP, the precision is computed at different levels of recall. The average precision is then calculated as the average of the precision values at each recall level. Finally, the mean of the average precision values across trace queries is taken to obtain the MAP score. The equation for MAP is obtained by taking the mean of the average precision values across different queries or datasets:
\[\text{MAP}=\frac{1}{N}\sum_{i=1}^{N}\text{Average Precision}_{i}\]
where \(N\) is the number of queries or datasets.
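A sketch of this evaluation, including the random appending of omitted IDs described above, is shown below; the data structures are our assumptions.

```python
import random

def average_precision(ranked_ids, relevant_ids):
    """AP for one trace query: the mean of the precision values taken at
    the rank of each relevant artifact."""
    hits, precisions = 0, []
    for rank, artifact in enumerate(ranked_ids, start=1):
        if artifact in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(rankings, ground_truth, candidates, seed=0):
    """MAP over all trace queries. rankings[q] is the model's ranked ID
    list, ground_truth[q] the set of true children, candidates[q] the
    full candidate pool; omitted IDs are appended in random order."""
    rng = random.Random(seed)
    aps = []
    for query, ranked in rankings.items():
        missing = [a for a in candidates[query] if a not in ranked]
        rng.shuffle(missing)
        aps.append(average_precision(ranked + missing, ground_truth[query]))
    return sum(aps) / len(aps)
```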
Throughout our process of generating trace-links, we had several conversations with the model to test its prior knowledge, understand its responses, and brainstorm potential prompts and improvements to prompts. We include many of these in our paper. It is important to note that these exchanges occurred independently - the model could not reference previous conversations when responding to subsequent questions or when making new traceability classifications. Each conversation represented an isolated context, rather than a continuing thread. This methodology enabled us to systematically probe the model's knowledge without reliance on prior memorization.
## IV Analysis and Prompt Discovery
Due to space constraints, we focus our analysis on our prompt engineering process for the CM1 dataset. While we followed a similar prompting process with other datasets, we found that many of the challenges we faced were exemplified in CM1. We specifically chose CM1 to focus on because it has posed a unique challenge to researchers in the past [16, 17]. In the end, we show the performance of our best approach on all the datasets to highlight that many of the lessons we learned from CM1 were transferable to the other datasets as well.
### _Domain-specific Knowledge_
Before running trace-link prediction with the models, we first wanted to understand what prior knowledge the model had about the system being evaluated. Previously, BERT models often required extensive pre-training on domain-specific documentation to interpret the vocabulary within a project [16, 17]. CM1, in particular, includes a myriad of jargon and acronyms that have long presented a challenge to previous methods of automatic traceability [15, 26]. Since one strength of LLMs is their ability to perform well without any training [4, 22], we wanted to explore whether Claude possessed enough prior knowledge to succeed on the CM1 dataset. To do this, we had the following conversation with it about one of the CM1 artifacts containing a large number of acronyms.
```
Assistant... Some guesses at the meaning:
DPU may stand for Data Processing Unit... SCM-
DCR's seems to be some kind of protocol or interface used for communication between DPU-TMALI and DPU-DCL.
ERROR is a standard way for software components to return error codes. So DPU-DCI seems to be providing error codes to DPU-TMALI... DPU-TMALI is receiving error data from DPU-DCL decoding or interpreting those errors, and then queuing them up to be sent to DPU-CDM.
DPU-CCM seems to be a component that is receiving error data, perhaps for monitoring purposes...
As shown in the conversation above, the model was able to leverage its general knowledge to reason about what each component did from only the artifact content. Although we did not extensively investigate the model's knowledge of each acronym or vocabulary word in the dataset, this example shows that Claude can interpret the meaning of complex acronyms in some cases where previous models could not without fine-tuning [16, 17].
### _Classification Prompt Discovery_
Next, we began the process of creating a prompt to use for our trace link prediction. As discussed previously, we provide highlights only for our experiments with CM1 but performed a similar analysis across datasets. Results for each prompt can be found in Table II. Our initial prompt for the model was general:
The content of the two artifacts was included in the prompt following the format explained in Section III. On CM1, this prompt resulted in the model correctly identifying 6 out of 13 links and mispredicting 27 of the 225 true negative links as positive links, for a precision of 18% and a recall of 46%.
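These figures follow directly from the raw counts; a quick, illustrative check:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true-positive, false-positive,
    and false-negative counts."""
    return tp / (tp + fp), tp / (tp + fn)

# First CM1 prompt: 6 of 13 true links found, 27 false positives.
p, r = precision_recall(tp=6, fp=27, fn=13 - 6)
print(f"precision={p:.0%}, recall={r:.0%}")  # precision=18%, recall=46%
```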
In order to understand why the prompt failed on over half of the true links, we randomly selected one of the false negatives and prompted the model to explain why it responded 'no' for the link. For reference, the artifacts in the link are shown in Table IV in the row for Link #1.
**Assistant:** ...The first paragraph is very high level, while the second paragraph goes into more technical implementation details. If they were related, you would expect a more consistent level of technical depth...
Inspired by this response, we inserted "Ignoring the different abstraction levels of the artifacts..." before our question. Interestingly, this link was correctly predicted when we used the modified prompt. Throughout our experiments, this would be a recurring problem where the model often failed to predict links between artifacts with different levels of detail/abstraction. We therefore began instructing it to ignore abstraction levels in many of our following prompts. When we re-ran this modified prompt on the dataset, it correctly identified 7 out of the 13 links (one more than before) but also predicted 86 false positives. We again prompted it to explain its decision for one of the remaining false negatives (Link #2 in Table IV). Its response and our subsequent questioning are shown below:
**Assistant:** ...While there is a small overlap in the error handling mention, the main substance and purposes of the artifacts are distinct. I would still assess them as not strongly related.
**Human:** Would there be a traceability link between artifact 1 and 2?
**Assistant:** There could potentially be a weak traceability link between the two artifacts based on the minor overlap in error handling/queueing...
We found that asking it directly if the artifacts were linked received a better response, and further prompting revealed that Claude had a relatively robust understanding of traceability, thus we decided to update our prompt as follows:
**Human:** Is there a traceability link between (1) and (2)?
We suspected that this prompt would be more specific, potentially eliminating some of the false positives, and this was indeed the case, with Claude predicting only one false positive. However, it now correctly identified only 4 of the 13 links. This time, when questioned about why it incorrectly believed Link #3 (Table IV) was not traced, Claude's confusion seemed to come from it not realizing the artifacts were talking about the same system:
**Assistant:** ...The first requirement mentions specific tasks...that need to report heartbeats, while the second requirement talks about tasks in general without specifying which tasks. So it's not clear if the tasks mentioned in the first requirement are the same as those referred to in the second.
We found this to be another common point of confusion for Claude so we updated our prompt to include "Below are artifacts from the same software system" at the start of our prompt.
**Human:** Below are artifacts from the same software system. Is there a traceability link between (1) and (2)?
With this additional information, Claude identified all of the project trace links but wrongly predicted 98 negative links to be positive. Since Claude now mistakenly believed many artifacts were linked, we randomly selected a false positive (Link #4 in Table IV) to investigate further. This led us to the following insight into the misprediction:
**Assistant:** Yes, there is a traceability link between (1) and (2). (1) mentions that the DPU-TMALI places errors on an error queue for the DPU-CCM. (2) describes the error collection and reporting mechanism of the DPU-CCM, specifically referring to enqueueing errors. So (2) provides an implementation detail for the error queueing functionality briefly mentioned in (1). Therefore, there is a traceability link between these two artifacts in the software system.
We found nothing in Claude's response to be inherently wrong, so we speculated that the mistake came not from a misunderstanding but rather from a misalignment between Claude and the tracers of CM1 on what should be considered a trace-link. Thus, we began to generate prompts that focused on the specific relationships between the requirements and regulations of CM1.
**Human:** Below are artifacts from the same software system. Ignoring different levels of abstraction, can (2) be decomposed from (1)? Answer 'yes' or 'no'.
This led to the model identifying 8 of the trace links with only 14 false positives. The missed trace links seemed to result from confusion surrounding the model's understanding of decomposition, where the model believed that no new information could be added to a decomposed requirement. This was indicated by its response below for Link #1:
**Assistant:** ...So (2) contains new information that cannot be derived or decomposed from (1) alone. The answer is **no**.
Replacing the question with "does (2) fulfill (1)?" resulted in the model missing 7 links for a similar reason, including Link #1 again:
**Assistant:** ...(2) lacks critical details and scope needed to conclusively fulfill (1)...
Despite missing 7 links, the model was able to correctly rule out all but 3 true negatives. Although both of these prompts were too restrictive (i.e., not all the true links were captured), they seemed to align Claude with at least some of the relationships established in CM1, as indicated by its success at eliminating false links.
Some other prompts we tried included:
1. _are (1) and (2) part of the same feature and share functionality?_
2. _does (2) implement system functionality from (1)?_
3. _are (1) and (2) dependent on the same system capability?_
where each prompt began with "Below are artifacts from the same software system. Ignoring different levels of abstraction" and concluded with "Answer 'yes' or 'no'." Each subsequent prompt successfully predicted one more positive link than the one before, but led to an increasing number of false positives ranging from 28 all the way to 48 for #3.
Our final prompt of this structure was a combination of several previous prompts that had shown promise:
**Human:** Below are artifacts from the same software system. Ignoring different levels of abstraction, can (2) be one of the hierarchical decompositions of (1)? Consider whether (2) implements a part of (1).
By adding "one of the hierarchical" and "a part of", we hoped to make the prompt less restrictive so that the model would allow new information to be added to decomposed artifacts. This time the model correctly predicted all but one true link. Its 44 false positives still missed our goal of fewer than 20 but were significantly fewer than the 98 returned by the previous approach that achieved this high level of recall.
### _Introduction of Chain-of-Thought Reasoning_
As our prompt evolved, we saw improvements in the model's link-prediction abilities, but the results were still falling short of our goals. It seemed evident that we needed to approach the task differently. As a result, we decided to break down the task into steps to aid the model in reasoning about the relationship between the artifacts before arriving at a final decision. This process, known as "chain of thought", has previously been shown to be effective in helping LLMs solve complex tasks [28]. To test the impact of incorporating reasoning into the process, we reran our original prompt with this additional step at the beginning of the prompt:
**Human:** You are working on a software system and are given a high-level requirement and a low-level requirement. Your job is to determine if they are traced. Give one reason why (1) might be related to (2) enclosed in <related></related> and one reason why (1) might be un-related to (2) enclosed in <unrelated></unrelated>, but ignore levels of abstraction. Then based on these reasons, do you think (1) is related to (2)? Answer should be 'yes' or 'no' enclosed in <answer></answer>.
Having it enclose each separate response in XML tags allowed us to easily parse its answer despite the added complexity. An example of the model's response for Link #5 in Table IV is shown below:
**Assistant:** ...<related>Both specify requirements related to processing commands within a certain time</related> <unrelated>Requirement 1 specifies a time constraint while requirement 2 provides a general description of a command and control subsystem</unrelated> <answer>yes</answer>
Recall increased by nearly 50% while precision rose by 14%. With the added reasoning step, the model missed only 1 true link and produced just 25 false positives. With both questions (1) and (3) from Table II, the introduction of incremental reasoning helped the model find a higher portion of the true connections in the data. Due to time constraints, we were unable to test the reasoning on the remaining questions, but we believe this is an interesting avenue for future work.
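For reference, tagged responses like the one above can be parsed with a few lines of Python. This is our own sketch (the paper does not show its parsing code), and the example reply is a toy:

```python
import re

def extract_tag(response, tag):
    """Return the text between <tag>...</tag>, or None if the model omitted it."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", response, re.DOTALL | re.IGNORECASE)
    return match.group(1).strip() if match else None

# Toy reply for demonstration only:
reply = ("<related>Both specify command-processing timing</related>"
         "<unrelated>One is a general subsystem description</unrelated>"
         "<answer>yes</answer>")
print(extract_tag(reply, "answer"))  # -> "yes"
```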
Encouraged by this initial success, we decided to have the model answer each of our questions as intermediate steps before finally determining whether the artifacts were related. We hoped this approach would help the model explore different ways in which the artifacts could be connected. It also allowed us to use a simple ranking system in which more 'yes' responses would increase the likelihood that the artifacts were linked. By quantifying the model's degree of support for a relationship through the ranking system, we could evaluate not just whether it predicted a link but also how confident it was in that prediction based on the reasoning exhibited in its responses.
**Human:** I am giving you two software artifacts from a system. Your job is to determine if there is a traceability link. Answer whether (2) implements a part of (1) with yes or no enclosed in <implements></implements>. Answer whether (2) is a hierarchical decomposition of (1) with yes or no enclosed in <decomposed></decomposed>. Answer whether (2) fulfills (1) with yes or no enclosed in <fulfills></fulfills>. Answer whether (2) and (1) are part of the same feature and share functionality with yes or no enclosed in <feature></feature>. Answer whether (2) and (1) are dependent on the same system capability with yes or no enclosed in <capability></capability>. Use your answers to give one reason why (1) might be related to (2) enclosed in <related></related> and one reason why (1) might be un-related to (2) enclosed in <unrelated></unrelated>. Now answer is (1) related to (2) with yes or no enclosed in <traced></traced>.
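A sketch of the resulting scoring scheme, where each 'yes' across the five intermediate questions contributes to a support score that can be thresholded afterwards; the threshold value below is illustrative, not one reported in the paper:

```python
import re

RELATION_TAGS = ["implements", "decomposed", "fulfills", "feature", "capability"]

def support_score(response):
    """Count 'yes' answers across the intermediate questions; more 'yes'
    responses indicate stronger model support for a trace link."""
    score = 0
    for tag in RELATION_TAGS:
        m = re.search(rf"<{tag}>(.*?)</{tag}>", response, re.DOTALL | re.IGNORECASE)
        if m and m.group(1).strip().lower().startswith("yes"):
            score += 1
    return score

# e.g. classify as linked when at least 3 of the 5 questions were answered 'yes':
is_linked = lambda response, threshold=3: support_score(response) >= threshold
```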
### _Ranking Prompt Discovery_
Despite not outperforming other classification prompts, ranking the artifacts by the number of 'yes' and 'no' answers did provide the opportunity to establish a threshold retrospectively, allowing us to categorize items based on the strength of the model's prediction instead of relying on a single yes/no choice. This, combined with Claude's new 100k context window, inspired us to experiment with an entirely new strategy.
For our next experiment, we gave Claude the following instructions:
By providing the model with more context about the system in the prompt and allowing it to compare all targets when making its decision, we hoped to see a performance boost. Unfortunately, the task was not as simple as we had hoped, and we, like previous researchers, identified another nuance with the prompts: order matters [23]. When we presented the target artifacts in a random order, performance was barely above random; however, ordering the artifacts so that those more likely to be linked appeared at the top delivered significantly higher performance. It seemed that unless there was some pattern already established, the task would overwhelm the model. Because of this, we decided to rank the target artifacts based on their VSM similarity to the source and presented the model with targets in this order. With this initialization, the model improved upon the original VSM ranking. Furthermore, while discussions throughout the paper have focused on the CM1 dataset, we applied this approach to the three other datasets presented in Table I and report results for all four datasets in Table III.
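A minimal sketch of such a VSM pre-ordering, here realized with TF-IDF cosine similarity via scikit-learn; the paper does not specify its exact VSM weighting, so treat this as one plausible instantiation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def vsm_order(source_text, target_texts):
    """Order target artifact indices by TF-IDF cosine similarity to the
    source, most similar first; this ordering seeds the ranking prompt."""
    tfidf = TfidfVectorizer().fit_transform([source_text] + list(target_texts))
    sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    return sorted(range(len(target_texts)), key=lambda i: sims[i], reverse=True)
```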
### _Summary of Results_
Overall, our results demonstrated that the ranking task could be a useful approach to automated traceability, but it may require additional steps and further prompt refinement to reach the necessary performance. In the future, we plan to explore ways of decomposing the overall task into simpler, incremental steps to reduce complexity for the model as we did for the classification task. It should also be noted that the ranking task necessitated a large context window, which may pose a challenge for certain open-source models. Consequently, classification remains a valuable alternative when ranking is infeasible. Furthermore, classification opens up avenues for diverse applications of traceability, such as "trace views" that we discuss further in Section VI.
## V Threats to Validity
While this initial study provides promising evidence that prompt engineering can enhance LLMs for software traceability tasks, several threats could limit the validity of our findings. First, we evaluated only three open-source projects and only provide a detailed analysis of one, limiting the generalization of our findings. However, we selected projects that spanned multiple domains, artifact types, and sizes to improve generalizability. We also constructed trace queries that were representative of their parent distribution. Second, existing traceability datasets are typically incomplete, as truly considering every candidate link in a project grows as \(\mathcal{O}(n^{2})\) with the number of artifacts. The LLMs identified potential missing traces, but we could not fully validate their accuracy without a project expert. Third, our study used a limited set of LLMs, which may not represent the full space of the current state of the art. However, we chose the leading LLMs from our initial explorations with publicly available commercial models. Clearly, there are many extensions to this study considering more datasets, different LLMs, and other prompt engineering methods. We leave the full exploration of the problem space to future work and focus on showing the potential these models have towards advancing automated software traceability.
## VI Conclusions and Future Directions
Throughout our experiments, we addressed multiple questions and derived several key takeaways regarding using LLMs for trace-link prediction.
### _Key Takeaways_
* Small modifications to prompts can lead to significant differences in model outputs, emphasizing the importance of carefully crafting prompts.
* The performance of a given prompt in comparison to alternative phrasings can vary across datasets and models, though some general techniques like chain-of-thought reasoning tend to produce a more consistent performance.
* LLMs frequently identify different artifact relationships than those selected by human tracers. Prompts should specify the targeted usage of the traceability links (e.g. change impact analysis, hierarchical composition) to better align the model's output with the desired outcome.
* LLMs can reason about the meaning of artifacts rather than relying on textual overlap, a possible advantage over purely similarity-based methods.
* Requiring models to show intermediate reasoning steps boosts performance on some tasks and builds explanations into the decision-making process. This is useful both to those establishing the trace links and to those using them.
* List ranking style prompts are highly sensitive to the order of artifacts presented in the prompt. This variability was mitigated by pre-sorting by VSM scores.
* Overall, carefully tailored prompts are needed to harness the versatility of LLMs for the task of traceability and to produce outputs that are consistent with the goals of traceability engineers and researchers.
Throughout this process, one of our biggest takeaways was how minor adjustments to prompts could have dramatic impacts on the results. Subtle changes, such as pluralizing words, interchanging prepositions, or reordering phrases, could alter the outcomes. These findings underscore the inherent challenge of engineering robust prompts. In future research, we aim to explore strategies that mitigate such variability and delve into the effectiveness of different prompts across different models.
Further, due to the limited number of trace queries we analyzed per dataset, as well as our integration of chain-of-thought, we were able to review trace predictions in depth. Interestingly, we were often surprised by the strength of many false positives, forcing us to re-think the accuracy and completeness of these datasets. Reviewing predictions for even our smallest subset (265 combinations) became an arduous task. In reality, industrial projects range from 50K to 500K potential trace links, making it extremely challenging to have complete and standardized tracing practices. However, examining the predictions of a few selected trace links may still provide traceability experts with the insights they need to refine prompts in a way that improves performance across the project.
### _Do LLMs possess knowledge necessary for tracing projects with domain-specific vocabulary?_
Our conversations with Claude revealed that it contained sufficient knowledge to draw many correct conclusions about the CM1 system, irrespective of the acronyms or jargon used. Furthermore, we were able to obtain high MAP scores without performing any additional pre-training. Nevertheless, we plan to experiment with pre-training in the future to see if it can provide a performance boost. Additionally, we hope to test the model's knowledge on a wider range of datasets. It is
important to note that since the datasets in this paper were all publicly available at the time of the model's creation, we cannot eliminate the possibility that the model had previous exposure to them. Thus, we are particularly interested to see how the model performs on an entirely new dataset.
### _Can LLMs provide reasonable explanations for their decisions?_
By probing the model to elicit explanations for many of its mispredictions, we found that it could provide an in-depth analysis of its decision. Whether or not these explanations are accurate reflections of the reasoning behind the model's decision is beyond the scope of this paper, but we did find that when we adjusted the prompts based on the model's explanation, we were often able to change its answer.
### _If so, can these explanations be utilized to improve prompts?_
The ability to alter the model's decision by using its explanations proved to be a useful tool for improving prompts. Engaging in conversations with the model enabled an increased understanding of its interpretation of a given prompt, facilitating an iterative approach to refine prompts. Gradually adjusting the prompts in this way can be used to find a prompt that better aligns the model's understanding with the objectives of the tracer.
### _Can reasoning be used to improve responses?_
By asking the model to formally articulate its thinking in response to probing questions, the model was able to make a more well-informed final judgment about the relationship between the artifacts in the classification. This also offers the advantage of allowing the task to be broken down into smaller pieces, where the model first evaluates the relationship between the artifacts and then makes a final decision. Further, chain-of-thought reasoning has the potential to improve the ranking task and should be evaluated in future work.
### _How can LLMs be leveraged to generate software traceability links?_
In our experiments, we explored two different tasks which could be used to predict trace links from pairs of software artifacts: classification and ranking. While ranking allows for a nuanced expression of confidence in a prediction, classification offers the advantage of needing a smaller context window and enables the discovery of diverse relationship types. By adapting our prompts to describe various relationships, we captured distinct links. For instance, when inquiring whether two artifacts were part of the same feature, we discovered different links than when asking if they shared functionality. This can be used to present multiple "views" of traceability, where each view highlights different relationships within the system. This may be particularly valuable for change propagation where the prompt can focus on determining whether a modification to one artifact necessitates a change in the other. Additionally, multiple prompts may be combined to capture the many different relationships present in the project. This presents an avenue for future investigation.
An alternative way in which LLMs can be used for trace link prediction is by comparing the similarity of artifact embeddings. As mentioned previously, we opted not to explore this method in this paper, but future works might benefit from comparing this approach to those discussed in this paper.
### _Concluding Remarks_
Overall, our experiments demonstrated that large language models show promise for tracing software systems. As opposed to previous approaches for automated traceability, LLMs can perform well without pre-training and are able to offer detailed explanations of their decisions. These explanations are not only useful for helping an engineer make an informed decision about a trace link but can guide the process of selecting an appropriate prompt for the tracing task. Through iterative prompt refinement, the models can be used to classify trace links and establish a diverse set of relationships between project artifacts. The models are also capable of ranking target artifacts based on how related they are to a source artifact, albeit with aid from VSM. Ranking can allow engineers to sift through a prioritized list of candidate links, potentially reducing the review time required.
While this paper showcases the power of LLMs for traceability, it also highlights many of the lingering challenges in engineering effective prompts for the models. Careful tailoring of prompts can help to reach high performance for each project but this was ultimately a time-consuming task that may not always be feasible. Although the community might one day discover a "silver bullet" prompt, a more practical path forward may be to identify common patterns that make prompts most effective for certain projects and tracing objectives. Discovering such patterns could enable partially automating this process so that it can be seamlessly integrated into current traceability workflows. There remains much future work that must be done to gain a comprehensive understanding of how LLMs can best be utilized to enhance the field of traceability.
## Acknowledgement
The work in this paper has been partially funded by USA National Science Foundation Grants # SHF-1901059, SHF-1909007, and PFI-TT-2122689.
TABLE IV: Source and target artifacts for the example links discussed in Section IV.

**Link #1**
*Source:* The DPU-CCM shall implement a mechanism whereby large memory loads and dumps can be accomplished incrementally.
*Target:* Memory Upload and Download Handling. Data can be uploaded to several types of locations, including: DRAM, EEPROM, hardware registers, and the EEPROM filesystem. The D-MEM-DAT-UPDLD command specifies the target location. If the destination is the EEPROM filesystem, a "block number" is provided in lieu of a memory address, which is used by the DPU FSW to formulate a filename of the form _egfs1:DPU_blk.##_, where ## is the block number. In this case, once the entirety of the uploaded data is received by the DPU FSW, the uploaded data is then written to that file in the EEPROM filesystem. If a file already exists with that name, it is overwritten. The EEPROM filesystem can be reinitialized using the command D-MEM-DISK-INIT.

**Link #2**
*Source:* The DPU-TMALI shall utilize SCM-DCI-SR, along with ERRNO provided by DPU-DCI, to decode errors and place them on an error queue for DPU-CCM.
*Target:* Control and Monitoring. The CCM Control Task initializes the DPU FSW. It is the responsibility of the CCM Control Task to establish a successful boot. It does so by blocking on temporary semaphores, each with a 5 second timeout, after spawning the SCU Interface Task and the CCM Command Task. If both of these tasks report a successful initialization by giving the semaphore, the CCM Control Task toggles the BC_INDEX parameter in EEPROM to indicate a successful boot. If either task does not report a successful initialization, the CCM Control Task disables the watchdog strobe to effect a reboot of the DPU. The rationale for selecting the successful initialization of these two tasks as the definition of a successful boot is that the DPU FSW requires these tasks, as a minimum, to establish ground contact and provide commandability. Once this initialization is complete, the task blocks on a binary semaphore which is given by the SCUI Command ISR upon arrival of the 1 Hz Clock Message. In the event a Clock Message does not arrive, the semaphore will time out after 1.5 seconds. The CCM Control Task remains alive to create and transmit DPU housekeeping at the appropriate intervals, perform various periodic processing tasks, and to process memory dump commands. The final call to cmErrEnq() is performed in order that if an error occurs in an interrupt service routine, a global variable is set to the value of the errno, which is then enqueued into the Error/Event Queue as part of this task's normal processing.

**Link #3**
*Source:* The DPU-CCM shall collect a TASK_HBEAT from DPU-SCUI, DPU-CCM, DPU-DCX, DPU-TMALI, and DPU-DPA. Non-responsive tasks will be reported in DPU_HK.
*Target:* Control and Monitoring. Every time the CCM Control executes, it calls cmPerProcess() to handle periodic processing responsibilities. Such responsibilities include analog to digital conversion updates, DPU task monitoring, ICU heartbeat message production, and watchdog strobe. The cmHealthChk() function, called by cmPerProcess(), verifies the execution of other tasks by monitoring the amount of time that has elapsed since each task last reported. Other tasks report their execution to the CCM Control Task by calling the function ccmTaskReport(), providing their task index. Each task has an expected execution frequency, and if a task does not execute as expected, an error is reported in DPU housekeeping. If the Command Dispatch Task fails to report for an extended period, the DPU will execute a reboot, since it is impossible to command the DPU if this task is not executing; otherwise it will strobe the watchdog.

**Link #4**
*Source:* The DPU-TMALI shall utilize SCM_DCI_SR, along with ERRNO provided by DPU-DCI, to decode errors and place them on an error queue for DPU-CCM.
*Target:* Error Collection and Reporting. The cmErrEnq() function tracks the last error reported and its frequency of occurrence. Once an error code has been reported, it becomes the previously reported error code maintained by cmErrEnq(). A repetition count is then incremented for each subsequent, consecutively reported, identical instance of this previously reported error. If this error code is reported more than once in one high-rate housekeeping reporting period, then a special error, S_ccm_ERR_REPEAT, is enqueued with the repetition count for the error encoded in the least significant byte. This mechanism effectively reduces the potential for housekeeping telemetry to become flooded with a single repeated error.

**Link #5**
*Source:* The DPU-CCM shall process real-time non-deferred commands within B ms of receipt from the ICU or the SCU.
*Target:* The Command and Control CSC provides the core command and control functionality for the system. It includes tasks for initializing the system at bootup, scheduling housekeeping data generation, monitoring other tasks, executing periodic tasks, and receiving and dispatching real-time commands. It maintains data structures for system state, commands, errors and events.
2303.15556 | Complexity of Reconfiguration in Surface Chemical Reaction Networks | We analyze the computational complexity of basic reconfiguration problems for
the recently introduced surface Chemical Reaction Networks (sCRNs), where
ordered pairs of adjacent species nondeterministically transform into a
different ordered pair of species according to a predefined set of allowed
transition rules (chemical reactions). In particular, two questions that are
fundamental to the simulation of sCRNs are whether a given configuration of
molecules can ever transform into another given configuration, and whether a
given cell can ever contain a given species, given a set of transition rules.
We show that these problems can be solved in polynomial time, are NP-complete,
or are PSPACE-complete in a variety of different settings, including when
adjacent species just swap instead of arbitrary transformation (swap sCRNs),
and when cells can change species a limited number of times (k-burnout). Most
problems turn out to be at least NP-hard except with very few distinct species
(2 or 3). | Robert M. Alaniz, Josh Brunner, Michael Coulombe, Erik D. Demaine, Jenny Diomidova, Ryan Knobel, Timothy Gomez, Elise Grizzell, Jayson Lynch, Andrew Rodriguez, Robert Schweller, Tim Wylie | 2023-03-27T19:14:50Z | http://arxiv.org/abs/2303.15556v2 | # Complexity of Reconfiguration in Surface Chemical Reaction Networks
###### Abstract
We analyze the computational complexity of basic reconfiguration problems for the recently introduced surface Chemical Reaction Networks (sCRNs), where ordered pairs of adjacent species nondeterministically transform into a different ordered pair of species according to a predefined set of allowed transition rules (chemical reactions). In particular, two questions that are fundamental to the simulation of sCRNs are whether a given configuration of molecules can ever transform into another given configuration, and whether a given cell can ever contain a given species, given a set of transition rules. We show that these problems can be solved in polynomial time, are NP-complete, or are PSPACE-complete in a variety of different settings, including when adjacent species just swap instead of arbitrary transformation (swap sCRNs), and when cells can change species a limited number of times (\(k\)-burnout). Most problems turn out to be at least NP-hard except with very few distinct species (2 or 3).
Keywords: Chemical Reaction Networks, reconfiguration, hardness.
## 1 Introduction
The ability to engineer molecules to perform complex tasks is an essential goal of molecular programming. A popular theoretical model for investigating molecular systems and distributed systems is Chemical Reaction Networks (CRNs) [5, 24]. The model abstracts chemical reactions to independent rule-based interactions, creating a mathematical framework equivalent [7] to other well-studied models such as Vector Addition Systems [16] and Petri nets [22]. CRNs are also interesting for experimental molecular programmers, as examples have been built using DNA strand displacement (DSD) [25].
Abstract Surface Chemical Reaction Networks (sCRNs) were introduced in [23] as a way to model chemical reactions that take place on a surface, where the geometry of the surface is used to assist with computation. In that work, the authors gave a possible implementation of the model similar to ideas of spatially organized DNA circuits [19]. This strategy involves DNA strands being anchored to a DNA origami surface. These strands allow for "species" to be attached. Fuel complexes are pumped into the system, which perform the reactions. While these reactions are more complex than what has been implemented in current lab work, this shows a route to building these types of networks.
### Motivation
Feed-forward circuits using DNA hairpins anchored to a DNA origami surface were implemented in [4]. This experiment used a single type of fuel strand. Copies of the fuel strand attached to the hairpins and were able to drive the computation forward.
A similar model was proposed in [8], which modeled DNA walkers moving along tracks. These tracks have guards that can be opened or closed at the start of computation by including or omitting specific DNA species at the start. DNA walkers have provided interesting implementations such as robots that sort cargo on a surface [27].
A new variant of surface CRNs we introduce is the \(k\)-burnout model, in which cells can switch states at most \(k\) times before being stuck in their final state. This models the practical scenario in which each state change expends some form of limited fuel. Specific experimental examples of this type of limitation can be seen when species encode "fire-once" DNA strand displacement reactions on the surface of DNA origami, as is done within the Signal Passing Tile Model [20].
### Previous Work
The initial paper on sCRNs [23] gave a 1D reversible Turing machine as an example of the computational power of the model. The authors also provided other interesting constructions, such as building dynamic patterns, simulating continuously active Boolean logic circuits, and cellular automata. Later work in [6] gave a simulator of the model, improved some results of [23], and gave many open problems, some of which we answer here.
In [2], the authors introduce the concept of swap reactions. These are reversible reactions that only "swap" the positions of the two species. The authors of [2] gave a way to build feed-forward circuits using only a constant number of species and reactions. These swap reactions may have a simpler implementation and also have the advantage of the reverse reaction being the same as the forward reaction, which makes it possible to reuse fuel species.
A similar idea for swap reactions on a surface that has been studied theoretically is friends-and-strangers graphs [9]. This model was originally introduced to generalize problems such as the 15 Puzzle and Token Swapping. In the model, there is a location graph containing uniquely labeled tokens and a friends graph with a vertex for every token, with an edge if two tokens are allowed to swap locations when adjacent in the location graph. The token swapping problem can be represented with a complete friends graph, and the 15 Puzzle has a grid graph as the location graph and a star as the friends graph (the 'empty square' can swap with any other square). Swap sCRNs can be described as friends-and-strangers graphs with multiplicities [17], which relax the uniqueness restriction, with the surface grid (in our case the square grid) as the location graph and the allowed reactions forming the edges of the friends graph.
### Our Contributions
In this work, we focus on two main problems related to sCRNs. The first is the reconfiguration problem, which asks, given two configurations and a set of reactions, whether the first configuration can be transformed into the second using the set of reactions. The second is the 1-reconfiguration problem, which asks whether a given cell can ever contain a given species. Our results are summarized in Table 1. The first row of the table comes from the Turing machine simulation in [23], although it is not explicitly stated there. The size comes from the smallest known universal reversible Turing machine [18] (see [28] for a survey on small universal Turing machines).
We first investigate swap reactions in Section 3. We prove both problems are PSPACE-complete using only four species and three swap reactions. For reconfiguration, we show this complexity is tight by showing that with three or fewer species and only swap reactions, the problem is in P.
In Section 4, we study a restriction on surface CRNs called \(k\)-burnout, where each cell is guaranteed to transition at most \(k\) times. This is similar to the freezing restriction from Cellular Automata [12, 13, 26] and Tile Automata [3]. We start with a simple reduction showing reconfiguration is NP-complete in 2-burnout. This is also of interest since the reduction only uses three species types and a reaction set of size one. For 1-reconfiguration, we show the problem is also NP-complete in 1-burnout sCRNs. This reduction uses a constant number of species.
In Section 5, we analyze reconfiguration for all sCRNs that have a reaction set of size one. For the case of only two species, we show that for every possible reaction the problem is solvable in polynomial time. With three or more species, we show that reconfiguration is NP-complete. The hardness comes from the reduction in burnout sCRNs.
Finally, in Section 6, we conclude the paper by discussing the results as well as many open questions and other possible directions for future research related to surface CRNs.
## 2 Surface CRN model
**Chemical Reaction Network.** A _chemical reaction network (CRN)_ is a pair \(\Gamma=(S,R)\) where \(S\) is a set of species and \(R\) is a set of reactions, each of the form \(A_{1}+\dots+A_{j}\to B_{1}+\dots+B_{k}\) where \(A_{i},B_{i}\in S\). (We do not define the dynamics of general CRNs, as we do not need them here.)
**Surface, Cell, and Species.** A _surface_ for a CRN \(\Gamma\) is an (infinite) undirected graph \(G\). The vertices of the surface are called _cells_. A _configuration_ is a mapping from each cell to a species from the set \(S\). While our algorithmic results apply to general surfaces, our hardness constructions assume the practical case where \(G\) is a grid graph, i.e., an induced subgraph of the infinite square grid (where omitted vertices naturally correspond to cells without any species). When \(G\) is an infinite graph, we assume there is some periodic pattern of cells that is repeated on the edges of the surface. Figure 1 shows an example set of species and reactions and a configuration of a surface.
**Reaction.** A _surface Chemical Reaction Network (sCRN)_ consists of a surface and a CRN, where every _reaction_ is of the form \(A+B\to C+D\) denoting that, when \(A\) and \(B\) are in neighboring cells, they can be replaced with \(C\) and \(D\). \(A\) is replaced with \(C\) and \(B\) with \(D\).
**Reachable Surfaces.** For two surfaces \(I,T\), we write \(I\rightarrow^{1}_{\Gamma}T\) if there exists an \(r\in R\) such that performing reaction \(r\) on \(I\) yields the surface \(T\). Let \(I\rightarrow_{\Gamma}T\) be the transitive closure of \(I\rightarrow^{1}_{\Gamma}T\), including loops from each surface to itself. Let \(\Pi(\Gamma,I)\) be the set of all surfaces \(T\) for which \(I\rightarrow_{\Gamma}T\) is true.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline **Problem** & **Type** & **Species** & **Rules** & **Result** & **Ref.** \\ \hline Reconfiguration & 1D sCRN & 17 & 67 & PSPACE-complete & [23] \\
1-Reconfiguration & Swap sCRN & 4 & 3 & PSPACE-complete & Thm. 3 \\
1-Reconfiguration & Swap sCRN & \(\leq 3\) & Any & P & Thm. 6 \\
1-Reconfiguration & Swap sCRN & Any & \(\leq 2\) & P & Thm. 6 \\
Reconfiguration & Swap sCRN & 4 & 3 & PSPACE-complete & Thm. 4 \\
Reconfiguration & Swap sCRN & \(\leq 3\) & Any & P & Thm. 5 \\
Reconfiguration & Swap sCRN & Any & \(\leq 2\) & P & Thm. 5 \\
Reconfiguration & 2-burnout & 3 & 1 & NP-complete & Thm. 7 \\
1-Reconfiguration & 1-burnout & 17 & 40 & NP-complete & Thm. 8 \\
Reconfiguration & sCRN & \(\geq 3\) & 1 & NP-complete & Cor. 15 \\
Reconfiguration & sCRN & \(\leq 2\) & 1 & P & Thm. 11 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of our and known complexity results for sCRN reconfiguration problems, depending on the type of sCRN, number of species, and number of rules. All problems are contained in PSPACE, while all \(k\)-burnout problems are in NP.
Figure 1: Example sCRN system.
Figure 2: An initial, single step, and target configurations.
### Restrictions
**Reversible Reactions.** A set of reactions \(R\) is _reversible_ if, for every rule \(A+B\to C+D\) in \(R\), the reaction \(C+D\to A+B\) is also in \(R\). We may also denote this as a single reversible reaction \(A+B\rightleftharpoons C+D\).
**Swap Reactions.** A reaction of the form \(A+B\rightleftharpoons B+A\) is called a _swap reaction_.
\(k\)**-Burnout.** In the \(k\)-burnout variant of the model, each vertex of the system's graph can only switch states at most \(k\) times (before "burning out" and being stuck in its final state).
### Problems
**Reconfiguration Problem.** Given an sCRN \(\Gamma\) and two surfaces \(I\) and \(T\), is \(T\in\Pi(\Gamma,I)\)?
**1-Reconfiguration Problem.** Given an sCRN \(\Gamma\), a surface \(I\), a vertex \(v\), and a species \(s\), does there exist a \(T\in\Pi(\Gamma,I)\) such that \(T\) has species \(s\) at vertex \(v\)?
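To make these definitions concrete, the following brute-force search (our illustration, not an algorithm from the paper) decides reconfiguration on a small surface by exploring \(\Pi(\Gamma,I)\). It runs in time exponential in the surface size, consistent with membership in PSPACE; 1-reconfiguration can be decided the same way by checking cell \(v\) instead of the whole surface.

```python
from collections import deque

def neighbors(config, reactions, adjacency):
    """Yield every configuration reachable in one reaction. config is a
    tuple of species per cell; adjacency lists ordered pairs (i, j) of
    neighboring cells; a reaction (a, b, c, d) means a+b -> c+d."""
    for i, j in adjacency:
        for a, b, c, d in reactions:
            if config[i] == a and config[j] == b:
                new = list(config)
                new[i], new[j] = c, d
                yield tuple(new)

def reconfigurable(initial, target, reactions, adjacency):
    """Breadth-first search over Pi(Gamma, I) for the target surface."""
    frontier, seen = deque([initial]), {initial}
    while frontier:
        cur = frontier.popleft()
        if cur == target:
            return True
        for nxt in neighbors(cur, reactions, adjacency):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# A path of 3 cells with the single swap reaction A+B <-> B+A:
adj = [(0, 1), (1, 0), (1, 2), (2, 1)]
swaps = [("A", "B", "B", "A"), ("B", "A", "A", "B")]
print(reconfigurable(("A", "B", "B"), ("B", "B", "A"), swaps, adj))  # True
```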
## 3 Swap Reactions
In this section, we show that 1-reconfiguration and reconfiguration with swap reactions are PSPACE-complete with only 4 species and 3 swap reactions (Theorems 3 and 4). We continue by showing that this complexity is tight; that is, reconfiguration with 3 species and swap reactions is tractable (Theorems 5 and 6).
### Reconfiguration is PSPACE-complete
We prove PSPACE-completeness by reducing from the motion planning through gadgets framework introduced in [10]. This is a one-player game where the goal is to navigate a robot through a system of gadgets to reach a goal location. The problem of changing the state of the entire system to a desired state has been shown to be PSPACE-complete [1]. Our reduction treats the model as a game where the player must perform reactions moving a robot species through the surface.
### The Gadgets Framework
**Framework.** A gadget is a finite set of locations and a finite set of states. Each state is a directed graph on the locations of the gadget, describing the _traversals_ of the gadget; an example can be seen in Figure 3. Each edge (traversal) describes a move the robot can take in the gadget and what state the gadget ends up in if the robot takes that traversal. A robot enters from the start of the edge and leaves at the exit.
In a _system_ of gadgets there are multiple gadgets connected by their locations. The _configuration_ of a system of gadgets is the state of all gadgets in the system. There is a single robot that starts at a specified location. The robot is allowed to move between connected locations and allowed to move along traversals within gadgets. The system of gadgets can also be restricted to be planar, in which case the cyclic order of the locations on the gadgets is fixed, and the gadgets along with their connections must be embeddable in the plane without crossings.
The _1-player motion planning reachability problem_ asks whether there exists a sequence of moves within a system of gadgets which takes the robot from its initial location to a target location. The _1-player motion planning reconfiguration problem_ asks whether there exists
a sequence of moves which brings the configuration of a system of gadgets to some target configuration.
It may seem strange to reduce from a 1-player game when sCRNs are typically thought of as 0-player simulations; however, this is exactly the correct analogue. In the sCRN model there are many reactions that may occur and we are asking whether there exists a sequence of reactions which reaches some target configuration; in the same way 1-player motion planning asks if there exists a sequence of moves which takes the robot to the target location. The existential query of possible moves/swaps remains the same regardless of whether a player is making decisions vs them occurring by natural processes. The complexity of the gadgets used here are considered in the 0-player setting in [11].
**Locking 2-Toggle.** The Locking 2-toggle (L2T) is a 4 location, 3 state gadget. The states of the gadget are shown in Figure 3.
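As a concrete encoding, the gadget's states and traversals can be written as a small transition table. The location names and direction conventions below are our own reading of Figure 3 and of the traversal behavior described in the next subsection, not notation from the paper.

```python
# Locations: NW/NE are the left/right ends of the top tunnel, SW/SE of the
# bottom tunnel. Each traversal is (entry, exit, next_state). In state 1
# either tunnel may be crossed left to right, locking the gadget; states 2
# and 3 each allow only the reverse traversal of one tunnel.
LOCKING_2_TOGGLE = {
    1: [("NW", "NE", 2), ("SW", "SE", 3)],
    2: [("NE", "NW", 1)],
    3: [("SE", "SW", 1)],
}
```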
### Constructing the L2T
We will show how to simulate the L2T in a swap sCRN system. Planar 1-player motion planning with the L2T was shown to be PSPACE-complete [10]. We now describe this construction.
**Species.** We utilize four species types in this reduction and name each of them according to its role. First we have the _wire_. The wire is used to create the connection graph between gadgets and can only swap with the robot species. The _robot_ species is what moves between gadgets by swapping with the wire, and it represents the robot in the framework. Each gadget initially contains two robot species, and one robot species starts at the initial location of the robot in the system. The robot can also swap with the key species. Each gadget has exactly one _key_ species. The key species is what performs the traversal of the gadget by swapping with the lock species. The _lock_ species can only swap with the key. There are four locks in each gadget. The locks ensure that only legal traversals are possible by the robot species.
These species are arranged into gadgets consisting of two length-5 horizontal tunnels. The two tunnels are connected by a length-3 central vertical tunnel at their third cell. At the fourth cell of each tunnel there is an additional connected degree-1 cell that we call the holding cell.
**States and Traversals.** The states of the gadget we build are represented by the location of the key species in each gadget. If the key is in the central tunnel of the gadget, then we are in state 1, as shown in Figure 4(b). Note that in this state the key may swap with the adjacent locks; however, we consider these configurations to also be in state 1 and take advantage of this later. The horizontal tunnels of the gadget in this state each contain a single lock with an adjacent robot species.
Figure 3: The Locking 2-Toggle (L2T) gadget and its states from the motion planning framework. The numbers above indicate the state; when a traversal happens across the arrows, the gadget changes to the indicated state.
States 2 and 3 are reflections of each other (Figures 4(c) and 4(d)). This state has a robot in the central tunnel and the key in the respective holding cell. The gadget in this state can only be traversed from right to left in one of the tunnels.
Figure 5 shows the process of a robot species traversing through the gadget. Notice that when a robot species "traverses" a gadget, it actually traps itself to free another robot at the exit. We prove two lemmas to help verify the correctness of our construction. The lemmas prove that the gadgets we design correctly implement the allowed traversals of a locking 2-toggle.
**Lemma 1**. A robot may perform a rightward traversal of a gadget through the north/south tunnel if and only if the key is moved from the central tunnel to the north/south holding cell.
Proof: The horizontal tunnels in state 1 allow for a rightward traversal. The robot swaps with wires until it reaches the third cell, where it is adjacent to two locks. However, the key in the central tunnel may swap with the locks to reach the robot. The key and robot then swap. The key is then in the horizontal tunnel and can swap to the right with the lock there. It may then swap with the robot in the holding cell. This robot may then continue forward to the right, and the key is stuck in the holding cell.
Notice that when entering from the left, the robot will always reach a cell adjacent to lock species. The robot may not swap with locks, so it cannot traverse unless the key is in the central tunnel.
**Lemma 2**. A robot may perform a leftward traversal of a gadget through the north/south tunnel if and only if the key is moved from the north/south holding cell to the central tunnel.
Proof: In state 2 the upper tunnel can be traversed, and in state 3 the lower tunnel can be traversed. The swap sequence for a leftward traversal is the reverse of the rightward traversal, meaning we are undoing the swaps to return to state 1. The robot enters the gadget and swaps with the key, which swaps with the locks to move adjacent to the central tunnel. The key then returns to the central tunnel by swapping with the robot. The robot species can then leave the gadget to the left.
Figure 4: Locking 2-toggle implemented by swap rules. (a) The swap rules and species names. (b-d) The three states of the locking 2-toggle.
Figure 5: Traversal of the robot species.
A robot entering from the right will not be able to swap to the position adjacent to the holding cell if it contains a lock. This is true in both tunnels in state \(1\) and in the non-traversable tunnels in states \(2\) and \(3\).
We use these lemmas to first prove PSPACE-completeness of \(1\)-reconfiguration. We reduce from the planar \(1\)-player motion planning reachability problem.
**Theorem 3**. \(1\)-reconfiguration is PSPACE-complete with \(4\) or more species and \(3\) or more swap reactions.
Proof: Given a system of gadgets, we create a surface encoding the connection graph between the locations. Each gadget is built as described above in a state representing the initial state of the system. Ports are connected using multiple cells containing wire species. When more than two ports are connected, we use degree-3 cells with wire species. The target cell for 1-reconfiguration is a cell containing a wire located at the target location in the system of gadgets.
If there exists a solution to the robot reachability problem, then we can convert the sequence of gadget traversals to a sequence of swaps. The swaps relocate a robot species to the same location the robot reaches in the system of gadgets.
If there exists a swap sequence that places a robot species in the target cell, then there exists a solution to the robot reachability problem. Any swap sequence either moves a robot along a wire or traverses it through a gadget. From Lemmas 1 and 2, we know the only way to traverse a gadget is to change its state (the location of its key), and a gadget can only be traversed in the correct state.
Now we show that reconfiguration in sCRNs with the same set of swaps is PSPACE-complete as well. We do so by reducing from the targeted reconfiguration problem, which asks, given an initial and target configuration of a system of gadgets, whether there exists a sequence of gadget traversals that changes the state of the system from the initial to the target configuration and has the robot reach a target location. Note that prior work only shows reconfiguration (without specifying the robot location) is PSPACE-complete [1]; however, a quick inspection of the proof shows the robot ends up at the initial location, so requiring a target location does not change the computational complexity for the locking 2-toggle. One may also find it useful to note that the technique used in [1] for gadgets and in [15] for Nondeterministic Constraint Logic can be applied to reversible deterministic systems more generally and could be used to give a reduction directly from 1-reconfiguration of swap sCRNs to reconfiguration of swap sCRNs.
**Theorem 4**.: _Reconfiguration is PSPACE-complete with \(4\) species and \(3\) or more swap reactions._
Proof.: Our initial and target configurations of the surface are built with the robot species at the robot's location in the system of gadgets, and each key is placed according to the starting configuration of its gadget.
Again, as in the previous theorem, we know from Lemmas 1 and 2 that the traversals of the robot species correspond to the traversals of the robot in the system of gadgets. The target surface can be reached if and only if the target configuration in the system of gadgets is reachable.
### Polynomial-Time Algorithm
Here we show that the previous two hardness results are tight: when restricting to smaller instances, both problems become solvable in polynomial time. We prove this by utilizing
previously known algorithms for _pebble games_, where labeled pebbles are placed on a subset of nodes of a graph (with at most one pebble per node). A _move_ consists of moving a pebble from its current node to an adjacent empty node. These pebble games are again a type of multiplicity friends-and-strangers graph.
**Theorem 5**.: _Reconfiguration is in P with \(3\) or fewer species and only swap reactions. Reconfiguration is also in P with \(2\) or fewer swap reactions and any number of species._
Proof.: First we will cover the case of only two swap reactions. There are two possibilities: the two reactions share a common species or they do not. If they do not, we can partition the problem into two disjoint problems, one with only the species involved in the first reaction and the other with only the species from the second reaction. Each of these subproblems has only one reaction, and is solvable if and only if each connected component of the surface has the same number of each species in the initial and target configurations.
The only other case is where we have three species, A, B, and C, where A and C can swap, B and C can swap, but A and B cannot swap. In this case, we can model it as a pebble motion problem on a graph. Consider the graph of the surface where we put a white pebble on each A species vertex, a black pebble on each B species vertex, and leave each C species vertex empty. A legal swap in the surface CRN corresponds to sliding a pebble to an adjacent empty vertex. Goraly et al. [14] give a linear-time algorithm for determining whether there is a feasible solution to this pebble motion problem. Since the pebble motion problem is exactly equivalent to the surface CRN reconfiguration problem, the solution given by their algorithm directly says whether our surface CRN problem is feasible.
**Theorem 6**.: _\(1\)-reconfiguration is in P with \(3\) or fewer species and only swap reactions. \(1\)-reconfiguration is also in P with \(2\) or fewer swap reactions._
Proof.: If there are only two swap reactions, we again have two cases depending on whether they share a common species. If they do not share a common species, then we only need to consider the rule involving the target species. The problem is solvable if and only if the connected component of cells containing species involved in this reaction that includes the target cell also contains at least one copy of the target species. Equivalently, if the target species is A, and A and B can swap, then there must either be an A at the target location or a path of B species from the target location to the initial location of an A species.
The remaining case is when we again have three species, A, B, and C, where A and C can swap, B and C can swap, but A and B cannot swap. If C is the target species, then the problem is always solvable as long as there is any C in the initial configuration. Otherwise, suppose without loss of generality that the target species is A. Some initial A must reach the target location. For each initial A, consider the modified problem which has only that single A and replaces all of the other copies of A with B. A sequence of swaps is legal in this modified problem if and only if it was legal in the original problem. The original problem has a solution if and only if any of the modified ones do. We then convert each of these problems to a robot motion planning problem on a graph: place the robot at the vertex with the single copy of A, and place a moveable obstacle at each vertex with a B. A legal move is either sliding the robot to an adjacent empty vertex or sliding an obstacle to an adjacent empty vertex. Papadimitriou et al. [21] give a simple polynomial time algorithm for determining whether it is possible to get the robot to a given target location. By applying their algorithm to each of these modified problems (one for each cell that has an initial A), we can determine whether any of them have a solution in polynomial time (since there are only linearly many such problems), and thus determine whether the original \(1\)-reconfiguration problem has a solution in polynomial time.
## 4 Burnout
In this section, we show reconfiguration in \(2\)-burnout with species \((A,B,C)\) and reaction \(A+B\to C+A\) is NP-complete in Theorem 7. Next, we show \(1\)-reconfiguration in \(1\)-burnout with \(17\) species and \(40\) reactions is NP-complete in Theorem 8.
### \(2\)-Burnout Reconfiguration
This is a simple reduction from Hamiltonian Path, specifically the variant with specified start and end vertices.
**Theorem 7**.: _Reconfiguration in \(2\)-burnout sCRNs with species \((A,B,C)\) and reaction \(A+B\to C+A\) is NP-complete._
Proof.: Let \(\Gamma=\{(A,B,C),(A+B\to C+A)\}\). Given an instance of the Hamiltonian path problem on a grid graph \(H\) with a specified start and target vertex, \(v_{s}\) and \(v_{t}\) respectively, create a surface \(G\) where each cell in \(G\) is a node from \(H\). Each cell contains the species \(B\), except for the cell representing \(v_{s}\), which contains species \(A\). The target surface has species \(C\) in every cell except for the cell representing \(v_{t}\), which contains \(A\).
The species \(A\) can be thought of as an agent moving through the graph. The species \(B\) represents a vertex that hasn't been visited yet, while the species \(C\) represents one that has been. Each reaction moves the agent along the graph, marking the previous vertex as visited.
\((\Rightarrow)\) If there exists a Hamiltonian path, then the target configuration is reachable. The sequence of edges in the path can be used as a reaction sequence moving the agent through the graph, changing each visited cell to species \(C\) and finishing at the cell representing \(v_{t}\).
\((\Leftarrow)\) If the target configuration is reachable, there exists a Hamiltonian path. The sequence of reactions can be used to construct the path that visits each of the vertices exactly once, ending at \(v_{t}\).
Each cell transitions through species in the following order: \(B,A,C\). This means the CRN is \(2\)-burnout, which bounds the maximum reaction sequence length for reaching any reachable surface, placing the reconfiguration problem in NP.
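The reduction is straightforward to simulate. Below is a minimal Python sketch (the grid, cell names, and helper function are our own illustration, not from the paper) that applies the reaction \(A+B\to C+A\) along a candidate path and checks whether it produces the target configuration:

```python
# Sketch: apply A+B -> C+A along a candidate path on a grid graph and check
# whether it yields the target configuration of Theorem 7.
def simulate(path, cells, adjacency):
    """cells: dict cell -> species; path: list of cells starting at v_s."""
    config = dict(cells)
    for u, v in zip(path, path[1:]):
        # the reaction only applies between adjacent A and B cells
        if v not in adjacency[u] or config[u] != 'A' or config[v] != 'B':
            return None
        config[u], config[v] = 'C', 'A'   # the agent moves on, marking u visited
    return config

# 2x2 grid example with v_s = (0,0) and v_t = (1,0)
nodes = [(0, 0), (0, 1), (1, 1), (1, 0)]
adjacency = {
    (0, 0): {(0, 1), (1, 0)}, (0, 1): {(0, 0), (1, 1)},
    (1, 1): {(0, 1), (1, 0)}, (1, 0): {(1, 1), (0, 0)},
}
initial = {n: 'B' for n in nodes}; initial[(0, 0)] = 'A'
target = {n: 'C' for n in nodes}; target[(1, 0)] = 'A'

hamiltonian = [(0, 0), (0, 1), (1, 1), (1, 0)]
assert simulate(hamiltonian, initial, adjacency) == target
```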
### \(1\)-Burnout \(1\)-Reconfiguration
For \(1\)-burnout \(1\)-reconfiguration, we show NP-completeness by reducing from \(3\)SAT and utilizing the fact that once a cell has reacted it is burned out and can no longer participate in later reactions.
Figure 6: An example reduction from Hamiltonian Path. We are considering graphs on a grid, so any two adjacent locations are connected in the graph. Left: an initial board with the starting location in blue. Middle: One step of the reaction. Right: The target configuration with the ending location in blue. Bottom: the single reaction rule.
**Theorem 8**: \(1\)_-reconfiguration in \(1\)-burnout sCRNs with \(17\) species and \(40\) reactions is NP-complete._
Proof.: We reduce from 3SAT. The idea is to have an 'agent' species traverse the surface to assign variables and check that the clauses are satisfied by 'walking' through each clause. If the agent can traverse the whole surface and mark the final vertex as 'satisfied', there is a variable assignment that satisfies the original 3SAT instance.
_Variable Gadget._ The variable gadget is constructed to allow for a nondeterministic assignment of the variable via the agent walk. At each intersection, the agent 'chooses' a path depending on the reaction that occurs. If the agent chooses 'true' for a given variable, it will walk up then walk down to the center species. If the agent chooses 'false', the agent will walk down then walk up to the center species. From the center species, the agent can only continue following the path it chose until it reaches the next variable gadget. Examples of the agent assigning variables can be seen in Figure 7.
Each variable assignment is 'locked' by way of geometric blocking. When the agent encounters a variable gadget whose variable has already been assigned, the agent must follow that same assignment or it will get 'stuck' trying to react with a burnt-out vertex. This can be seen in Figure 8.
_Initial Configuration._ First, the configuration is constructed with variable gadgets connected in a row, one for each variable in the 3SAT instance. This row of variable gadgets is where the agent will nondeterministically assign values to the variables. Next, a row of variable gadgets, one row for each clause, is placed on top of the assignment row, connected with helper species to fill in the gaps.
For each clause, if a certain variable is present, the center species of the variable gadget reflects its literal value from the clause. For example, if the variable \(x_{1}\) in clause \(c_{1}\) should be true to satisfy the clause, the variable gadget representing \(x_{1}\) in \(c_{1}\)'s row will contain a \(T\) species in the center cell. Lastly, the agent species is placed in the bottom left of the configuration. An example configuration can be seen in Figure 9.
The agent begins walking and nondeterministically assigns a value to each variable. After assigning every variable, the agent walks right to left. If, at an intersection, the agent chooses a different assignment than it did on its first pass, the agent becomes 'stuck', only being able to react with a burnt-out vertex.
After walking all the way to the left, the first clause can be checked. The agent starts in the unsatisfied state, walking through each variable in the row, left to right. If the current variable assignment at a variable gadget satisfies this clause, the agent changes to the satisfied state and continues walking. If the agent walks through all the variables without becoming satisfied, the computation ends. If the clause was satisfied, the agent continues by walking back, right to left, to begin evaluation of the next clause. If the agent walks all the way to the final vertex with a satisfied state, then the initial variable assignment satisfies all the clauses.

Figure 7: All the possible configurations of two variable gadgets.

Figure 8: The assignment 'locking' process.
(\(\Rightarrow\)) If there exists a variable assignment that satisfies the 3SAT instance, then the final vertex can be marked with the satisfied state \(s\). The agent can only mark the final cell with the satisfied state \(s\) if all clauses can be satisfied.
(\(\Leftarrow\)) If the final vertex can be marked with satisfied state \(s\), there exists a variable assignment that satisfies the 3SAT instance. The variable assignment that the agent non-deterministically chose can be read and used to satisfy the 3SAT instance.
## 5 Single Reaction
When limited to a single reaction, we show a complete characterization of the reconfiguration problem. There exists a reaction using 3 species for which the problem is NP-complete. For all other cases of 1 reaction, the problem is solvable in polynomial time.
Figure 10: Species identification and transition rules for 1-burnout 1-reconfiguration.
Figure 9: Reduction from 3SAT to 1-burnout 1-reconfiguration. (a) The starting configuration of the surface for the example formula \(\varphi=(\neg x_{2}\lor x_{3}\lor x_{4})\land(\neg x_{1}\lor x_{2}\lor x_{4}) \land(x_{1}\lor\neg x_{2}\lor x_{3})\). (b) The configuration after evaluating the first clause. A red outline represents the unsatisfied state, and a green outline represents the satisfied state.
### 2 Species
We start by proving that reconfiguration is in P when there are only 2 species and a single reaction.
**Lemma 9**.: _Reconfiguration with species \(\{A,B\}\) and reaction \(A+A\to A+B\) or \(A+B\to A+A\) is solvable in polynomial time on any surface._
Proof.: The reaction \(A+B\to A+A\) is the reverse of the first case. By flipping the target and initial configurations, we can reduce from reconfiguration with \(A+B\to A+A\) to reconfiguration with \(A+A\to A+B\).
We now solve the case where we have the reaction \(A+A\to A+B\).
All cells that start and end with species \(B\) can be ignored as they do not need to be changed, and can not participate in any reactions. If there is a cell that contains \(B\) in the initial configuration but \(A\) in the target, the instance is 'no' as \(B\) may never become \(A\).
Let any cell that starts in species \(A\) but ends in species \(B\) be called a _flip_ cell, and any cell that starts in \(A\) and stays in \(A\) a _catalyst_ cell.
An instance of reconfiguration with these reactions is solvable if and only if there exists a set of spanning trees, each rooted at a catalyst cell, that contain all the flip cells. Using these trees, we can construct a reaction sequence from post-order traversals of each spanning tree, where we have each non-root node react with its parent to change itself to a \(B\). In the other direction, given a reaction sequence, we can construct the spanning trees by pointing each flip cell to the neighbor it reacts with.
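A small Python sketch of the forward direction may make this concrete (all names below are our own illustration): given a tree rooted at a catalyst cell, a post-order traversal yields a valid reaction sequence, since every cell is still an \(A\) when its children react with it.

```python
# Sketch: emit the A+A -> A+B reactions (parent stays A, child becomes B)
# in post-order, so each cell only flips after serving its own children.
def reactions_from_tree(root, children):
    """children: dict node -> list of child nodes; returns (parent, child) pairs."""
    order = []
    def visit(node):
        for c in children.get(node, []):
            visit(c)
            order.append((node, c))  # node is still A here; c flips to B

    visit(root)
    return order

# catalyst 'r' with flip cells 'a', 'b', 'c' hanging off it
children = {'r': ['a'], 'a': ['b', 'c']}
print(reactions_from_tree('r', children))
# [('a', 'b'), ('a', 'c'), ('r', 'a')] -- 'a' flips last, after its children
```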
**Lemma 10**.: _Reconfiguration with species \(\{A,B\}\) and reaction \(A+A\to B+B\) is solvable in polynomial time on any surface._
Proof.: Reconfiguration in this case can be reduced to perfect matching. Create a graph \(M\) including a node for each cell in \(S\) containing the \(A\) species initially and containing \(B\) in the target, with edges between nodes of neighboring cells. If \(M\) has a perfect matching, then each edge in the matching corresponds to a reaction that changes \(A\) to \(B\). If the target configuration is reachable, then the reactions form a perfect matching since they include each cell exactly once.
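This check is easy to implement with a standard matching routine; below is a sketch using networkx, where the graph encoding is our own illustration:

```python
# Sketch: decide reconfiguration under A+A -> B+B by perfect matching
# (Lemma 10); a maximum-cardinality matching is perfect iff it covers
# every node that must change.
import networkx as nx

def solvable(flip_cells, adjacency):
    """flip_cells: cells that must change A -> B; adjacency: dict cell -> set."""
    M = nx.Graph()
    M.add_nodes_from(flip_cells)
    for u in flip_cells:
        for v in adjacency[u]:
            if v in flip_cells:
                M.add_edge(u, v)
    matching = nx.max_weight_matching(M, maxcardinality=True)
    return 2 * len(matching) == len(flip_cells)

# a 1x4 strip where all four A cells must become B: pair (0,1) and (2,3)
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(solvable([0, 1, 2, 3], adj))                        # True
print(solvable([0, 1, 2], {0: {1}, 1: {0, 2}, 2: {1}}))   # False (odd count)
```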
**Theorem 11**.: _Reconfiguration with \(2\) species and \(1\) reaction is in P on any surface._
Proof.: As we only have two species and a single reaction, we can analyze each of the four cases (up to relabeling) to show membership in P. We divide them into two groups based on the reacting pair:

- \(A+A\): When a species reacts with itself, it can either change both cells, which is shown to be in P by Lemma 10; or it changes only one of them, which is in P by Lemma 9.

- \(A+B\): When two different species react, they can either change to the same species, which is in P by Lemma 9; or they can both change, which is a swap and thus is in P by Theorem 5.
### 3 or more Species
Moving up to 3 species and 1 reaction, we showed earlier that there exists a reaction for which reconfiguration is NP-complete in Theorem 7. Here, we give reactions for which reconfiguration between 3 species is in P, and in Corollary 15 we prove that all remaining reactions are isomorphic to one of the reactions we've analyzed.
**Lemma 12**.: _Reconfiguration with species \((A,B,C)\) and reaction \(A+B\to C+C\) is solvable in polynomial time on any surface._
Proof.: At a high level, we create a new graph of all the cells that must change to species \(C\), and add an edge when the two cells can react with each other. Since a reaction changes both cells to \(C\) we can think of the reaction as "covering" the two reacting cells. Finding a perfect matching in this new graph will give a set of edges along which to perform the reactions to reach the target configuration.
Consider a surface \(G\) and a subgraph \(G^{\prime}\subseteq G\) where we include a vertex \(v^{\prime}\) in \(G^{\prime}\) for each cell that contains \(A\) or \(B\) in the initial configuration and \(C\) in the target configuration. We include an edge \((u^{\prime},v^{\prime})\) between any vertices in \(G^{\prime}\) that contain different initial species, i.e., any pair of cells of which one initially contains \(A\) and the other \(B\).
Reconfiguration is possible if and only if there is a perfect matching in \(G^{\prime}\). If there is a perfect matching then there exists a set of edges which cover each cell once. Since \(G^{\prime}\) represents the cells that must change states, and the edges between them are reactions, the covering can be used as a sequence of pairs of cells to react. If there is a sequence of reactions then there exists a perfect matching in \(G^{\prime}\): each cell only reacts once so the matching must be perfect, and the cells that react have edges between them in \(G^{\prime}\).
**Lemma 13**.: _Reconfiguration with species \((A,B,C)\) and reaction \(A+B\to A+C\) is solvable in polynomial time on any surface._
Proof.: The instance of reconfiguration is solvable if and only if every cell that ends with species \(C\) either contained \(C\) in the initial configuration, or started with species \(B\) and has an adjacent \(A\) to perform the reaction. Additionally, since a reaction cannot cause a cell to change to \(A\) or \(B\), each cell with an \(A\) or \(B\) in the target configuration must contain the same species in the initial configuration.
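These conditions translate directly into a linear-time checker; a Python sketch with illustrative names:

```python
# Sketch: polynomial-time feasibility check for A+B -> A+C (Lemma 13).
def solvable(initial, target, adjacency):
    for cell, t in target.items():
        s = initial[cell]
        if t in ('A', 'B'):          # nothing can ever produce an A or a B
            if s != t:
                return False
        elif s != 'C':               # t == 'C': must start as C, or be a B
            if s != 'B' or not any(initial[n] == 'A' for n in adjacency[cell]):
                return False         # ... with an adjacent A to react with
    return True

adj = {0: {1}, 1: {0, 2}, 2: {1}}
print(solvable({0: 'A', 1: 'B', 2: 'C'}, {0: 'A', 1: 'C', 2: 'C'}, adj))  # True
print(solvable({0: 'B', 1: 'B', 2: 'C'}, {0: 'C', 1: 'C', 2: 'C'}, adj))  # False
```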
The final case we study is \(4\) species and \(1\) reaction. Any sCRN with \(5\) or more species and \(1\) reaction has a species which is not included in the reaction.
**Lemma 14**.: _Reconfiguration with species \((A,B,C,D)\) and the reaction \(A+B\to C+D\) is in P on any surface._
Proof.: We can reduce Reconfiguration with \(A+B\to C+D\) to perfect matching similar to Lemma 12. Create a new graph with each vertex representing a cell in the surface that must change species. Add an edge between each pair of neighboring cells that can react (between one containing \(A\) and the other \(B\)). A perfect matching then corresponds to a sequence of reactions that changes each of the species in each cell to \(C\) or \(D\).
**Corollary 15**.: _Reconfiguration with \(3\) or more species and \(1\) reaction is NP-complete on any surface._
Proof.: First, from Theorem 7 we see that there exists a case of reconfiguration with \(3\) species that is NP-hard.
For membership in NP, we analyze each possible reaction. We note that we only need to consider two cases for the left-hand side of the rule, \(A+A\) and \(A+B\); any other reaction is isomorphic to one of this form, as we can relabel the species. For example, rule \(B+C\to A+A\) can be relabeled as \(A+B\to C+C\). Also, we know that \(C\) must appear somewhere on the right-hand side of the rule. If it does not, then the reaction involves only two species, which is always solvable in polynomial time as shown above, or it involves a species we can relabel as \(C\).
Here are the cases for \(A+B\) and our analysis results: \(A+B\to C+C\) is in P by Lemma 12; \(A+B\to A+C\) (and the isomorphic \(A+B\to C+B\)) is in P by Lemma 13; and \(A+B\to C+A\) (and the isomorphic \(A+B\to B+C\)) is NP-complete by Theorem 7.
When we have \(A+A\) on the left side of the rule, the only case we must consider is \(A+A\to B+C\) (since all 3 species must be included in the rule). We have already solved this reaction: first swap the labels of \(A\) and \(C\), giving rule \(C+C\to B+A\); then reverse the rule to \(B+A\to C+C\) and swap the initial and target configurations. Finally, since rules do not care about orientation, this is equivalent to the rule \(A+B\to C+C\) in Lemma 12.
Finally, for 4 or more species, the only new case is \(A+B\to C+D\), which is proven to be in P in Lemma 14. Any other case would have species that are not used, since a rule can involve at most 4 different species.
Thus, all cases are either in NP or in P, which is a subset of NP; therefore, the problem is in NP. Also, since our results for each case apply to any surface, the same is true in general.
## 6 Conclusion
In this paper, we explored the complexity of the configuration problem within natural variations of the surface CRN model. While general reconfiguration is known to be PSPACE-complete, we showed that it is still PSPACE-complete even with several extreme constraints. We first considered the case where only swap reactions are allowed, and showed reconfiguration is PSPACE-complete with only four species and three distinct reaction types. We further showed that this is the smallest possible number of species for which the problem is hard by providing a polynomial-time solution for three or fewer species when only using swap reactions.
We next considered surface CRNs with rules other than just swap reactions. First, we considered the burnout version of the reconfiguration problem, followed by the normal version with small species counts. In the case of 2-burnout, we showed reconfiguration is NP-complete for three species and one reaction type, and 1-burnout is NP-complete for 17 species with 40 distinct reaction types. Without burnout, we obtained, as a corollary, that three species and one reaction type is NP-complete, while showing that dropping the species count down to two yields a polynomial-time solution.
This work introduced new concepts that leave open a number of directions for future work. While we have fully characterized the complexity of reconfiguration for the swap-only version of the model, the complexity of reconfiguration with general rule types for three-species systems remains open if the system uses more than one rule. In the 1-burnout variant of the model, we have shown 1-reconfiguration to be NP-complete, but the question of general reconfiguration remains a "burning" open question.
|
2307.08796 | Classification with Incoherent Kernel Dictionary Learning | In this paper we present a new classification method based on Dictionary
Learning (DL). The main contribution consists of a kernel version of incoherent
DL, derived from its standard linear counterpart. We also propose an
improvement of the AK-SVD algorithm concerning the representation update. Our
algorithms are tested on several popular databases of classification problems. | Denis C. Ilie-Ablachim, Bogdan Dumitrescu | 2023-07-17T19:27:32Z | http://arxiv.org/abs/2307.08796v1 | # Classification with Incoherent Kernel
###### Abstract
In this paper we present a new classification method based on Dictionary Learning (DL). The main contribution consists of a kernel version of incoherent DL, derived from its standard linear counterpart. We also propose an improvement of the AK-SVD algorithm concerning the representation update. Our algorithms are tested on several popular databases of classification problems.
dictionary learning, kernel, incoherence, classification
## I Introduction
Dictionary Learning (DL) is a representation learning method used in signal processing and machine learning that aims to find a sparse representation for input data organized as vectors. DL has many applications, ranging from simple ones like image denoising, inpainting, or signal reconstruction to coding, clustering, or classification. For a given set of samples, \(\mathbf{Y}\), represented by a matrix of \(N\) columns (signals) of size \(m\), we intend to find a dictionary \(\mathbf{D}\) of size \(m\times n\) and a sparse representation matrix \(\mathbf{X}\) of size \(n\times N\) such that good sparse representations \(\mathbf{Y}\approx\mathbf{D}\mathbf{X}\) are obtained. The representation is based on linear combinations of the columns of the dictionary \(\mathbf{D}\), named atoms. The DL problem can be formulated as follows
\[\begin{array}{ll}\min_{\mathbf{D},\mathbf{X}}&\left\|\mathbf{Y}-\mathbf{D}\mathbf{X}\right\|_{ \text{F}}^{2}\\ \text{s.t.}&\left\|\mathbf{x}_{\ell}\right\|_{0}\leq s,\ell=1:N\\ &\left\|\mathbf{d}_{j}\right\|=1,j=1:n,\end{array} \tag{1}\]
where \(\left\|\cdot\right\|_{0}\) represents the \(0\)-pseudo-norm and \(s\) is the sparsity level. More precisely, each signal is represented as a linear combination of at most \(s\) atoms.
There are several successful DL methods, including K-singular value decomposition (K-SVD) [1] and the Method of Optimal Directions (MOD) [2]; improved methods and variations of the DL problem including regularization and coherence reduction are presented in [3]. All these algorithms are iterative and in most of them an iteration consists of computing the sparse representations \(\mathbf{X}\) with fixed dictionary \(\mathbf{D}\) and then updating the atoms successively, possibly together with the coefficients with which an atom contributes to representations. Of special interest is the Approximate version of K-SVD (AK-SVD) [4], which does not seek exact optimality for both an atom and its representation coefficients, but optimizes them successively. AK-SVD has lower complexity than other algorithms and gives similar end results in most DL problems.
In this paper we present a new perspective on a classification problem via dictionary learning with incoherent atoms. This problem was first introduced in [5], where the solution is computed by optimizing the whole dictionary. We introduce a new optimization method in AK-SVD style, in which the dictionary \(\mathbf{D}\) is updated atom by atom. Our contribution is to extend the problem by projecting the signals into a nonlinear space, since linear representations can hinder classification performance. To this purpose, we use kernel representations in order to better quantify the similarity between signals. Another contribution is a new update rule for the representation coefficients, which takes into consideration the most recent version of each atom in all computations; this improvement can increase classification accuracy.
The contents of this paper are as follows. In Section II-A we introduce the classification problem and the principle of its solution via DL. Section II-B presents an incoherent DL algorithm suited for classification. Section II-C contains our main contribution: the kernel version of the incoherent DL algorithm and the new update rule for representations.
Section III is dedicated to experimental results, obtained by running simulations on three publicly available datasets, namely YaleB, AR Face and Caltech 101.
## II Classification with Dictionary Learning
### _Standard Dictionary Learning classification_
The representation learning approach (1) can also be used in classification problems. Considering a set of feature vector classes \(\mathbf{Y}=[\mathbf{Y}_{1},\dots,\mathbf{Y}_{c},\dots,\mathbf{Y}_{C}]\), where the columns of matrix \(\mathbf{Y}_{c}\in\mathbb{R}^{m\times N_{c}}\) are the vectors belonging to class \(c\), we intend to learn a specific dictionary \(\mathbf{D}_{c}\) for each class. For a given test signal \(\mathbf{y}\in\mathbb{R}^{m}\), the classification is achieved by finding the dictionary with the smallest representation residual:
\[c=\operatorname*{argmin}_{i=1:C}\ \|\mathbf{y}-\mathbf{D}_{i}\mathbf{x}_{i}\|,\ \text{with}\ \left\|\mathbf{x}_{i}\right\|_{0}\leq s. \tag{2}\]
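As an illustration, rule (2) can be implemented with an off-the-shelf OMP solver; in the sketch below, the random dictionaries and the test signal are placeholders of our own, not from the paper:

```python
# Sketch of the classification rule (2): sparse-code a test signal against
# each class dictionary with OMP and pick the class with smallest residual.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def classify(y, dictionaries, s):
    """dictionaries: list of (m x n) arrays with unit-norm columns."""
    residuals = []
    for D in dictionaries:
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=s, fit_intercept=False)
        omp.fit(D, y)
        residuals.append(np.linalg.norm(y - D @ omp.coef_))
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
dicts = [rng.standard_normal((20, 10)) for _ in range(3)]
dicts = [D / np.linalg.norm(D, axis=0) for D in dicts]
y = dicts[1][:, :2] @ np.array([1.0, -0.5])   # signal built from class-1 atoms
print(classify(y, dicts, s=2))                # expected: 1
```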
### _Incoherent Dictionary Learning classification_
In order to improve the classification performance, the problem can be extended by adding discriminative power to each dictionary: each dictionary should provide a good sparse representation for its own class while giving a bad representation for the other classes. A solution for this problem was presented in [6], where a penalty term was added to the DL problem, transforming it into
\[\min_{\mathbf{D}_{i},\mathbf{X}_{i}}\sum_{i=1}^{C}\left\|\mathbf{Y}_{i}-\mathbf{D}_{i}\mathbf{X}_{i }\right\|_{F}^{2}+\gamma\sum_{i=1}^{C}\sum_{l\neq i}\left\|\mathbf{D}_{i}^{\top}\bm {D}_{l}\right\|_{F}^{2}. \tag{3}\]
The second term introduces an incoherence measure between pairs of dictionaries from different classes. By this formulation we intend to project dictionaries into quasi-orthogonal spaces, while retaining most of their representation ability.
The DL problem (3) can be approximately solved by an approach similar to Approximated K-SVD [1]. The optimization consists of an iterative process in which the representations \(\mathbf{X}_{i}\) and the dictionaries \(\mathbf{D}_{i}\) are alternately optimized while all other variables are fixed. The representations are computed with Orthogonal Matching Pursuit (OMP) [7] as usual in DL, since the penalty term does not depend on \(\mathbf{X}_{i}\). The dictionaries are updated sequentially, atom by atom. Let us assume that we optimize atom \(\mathbf{d}_{j}\) from dictionary \(\mathbf{D}_{i}\). The optimization problem (3) becomes
\[\min_{\mathbf{d}_{j}}\left\|\mathbf{F}_{ij}-\mathbf{d}_{j}\mathbf{X}_{j,\mathcal{I}_{j}} \right\|_{F}^{2}+2\gamma\sum_{l\neq i}\left\|\mathbf{D}_{l}^{\top}\mathbf{d}_{j}\right\| _{F}^{2}, \tag{4}\]
where \(\mathbf{F}_{ij}=\left[\mathbf{Y}_{i}-\sum_{\ell\neq j}\mathbf{d}_{\ell}\mathbf{x}_{\ell}^{\top }\right]_{\mathcal{I}_{j}}\) is the representation error when all atoms but \(\mathbf{d}_{j}\) are considered and \(\mathcal{I}_{j}\) denotes the indices of the nonzero positions on the \(j\)th row of \(\mathbf{X}_{i}\) (those containing the coefficients of \(\mathbf{d}_{j}\) in the representations). The solution has been previously presented in [6] and is
\[\mathbf{d}_{j}\leftarrow\mathbf{F}_{ij}\mathbf{x}-2\gamma\mathbf{\bar{D}}\mathbf{\bar{D}}^{\top}\mathbf{d}_{j}, \tag{5}\]
where we have denoted \(\mathbf{x}=\mathbf{X}_{j,\mathcal{I}_{j}}\) and \(\mathbf{\bar{D}}=[\mathbf{D}_{1},\ldots,\mathbf{D}_{i-1},\mathbf{D}_{i+1},\ldots,\mathbf{D}_{C}]\) is the complementary dictionary to the current one.
The atom update operations of the IDL algorithm based on AK-SVD are summarized in Algorithm 1[3, Alg.4.2]. Note that the representations are also updated and that the representation error is manipulated efficiently.
```
Data:   current dictionary D (m x n)
        complementary dictionary D_bar (concatenation of the other classes' dictionaries)
        representation matrix X (n x N)
Result: updated dictionary D

1: compute error: E = Y - D*X
2: for j = 1 to n do
3:    modify error:          F = E_{I_j} + d_j * X_{j,I_j}
4:    update atom:           d_j = F * X_{j,I_j}' - 2*gamma * D_bar * D_bar' * d_j
5:    normalize atom:        d_j <- d_j / ||d_j||
6:    update representation: X_{j,I_j}' = F' * d_j
7:    recompute error:       E_{I_j} = F - d_j * X_{j,I_j}
8: end for
```
**Algorithm 1** Incoherent AK-SVD Dictionary Update
### _Incoherent Kernel Dictionary Learning classification_
In order to overcome the linear character of the representation, kernel dictionary learning (KDL) was introduced in [8, 9]. Through this method, the space of signals is extended to a nonlinear feature vector space. We associate with each signal \(\mathbf{y}\) the feature vector \(\varphi(\mathbf{y})\), where \(\varphi\) is a nonlinear function. The dictionary \(\mathbf{D}\) is also extended to a nonlinear space as \(\varphi(\mathbf{Y})\mathbf{A}\), where \(\mathbf{A}\) contains the coefficients of the dictionary. So, the DL problem (1) is transformed into
\[\begin{array}{ll}\min_{\mathbf{A},\mathbf{X}}&\|\varphi(\mathbf{Y})-\varphi(\mathbf{Y})\mathbf{A}\mathbf{X}\|_{F}^{2}\\ \text{s.t.}&\|\mathbf{x}_{\ell}\|_{0}\leq s,\ell=1:N\\ &\|\varphi(\mathbf{Y})\mathbf{a}_{j}\|=1,j=1:n.\end{array} \tag{6}\]
The problem becomes computationally tractable by the use of Mercer kernels, which allows the substitution of the scalar product of feature vectors with the computation of a kernel function \(k(\mathbf{x},\mathbf{y})=\varphi(\mathbf{y})^{\top}\varphi(\mathbf{x})\). Denoting \(\mathbf{K}_{il}=\varphi(\mathbf{Y}_{l})^{\top}\varphi(\mathbf{Y}_{i})\), the incoherent DL problem (3) is transformed into the Incoherent Kernel Dictionary Learning (IKDL) problem
\[\min_{\mathbf{A}_{i},\mathbf{X}_{i}}\sum_{i=1}^{C}\left\|\varphi(\mathbf{Y}_{i})-\varphi( \mathbf{Y}_{i})\mathbf{A}_{i}\mathbf{X}_{i}\right\|_{F}^{2}+\gamma\sum_{i=1}^{C}\sum_{l \neq i}\left\|\mathbf{A}_{i}^{\top}\mathbf{K}_{li}\mathbf{A}_{l}\right\|_{F}^{2}. \tag{7}\]
Using a similar alternate optimization technique and similar notations, the kernel correspondent of problem (4) for optimizing an atom \(\mathbf{a}_{j}\) is
\[\min_{\mathbf{a}_{j}}\left\|\varphi(\mathbf{Y}_{i})\mathbf{F}_{ij}-\varphi(\bm {Y}_{i})\mathbf{a}_{j}\mathbf{X}_{j,\mathcal{I}_{j}}\right\|_{F}^{2}+ \tag{8}\] \[2\gamma\sum_{l\neq i}\left\|\mathbf{A}_{l}^{\top}\mathbf{K}_{il}\mathbf{a}_{j }\right\|_{F}^{2}.\]
In order to solve this optimization problem, we compute the partial derivatives with respect to atom \(\mathbf{a}_{j}\) as follows:
\[\frac{\partial\left\|\varphi(\mathbf{Y}_{i})\left(\mathbf{F}_{ij}-\mathbf{a}_{j}\mathbf{x}^{\top }\right)\right\|_{F}^{2}}{\partial\mathbf{a}_{j}}=-2\mathbf{K}_{ii}\left(\mathbf{F}_{ij}- \mathbf{a}_{j}\mathbf{x}^{\top}\right)\mathbf{x} \tag{9}\]
and
\[\frac{\partial\left\|\mathbf{A}_{l}^{\top}\mathbf{K}_{il}\mathbf{a}_{j}\right\|_{F}^{2}}{ \partial\mathbf{a}_{j}}=2\mathbf{K}_{il}^{\top}\mathbf{A}_{l}\mathbf{A}_{l}^{\top}\mathbf{K}_{il} \mathbf{a}_{j}. \tag{10}\]
By using (9) and (10), the minimum in (8) is obtained when
\[-\mathbf{K}_{ii}\left(\mathbf{F}_{ij}-\mathbf{a}_{j}\mathbf{x}^{\top}\right)\mathbf{x}+2\gamma\sum_{l \neq i}\mathbf{K}_{il}^{\top}\mathbf{A}_{l}\mathbf{A}_{l}^{\top}\mathbf{K}_{il}\mathbf{a}_{j}=0 \tag{11}\]
and so the solution is
\[\mathbf{a}_{j}=\left(\mathbf{K}_{ii}\|\mathbf{x}\|^{2}+2\gamma\sum_{l\neq i}\mathbf{K}_{il}^{\top} \mathbf{A}_{l}\mathbf{A}_{l}^{\top}\mathbf{K}_{il}\right)^{-1}\mathbf{K}_{ii}\mathbf{F}_{ij}\mathbf{x}. \tag{12}\]
The resulting atom is the solution of a linear system whose size is the number of training signals in the class. Given the complexity of the problem, we seek a more convenient approximation.
We note that, given the atom \(\mathbf{a}_{j}\), the optimal associated representation in (8) is \(\mathbf{X}_{j,\mathcal{I}_{j}}^{\top}=\mathbf{F}_{ij}^{\top}\mathbf{K}_{ii}\mathbf{a}_{j}\), like in the kernel K-SVD algorithm (the penalty does not contain the representation). We insert this optimal representation in (8) and obtain
\[\min_{\mathbf{a}_{j}}\left\|\varphi(\mathbf{Y}_{i})\left(\mathbf{F}_{ij}-\mathbf{a}_{j}\mathbf{a}_{ j}^{\top}\mathbf{K}_{ii}\mathbf{F}_{ij}\right)\right\|_{F}^{2}+2\gamma\left\|\hat{\mathbf{K} }_{i}\mathbf{a}_{j}\right\|_{F}^{2}, \tag{13}\]
where
\[\hat{\mathbf{K}}_{i}=\left[\mathbf{K}_{i1}^{\top}\mathbf{A}_{1}\ \ldots\ \mathbf{K}_{i,i-1}^{\top}\mathbf{A}_{i-1}\ \mathbf{K}_{i,i+1}^{\top}\mathbf{A}_{i+1}\ \ldots\ \mathbf{K}_{iC}^{\top}\mathbf{A}_{C}\right]^{\top}. \tag{14}\]
Expressing the Frobenius norm via its trace form, the new objective from (13) becomes
\[Tr\left[\left(\mathbf{F}_{ij}-\mathbf{a}_{j}\mathbf{a}_{j}^{\top}\mathbf{K}_{ii} \mathbf{F}_{ij}\right)^{\top}\mathbf{K}_{ii}\left(\mathbf{F}_{ij}-\mathbf{a}_{j}\mathbf{a}_{j}^{ \top}\mathbf{K}_{ii}\mathbf{F}_{ij}\right)\right]+\] \[2\gamma Tr\left[\mathbf{a}_{j}^{\top}\mathbf{K}_{i}^{\top}\mathbf{\hat{\mathbf{K} }}_{i}\mathbf{a}_{j}\right]. \tag{15}\]
After direct transformations and neglecting the terms that do not depend on \(\mathbf{a}_{j}\), we are left with the minimization of
\[-\mathbf{a}_{j}^{\top}\left(\mathbf{K}_{ii}\mathbf{F}_{ij}\mathbf{F}_{ij}^{\top}\mathbf{K}_{ii}-2 \gamma\hat{\mathbf{K}}_{i}^{\top}\hat{\mathbf{K}}_{i}\right)\mathbf{a}_{j}. \tag{16}\]
The solution is the eigenvector corresponding to the maximum eigenvalue of the matrix
\[\mathbf{H}=\mathbf{K}_{ii}\mathbf{F}_{ij}\mathbf{F}_{ij}^{\top}\mathbf{K}_{ii}-2\gamma\hat{\mathbf{K}} _{i}^{\top}\hat{\mathbf{K}}_{i}. \tag{17}\]
Since this is again a high complexity operation, we make a single iteration of the power method on the matrix \(\mathbf{H}\). So, given the current atom \(\mathbf{a}_{j}^{(k)}\) (at iteration \(k\)), the new atom is
\[\mathbf{a}_{j}^{(k+1)}=\mathbf{H}\mathbf{a}_{j}^{(k)}=\mathbf{K}_{ii}\mathbf{F}_{ij}\mathbf{x}-2\gamma \hat{\mathbf{K}}_{i}^{\top}\hat{\mathbf{K}}_{i}\mathbf{a}_{j}^{(k)}, \tag{18}\]
followed by atom normalization. We have denoted again \(\mathbf{x}=\mathbf{X}_{j,\mathcal{I}_{j}}\). The atom update (18) is the kernel version of (5).
The atom update operations of the IKDL algorithm are summarized in Algorithm 2 for a single dictionary (hence the index \(i\) has disappeared). We also propose an improvement with respect to the structure of Algorithm 1. We note that the representation update uses the most recent version of the current atom; however, the error matrix \(\mathbf{F}\) is computed using the previous version of the atom. By introducing the most recent version of the atom in the error, the representation update becomes
\[\left(\mathbf{X}_{j,\mathcal{I}_{j}}^{\top}\right)^{(k+1)}=\mathbf{F}^{\top}\mathbf{K}\bm {a}_{j}=\mathbf{E}_{\mathcal{I}_{j}}^{\top}\mathbf{K}\mathbf{a}_{j}+\left(\mathbf{X}_{j, \mathcal{I}_{j}}^{\top}\right)^{(k)}. \tag{19}\]
Due to normalization, we have \(\mathbf{a}_{j}^{\top}\mathbf{K}\mathbf{a}_{j}=1\), so this product disappears from the second term above. We call this version of the algorithm Updated-error AK-SVD (UAK-SVD) and we will compare it with the usual AK-SVD update. The difference is only in the representation updates, step 6 of Algorithms 1 and 2.
```
Data:   kernel matrix K (N x N)
        current dictionary A (N x n)
        stacked cross-kernel matrix K_hat, as in eq. (14)
        representation matrix X (n x N)
Result: updated dictionary A

1: compute error: E = I - A*X
2: for j = 1 to n do
3:    modify error:          F = E_{I_j} + a_j * X_{j,I_j}
4:    update atom:           a_j = K * F * X_{j,I_j} - 2*gamma * K_hat' * K_hat * a_j
5:    normalize atom:        a_j <- a_j / (a_j' * K * a_j)^(1/2)
6:    update representation: X_{j,I_j}' <- E_{I_j}' * K * a_j + X_{j,I_j}'
7:    recompute error:       E_{I_j} = F - a_j * X_{j,I_j}
8: end for
```
**Algorithm 2** Incoherent Kernel UAK-SVD Dictionary Update
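A minimal NumPy transcription of one Algorithm 2 sweep may make the update concrete. The function and variable names below are our own illustration, and the kernel-norm normalization assumes \(\mathbf{K}\) is positive semidefinite:

```python
# Sketch of one Algorithm 2 sweep: K is the class kernel matrix, A the
# coefficient dictionary, X the sparse codes, K_hat the stacked
# cross-kernel matrix of eq. (14).
import numpy as np

def ikdl_atom_sweep(K, A, X, K_hat, gamma):
    """One dictionary-update sweep of Algorithm 2 for a single class."""
    E = np.eye(K.shape[0]) - A @ X             # representation error, E = I - A X
    for j in range(A.shape[1]):
        I = np.flatnonzero(X[j])               # signals that use atom j
        if I.size == 0:
            continue
        F = E[:, I] + np.outer(A[:, j], X[j, I])    # error without atom j
        # eq. (18): one power-method step, then kernel-norm normalization
        a = K @ (F @ X[j, I]) - 2 * gamma * (K_hat.T @ (K_hat @ A[:, j]))
        a = a / np.sqrt(a @ K @ a)
        A[:, j] = a
        # eq. (19): UAK-SVD representation update with the *new* atom
        X[j, I] = E[:, I].T @ (K @ a) + X[j, I]
        E[:, I] = F - np.outer(a, X[j, I])     # recompute error
    return A, X
```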
For the classification scheme we need only the reconstruction errors from equation (2). For the kernel version, the classification of a signal \(\mathbf{y}\) results from
\[c=\underset{i=1:C}{\text{argmin}}\ \left\|\varphi(\mathbf{y})-\varphi(\mathbf{Y}_{i})\mathbf{A}_{i} \mathbf{x}_{i}\right\|,\ \text{with}\ \left\|\mathbf{x}_{i}\right\|_{0}\leq s, \tag{20}\]
which leads to
\[c=\underset{i=1:C}{\text{argmin}}\ k(\mathbf{y},\mathbf{y})+\mathbf{x}_{i}^{\top}\mathbf{A}_{i}^{\top}\mathbf{K}_{ii}\mathbf{A}_{i}\mathbf{x}_{i}-2k(\mathbf{y},\mathbf{Y}_{i})\mathbf{A}_{i}\mathbf{x}_{i}, \tag{21}\] \[\text{with}\ \left\|\mathbf{x}_{i}\right\|_{0}\leq s.\]
Here, as well as in the IKDL algorithm, the representations are computed with Kernel OMP [8].
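For concreteness, the residual in (21) can be evaluated with kernel quantities only; a small NumPy sketch (all names are illustrative):

```python
# Sketch: ||phi(y) - phi(Y_i) A_i x_i||^2 expanded into kernel evaluations.
import numpy as np

def kernel_residual(k_yy, k_yY, K_ii, A_i, x_i):
    """k_yy: scalar k(y, y); k_yY: vector of k(y, Y_i) values;
    K_ii: within-class kernel matrix; A_i, x_i: dictionary and sparse code."""
    Ax = A_i @ x_i
    return k_yy + Ax @ (K_ii @ Ax) - 2 * (k_yY @ Ax)
```

The predicted class is then the argmin of this residual over \(i\), with each \(\mathbf{x}_{i}\) computed by Kernel OMP.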
## III Experiments
In this section we present the main results obtained with the Incoherent Kernel Dictionary Learning algorithm. The datasets used in the simulations are YaleB [10], AR Face [11] and Caltech 101 [12].
Each dataset is used independently for evaluation, in the form provided in [13]. We measure performance through classification accuracy, training time and testing time.
All the algorithms were developed in Matlab \(2018a\), on a laptop with \(3.5\)GHz Intel CPU and \(16\) GB RAM memory. The execution time and accuracy are reported as the average over the 3 best results. For the methods that require the use of a kernel function, we used two types of kernels: radial basis function kernel (\(k(\mathbf{x},\mathbf{y})=\exp\frac{-||\mathbf{x}-\mathbf{y}||_{2}^{2}}{2\sigma^{2}}\)) and polynomial kernel (\(k(\mathbf{x},\mathbf{y})=(\mathbf{x}^{\top}\mathbf{y}+\alpha)^{\beta}\)). For the kernel methods, we have tried different parameter values in our simulations. We have chosen the final form based on the best results from these simulations. The code for the proposed algorithms is available at [https://github.com/denisilie94/Incoherent-Kernel-Dictionary-Learning](https://github.com/denisilie94/Incoherent-Kernel-Dictionary-Learning).
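The two kernel functions above translate directly to code; the Gram-matrix helper below is our own convenience wrapper, not part of the released implementation:

```python
import numpy as np

def rbf_kernel(x, y, sigma):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def poly_kernel(x, y, alpha, beta):
    """k(x, y) = (x^T y + alpha)^beta."""
    return (x @ y + alpha) ** beta

def gram_matrix(Y1, Y2, k):
    """K[i, j] = k(Y1[:, i], Y2[:, j]) for signals stored as columns."""
    return np.array([[k(Y1[:, i], Y2[:, j]) for j in range(Y2.shape[1])]
                     for i in range(Y1.shape[1])])
```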
**YaleB Database** is organized into two sub-datasets, according to the extended and cropped images. The dataset is composed of \(16128\) images of \(38\) human subjects under \(9\) poses and \(64\) illumination conditions. During the simulation step only the extended dataset was used, including \(2414\) face images of \(38\) persons. For the training and testing step the images per subject were split in half. The dimension of the feature vectors is \(504\).
**AR Face Database** is a face dataset containing more than \(4000\) color images corresponding to \(126\) different people (\(70\) men and \(56\) women). The images were taken having a frontal view with different facial expressions, illumination conditions and occlusions. For the experimental phase, a set of \(2600\) images of \(50\) female and \(50\) male subjects is extracted. For each subject, \(20\) images were used for training and \(6\) for testing.
Besides the face recognition tasks, an object recognition task was attempted in the simulations. For this we used the **Caltech 101 Database**. The dataset includes 9,144 images from 102 classes (101 common object classes and a background class). The number of samples in each category varies from 31 to 800. In the experiments, 30 samples per category were used for training, while the rest were used for testing.
During the simulations we performed tests with dictionaries of different sizes (\(40\), \(60\), \(80\) and \(100\) atoms) having a sparsity constraint equal to \(10\%\), \(20\%\), \(50\%\) and \(80\%\) of the number of atoms. Taking into account the training time and the resulted classification accuracy, we chose to use only dictionaries with 40 atoms and a sparsity constraint of \(20\). Increasing sparsity can improve the results, but this will also affect the training time. All tests were performed on 10 DL iterations. For a larger number of iterations the improvement in accuracy is insignificant. We set the hyperparameters of the optimization problem following a grid search: \(\gamma\in[0.01,0.1,0.5,1,2,4,6]\), \(\sigma\in[0.5,1,2,4,5,6,8,10]\), \(\alpha\in[0.5,1,2,4]\) and \(\beta\in[2,3]\). In the case of all datasets, for the IDL problem we used \(\gamma=4\), while for the IKDL problem \(\gamma\) was set to \(0.1\). Regarding the kernel functions, we used the following parameters: \(\sigma=4\), \(\alpha=2\) and \(\beta=2\) for YaleB dataset; \(\sigma=8\), \(\alpha=4\) and \(\beta=2\) for AR Face dataset; and \(\sigma=5\), \(\alpha=4\) and \(\beta=2\) for Caltech 101 dataset.
The main results are summarized in Tables I, II for classification with plain incoherent DL; Tables III, IV contain results with IKDL and the RBF kernel; Tables V and VI contain results with IKDL and polynomial kernel. As we can see, the results vary depending on the chosen algorithm. The UAK-SVD method usually improves the classification accuracy, although sometimes only slightly. Regarding the kernel extension, the introduced nonlinearity does not always insure an improvement, as we can see for YaleB dataset, but there is a strong improvement regarding the execution time. In the case of YaleB dataset, the training time decreased by \(10\) times, while for the AR Face dataset the training is done \(25\) times faster. The best improvement is visible for the Caltech 101 dataset, where training time has been reduced \(200\) times. The execution time is reduced due to the small size of the dictionaries in the kernel version. This property is valid only for cases where the signal size is much larger than the number of signals per class; for example, in the YaleB case, the dictionary of a class has size \(504\times 40\) in the IDL approach, but size only \(32\times 40\) in IKDL; it is thus remarkable that the accuracy loss is so small when kernels are used. This property is also valid for the other datasets, where we have signals of size 540 for AR Face dataset and 3000 for Caltech 101 dataset.
## IV Conclusions
In this paper we have extended the family of dictionary learning algorithms for classification problems. We have presented a modified version of AK-SVD in which the most recent version of an atom is used in all respects in the representation update. We have proposed a kernel version of incoherent AK-SVD that can improve classification performance by increasing the separation of dictionaries dedicated
to different signal classes. The experimental results confirm the good behavior of our algorithms, especially in terms of complexity.
|
2306.09979 | Evaluation of Speech Representations for MOS prediction | In this paper, we evaluate feature extraction models for predicting speech
quality. We also propose a model architecture to compare embeddings of
supervised learning and self-supervised learning models with embeddings of
speaker verification models to predict the metric MOS. Our experiments were
performed on the VCC2018 dataset and a Brazilian-Portuguese dataset called
BRSpeechMOS, which was created for this work. The results show that the Whisper
model is appropriate in all scenarios: with both the VCC2018 and BRSpeech- MOS
datasets. Among the supervised and self-supervised learning models using
BRSpeechMOS, Whisper-Small achieved the best linear correlation of 0.6980, and
the speaker verification model, SpeakerNet, had linear correlation of 0.6963.
Using VCC2018, the best supervised and self-supervised learning model,
Whisper-Large, achieved linear correlation of 0.7274, and the best model
speaker verification, TitaNet, achieved a linear correlation of 0.6933.
Although the results of the speaker verification models are slightly lower, the
SpeakerNet model has only 5M parameters, making it suitable for real-time
applications, and the TitaNet model produces an embedding of size 192, the
smallest among all the evaluated models. The experiment results are
reproducible with publicly available source-code1 . | Frederico S. Oliveira, Edresson Casanova, Arnaldo Cândido Júnior, Lucas R. S. Gris, Anderson S. Soares, Arlindo R. Galvão Filho | 2023-06-16T17:21:42Z | http://arxiv.org/abs/2306.09979v1 | # Evaluation of Speech Representations for MOS prediction
###### Abstract
In this paper, we evaluate feature extraction models for predicting speech quality. We also propose a model architecture to compare embeddings of supervised learning and self-supervised learning models with embeddings of speaker verification models to predict the metric MOS. Our experiments were performed on the VCC2018 dataset and a Brazilian-Portuguese dataset called BRSpeechMOS, which was created for this work. The results show that the Whisper model is appropriate in all scenarios: with both the VCC2018 and BRSpeechMOS datasets. Among the supervised and self-supervised learning models using BRSpeechMOS, Whisper-Small achieved the best linear correlation of 0.6980, and the speaker verification model, SpeakerNet, had linear correlation of 0.6963. Using VCC2018, the best supervised and self-supervised learning model, Whisper-Large, achieved linear correlation of 0.7274, and the best model speaker verification, TitaNet, achieved a linear correlation of 0.6933. Although the results of the speaker verification models are slightly lower, the SpeakerNet model has only 5M parameters, making it suitable for real-time applications, and the TitaNet model produces an embedding of size 192, the smallest among all the evaluated models. The experiment results are reproducible with publicly available source-code1.
Footnote 1: [https://github.com/freds0/BSpeech-MOS-Prediction](https://github.com/freds0/BSpeech-MOS-Prediction)
Keywords:speech assessment, speech evaluation, mos prediction
## 1 Introduction
The development of speech synthesis and voice conversion models has increased the need for automatic methods to evaluate the quality of generated speech. The most reliable methods among the available options rely on manual evaluation, where human evaluators are chosen to assess signal quality using a predefined numerical scale. In recent work, self-supervised learning (SSL) models have been used to predict the quality of synthesized speech. Representations obtained from models such as Wav2Vec 2.0 [3], HuBERT [12], WavLM [4], and TERA [18] have been used. These models produce high quality representations and their training requires a large amount of data.
Whisper [24], on the other hand, is a general-purpose speech recognition model based on supervised learning (SL), developed with the goal of creating a robust system that generalizes well across domains, tasks and languages without relying on fine-tuning to achieve high accuracy. Whisper embeddings can be used for speech recognition, speech translation, language identification and other tasks. The authors demonstrated that training on a large and diverse supervised dataset alone can significantly enhance the robustness of speech systems. However, to date, the embeddings generated by the Whisper model have not been evaluated for their effectiveness in the task of speech quality prediction.
Speaker embeddings generated by speaker verification (SV) models offer an alternative source of high-quality embeddings. Unlike the representations discussed above, speaker embeddings have a fixed size that remains constant regardless of the length of the utterance. Earlier studies, such as [31], have examined the properties captured by speaker embeddings, such as the spoken content, the speaker's gender, the speaking rate and audio channel information. These studies have demonstrated satisfactory performance on various tasks, which has motivated further exploration of these features for predicting the quality of synthesized speech. Also, so far, the representations of SV models have not been evaluated in the speech quality prediction task.
In this paper, we propose to evaluate high-quality representations from both SL and SSL models, as well as SV representations, for the purpose of predicting the quality of synthesized speech in text-to-speech (TTS) systems. In addition, we investigate the use of these models to evaluate speech samples in a low-resource dataset in Brazilian Portuguese. Models based on SV can be an alternative for generating high-quality embeddings with low computational cost, allowing the evaluation of speech quality in real time.
This paper is organized as follows: Section 2 presents some prior research on automatically predicting the quality of synthesized speech. Section 3 outlines the proposed model architecture developed in this study and Section 4 details the experiments proposed. Then, Section 5 discusses the obtained results, and finally, Section 7 presents the conclusions of this work.
## 2 Related Works
Several studies have addressed the development of automatic methods for evaluating the quality of synthesized speech and have obtained results that correlate with human evaluation. The first pioneering work that used Deep Learning to predict quality was proposed in 2016 with the AutoMOS model by Patton et al. [23]. Fu et al. [9] proposed the Quality-Net model to predict PESQ [26], a metric which compares a degraded speech signal with a reference speech signal to provide an objective measure of the voice quality perceived by a human listener. Lo et al. [19] developed MOSNet, an improved version of Quality-Net for the MOS prediction task.
Cooper et al. [7] investigate the ability of SSL models to predict speech quality in out-of-domain scenarios. With the aim of achieving this goal, the researchers conducted experiments on embedding extraction models, such as Wav2Vec 2.0 [3] and HuBERT [12], and compare them to the MOSNet model. The models were trained on datasets such as the Blizzard Challenge [13] and the Voice Conversion Challenge [8], and then evaluated on the ASVSpoof 2019 [27] and Blizzard Challenge 2019 [32] datasets. The findings reveal that the Wav2Vec model outperforms the other evaluated models. However, evaluating without fine-tuning in a zero-shot setting proved to be challenging and resulted in a notable decrease in performance.
MOSA-Net [34] is a cross-domain model that uses inputs from multiple domains, including spectrograms, waveforms and SSL features. According to the authors, using features from multiple domains contributes to more accurate results, and training for predicting multiple metrics outperforms the task of predicting a single one. Although the model can be adjusted to predict subjective metrics, no comparative experiments have been conducted with other models.
Tseng et al. [28] compared models for predicting MOS using embeddings generated by Wav2Vec 2.0 [3], TERA [17], CAC [22], and APC [5]. The authors proposed an architecture in which the evaluator's identity is an additional input that models the human bias. The experiments show that the Wav2Vec model achieves the best results at the sentence and system levels. Similarly, Tseng, Kao and Lee [29] proposed DDOS, a model for MOS prediction that uses Wav2Vec 2.0 for feature extraction in conjunction with a representation of the evaluator, in order to specify the human bias. The model consists of two submodules, the regression head and the distribution head, which use attentive pooling and DNNs to predict the score and the distribution of the data. The results of the submodules are then combined to predict the MOS.
Yang et al. [33] developed a framework for improving speech quality prediction by combining various SSL models, such as Wav2Vec 2.0, WavLM, HuBERT, and Data2Vec [2]. The framework consists of two parts: the first involves training the SSL models individually, while the second involves fusing the results of each model. The goal of the framework is to fine-tune the SSL models and enhance the accuracy of MOS prediction, treating model fusion as a technique similar to ensembling. Ragano et al. [25] presented experiments combining Wav2Vec 2.0 representations with features extracted from convolutional layers, exploring different architectural combinations. Ultimately, the authors found that incorporating features extracted from convolutional layers did not improve the results.
## 3 Model Proposal
The proposed model for evaluating the quality of synthesized speech consists of two modules: the Feature Extractor, which is responsible for extracting speech features, and the MOS Predictor, which predicts speech quality based on the extracted features. The architecture of the MOS Predictor consists of two dense blocks, ReLU activation function, and dropout. Several models are evaluated as the Feature Extractor, including SV, SL, and SSL models. The architecture of the proposed model can be seen in Figure 1. Details of the selected models are given below.
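A minimal PyTorch sketch of the MOS Predictor head is given below; the hidden width and dropout rate are placeholder values of our own choosing, not taken from the paper:

```python
import torch.nn as nn

class MOSPredictor(nn.Module):
    """Two dense blocks with ReLU and dropout, mapping an embedding to a MOS score."""

    def __init__(self, embedding_dim, hidden_dim=256, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embedding_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, embedding):               # embedding: (batch, embedding_dim)
        return self.net(embedding).squeeze(-1)  # predicted MOS per sample
```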
### Speaker Verification Models
The GE2E, Clova, TitaNet, and SpeakerNet models, originally proposed for SV, were selected for speech feature extraction and are discussed in more detail below:
**GE2E**[30] is a model that uses the Generalized End-to-End loss function for training and consists of LSTM layers and a fully connected layer with softmax activation. It extracts the vector of embeddings from the log-mel filterbank energies of each speaker's
sentences and computes the centroid of each speaker. The similarity matrix is determined from the centroid of each speaker and the parameters learned during training.
**Clova**[11] is a model based on the ResNet architecture [10] proposed in 2020 for speaker recognition. There are two versions: Q/SAP, lighter and with fewer parameters, and H/ASP, which focuses on the quality of the results. Both versions take log-mel filterbanks as input and use residual blocks and attentive pooling layers. Version Q/SAP uses self-attentive pooling. The model was trained with a combination of angular prototypical and softmax loss functions. Version H/ASP achieved higher accuracy and was selected for use in this work.
**SpeakerNet**[14] is a model with encoder-decoder architecture proposed in 2020 for speaker recognition and verification. It is based on the QuartzNet [16] model and has a statistics pooling layer for intermediate feature extraction. The model is trained with the loss functions cross-entropy and additive angular margin. There are two versions, SpeakerNet-L and SpeakerNet-M, with 7M and 5M trainable parameters, respectively. The SpeakerNet-M version showed better results and is used in this work.
**TitaNet**[15] is a model with an encoder-decoder architecture proposed in 2022 for speaker verification tasks. It is based on the ContextNet model and has an initial block, a final block, and intermediate blocks that use time-channel separable convolutional layers and residual connections with squeeze and excitation layers. The model uses an attentive statistics pooling layer to extract temporal-independent intermediate features and a decoder consisting of two linear layers.
### Self-Supervised Learning based Models
SSL models are trained with thousands of hours of unlabeled audio. In this work, the following models were selected: Wav2vec 2.0, HuBERT, and WavLM.
**Wav2vec 2.0**[3] was developed for automatic speech recognition and learns latent representations by masking parts of the audio input. A new
Figure 1: The proposed model consists of two modules: Feature Extractor (in blue) and MOS Predictor (in yellow).
version, XLSR [6], was trained on a multilingual dataset consisting of 50,000 hours of recordings in 53 languages. The XLS-R [1] version is the latest and has been trained using over 400,000 hours of recordings in 128 languages.
**HuBERT**[12] learns latent representations of speech through training similar to that of Wav2Vec, along with the K-means algorithm used to discretize the input Mel spectrogram. In this work, two versions of the HuBERT model are used, called Large and xLarge, trained with 60,000 hours of English audio data.
**WavLM**[4] is a more general version of the HuBERT model that can be used for tasks such as speech separation, speaker diarization, speaker verification, and speech recognition. In this work, two versions were selected for evaluation: Large and BasePlus.
### Supervised Learning based Model
Radford et al. proposed Whisper (_Web-scale Supervised Pretraining for Speech Recognition_) [24], an encoder-decoder model based on the Transformer that maps an audio spectrogram to a sequence of text tokens. Whisper was trained in a supervised fashion on approximately 680,000 hours of labeled audio data in English and 96 other languages, including Brazilian Portuguese. Results show that the Whisper model is robust in different scenarios and outperforms SSL-based models when evaluated on different datasets. In this work, five versions were selected for evaluation: Tiny, Base, Small, Medium, and Large.
## 4 Experiments
This study evaluates a total of 16 models for predicting speech quality. Four of them are based on SV; seven are based on SSL (versions of Wav2vec 2.0 [3], WavLM [4], and HuBERT [12]); and five are based on SL (versions of Whisper). Table 1 summarizes the models evaluated in this study, showing the dimensions of the output embedding and the total number of parameters of each model to allow a direct comparison.
We used two datasets for the experiments in this study: the VCC2018 dataset [20] and a Brazilian-Portuguese dataset, known as BRSpeechMOS, which was created exclusively for the present study. The VCC2018 dataset consists of a total of 28,292 audio samples in English with a sampling rate of 16kHz, each sample being evaluated by 4 evaluators. The BRSpeechMOS dataset contains 2,428 audio samples at 16kHz, each evaluated by an average of two evaluators. The distribution of scores for the dataset can be seen in Figure 2. This dataset is used to assess the models' performance in a low-resource setting.
All evaluated models were first trained with the VCC2018 dataset and then fine-tuned with BRSpeechMOS. Model training was stopped early when no further improvement, measured by Spearman correlation, was observed on a test set. The weights of the best models were then selected and evaluated on a validation set. The experiments were performed on a DGX-1 server running Ubuntu Linux 18.04, equipped with a dual 20-core Intel Xeon E5-2698 v4 processor at 2.2 GHz, 256 GB of RAM, and an NVIDIA Tesla V100 GPU.
## 5 Results
The results are presented below, grouping the models according to the following categories: speaker verification (SV), self-supervised learning (SSL) and supervised learning (SL). The evaluation metrics used in this study include Pearson correlation (LCC), Spearman rank correlation coefficient (SRCC), Kendall-Tau rank correlation (KTAU), and mean square error (MSE).
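For reference, the four metrics can be computed directly with NumPy and SciPy; a minimal sketch follows (the function name and interface are ours, not from the paper).

```python
import numpy as np
from scipy import stats

def mos_metrics(y_true, y_pred):
    """Return the four evaluation metrics used in this study."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    lcc, _ = stats.pearsonr(y_true, y_pred)       # linear (Pearson) correlation
    srcc, _ = stats.spearmanr(y_true, y_pred)     # Spearman rank correlation
    ktau, _ = stats.kendalltau(y_true, y_pred)    # Kendall-Tau rank correlation
    mse = float(np.mean((y_true - y_pred) ** 2))  # mean squared error
    return {"LCC": lcc, "SRCC": srcc, "KTAU": ktau, "MSE": mse}
```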
### VCC2018 Experiments
Table 2 shows the results of all performed experiments using the VCC2018 dataset. For comparison purposes, the results of the experiments using the MOSNet [19] model
| Category | Model | Version | Output dim | Total param |
| --- | --- | --- | --- | --- |
| Baseline | MOSNet [19] | - | - | 1.1M |
| SV | TitaNet [15] | Large | [192] | 25.3M |
| SV | SpeakerNet [14] | Medium | [256] | 5M |
| SV | GE2E [30] | - | [256] | 1.4M |
| SV | CLOVA [11] | H/ASP | [512] | 8M |
| SSL | Wav2Vec 2.0 [1] | xls-r-300m | [1024, T] | 300M |
| SSL | Wav2Vec 2.0 [1] | xls-r-1b | [1280, T] | 1B |
| SSL | Wav2Vec 2.0 [1] | xls-r-2b | [1920, T] | 2B |
| SSL | WavLM [4] | Base-Plus | [768, T] | 94M |
| SSL | WavLM [4] | Large | [1024, T] | 316M |
| SSL | HuBERT [12] | Large | [768, T] | 300M |
| SSL | HuBERT [12] | xLarge | [1024, T] | 1B |
| SL | Whisper [24] | Tiny | [384, T] | 39M |
| SL | Whisper [24] | Base | [512, T] | 74M |
| SL | Whisper [24] | Small | [768, T] | 244M |
| SL | Whisper [24] | Medium | [1024, T] | 769M |
| SL | Whisper [24] | Large | [1280, T] | 1.5B |

Table 1: The MOSNet model is in the _Baseline_ category; the SV category contains models based on speaker verification; the SSL category, models based on self-supervised training; and the SL category, models based on supervised training. The "Output dim" column shows the size of the _embeddings_ generated by the _Feature Extractor_ module. The "Total param" column shows the total number of trainable parameters of the Feature Extractor module.
Figure 2: Distribution of scores for the BRSpeechMOS dataset.
are also presented. Among the SV models, TitaNet obtained the best results on all metrics, with LCC=0.6933, SRCC=0.6667, KTAU=0.5005, and MSE=0.0160. Nevertheless, the SpeakerNet, GE2E, and CLOVA models show similar results, all superior to the MOSNet model.
Among the SSL models, Table 2 shows that the Wav2Vec 2.0 xls-r-1b model achieved the best LCC value, equal to 0.7140. On the other metrics, however, the WavLM-Large model performs best, with SRCC=0.7036, KTAU=0.5316, and MSE=0.0151, while HuBERT presented the worst results among the SSL models evaluated. Among the SL models, the Whisper Large model presented the best results, with LCC=0.7274, SRCC=0.7061, KTAU=0.5365, and MSE=0.0194. It is worth noting that the Whisper Large model presented the best results among all models on the VCC2018 dataset.
### BRSpeechMOS Experiments
Table 3 shows the results of all experiments using the BRSpeechMOS dataset. The following experiments using the MOSNet model are also presented: _MOSNet ZeroShot_ (MZS), which was trained using only the VCC2018 dataset and follows the methodology and hyperparameters used by the original authors; _MOSNet From Scratch_ (MFS), which was trained exclusively with the BRSpeechMOS dataset; and _MOSNet Fine Tuning_ (MFT), which was pre-trained with the VCC2018 dataset and fine-tuned with the BRSpeechMOS dataset.
| Category | Model | Version | LCC ↑ | SRCC ↑ | KTAU ↑ | MSE ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline | MOSNet | - | 0.5588 | 0.5159 | 0.3765 | 0.5166 |
| SV | TitaNet | Large (TtN) | **0.6933** | **0.6667** | **0.5005** | **0.0160** |
| SV | SpeakerNet | Medium (SpN) | 0.6428 | 0.6210 | 0.4598 | 0.0202 |
| SV | GE2E | - (Ge2) | 0.6118 | 0.5846 | 0.4306 | 0.0193 |
| SV | CLOVA | H/ASP (CLO) | 0.6903 | 0.6623 | 0.4966 | 0.0162 |
| SSL | Wav2Vec 2.0 | xls-r-300m (Wv3) | 0.7090 | 0.6866 | 0.5190 | 0.0153 |
| SSL | Wav2Vec 2.0 | xls-r-1b (Wv1) | **0.7140** | 0.6893 | 0.5210 | 0.0268 |
| SSL | Wav2Vec 2.0 | xls-r-2b (Wv2) | 0.7014 | 0.6757 | 0.5096 | 0.0159 |
| SSL | WavLM | Base-Plus (WlB) | 0.6917 | 0.6816 | 0.5122 | 0.0163 |
| SSL | WavLM | Large (WlL) | 0.7120 | **0.7036** | **0.5316** | **0.0151** |
| SSL | HuBERT | Large (HbL) | 0.6692 | 0.6441 | 0.4800 | 0.0170 |
| SSL | HuBERT | xLarge (HbX) | 0.6871 | 0.6684 | 0.5012 | 0.0170 |
| SL | Whisper | Tiny (WpT) | 0.7072 | 0.6881 | 0.5187 | 0.0281 |
| SL | Whisper | Base (WpB) | 0.7178 | 0.6951 | 0.5249 | 0.0225 |
| SL | Whisper | Small (WpS) | 0.7136 | 0.6906 | 0.5218 | 0.0212 |
| SL | Whisper | Medium (WpM) | 0.7205 | 0.6957 | 0.5267 | 0.0195 |
| SL | Whisper | Large (WpL) | **0.7274** | **0.7061** | **0.5365** | **0.0194** |

Table 2: Results of experiments using the VCC2018 dataset.
The results of the experiments using the BRSpeechMOS dataset showed that not all models generalize well on a low-resource dataset. Among the SV models, the SpeakerNet model performed best on all evaluated metrics, with LCC=0.6963, SRCC=0.6772, KTAU=0.5173, and MSE=0.0311, followed by the CLOVA model. The TitaNet model presented the worst results. We believe that its poor performance on the BRSpeechMOS dataset is due to the small dimension of its output embedding (192, as shown in Table 1), which likely causes the embeddings to specialize in the features that differentiate speakers. More training data would therefore be needed for the MOS Predictor module to accurately map the features to the MOS score.
Among the SSL models using BRSpeechMOS, the WavLM Large model stood out, with LCC=0.6858, SRCC=0.6831, KTAU=0.5275, and MSE=0.0322. Table 3 also confirms that the HuBERT model has lower performance compared to the other SSL models. Among the SL models, the Whisper Small model had the best performance, with LCC=0.6980, SRCC=0.6968, KTAU=0.5400, and MSE=0.0440, followed by the Whisper Large model.
## 6 Discussion
When evaluating SV models using the VCC2018 dataset, all models presented good results. That is, using a dataset with a large number of samples, all models proved to be adequate to predict speech quality, with TitaNet presenting the best results. However, when conducting the same experiments with the BRSpeechMOS dataset, which has 2,428 samples, the results showed that the SpeakerNet model can extract more adequate
| Category | Model | Version | LCC ↑ | SRCC ↑ | KTAU ↑ | MSE ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline | MOSNet | Zero Shot (MZS) | 0.2196 | 0.2107 | 0.1520 | 0.0611 |
| Baseline | MOSNet | From Scratch (MFS) | 0.5090 | **0.3677** | **0.2693** | 0.0452 |
| Baseline | MOSNet | Fine Tuning (MFT) | **0.5118** | 0.3603 | 0.2612 | **0.0445** |
| SV | TitaNet | (TtN) | 0.1012 | 0.1177 | 0.0849 | 0.0623 |
| SV | SpeakerNet | (SpN) | **0.6963** | **0.6772** | **0.5173** | **0.0311** |
| SV | GE2E | (Ge2) | 0.2655 | 0.2584 | 0.1791 | 0.0704 |
| SV | CLOVA | (CLO) | 0.6860 | 0.6755 | 0.5123 | 0.0359 |
| SSL | Wav2Vec 2.0 | xls-r-300m (Wv3) | 0.6739 | 0.6593 | 0.5073 | 0.0335 |
| SSL | Wav2Vec 2.0 | xls-r-1b (Wv1) | 0.6539 | 0.6451 | 0.4937 | 0.0477 |
| SSL | Wav2Vec 2.0 | xls-r-2b (Wv2) | 0.6667 | 0.6439 | 0.4959 | 0.0341 |
| SSL | WavLM | Base-Plus (WlB) | 0.6082 | 0.5936 | 0.4463 | 0.0382 |
| SSL | WavLM | Large (WlL) | **0.6858** | **0.6831** | **0.5275** | **0.0322** |
| SSL | HuBERT | Large (HbL) | 0.5959 | 0.5863 | 0.4407 | 0.0482 |
| SSL | HuBERT | xLarge (HbX) | 0.6262 | 0.6214 | 0.4669 | 0.0368 |
| SL | Whisper | Tiny (WpT) | 0.6587 | 0.6240 | 0.4753 | 0.0564 |
| SL | Whisper | Base (WpB) | 0.6460 | 0.6083 | 0.4645 | 0.0486 |
| SL | Whisper | Small (WpS) | **0.6980** | **0.6968** | **0.5400** | **0.0440** |
| SL | Whisper | Medium (WpM) | 0.6904 | 0.6696 | 0.5161 | 0.0534 |
| SL | Whisper | Large (WpL) | 0.6956 | 0.6852 | 0.5277 | 0.0777 |

Table 3: Results of experiments using the BRSpeechMOS dataset.
features to evaluate sample quality even when using a much smaller dataset compared to VCC2018.
The representations of the BRSpeechMOS samples extracted with the SpeakerNet model were projected into 2D space using t-SNE [21]. Figure 3 illustrates the relationship between the sample representations and their MOS scores. It can be observed that samples with scores 5 (blue), 4 (cyan), and 3 (green) form clusters. There are, however, also clusters formed by samples with scores 1 (red), 2 (yellow), and 3 (green). If the BRSpeechMOS samples had been rated by a larger number of evaluators, the clusters would likely be more homogeneous. The projections for VCC2018 are not shown since all models performed relatively well in the quality prediction task.
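A minimal sketch of this projection step with scikit-learn and Matplotlib is shown below; the t-SNE hyperparameters and the colormap are illustrative choices rather than the exact settings used for Figure 3.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_embedding_projection(embeddings, mos_scores):
    """Project feature-extractor embeddings to 2D with t-SNE and color
    each point by its MOS score."""
    coords = TSNE(n_components=2, random_state=0).fit_transform(
        np.asarray(embeddings))
    plt.scatter(coords[:, 0], coords[:, 1], c=mos_scores, cmap="jet", s=8)
    plt.colorbar(label="MOS")
    plt.tight_layout()
    plt.show()
```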
Experiments with the SSL models Wav2Vec 2.0 [1], WavLM [4], and HuBERT [12], and with the SL model Whisper [24], on both datasets showed very similar results. The Whisper model showed the best results, which can be attributed to its large amount of training data in different languages, including Brazilian Portuguese. In contrast, the HuBERT model showed slightly worse results than the other SSL models, as evidenced by the correlation metrics in Tables 2 and 3.
When comparing all models, it is noticeable that the SL and SSL models are superior to the SV models. It is worth noting, however, that the best-performing SV model, SpeakerNet, has only 5M parameters, while the smallest SL model, Whisper-Tiny, has 39M parameters, almost 8 times the number of parameters of SpeakerNet. To better compare the models, Figure 4 shows the ranking of the models with the best results, with the models sorted on the \(x\) axis by the number of parameters of the Feature Extractor module.
## 7 Conclusions

Among all the evaluated models, the Whisper Large model achieved the best performance on the VCC2018 dataset. Additionally, when applied to the BRSpeechMOS dataset, the Whisper model, specifically the Small version, continued to exhibit the highest predictive accuracy, highlighting its ability to generalize well. Furthermore, our study suggests that models designed for speaker verification can also be suitable for predicting speech quality, with the SpeakerNet model performing particularly well, even on the BRSpeechMOS dataset, which has limited resources and was created exclusively for this study.
## 8 Acknowledgements
The authors are grateful to the Center of Excellence in Artificial Intelligence2 (CEIA) at the Federal University of Goiás (UFG) for their support, and to CyberLabs3 and Coqui4 for their valuable assistance.
Footnote 2: [https://ceia.ufg.br/](https://ceia.ufg.br/)
Footnote 3: [https://cyberlabs.ai](https://cyberlabs.ai)
Footnote 4: [https://coqui.ai/](https://coqui.ai/)
|
2307.06963 | Is Task-Agnostic Explainable AI a Myth? | Our work serves as a framework for unifying the challenges of contemporary
explainable AI (XAI). We demonstrate that while XAI methods provide
supplementary and potentially useful output for machine learning models,
researchers and decision-makers should be mindful of their conceptual and
technical limitations, which frequently result in these methods themselves
becoming black boxes. We examine three XAI research avenues spanning image,
textual, and graph data, covering saliency, attention, and graph-type
explainers. Despite the varying contexts and timeframes of the mentioned cases,
the same persistent roadblocks emerge, highlighting the need for a conceptual
breakthrough in the field to address the challenge of compatibility between XAI
methods and application tasks. | Alicja Chaszczewicz | 2023-07-13T07:48:04Z | http://arxiv.org/abs/2307.06963v1 | # Is Task-Agnostic Explainable AI a Myth?
###### Abstract
Our work serves as a framework for unifying the challenges of contemporary explainable AI (XAI). We demonstrate that while XAI methods provide supplementary and potentially useful output for machine learning models, researchers and decision-makers should be mindful of their conceptual and technical limitations, which frequently result in these methods themselves becoming black boxes. We examine three XAI research avenues spanning image, textual, and graph data, covering saliency, attention, and graph-type explainers. Despite the varying contexts and timeframes of the mentioned cases, the same persistent roadblocks emerge, highlighting the need for a conceptual breakthrough in the field to address the challenge of compatibility between XAI methods and application tasks.
## 1 Introduction
The development of explainability methods, or their incorporation into AI models, has often been motivated by the aim of enhancing user trust in the model. Users may view explanations similarly; for instance, a survey of clinicians by (Tonekaboni et al., 2019) revealed that they perceive explainability as _"a means to justify their clinical decision-making."_
Despite numerous results highlighting limitations of explanations (Adebayo et al., 2018; Rudin, 2019; Slack et al., 2020; Tomsett et al., 2020; Neely et al., 2021; Bilodeau et al., 2022), they continue to be considered in high-risk decision contexts, such as healthcare (Loh et al., 2022).
While various methods have been developed and used, we observe a missing link between the theoretical or technical problems that these methods address and the way in which practitioners may hope to apply them - to assist with actual real-world tasks and decision-making. Our work elucidates why XAI methods are not, in general, reliable decision justification enablers.
We focus on a broad class of explanations, for which a complex uninterpretable machine learning model is first trained, and then XAI aims to "explain" its predictions for a given input. We expose the unstable foundations of the examined XAI methods by presenting three illustrative research case studies that disclose a prevailing pattern across various machine learning tasks, data domains, and time frames. This pattern can be succinctly described as follows:
* Stage 1: XAI method is developed, but its utility is presented in a simplistic setup; theoretical guarantees relevant to potential real-world tasks are missing.
* Stage 2: As variations of the method are developed, an alternative perspective allows for a more comprehensive evaluation, revealing the method's limitations and bringing its reliability into question.
* Stage 3: The inconclusive, and sometimes contradictory results, provided by subsequent evaluation techniques, metrics, and theoretical considerations, make it challenging to determine the best-suited method variation for specific practical tasks and to identify when the method is truly effective and dependable. It demonstrates that the methods cannot be considered reliable task-agnostic explainers.
Our work is not a survey but can serve as a framework for unifying the challenges in the development process of contemporary XAI, which tends to follow the listed above stages. We hope this framework assists researchers, current practitioners, and potential XAI users in a better contextual understanding of what XAI currently is and is not.
In the following, we investigate three XAI research trajectories covering image, text, and graph data, which utilize saliency, attention, and graph-based explanation methods, respectively. Each research story is presented in a distinct section, wherein the subsections directly correspond to the stages delineated in the framework we propose.
## 2 Saliency explainers & image data
The first research story is a classic of explainable AI literature. Natural to modern deep learning pipelines and straightforward to implement, saliency methods have quickly become a popular XAI tool (Samek et al., 2021). The idea behind them is to apply the first-order approximation of the amount of influence that the input data has on the machine learning model output.
The saliency methods can be used across different data domains. Originally, they were introduced for computer vision tasks and are still frequently applied in this context. In the following discussion, we also focus on image data.
### Stage 1: Idea and methods
The vanilla saliency method (Simonyan et al., 2014) boils down to a gradient computation of output with respect to input data. The result is a map with the same dimensionality as the input, showing each input part's detailed "importance" (here, gradient). Taking images as an example, each pixel gets its attribution value, which is a gradient of output with respect to this pixel. The end product is a saliency map (Figure 1).
The initial gradient idea sparked a fruitful investigation that aimed to make the gradient-based influence approximation better and the resulting maps "clearer" (Figure 2). The engineered methods included combining gradients with input data values, smoothing gradients by adding noise and averaging, integrating gradients over a path from a baseline image (e.g., a black image), zeroing out negative gradients while backpropagating, or combining gradients with final convolutional-layer feature maps (GradientInput (Shrikumar et al., 2017), SmoothGrad (Smilkov et al., 2017), Integrated Gradients (Sundararajan et al., 2017), Guided Backpropagation (Springenberg et al., 2015), GradCAM (Selvaraju et al., 2016)).
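For concreteness, a minimal PyTorch sketch of the vanilla gradient map and its SmoothGrad variant is given below; the channel-wise max reduction, the noise level, and the number of noisy samples are common but illustrative choices, not prescriptions from the cited papers.

```python
import torch

def vanilla_saliency(model, image, target_class):
    """Gradient of the class score w.r.t. input pixels (Simonyan et al.)."""
    image = image.clone().requires_grad_(True)   # image: (C, H, W)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    # Per-pixel magnitude; taking the max over color channels is common.
    return image.grad.abs().max(dim=0).values

def smoothgrad(model, image, target_class, n=25, sigma=0.1):
    """Average vanilla maps over noisy copies of the input (Smilkov et al.)."""
    maps = [vanilla_saliency(model, image + sigma * torch.randn_like(image),
                             target_class) for _ in range(n)]
    return torch.stack(maps).mean(dim=0)
```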
None of those methods was evaluated in a quantitative way, as pointed out by the paper introducing SmoothGrad, the latest of the methods listed above: _"Since there is no ground truth to allow for quantitative evaluation of sensitivity maps, we follow prior work and focus on two aspects of qualitative evaluation."_ (Smilkov et al., 2017) The two mentioned aspects relied on purely visual qualitative analysis.
At that stage, evaluations and comparisons prioritized visual aspects. Although many methods drew inspiration from theoretical concepts, they lacked a direct connection and guarantees related to how well the "explanations" correspond to the model's "reasoning" process.
### Stage 2: Experiments questioning reliability
Throughout that development phase, assessments were carried out to determine if the saliency maps generated aligned with the method's initial goal to "explain" the model. The method's reliability was called into question through meticulously designed experiments.
A fresh perspective on the evaluation and validity of the saliency methods came in the work _"Sanity Checks for Saliency Maps"_(Adebayo et al., 2018). The paper introduced two sanity checks that could reveal if a given saliency method is not working as intended, i.e., serving as an explanation. The two checks analyzed how the saliency maps change
* when the model weights are randomized;
* when the model is trained on data with random labels.
The results of the novel quantitative empirical analysis were alarming. Only two of the tested methods fully passed the checks. The other ones produced visually very similar results **independent** of whether the model was properly trained, had randomized weights or was trained on **random** labels. For example, if saliency maps for the classification of a digit 0 were generated for a model trained on the MNIST dataset with random labels, the "explanations" looked as in Figure 3. Many saliency methods produced sensible "explanations", while the network was unable to classify with a better than random guessing accuracy.
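A simplified version of the model-parameter randomization check can be sketched as follows. Note that Adebayo et al. randomize the network cascadingly, layer by layer; this sketch randomizes all layers at once and only conveys the idea. It reuses the `vanilla_saliency`-style signature from the earlier sketch.

```python
import copy
import torch
from scipy import stats

def randomization_sanity_check(model, image, target_class, saliency_fn):
    """Compare saliency maps before and after weight randomization.

    A reliable explainer should produce substantially different maps once
    the trained weights are replaced by random ones; a rank correlation
    close to 1 is a red flag.
    """
    map_trained = saliency_fn(model, image, target_class).flatten()
    model_rand = copy.deepcopy(model)
    for module in model_rand.modules():
        if hasattr(module, "reset_parameters"):
            module.reset_parameters()  # re-draw weights from the init scheme
    map_random = saliency_fn(model_rand, image, target_class).flatten()
    rho, _ = stats.spearmanr(map_trained.detach().numpy(),
                             map_random.detach().numpy())
    return rho
```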
One might point out that this analysis is still purely visual.
Figure 1: A Convolutional Neural Network was trained to classify images. The dog image (left) is fed into the model. The resulting saliency map (right) for the predicted class is obtained by a gradient computation through a single backward pass (right). The intensity of each pixel represents the magnitude of the output gradient with respect to this pixel. Figure adapted from (Simonyan et al., 2014)
Figure 2: Saliency maps for an input image of a bird (left) generated by different saliency methods. While the vanilla gradient method output is noisy, the other methods “improve” the map visually. Figure adapted from (Adebayo et al., 2018)
Indeed, the authors also compared pre- and post-randomization saliency maps quantitatively, and, for some of the methods, the metrics told a different story than the visuals, i.e., in some cases some metrics distinguished between the two scenarios (pre- and post-randomization) while the produced maps looked visually very similar.
### Stage 3: Beyond original visual assessment
It became evident that solely relying on a visual comparison of methods is insufficient for assessing their effectiveness. As a result, new metrics and tests have been developed to evaluate the connection between methods and their corresponding tasks. However, designing these metrics presented challenges.
#### 2.3.1 Metrics and ground truth
A natural next step might be to find "good metrics" with which to determine whether or not a given saliency method is best at producing "relevant explanations". This, however, is a tough problem when ground-truth explanations are unknown; many suggested metrics for saliency map evaluation have been shown to be unreliable (Tomsett et al., 2020; Hedstrom et al., 2023). Another approach is to use a dataset with known ground truth. Under such a scenario, we can see which method recovers the ground truth best. The results, however, do not necessarily generalize to a real-world dataset of interest (Yang and Kim, 2019). Moreover, a "golden" metric would not be a complete solution to the problem. A visual inspection might be the way users actually interact with an explanation method. If this is the case, we would need to ensure that conclusions based on visual perception are helpful for a task of interest; a proper metric alone would be insufficient.
#### 2.3.2 Sanity checks
Not only metrics but also the tests themselves are so far unable to tell the whole story. A sanity check or an evaluation protocol is not a task-independent indicator of the saliency method's validity. We should take into account how the saliency method, model, and task interact with each other, as this might confound the final validity or invalidity conclusion. For example, even if the network weight randomization sanity check fails, it actually does not determine that the tested saliency method is generally insensitive to model weights. It rather shows that the inductive bias in the network is strong enough that for some specific combinations of saliency method and task, the method produces "explanation-like" looking outputs (Yona and Greenfeld, 2021). Moreover, the above sanity checks are not exhaustive, and other tests have been proposed (Kindermans et al., 2019).
Saliency maps are explanations that can be generated for any differentiable models. However, in light of these limitations, providing concrete guarantees or applying rigorous assessment tools to accurately establish their comparative effectiveness and reliability is currently unattainable.
## 3 Attention explainers & textual data
Let us consider another explainability method based on potentially "interpretable" attention components of a model.
The attention mechanism (Bahdanau et al., 2015; Vaswani et al., 2017) is a technique used in neural networks to focus on certain parts of the input data selectively. By emphasizing or ignoring certain information, attention can improve model performance and, possibly, model explainability potential. While attention "explanations" are currently used across different tasks, attention was initially introduced as part of natural language processing models. In the following, we focus on the NLP domain.
### Stage 1: Idea and methods
The attention weights that determine the focus on a given input part can be considered "importance" weights, i.e., the bigger the weight, the more critical the input element is (Figure 4). Highlighting input elements with the greatest attention scores has been used in numerous research papers to visualize a neural network's "reasoning process" and to somehow "explain" it, for example (Xu et al., 2015; Choi et al., 2016; Lee et al., 2017; Mullenbach et al., 2018; Chen et al., 2020).
Figure 3: Saliency maps produced for a MNIST image of digit zero. For the majority of the methods, the explanations produced for a properly trained model (top row) look very similar to maps generated for a model trained on random labels (bottom row). Figure adapted from (Adebayo et al., 2018)
**Question**: Where is Sandra?
**Original Attention**: John travelled to the garden. Sandra travelled to the garden.
Figure 4: A model with the attention module was trained to solve a question answering problem. For the presented question, the greatest attention weight was attributed to the word _garden_. Assuming attention correlates with importance, this might indicate that the word _garden_ was crucial for the model prediction. Example adapted from Jain & Wallace (2019)
At that stage, interpreting weights in the attention module as "importance" is abstract and ill-defined. Used to highlight key input components, this high-level concept aids users in identifying crucial input aspects but falls short of providing a thorough understanding of underlying mechanisms and the model's "reasoning".
### Stage 2: Investigations challenging dependability
If the attention weights are employed as explanations for the model's "reasoning process", it is essential to assess them as such and examine their correlation with the model's actual behavior.
The work _"Attention is not explanation"_Jain & Wallace (2019) raised concern about the ongoing natural adoption of the attention mechanism for explainability purposes. The authors posed the following questions:
* Do attention weights correlate with other feature importance measures?
* Do alternative attention weights significantly change model predictions?
Based on the performed empirical tests, the answer to both was _"No, largely they do **not**."_
To answer the first question, the work compared computed attention scores with importance measures based on gradients (see Section 2) and leave-one-out methods - one input element was removed, and the amount of change in the output was attributed as the importance of this element. The experiments demonstrated no consistent correlation between the attention scores and those two other importance attribution methods.
The second question was addressed by assessing the output change in the case of permuted attention weights and adversarial attention weights, which were found by maximizing the distance from the original attention scores while keeping the model output similar. The two tests showed there exist many alternative attention weights configurations (found by shuffling or adversarially) that produce similar model outputs (Figure 5).
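A sketch of the permutation experiment is given below. Since attention-override hooks differ between architectures, the `model(inputs, attention=...)` interface is hypothetical and would need to be adapted to a concrete implementation.

```python
import torch

def attention_permutation_test(model, inputs, n_perm=100):
    """Shuffle attention weights and measure the output change.

    Assumes a hypothetical interface where model(inputs) returns
    (probs, attention) and passing `attention` overrides the computed
    weights.
    """
    with torch.no_grad():
        probs, attn = model(inputs)
        tvds = []
        for _ in range(n_perm):
            perm = torch.randperm(attn.shape[-1])
            probs_perm, _ = model(inputs, attention=attn[..., perm])
            # Total variation distance between output distributions.
            tvds.append(0.5 * (probs - probs_perm).abs().sum(-1).mean().item())
    return tvds  # small values: many weight configs give similar outputs
```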
### Stage 3: Inconsistencies in evaluation methods
Previously established evaluation methods were not comprehensive; further research and findings demonstrate numerous nuances related to the evaluation process and the evaluation tasks. Various papers have attempted to make attention mechanisms "more explanatory", while others suggest that attention may serve as an explanation for some tasks but not for others.
#### 3.3.1 Adversarial weights
The above conclusions were soon questioned by _"Attention is not not explanation"_ (Wiegreffe & Pinter, 2019). The authors found the permuted and adversarial attention weights experiments insufficient to support the claim that attention is not an explanation.
Therefore they suggested new diagnostic tests. The experiments were performed on datasets filtered by a procedure that checked whether a model with an attention module was significantly better than a model with fixed uniform weights. If the test outcome was negative, the authors did not consider the given dataset further, arguing that _"attention is not explanation if you don't need it."_
Then an adversary network was trained for the whole dataset, unlike Jain & Wallace (2019), who found adversarial weights per individual data instance. This change in adversary computation was justified by the statement that manipulating part of the trained model per instance did not actually demonstrate a full adversarial model able to produce claimed adversarial explanations.
The new experimental analysis showed that the fully adversarial model did manage to find adversarial attention weights, but they were not as distant from the original weights as in Jain & Wallace (2019). Moreover, the adversarial weights seemed to encode less information when tested with the introduced diagnostic tool, which compared the performance of an MLP whose final layer averages token representations using the original attention weights vs. the adversarial ones.
Figure 5: The original attention mechanism focuses on the word garden, which refers to the place to which Sandra traveled. After adversarially shifting the focus to the word irrelevant to the question asked (the place where John traveled), the model output remains almost the same. Example adapted from Jain & Wallace (2019)
The original attention weights resulted in better MLP evaluation scores. The authors conclude that, in their current form, attention weights cannot be treated as _"one true, faithful interpretation of the link"_ between model inputs and outputs, but that the experimental analysis done so far does not show that attention is not an explanation.
#### 3.3.2 Correlation with other attributions
While (Wiegreffe and Pinter, 2019) found the experimental analysis for agreement between attention importance and other feature importance measures valid, the later work (Neely et al., 2021) claims that measuring agreement is not a proper evaluation method. Consistency as evaluation implicitly assumes that one of the methods is nearly ideal, since we aim to find a high correlation with it. This is not necessarily true. (Neely et al., 2021) show little correlation between a range of feature attribution methods.
#### 3.3.3 Further points
(Serrano and Smith, 2019) claim attention weights do not form good explanations as it is easier to make a model flip its decision by removing features with higher gradient importance rather than ones with high attention weights. Conversely, (Vashishth et al., 2019) point out that cited above (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Serrano and Smith, 2019) all evaluated attention explainability for a single NLP task type. (Vashishth et al., 2019) claim that in other NLP tasks, attention is "more explainable". Some researchers attempt to modify attention mechanisms so that it better reflects the model's decision process (Mohankumar et al., 2020).
The debate on whether attention is a good explanation, and if so when, is still ongoing (Bibal et al., 2022). The discrepancies observed among evaluations mean there are no guarantees when applying this explanation method in task-agnostic settings, given the numerous variations in behavior encountered.
## 4 Graph-type explainers & graph data
Let us continue to the most recent research story which, in contrast to saliency (Section 2) and attention (Section 3), starts with a set of clearly defined quantitative "explainability" evaluation tests.
As Graph Neural Networks (GNNs) emerged as a new paradigm of machine learning on graphs, a need for XAI methods for GNNs arose.
The saliency methods (Section 2) can be applied to GNNs, as pioneered by (Pope et al., 2019; Baldassarre and Azizpour, 2019). Attention weights XAI ideas (Section 3) can be used as well and transfer directly to the GNN domain when Graph Attention Networks are the model of choice (Velickovic et al., 2018).
Although these two categories of explainability techniques can be employed, the inherent nature of graphs that combine feature information and topological structure gave rise to many XAI ideas targeting graph-based data.
### Stage 1: Idea and methods
The first method providing explanations specifically for GNNs was GNNExplainer (Ying et al., 2019). GNNExplainer aims to find a compact subgraph that is the most influential driver behind a GNN prediction. This subgraph is determined by the edge and feature masks found in a gradient-based optimization procedure.
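The core of the optimization procedure can be sketched as follows. The `gnn(x, edge_index, edge_weight=...)` interface is a PyTorch-Geometric-style assumption, and the sketch keeps only a sparsity penalty, omitting the entropy regularizer and the feature mask of the full method.

```python
import torch

def gnnexplainer_edge_mask(gnn, x, edge_index, node_idx, target,
                           steps=200, lr=0.01, beta=0.005):
    """Simplified sketch of the GNNExplainer edge-mask optimization.

    Assumes a hypothetical `gnn(x, edge_index, edge_weight=...)` returning
    per-node class log-probabilities.
    """
    mask = torch.nn.Parameter(torch.randn(edge_index.shape[1]))
    opt = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        w = torch.sigmoid(mask)                 # soft edge mask in (0, 1)
        log_probs = gnn(x, edge_index, edge_weight=w)
        loss = -log_probs[node_idx, target]     # keep the prediction...
        loss = loss + beta * w.sum()            # ...with a sparse subgraph
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach()  # high values = influential edges
```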
To evaluate the method, (Ying et al., 2019) suggest a set of synthetic datasets. The graphs in the datasets are generated randomly, and the ground-truth classes are dictated by predefined motifs planted in the graphs (Figure 6). The explainers can then be evaluated based on their ability to find those motifs. (Ying et al., 2019) also evaluate predictions qualitatively on two real-world datasets.
Subsequent work, such as PGExplainer (Luo et al., 2020), PGMExplainer (Vu and Thai, 2020), and SubgraphX (Yuan et al., 2021), followed (Ying et al., 2019) and evaluated GNN explainers on the suggested synthetic datasets or their close variants. The set of real-world datasets employed displayed variation. One dataset frequently, but not always (Dai et al., 2022), utilized to evaluate GNN explainers is MUTAG (Debnath et al., 1991) (Figure 7). Real-world datasets usually do not have ground-truth "explanations"; therefore the evaluation is either qualitative or based on some metric showing how the model's predictions change after manipulating the original graph, e.g., by removing the explanation
Figure 6: Generation of a synthetic graph XAI evaluation dataset introduced by (Ying et al., 2019). First, a special motif is defined, here, a house shape (left). Then a random base graph is created (middle). Finally, a motif is planted in the generated random graph (right). Node classes are defined based on their different structural roles (colors) in the graph, optionally also on node features.
part from the graph or taking the produced explanation as input, without the rest of the graph.
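An illustrative reconstruction of the generation procedure of Figure 6, using NetworkX, is sketched below; the graph sizes, the motif attachment rule, and the role labels are illustrative choices rather than the exact settings of (Ying et al., 2019).

```python
import random
import networkx as nx

def ba_shapes_like(n_base=300, n_motifs=80, seed=0):
    """Build a BA-shapes-style benchmark: plant house motifs on a random
    Barabasi-Albert base graph; node labels encode structural roles."""
    random.seed(seed)
    g = nx.barabasi_albert_graph(n_base, m=2, seed=seed)
    labels = {v: 0 for v in g}                  # 0 = base-graph node
    for _ in range(n_motifs):
        base = len(g)
        # House motif: square (base..base+3) plus a roof node (base+4).
        g.add_edges_from([(base, base + 1), (base + 1, base + 2),
                          (base + 2, base + 3), (base + 3, base),
                          (base, base + 4), (base + 1, base + 4)])
        for k, role in zip(range(5), (1, 1, 2, 2, 3)):  # illustrative roles
            labels[base + k] = role
        g.add_edge(random.randrange(n_base), base)  # attach motif to base
    return g, labels
```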
At this stage, the evaluation of graph XAI methods relies on artificial tasks and offers no definitive guarantees.
### Stage 2: Navigating datasets limitations
(Faber et al., 2021) find the state of evaluation of former GNN explainers' work not satisfactory. They find evaluation procedures with synthetic datasets and also with real-world datasets limited and unreliable. They argue that former evaluations on real-world datasets done by manipulating the input graphs (for example, removing the explanation subgraph and checking how significantly the model output changes) are improper. For instance, in the case of more complex explanations, if a class corresponds to the presence of motif A or motif B, when both motifs are in a graph, removing either of the motifs would not change the model output. This would indicate that the removed motif is not important for the prediction, while it actually is. Moreover, they point out that, similarly to other data domains, manipulating input graphs changes input data distribution, which might affect the evaluation procedure.
Therefore they recommend using synthetic benchmark datasets with known ground truth. However, they find existing synthetic datasets and evaluation procedures prone to several pitfalls that prevent a fair explainers comparison:
* **Bias term** A GNN can learn a bias term; for example, by default it could output class 0, which corresponds to some planted motif A, and output class 1 only if it finds evidence of motif B. Explainers will then have no success finding motif A as an explanation subgraph because of the bias term; motif A is simply not used in the GNN reasoning process.
* **Redundant evidence** If detecting only part of the motif suffices to produce a correct output, we should not expect an explainer to be able to find a full explanation subgraph.
* **Trivial explanation** The existence of a trivial explanation, such as nearest neighbors nodes or personalized PageRank (Page et al., 1999) attributions, might result in a positive evaluation of a GNN explainer, while it might be doing something trivial not connected to the GNN "reasoning".
* **GNN performance and alignment** In case GNN performs poorly on a given task, the fact that an explainer does not find a proper explanation may be due to imperfect GNN training and not an underperforming GNN explainer. Moreover, if architecture enforces processing of focus on the non-explanation graph part, a GNN explainer will be unable to uncover the pure explanatory part.
Three synthetic benchmark datasets introduced by (Faber et al., 2021) aim to avoid the above pitfalls. The authors evaluate several gradient-based saliency methods (Section 2) and two graph-targeted explainers, GNNExplainer and PGMExplainer. The latter did not perform well on the proposed benchmarking datasets, while the former gave "mixed results", with no method mastering all three datasets.
### Stage 3: Lack of evaluation standardization
As the field of GNN explainers grew, several experimental research papers attempted to evaluate a set of developed GNN explainability methods.
(Agarwal et al., 2023) develop a flexible synthetic data generator that takes the recommendations of (Faber et al., 2021) into account. The best performer among the tested methods on the generated dataset is the graph-data-focused SubgraphX. However, gradient-based methods take the lead when the experiments are conducted on three real-world datasets (modified for unique explanations).
(Amara et al., 2022) agree with (Faber et al., 2021) and argue that the initial set of synthetic datasets "_gives only poor informative insight._" They show that simple baselines (e.g., PageRank) almost always beat the developed, more sophisticated explainers on those datasets. Moreover, they evaluate GNN explainers on ten real-world datasets without ground-truth explanations. In contrast to (Faber et al., 2021), they perform an experimental evaluation using metrics quantifying GNN model output change when the explanatory part is removed or the only part left. Overall they find that the simple gradient saliency method performs best. Additionally, they present a decision tree of what explainability method is recommended depending on the metric of interest and explanation type. However, when testing the framework on a new dataset, the best-performing explainer turned out to
Figure 7: The MUTAG dataset (Debnath et al., 1991) consists of graphs representing different molecules. Some of those molecules are mutagenic. The machine learning task is to classify the graphs as mutagenic or not. To solve this problem, a GNN can be trained. An explanation module (here PGExplainer) is expected to detect subgraphs whose chemical properties determine mutagenicity. Figure from (Luo et al., 2020)
be different from all the methods in the suggested decision tree for any choice of metrics and any explanation type.
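A sketch of such occlusion-style metrics, often called fidelity+ and fidelity-, is given below; it reuses the hypothetical `gnn` interface from the earlier sketch, and the number of explanation edges `top_k` is an arbitrary choice.

```python
import torch

def fidelity_scores(gnn, x, edge_index, node_idx, edge_mask, top_k=6):
    """Probability drop when explanation edges are removed (fid+) or are
    the only edges kept (fid-). Assumes `gnn(x, edge_index)` returns
    per-node class probabilities."""
    with torch.no_grad():
        probs = gnn(x, edge_index)
        c = probs[node_idx].argmax()
        keep = edge_mask.topk(top_k).indices             # explanation edges
        rest = torch.ones(edge_index.shape[1], dtype=torch.bool)
        rest[keep] = False
        p_full = probs[node_idx, c]
        p_minus = gnn(x, edge_index[:, rest])[node_idx, c]  # explanation removed
        p_only = gnn(x, edge_index[:, keep])[node_idx, c]   # explanation alone
    # High fid+ and low fid- suggest the mask captures the used evidence.
    return (p_full - p_minus).item(), (p_full - p_only).item()
```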
When testing on a different set of real-world datasets without ground truth, (Yuan et al., 2022) conclude that the graph-data-focused SubgraphX method _"outperforms the other methods significantly and consistently."_ On the contrary, a study of the use case of cyber malware analysis finds that the gradient-based methods perform best (Warmsley et al., 2022).
(Longa et al., 2022) introduce a set of synthetic datasets with known ground truth, aiming to follow the (Faber et al., 2021) guidelines. Their experimental analysis shows that which method performs best is not constant and changes between datasets. Not only the datasets but also the GNN architecture strongly influences the performance of explainability methods, which work well for some architectures and fail for others.
All of the above experimental studies use various evaluation protocols, a changing set of explainability methods, a different set of synthetic or real-world datasets, and various GNN model architectures.
The above, therefore, indicates that there are no comprehensive guidelines available for practitioners and that the field has yet to reach a consensus on evaluation methods for graph XAI.
## 5 Summary
### Lack of reliability guarantees for XAI methods
We established a unifying framework to address XAI challenges, identifying a recurring three-stage pattern in method development, regardless of data type, explanation unit, or timeline. Initially, XAI methods lack real-world applicability due to simplistic presentation and missing guarantees. Assessing emerging variations reveals limitations and reliability concerns. Inconsistent evaluation results challenge the determination of tasks for specific methods, dismissing universally applicable XAI methods.
We illustrated the pattern in three cases - image, textual, and graph data with saliency, attention, and graph-type explainers (respectively addressed in Sections 2, 3, and 4). These cases, developed at different timescales, encountered similar roadblocks, emphasizing shared challenges in XAI development and proving a pervasive absence of comprehensive evaluation frameworks, standardized metrics, and robust reliability guarantees in the explainable AI domain.
Even though we did not cover all XAI areas, similar stories to the ones presented exist, and the same challenges apply. For example, not-covered techniques, such as SHAP (Lundberg and Lee, 2017) and LIME (Ribeiro et al., 2016), have been demonstrated to exhibit instability and inconsistency. This means that minor changes to the input can lead to significant differences in the explanations provided, and even reapplying the same method to an identical input can yield dissimilar results (Alvarez-Melis and Jaakkola, 2018; Slack et al., 2020; Lee et al., 2019). These methods also lack robust metrics for evaluating the "quality" of explanations (Lakkaraju et al., 2022).
### Missing method-task link
Despite the abovementioned limitations, AI explainability methods continue to be applied in various contexts, propelled by the increasing demand for transparency, accountability, and trust in AI systems.
Effectively pairing XAI methods with problems they can address presents its own unique challenge. Currently, there is a scarcity of guidelines indicating which method or combination of methods would be best suited for a specific problem or dataset (Han et al., 2022). While numerous XAI surveys and taxonomies of developed methods exist, they primarily focus on technical categories (e.g., gradient vs. attention methods) rather than on categories that would suggest dataset and problem-specific XAI methods (e.g., debugging, decision making, spurious correlations), assuming they exist.
As an example, in a recent survey of XAI for healthcare (Jin et al., 2022) also used technical categories and stated: _"At the current state of the art, which method we should choose still does not have a definite answer."_
Furthermore, choosing any explanation method is not a viable option, as existing methods often disagree on the explanations they provide (Neely et al., 2021; Krishna et al., 2022).
## 6 Towards provably useful XAI
In this section, we analyze what conceptual and methodological breakthroughs are needed in order to break the recurring pattern blocking XAI methods development.
We hypothesize that one of the main challenges between the current state of XAI and provably useful XAI methods is the missing method-task link.
To be able to guarantee the method's usability and reliability in a specific task, we need either to root explanations deeply in theory directly corresponding to the task's requirements or to create use-case-inspired explanations and then empirically evaluate them in the targeted application.
### Task-relevant theoretical guarantees
While many current methods are inspired by some mathematical concepts, these concepts are not directly relevant to
a task. For example, if a method assigning importance to individual features has a theoretical guarantee that, in the case of a linear model, it recovers the true model's coefficients, that might be a desirable mathematical behavior, but it might be utterly irrelevant for an actual task involving a complex model.
Two examples of frequently applied methods with strong mathematical underpinnings are Integrated Gradients (Sundararajan et al., 2017) and SHAP (Lundberg and Lee, 2017). Recent research (Bilodeau et al., 2022) indicates, however, that the mathematical "guarantees" result in these methods provably failing at practical applications. Specifically, (Bilodeau et al., 2022) show that _"for any feature attribution method that satisfies the completeness and linearity axioms, users cannot generally do better than random guessing for end-tasks such as algorithmic recourse and spurious feature identification."_
As pointed out by (Bilodeau et al., 2022), such methods are also considered across healthcare applications, for example, for ICU false alarm reduction task (Zaeri-Amirani et al., 2018) or cancer prognosis and diagnosis (Zhou et al., 2021; Roder et al., 2021). Therefore it is of great practical significance that users understand that common conceptions related to outputs of these methods may not hold, _"for instance, positive feature attribution does **not**, in general, imply that increasing the feature will increase the model output. Similarly, zero feature attribution does **not**, in general, imply that the model output is insensitive to changes in the feature."_
If a theory is to guarantee the method's reliability, we need to ensure that this theory is rooted in a specific task, holds in circumstances under which the method is practically applied, and that its assumptions and limitations are clear to the user.
### Task-inspired development and evaluation
Another way to ensure the method's usefulness and reliability is through a rigorous evaluation on a real-world task of interest.
There exist some evaluation procedures for current methods; however, existing evaluation is very limited. XAI evaluations involving humans often rely on simplistic proxy tasks or subjective opinions on explanation quality (Ribeiro et al., 2016; Lundberg and Lee, 2017; Ribeiro et al., 2018; Jeyakumar et al., 2020). A recent study reveals a positive impact of explanations, specifically that they can _"reduce overreliance on AI systems during decision-making"_ (Vasconcelos et al., 2023). In this study, the decision-making task involved finding a reachable exit in a maze, with a model suggesting a correct exit either with or without an explanation that described or visualized a suggested path (correct or incorrect). However, the generalizability of this conclusion remains uncertain, as both the problem setup and the explanations were simplistic and synthetically generated. Another issue is the lack of distinction in some studies between the performance of a model with and without explanations. For instance, the highly cited work _"Explainable Machine-Learning Predictions for the Prevention of Hypoxaemia during Surgery"_ (Lundberg et al., 2018) demonstrates in a study involving five anesthesiologists that doctors' predictions improve when provided with model suggestions and corresponding explanations. However, the evaluation does not include the baseline of using only the model without explanations.
Human expert studies on real-world tasks represent the gold standard for evaluations (Doshi-Velez and Kim, 2017); however, despite efforts to reduce their cost for XAI tests, they remain resource-intensive (Chen et al., 2022). Even when conducted, obtaining definitive answers can be challenging. For example, studies by (Jesus et al., 2021; Amarasinghe et al., 2022) attempt to address the lack of application-grounded XAI evaluations by assessing a few explanation methods for credit card fraud detection. Both studies focus on the same problem of identifying fraudulent transactions, and both recruit the same group of experts. However, a slight change in the experimental setup leads to contrasting conclusions: (Jesus et al., 2021) report better metrics for a model with explanations compared to one without, whereas (Amarasinghe et al., 2022) do not observe a significant difference.
In summary, it is challenging and expensive to compare and evaluate non-task-specific methods, and it is unfeasible to estimate methods' reliability for all possible real-world tasks.
However, if a method were initially developed with a specific real-world task application target and needed to be evaluated only for this application, then rigorous and empirical evaluation under realistic settings would be possible and lead to significant conclusions.
## 7 Conclusions
Given the present state of XAI, explanations without solid evidence for enhancing human-AI collaboration should not be used to justify decisions or establish trust in machine learning models. To transform the dynamics of explainability method development, a stronger focus on practical applications is crucial to avoid following the presented XAI development pattern.
We suggest treating explanations without explicit task-relevant guarantees as black boxes themselves. They can be useful but should not be trusted.
## 8 Acknowledgements
We thank Jure Leskovec and Carlos Guestrin for the discussions that inspired the work on this paper. AC was supported by Stanford School of Engineering Fellowship.
|
2301.04222 | Geometric phases along quantum trajectories | A monitored quantum system undergoing a cyclic evolution of the parameters
governing its Hamiltonian accumulates a geometric phase that depends on the
quantum trajectory followed by the system on its evolution. The phase value
will be determined both by the unitary dynamics and by the interaction of the
system with the environment. Consequently, the geometric phase will acquire a
stochastic character due to the occurrence of random quantum jumps. Here we
study the distribution function of geometric phases in monitored quantum
systems and discuss when/if different quantities, proposed to measure geometric
phases in open quantum systems, are representative of the distribution. We also
consider a monitored echo protocol and discuss in which cases the distribution
of the interference pattern extracted in the experiment is linked to the
geometric phase. Furthermore, we unveil, for the single trajectory exhibiting
no quantum jumps, a topological transition in the phase acquired after a cycle
and show how this critical behavior can be observed in an echo protocol. For
the same parameters, the density matrix does not show any singularity. We
illustrate all our main results by considering a paradigmatic case, a spin-1/2
immersed in a time-varying magnetic field in the presence of an external
environment. The major outcomes of our analysis are however quite general and
do not depend, in their qualitative features, on the choice of the model
studied. | Ludmila Viotti, Ana Laura Gramajo, Paula I. Villar, Fernando C. Lombardo, Rosario Fazio | 2023-01-10T22:05:18Z | http://arxiv.org/abs/2301.04222v4 | # Geometric phases along quantum trajectories
###### Abstract
A monitored quantum system undergoing a cyclic evolution of the parameters governing its Hamiltonian accumulates a geometric phase that depends on the quantum trajectory followed by the system on its evolution. The phase value will be determined both by the unitary dynamics and by the interaction of the system with the environment. Consequently, the geometric phase will acquire a stochastic character due to the occurrence of random quantum jumps. Here we study the distribution function of geometric phases in monitored quantum systems and discuss when/if different quantities, proposed to measure geometric phases in open quantum systems, are representative of the distribution. We also consider a monitored echo protocol and discuss in which cases the distribution of the interference pattern extracted in the experiment is linked to the geometric phase. Furthermore, we unveil, for the single trajectory exhibiting no quantum jumps, a topological transition in the phase acquired after a cycle and show how this critical behavior can be observed in an echo protocol. For the same parameters, the density matrix does not show any singularity. We illustrate all our main results by considering a paradigmatic case, a spin-1/2 immersed in a time-varying magnetic field in the presence of an external environment. The major outcomes of our analysis are however quite general and do not depend, in their qualitative features, on the choice of the model studied.
## I Introduction
As Berry first stated in his seminal work [1], when a quantum system is prepared in an energy eigenstate and adiabatically driven in a cycle, it acquires, in addition to the dynamical phase, a phase that depends solely on the path traced in the ray space. Being independent of the specific dynamics giving rise to the path, this phase is of geometrical nature. Following Berry's breakthrough, consistent generalizations of the Geometric Phase (GP) have been found for unitary evolutions which are kept cyclic while they are not required to be adiabatic [2], in the presence of degenerate subspaces [3], and for the case in which both the adiabaticity and the cyclicity conditions are removed [4; 5]. Further generalizations include the definitions of GPs for mixed states [6; 7; 8; 9; 10] and the so-called off-diagonal GPs [11; 12], which apply in the case where the initial and final states are orthogonal.
GPs are profoundly linked to the theory of fiber bundles and holonomies, bridging geometrical concepts like parallel transport over curved spaces with physics [13; 14; 15], and contributing in this way to the understanding of quantum mechanics at the foundational level. Since their discovery, GPs have also emerged in the most diverse physical systems [16; 17], deepening the comprehension of numerous phenomena such as the integer quantum Hall effect [18], topological insulators and superconductors [19; 20], as well as playing a pivotal role in quantum information processing [21; 22; 23].
The quest for implementations of geometric quantum information processing has also spurred the search for geometric interferometry in several different setups. The first proposal of this kind was realized in NMR [22]. Thereafter, Berry phases in superconducting qubits were both studied theoretically in [24] and observed experimentally for different regimes of couplings in circuit-QED arrangements [25; 26; 27; 28; 29; 30]. In this direction, high-fidelity quantum gates were demonstrated with trapped ions [31]. The need to improve the performance of quantum information processing devices against exposure to an external environment has led to the suggestion of non-adiabatic geometric gate schemes [32; 33; 34; 35; 36; 37]. In this context, it becomes of fundamental importance to understand how geometric interferometry is affected by the presence of an external environment. Consequently, GPs need to be generalized to deal with systems subject to non-unitary quantum evolution. The effect of fluctuations in the classical control parameters of a quantum cyclic evolution may average out, mitigating their effect on the accumulated Berry phase [38]. The presence of an external bath was found to give rise to new geometric contributions to decoherence [39; 40], as experimentally detected in [41; 42]. Different definitions of GPs applicable in the non-unitary case have been put forward. Tong _et al._[43] introduced a purification-independent formula computed over the reduced density matrix, while an average over different histories (trajectories) taking into account the system-bath interaction was discussed in Carollo _et al._[44; 45] and further analyzed in [46; 47; 48]. Additional work along these lines can be found in [49; 50; 51; 52].
There is, however, a different level of description of open quantum systems which may capture features that are washed out by simply looking at the properties of density matrices. This level is accessed, for example, when the state of the system is continuously monitored. In this
case, the quantum system is described by a wave function whose smooth evolution is interrupted by random quantum jumps induced by the coupling with the environment [53]. This sequence of smooth evolutions interrupted by jumps is named a quantum trajectory (see [54] for a recent review on the subject).
_The goal of the present work is to describe the properties of the GP accumulated along quantum trajectories._ In this approach we are inspired by the work of Gebhart _et al._[55], where the GPs induced by a sequence of weak measurements steering the system along a path in a parameter space were analyzed. The randomness introduced by the occurrence of jumps in a given trajectory is reflected in the fact that the GPs inherit a stochastic nature. By random sampling over the trajectories, the entire distribution can be reconstructed. Since the Berry phase is not an observable, the average value does not correspond to the phase accumulated by the average state (that is, the density matrix). Previous works, with the notable exception of [55], either restrict the study to the dynamics of smoothly evolving pure states with no jumps or define average quantities. The fluctuations of GPs induced by random jumps remain to a large extent unexplored. We would like to fill this gap by studying this distribution and whether it is related to the corresponding distribution of the interference fringes in a spin-echo experiment. Finally, with regard to the topological transition discussed in [55], further investigated theoretically in [56; 57] and experimentally observed in [58; 59], we will argue that despite the different dynamical settings it is a generic feature present in adiabatically driven monitored systems. We will show that depending on the coupling to the external environment, the monitored quantum system will show a topological transition in the phase accumulated in a cycle and we will argue that this transition is visible in echo dynamics.
The paper is organized as follows. In the next Section, we will define the dynamical setting we are interested in: A quantum system subject to a time-periodic Hamiltonian and coupled to an external bath. With the intention of highlighting the essence of our results, we will consider the paradigmatic case of a two-level system that evolves in the presence of an externally varied magnetic field. The associated density matrix is governed by the Lindblad equation. In order to follow the dynamics of the system along its quantum trajectories, we introduce a specific unravelling of the Lindblad equation which relies on microscopic considerations; these aspects are introduced in Section II. In Section III the model and its coupling to the environment are introduced. In Section IV we define the GP that will be the building block of all our analysis. For an isolated system and sufficiently slow driving, this reduces to the Berry phase [1]. The presence of the environment induces both a smooth drift and random jumps in the dynamics, so the evolution of the state is generically neither adiabatic nor cyclic. To keep the presentation self-consistent, we further include in this same Section other definitions of GPs present in the literature. These will be employed for comparison in Section V.1, where we discuss the distribution of the GPs accumulated along quantum trajectories and analyze reference GP values in order to account for differences with other definitions of GPs proposed in the context of open quantum systems. Due to the intrinsic randomness of the quantum trajectory, a monitored echo experiment might be altered. In Section V.2 we discuss the probability distributions of the interference fringes and detail whether/when they relate to the corresponding distribution of the GPs. Our analysis of GPs in monitored systems is completed in Section V.3, where we will show that the topological transition discovered in [55] for a specific setting is actually a generic feature of periodically driven open quantum systems. Indeed, for the sequence of states known as the no-jump trajectory, which can be thought of as the smooth evolution generated by a non-hermitian Hamiltonian, we find the GP displays a complex pattern in parameter space, exhibiting singular points. These singularities can be tracked down to correspond to points of vanishing probability for such a trajectory, and to reveal the border between distinct topological sectors. The transition observed in the evolution when varying the parameters is topological in the sense that it is related to a discontinuous jump of an integer-valued topological invariant. Section V.3 will be entirely devoted to the study of this transition and ways to detect it through an echo protocol. A summary of our results and concluding considerations are presented in Section VI. The appendices give some additional ingredients used to compute the GP in the numerical simulations, Appendix A, a detailed analysis of the already mentioned interference-fringes distribution, Appendix B, a brief discussion of how the distribution of GPs may depend on the unravelling of the Lindblad equation (leading to the same averaged evolution), Appendix C, and an analytical treatment of the no-jump trajectory, Appendix D.
## II From Lindblad dynamics to quantum trajectories
_Lindblad equation -_ In order to make a connection with existing literature, it is convenient to set the stage and start from the case in which the state of an open quantum system is described by a density matrix \(\rho(t)\). In this case, under proper conditions, the dynamics is governed by the Lindblad equation [60; 61] (\(\hbar=1\))
\[\dot{\rho}=-i\left[H,\rho\right]+\sum_{\alpha}[L_{\alpha}\rho L_{\alpha}^{ \dagger}-\frac{1}{2}\{L_{\alpha}^{\dagger}L_{\alpha},\rho\}]\;. \tag{1}\]
The first term in the r.h.s. of the Lindblad equation accounts for the unitary evolution, while the second originates in the coupling to the environment. The strength and the nature of this coupling are encoded in the Lindblad operators \(L_{\alpha}\). We will consider a Hamiltonian \(H\) that depends periodically on time \(H(t+2\pi/\Omega)=H(t)\)
with \(T=2\pi/\Omega\) the period of a cycle in suitable parameter space. The Lindblad operators, if time-dependent, should also be time-periodic \(L_{\alpha}(t+2\pi/\Omega)=L_{\alpha}(t)\).
It is useful to briefly comment, already at this point, on the adiabatic limit of slow dynamics, as this issue will be central to the analysis conducted throughout the paper. If the evolution is unitary, for a sufficiently large period \(T\), a system prepared in an eigenstate will remain in the corresponding instantaneous eigenstate up to small corrections due to Landau-Zener transitions between energy levels. In other words, the occupancy of any given eigenstate will not change in time. The situation strongly differs in the presence of an environment. In this case, a proper adiabatic limit is not well defined, since the slow-driving limit where adiabatic dynamics sets in is also the regime in which the consequences of the external baths are the most severe and the system reaches a (possibly periodic) steady state. The adiabatic limit itself should be reconsidered [62] in an open system, as the existence of a continuum of energy levels makes the energy splittings of the system a bad reference scale for defining the regimes. Effects due to non-adiabaticity and corrections due to the presence of the environment thus seem to be inextricably linked.
_Monitored dynamics and quantum trajectories -_ The dynamics of the system changes radically when it is possible to continuously monitor its state. In this case, the state of the system remains pure and its evolution consists of intervals of smooth dynamics interrupted at random times by abrupt changes called quantum jumps. A sequence of smoothly-evolving intervals together with a set of random events is called a quantum trajectory. The literature on the subject is vast and we refer to the following papers and books for a general overview [53; 54; 63; 64] and for applications, e.g. to many-body systems [65; 66].
Evolution is described in this framework as follows. If at time \(t\) the state of the system is \(|\psi(t)\rangle\), at a later time \(t+\delta t\) it will be
\[|\psi(t\!+\!\delta t)\rangle=\left\{\begin{array}{ll}\dfrac{K_{o}\,|\psi(t)\rangle}{\sqrt{p_{o}(t)}}&\mbox{with probability}\quad p_{o}(t)\\ \\ \dfrac{K_{\alpha}\,|\psi(t)\rangle}{\sqrt{p_{\alpha}(t)}}&\mbox{with probability}\quad p_{\alpha}(t)\end{array}\right. \tag{2}\]
where \(o\) labels the no-jump evolution and \(\alpha=1,\ldots\) labels the different jump operators \(K_{\alpha}\) inducing the dynamical steps
\[K_{o}=1-i\,\delta t\left[H-\frac{i}{2}\sum_{\alpha}L_{\alpha}^{\dagger}L_{ \alpha}\right]\hskip 28.452756ptK_{\alpha}=\sqrt{\delta t}L_{\alpha} \tag{3}\]
and \(p_{o/\alpha}(t)=\langle\psi(t)|\,K_{o/\alpha}^{\dagger}K_{o/\alpha}\,|\psi(t)\rangle\). Each choice in the r.h.s. of Eq.(2) represents evolution steps of different characters. The second line corresponds to the occurrence of a jump \(K_{\alpha}\) at time \(t\), while the first is a smooth evolution (no jump), albeit altered from unitarity by the fact that acquiring the information that no jumps occurred modifies the evolution of the system. The no-jump operator \(K_{o}\) can also be thought of as generated by an effective drift Hamiltonian \(H_{o}\) to which it relates in the usual way \(K_{o}=1-i\,\delta t\,H_{o}\). The full evolution in a time interval \([0,t]\) is therefore characterized by a sequence of \(N_{J}\) jumps of types \(\alpha_{i}\) occurring at times \(t_{i}\). We will denote the string of these events
\[\mathcal{R}(t,N_{J})=\{(\alpha_{1},t_{1}),\ldots,(\alpha_{i},t_{i}),\ldots( \alpha_{N_{J}},t_{N_{J}})\}, \tag{4}\]
with \(0\leq t_{i}\leq t\;\;\forall i\), the quantum trajectory. As mentioned above, this framework naturally emerges when the system is continuously and indirectly monitored, so that each trajectory can be viewed as the result of continuous measurements of the environment on a given basis. From this perspective, continuous monitoring may lead to the mitigation of environment-induced decoherence [67]; post-selection and error-correction schemes [68; 69] have also been proposed.
The properties of the Kraus operators \(K_{o/\alpha}\) guarantee that the probabilities to get a given outcome sum up to one, and the time step \(\delta t\) should be taken small enough for the first-order approximation to be valid, which requires \(\sum_{\alpha}p_{\alpha}\ll 1\). Averaging over every possible jump sequence, one recovers the Lindblad equation [53] of Eq. (1). The converse implication does not hold: an infinite number of different unravellings give rise to the same Lindblad evolution [54]. We will address this question in Appendix C.
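As a concrete illustration of the update rule of Eqs. (2)-(3), the following minimal sketch implements a single time step of the monitored evolution (the function name and bookkeeping are ours, and the first-order scheme is only valid as long as \(\sum_{\alpha}p_{\alpha}\ll 1\), as stated above):

```python
import numpy as np

def trajectory_step(psi, H, L_ops, dt, rng):
    """One step of the quantum-jump unravelling of Eqs. (2)-(3).

    psi   : normalized state vector at time t
    H     : Hamiltonian matrix at time t
    L_ops : list of Lindblad operators L_alpha at time t
    rng   : numpy.random.Generator drawing the measurement outcome
    Returns the normalized state at t + dt and the index of the jump
    that occurred (None for the smooth, no-jump branch).
    """
    # jump probabilities p_alpha = dt * <psi| L_alpha^dag L_alpha |psi>
    probs = [dt * np.vdot(L @ psi, L @ psi).real for L in L_ops]
    r, acc = rng.random(), 0.0
    for alpha, p in enumerate(probs):
        acc += p
        if r < acc:                      # jump alpha occurs
            phi = L_ops[alpha] @ psi
            return phi / np.linalg.norm(phi), alpha
    # no jump: act with K_o = 1 - i dt [H - (i/2) sum_a L_a^dag L_a]
    H_drift = H - 0.5j * sum(L.conj().T @ L for L in L_ops)
    phi = psi - 1j * dt * (H_drift @ psi)
    return phi / np.linalg.norm(phi), None
```

Iterating this step over a driving cycle and recording the pairs \((t_{i},\alpha_{i})\) produces one realization of the trajectory \(\mathcal{R}(t,N_{J})\) of Eq. (4).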
## III The model
Since we are interested in studying the impact of an external environment on the GPs, we will consider a unitary evolution over which the accumulated GP, in the adiabatic limit, is the Berry phase. To be concrete, we shall consider a spin-1/2 particle in the presence of a time-dependent magnetic field \(\mathbf{B}(t)=\omega\,\hat{\mathbf{n}}_{\mathbf{B}}(t)\), whose direction is given by \(\hat{\mathbf{n}}_{\mathbf{B}}=(\sin\left(\theta\right)\cos(\Omega\,t),\sin\left(\theta\right)\sin(\Omega\,t),\cos\theta)\) with fixed polar angle \(\theta\) and time-varying azimuthal angle \(\Omega\,t\). Such unitary evolution is generated by the Hamiltonian
\[H(t)=\frac{1}{2}\,\mathbf{B}(t)\cdot\mathbf{\sigma}, \tag{5}\]
with \(\mathbf{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\); and \(|0\rangle\) and \(|1\rangle\), the eigenstates of \(\sigma_{z}\). The instantaneous eigenstates of \(H(t)\) are denoted \(|\psi_{-}(t)\rangle\) and \(|\psi_{+}(t)\rangle\).
If the system could be kept perfectly isolated while the direction of \(\mathbf{B}(t)\) is adiabatically changed in a cycle parameterized by \(t\in[0,T]\), with \(T=2\pi/\Omega\) (as shown in Fig.1), it would acquire an adiabatic (Berry) phase \(\phi_{\mathbf{a}}^{\pm}=-\pi(1\mp\cos\theta)\), where the \(\mp\) sign depends on the energy eigenstate in which the system was initially prepared.
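As a numerical check (a sketch of our own, not part of the original text), the Berry phase of the excited eigenstate can be recovered from the gauge-invariant product of overlaps of instantaneous eigenstates along the discretized cycle; for \(\theta=0.34\pi\) this reproduces the value \(\phi_{\rm a}^{+}\sim 1.482\pi\) (mod \(2\pi\)) quoted later in Section V.1:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(t, omega, Omega, theta):
    """H(t) = (1/2) B(t) . sigma of Eq. (5), with |B(t)| = omega."""
    nx = np.sin(theta) * np.cos(Omega * t)
    ny = np.sin(theta) * np.sin(Omega * t)
    nz = np.cos(theta)
    return 0.5 * omega * (nx * sx + ny * sy + nz * sz)

def berry_phase_excited(theta, steps=4000):
    """Discrete (Pancharatnam) Berry phase of |psi_+(t)> over a cycle.

    The arbitrary phases of the eigenvectors returned by eigh cancel
    in the closed product of overlaps, so the result is gauge invariant.
    """
    omega, Omega = 1.0, 1.0   # the adiabatic phase depends only on the path
    ts = np.linspace(0.0, 2 * np.pi / Omega, steps, endpoint=False)
    vecs = [np.linalg.eigh(hamiltonian(t, omega, Omega, theta))[1][:, 1]
            for t in ts]
    vecs.append(vecs[0])      # close the loop
    prod = complex(1.0)
    for a, b in zip(vecs[:-1], vecs[1:]):
        prod *= np.vdot(a, b)
    return np.angle(prod)     # equals -pi*(1 - cos(theta)) mod 2*pi
```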
_Lindblad operators -_ For a system that evolves according to \(H(t)\) given by Eq. (5) coupled to an environment of harmonic oscillators a consistent time-dependent Lindblad equation of the form in Eq. (1) can be derived from microscopic considerations as long as the evolution remains sufficiently slow [70; 71], with Lindblad operators given by
\[\begin{split} L_{-}(t)&=\sqrt{\gamma_{-}}\left\langle \psi_{-}(t)\right|\sigma_{x}\left|\psi_{+}(t)\right\rangle\left|\psi_{-}(t) \right\rangle\left\langle\psi_{+}(t)\right|\\ L_{+}(t)&=\sqrt{\gamma_{+}}\left\langle\psi_{+}(t) \right|\sigma_{x}\left|\psi_{-}(t)\right\rangle\left|\psi_{+}(t)\right\rangle \left\langle\psi_{-}(t)\right|\\ L_{d}(t)&=\sqrt{\gamma_{d}}\sum_{i=\pm}\left\langle \psi_{i}(t)\right|\sigma_{x}\left|\psi_{i}(t)\right\rangle\left|\psi_{i}(t) \right\rangle\left\langle\psi_{i}(t)\right|\end{split} \tag{6}\]
and corresponding to decay, spontaneous excitation, and dephasing respectively. The coupling strengths considered in this work are, in terms of the dissipation rate \(\Gamma\), \(\gamma_{-}=\Gamma\;;\;\gamma_{d}=0.32\,\Gamma\), while we consider \(\gamma_{+}\) to be negligible (all the results that we will show are rather generic and do not qualitatively depend on the chosen values). The jumps defined by Eq. (3) from the Lindblad operators above lead, after averaging, to a consistent Lindblad equation for slow dynamical evolutions [72]. The operators introduced in Eq. (6) induce transitions and dephasing between the instantaneous eigenstates of the Hamiltonian defined in Eq.(5). In order to keep the analysis as general as possible, we will include a further term in the Lindbladian which requires considering a fourth operator
\[L_{z}=\sqrt{\gamma_{z}}\sigma_{z} \tag{7}\]
along a fixed direction on the Bloch sphere. The particular choice of the \(\sigma_{z}\) operator as the additional Lindblad operator is motivated by the need to introduce transitions that do not simply involve the instantaneous eigenstates. Any other Lindblad operator differing from those in Eq. (6) would lead to similar qualitative conclusions.
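For completeness, a sketch of how the operators of Eqs. (6)-(7) can be assembled numerically from the instantaneous eigenstates, reusing the `hamiltonian` helper defined in the previous sketch and the coupling values quoted above:

```python
def lindblad_ops(t, omega, Omega, theta, Gamma, gamma_z=0.0):
    """Time-dependent Lindblad operators of Eqs. (6)-(7)."""
    _, evecs = np.linalg.eigh(hamiltonian(t, omega, Omega, theta))
    psi_m, psi_p = evecs[:, 0], evecs[:, 1]      # |psi_->, |psi_+>
    gamma_minus, gamma_d = Gamma, 0.32 * Gamma   # gamma_+ neglected
    # decay: sqrt(g_-) <psi_-|sx|psi_+> |psi_-><psi_+|
    L_minus = np.sqrt(gamma_minus) * (psi_m.conj() @ sx @ psi_p) \
        * np.outer(psi_m, psi_p.conj())
    # dephasing: sqrt(g_d) sum_i <psi_i|sx|psi_i> |psi_i><psi_i|
    L_d = np.sqrt(gamma_d) * (
        (psi_m.conj() @ sx @ psi_m) * np.outer(psi_m, psi_m.conj())
        + (psi_p.conj() @ sx @ psi_p) * np.outer(psi_p, psi_p.conj()))
    ops = [L_minus, L_d]
    if gamma_z > 0.0:                            # ad-hoc L_z of Eq. (7)
        ops.append(np.sqrt(gamma_z) * sz)
    return ops
```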
While the unitary evolution of the closed system will follow the curly path indicated in purple in Fig.1, the actual dynamics will follow, with some probability, the path indicated in blue (see Fig.1 for illustrative purposes), i.e. it will be discontinuous and not necessarily closed after a cycle of the driving, even in the slow-driving limit. Moreover, the slower the driving, the more jumps will occur (see light blue curve in Fig.1). The task of the next Sections is to characterize GPs under these conditions.
_Smooth evolution with no jumps -_ A particularly interesting quantum trajectory is that which is smooth along the whole evolution. Before addressing the characterization of GPs in indirectly monitored systems, we provide insight into the evolution giving rise to it. When the records of the measurements performed on the environment reveal zero jumps, the dynamics describes a continuous smooth path and is generated by an effective drift Hamiltonian which depends both on the Hamiltonian of the system and the Lindblad operators, as described by Eq. (3). Within the model considered in our work, the effective drift Hamiltonian \(H_{o}\) governing the no-jump dynamics [\(K_{o}=1-i\,\delta t\,H_{o}\) in Eq.(3)] is given by
\[H_{o}(t)=\left(1-i\frac{\Gamma}{2\omega}f(t)\right)\,H(t) \tag{8}\]
with \(f(t)=\cos^{2}(\theta)+\sin^{2}(\theta)\sin^{2}(\Omega t)\). We highlight the fact that, due to the unitarity of the \(\sigma_{z}\) matrix, the no-jump evolution is completely independent of the fourth, ad-hoc Lindblad operator \(L_{z}\) and, consequently, of the parameter \(\gamma_{z}\). An illustrative example of the trajectory generated by the above evolution, referred to as the no-jump trajectory in what comes, is the orange path in Fig. 1. In Appendix D we provide the analytic solution for the dynamics associated with the non-Hermitian Hamiltonian \(H_{o}(t)\) of Eq.(8). While this trajectory is unique, the number of possible (even though unevenly probable) trajectories in which \(N_{J}>0\) jumps occur increases with the number \(N_{J}\) of jumps, diverging as \(\delta t\) goes to zero. Its uniqueness will make the no-jump trajectory especially suitable for the analysis of some features of the GPs; we come back to this question in Section V.3.
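For illustration, the no-jump evolution generated by \(H_{o}(t)\) can be integrated with a first-order step mirroring the action of \(K_{o}\); the sketch below (our own, again reusing `hamiltonian`) deliberately leaves the states unnormalized, which is harmless for the GP expressions introduced in the next Section, as they tolerate non-normalized states:

```python
def no_jump_states(omega, Omega, theta, Gamma, steps=50000):
    """Integrate one driving cycle T = 2*pi/Omega of the non-Hermitian
    drift Hamiltonian H_o(t) of Eq. (8), starting from the excited
    instantaneous eigenstate of H(0). States are left unnormalized."""
    dt = (2.0 * np.pi / Omega) / steps
    psi = np.linalg.eigh(hamiltonian(0.0, omega, Omega, theta))[1][:, 1]
    psi = psi.astype(complex)
    states = [psi.copy()]
    for k in range(steps):
        t = k * dt
        f = np.cos(theta) ** 2 + np.sin(theta) ** 2 * np.sin(Omega * t) ** 2
        H_o = (1.0 - 1j * Gamma * f / (2.0 * omega)) \
            * hamiltonian(t, omega, Omega, theta)
        psi = psi - 1j * dt * (H_o @ psi)    # action of K_o, Eq. (3)
        states.append(psi.copy())
    return states, dt
```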
## IV Geometric phases in open systems - definitions -
As mentioned in Sec. I, the accumulation of a GP during the dynamics of a quantum system is not necessarily restricted to an adiabatic evolution.
Figure 1: Trajectories described by the state of the system on the Bloch sphere under different conditions. The black line corresponds to unitary evolution in the adiabatic limit. The purple line depicting a curly ring corresponds to general unitary dynamics in which non-adiabatic corrections start to be visible. In the presence of an environment, the quantum state can suffer from jumps or can be smoothly driven along the whole evolution. For a system prepared in the excited eigenstate, the orange trajectory corresponds to a fully smooth drift. Differently, the blue path shows a jump that projects the state into the instantaneous ground eigenstate, after which it is smoothly driven. Finally, the light blue path shows a case with several jumps, where the non-adiabatic corrections appear in between the jumps.
For a generic quantum trajectory, consisting of a sequence of smoothly-evolving intervals together with a set of random quantum jumps \(\mathcal{R}\), a proper phase that deals with both aspects of the evolution can be defined.
Considering the evolution in a time interval \([0,T]\), parameterized with \(t\), the GP associated to a trajectory in which \(N_{J}\) jumps are registered at times \(t_{i}\), can be written as
\[\begin{split}\phi[\mathcal{R}]=&\arg\left\langle\psi(0)|\psi(T)\right\rangle\\ &-\mathrm{Im}\sum_{i=0}^{N_{J}}\int_{t_{i}}^{t_{i+1}}\!\!dt\,\frac{\left\langle\psi(t)\middle|\dot{\psi}(t)\right\rangle}{\left\langle\psi(t)|\psi(t)\right\rangle}\\ &-\sum_{(t_{i},\alpha_{i})\in\mathcal{R}}\arg\left\langle\psi(t_{i})\right|K_{\alpha_{i}}\left|\psi(t_{i})\right\rangle,\end{split}\tag{9}\]
where \(\mathcal{R}=\mathcal{R}(T,N_{J})\) for brevity, with \(t_{0}=0\) and the convention that \(t_{N_{J}+1}\equiv T\) in the sum of integrals. The definition of GP given in Eq. (9) will be the basis of our analysis, and we refer to Appendix A for a derivation of this expression. As is evident from the dependence on the times and nature of the jumps, the phase \(\phi[\mathcal{R}(T,N_{J})]\) will be a stochastic variable, dependent on the trajectory \(\mathcal{R}(T,N_{J})\). The first term in Eq. (9) is the total relative phase between the initial and final states. The remaining terms are of two different kinds, reflecting the properties of the dynamics itself. The second term features the dynamical phases accumulated along the intervals of smooth evolution that take place before, between, and after jumps, which should be subtracted in order to access the purely geometrical object \(\phi_{\mathcal{R}}\). The occurrence at time \(t_{i}\) of a jump generated by the operator \(K_{\alpha_{i}}\) introduces a contribution \(\arg\left\langle\psi(t_{i})\right|K_{\alpha_{i}}\left|\psi(t_{i})\right\rangle\) which represents the phase difference between the state before and after the jump. Such a term equals the GP associated with the trajectory built up by joining the states by the shortest geodesic in the Hilbert space. The expression in Eq. (9) is independent of the \(U(1)\) gauge choice. It neither requires the trajectory to trace a closed path in the state space nor relies on an adiabaticity condition. Moreover, it does not even demand unitarity, as it is well defined also if the states \(\left|\psi(t_{i})\right\rangle\) or \(\left|\psi(t^{\prime})\right\rangle\) are not normalized (the norm should however be non-vanishing).
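To make the evaluation of Eq. (9) concrete, the sketch below computes \(\phi[\mathcal{R}]\) from a stored, discretized trajectory; the `states`/`jumps` bookkeeping is our own convention, and, for brevity, the finite difference taken across a jump step is not excluded from the dynamical-phase integral, as a careful implementation would do:

```python
def geometric_phase(states, dt, jumps=()):
    """Evaluate Eq. (9) on a discretized trajectory.

    states : state vectors psi(t_k); normalization is not required, but
             states[k] must hold the pre-jump state at a jump step
    dt     : time step between consecutive stored states
    jumps  : iterable of (k, K) pairs, jump operator K applied at step k
    """
    # total relative phase arg <psi(0)|psi(T)>
    total = np.angle(np.vdot(states[0], states[-1]))
    # dynamical phase Im int dt <psi|dpsi/dt> / <psi|psi>
    dyn = 0.0
    for k in range(len(states) - 1):
        num = np.vdot(states[k], (states[k + 1] - states[k]) / dt)
        dyn += (num / np.vdot(states[k], states[k])).imag * dt
    # geodesic contribution arg <psi(t_i)| K |psi(t_i)> of each jump
    geo = sum(np.angle(np.vdot(states[k], K @ states[k]))
              for k, K in jumps)
    return np.mod(total - dyn - geo, 2.0 * np.pi)
```

Applied to the output of `no_jump_states` above, with `jumps` empty, this evaluates the no-jump phase \(\phi_{0}\) of Eq. (10) below.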
Suitable to be applied to the trajectories that emerge in master equation unraveling, Eq. (9) has been employed in limiting forms for addressing the definition of GPs fitting non-unitary evolution. A first explored route was to focus on the no-jump trajectory [44; 45]. This approach, which disregards the possibility of quantum jumps by restricting to the smooth evolution, preserves the well-known definitions of GPs applicable to pure states and includes environmental effects through the non-hermiticity of \(H_{o}\). If no jumps are registered along the entire evolution, that is, if \(\mathcal{R}(T,0)=\emptyset\), the GP \(\phi_{0}\equiv\phi[\mathcal{R}(T,0)=\emptyset]\) reads
\[\phi_{0}=\arg\left\langle\psi(0)|\psi(T)\right\rangle-\mathrm{Im}\int_{0}^{T}\frac{\left\langle\psi(t)\middle|\dot{\psi}(t)\right\rangle}{\left\langle\psi(t)|\psi(t)\right\rangle}\,dt \tag{10}\]
which trivially reduces to the expression for the GP accumulated in the most general unitary evolution [5] when this is indeed the case, and the states are therefore instantaneously normalized, rendering the denominator \(\left\langle\psi(t)|\psi(t)\right\rangle\equiv 1\ \forall\,t\). Eq.(10) also reduces to the Aharonov-Anandan and Berry phases as the conditions required by each definition are fulfilled, namely, for cyclic and unitary while not necessarily adiabatic evolution, and for both cyclic and adiabatic evolution. Note that the phase \(\phi_{0}\) is ill-defined if some inner product in its argument vanishes; this observation will become relevant when discussing the topological transition in Section V.3.
Several other works consider the full Lindblad equation unraveling, proposing to define the GP of the ensemble-averaged state \(\rho(t)\) as an average over the ensemble of phases \(\{\phi_{\mathcal{R}}\}=\phi_{\{\mathcal{R}\}}\) obtained by applying Eq.(10) to each trajectory [44; 45; 46]. It has been extensively discussed whether this is a proper definition of a GP for the density matrix representing the state of the system, as it does not allow for a one-to-one relation between the set of density matrices and the obtained GP values [47; 48; 73].
Finally, a different approach introduces a generalized GP defined directly from the reduced density matrix [43]. The expression reads
\[\phi_{\rho}=\arg\left(\sum_{m}\sqrt{\lambda_{m}(0)\lambda_{m}(t)}\left\langle\xi_{m}(0)|\xi_{m}(t)\right\rangle\exp\left\{-\int_{0}^{t}dt^{\prime}\left\langle\xi_{m}(t^{\prime})\middle|\dot{\xi}_{m}(t^{\prime})\right\rangle\right\}\right) \tag{11}\]
where \(\lambda_{m}(t)\) and \(|\xi_{m}(t)\rangle\) are the instantaneous eigenvalues and eigenstates of the density matrix \(\rho(t)\) which describes the state of the system. Even though it is defined for non-degenerate but otherwise general mixed states, when computed over pure states under unitary evolution it reduces to the unitary expression of the GP.
All the above-mentioned proposals of GPs applicable when dynamics are non-unitary either restrict to modified evolutions on which pure-state GP definitions would be applicable or seek a consistently defined GP for the reduced density matrix \(\rho(t)\), which accounts for an averaged description. Stochastic processes arising from master equation unraveling, however, acquire independent physical relevance in continuous monitoring schemes. As anticipated in the introduction, the randomness introduced by the occurrence of jumps in a given trajectory is reflected in the GPs acquiring a stochastic nature themselves. This approach, therefore, requires a study of the environmentally-induced effects in GPs from a statistical perspective. The probability associated with some GP
value will be related to that of individual trajectories as
\[P[\phi]=\sum_{\mathcal{R}/\phi[R]=\phi}P[\mathcal{R}]. \tag{12}\]
The average phase corresponds only to the first moment of the distribution
\[\bar{\phi}=\sum_{\phi}\phi\,P[\phi]\;,\]
and in some cases it may not be sufficient to characterize the dynamics.
For easy later reference, we provide a table summarizing the GP definitions reviewed in this section:

| GP | Description | Reference |
| --- | --- | --- |
| \(\phi_{\text{a}}\) | Adiabatic Berry phase | |
| \(\phi[\mathcal{R}]\) | GP associated to the quantum trajectory \(\mathcal{R}(T,N_{J})\) | Eq. (9) |
| \(\phi_{0}\) | GP associated to the no-jump trajectory | Eq. (10) |
| \(\phi_{u}\) | GP accumulated on general unitary evolution (from Eq. (10) with \(\langle\psi(t)|\psi(t)\rangle=1\)) | |
| \(\bar{\phi}\) | Average over the probability distribution \(P[\phi]\) | |
| \(\phi_{\rho}\) | Mixed-state geometric phase [43] | Eq. (11) |
The next Section will be devoted to the properties of \(P[\phi]\) and to how representative of the distribution the different GPs applicable to trajectories are, see Eqs. (9)-(10). As we will show in the following, in most cases the entire probability distribution, i.e. all higher-order cumulants, is necessary to understand the accumulation of GPs in a continuously monitored system. We will also discuss under which circumstances and what features of \(P[\phi]\) can be extracted by geometric interferometry through a spin-echo protocol.
## V Results
### Geometric phase distribution \(P[\phi]\)
We investigate in this section the distribution of the ensemble \(\{\phi_{\mathcal{R}}\}=\phi_{\{\mathcal{R}\}}\) of GPs obtained by applying Eq. (9) to each individual realization (trajectory) of the evolution, characterized by some set \(\mathcal{R}(T,N_{J})\). In Fig. 2 we show two representative cases in which the corresponding dynamics of a hypothetical _unitary_ evolution would either be faster (with small but non-zero non-adiabatic corrections) or slow enough to be considered in the adiabatic regime, while the environment remains the same, characterized by the dissipation rate \(\Gamma=10^{-3}\omega\), which leads to \(\gamma_{-}=\Gamma\), \(\gamma_{d}=0.32\,\Gamma\), and negligible \(\gamma_{+}\).
We first address the case with \(\gamma_{z}=0\), in which the environment induces jumps involving instantaneous eigenstates only. The two situations, corresponding to the two sets of parameters indicated before, are shown in Fig. 2, in panels (a) and (b) respectively. In both panels, we also plot for reference the adiabatic (Berry) result, the no-jump and unitary GPs, and the average of the distribution. Since the Berry phase is independent of \(\Omega\), it is exactly the same in both cases, that is, \(\phi_{\text{a}}\sim 1.482\pi\). For the parameters chosen, the value \(\phi_{0}\) computed from Eq.(10) over the trajectory with no jumps shows small deviations from \(\phi_{\text{a}}\). While the values of these characteristic
Figure 2: Probability distribution \(P[\phi]\) of GPs for a magnetic field oriented with \(\theta=0.34\pi\) and driven in a loop at frequencies (a) \(\Omega=5\times 10^{-3}\omega\) and (b) \(\Omega=5\times 10^{-4}\omega\). The environment is characterized by the dissipation rate \(\Gamma=10^{-3}\omega\) and \(\gamma_{z}=0\). In both panels, the solid red line depicts the adiabatic (Berry) phase \(\phi_{\text{a}}^{+}\), and the black dashed and dot-dashed lines mark the GPs \(\phi_{0}\) and \(\phi_{u}\) associated with no-jump and general unitary evolution. The black dotted line indicates the first moment of the distribution \(\bar{\phi}\). The inset in panel (a) is a zoom in which the difference between these reference GP values is visible.
GPs are similar, the entire distribution of the monitored system is drastically different in each panel. In the first case of faster driving, the period \(T\) is such that a considerable fraction of the time the evolution is completed registering no jumps, with the mean number of jumps over the ensemble \(\bar{N}_{J}=0.63\). The narrow peak in the Figure shows these cases of entirely smooth evolution. In addition, there is a small background revealing the GP accumulated along those trajectories where jumps occurred. The composition of the ensemble is reflected in the histogram by the presence of a large contribution, corresponding to \(\sim 50\%\) of the realizations, due to the no-jump GP value, and the remaining \(50\%\) of the counts distributed in a broad way over the possible GP values. This broad background distribution can be easily interpreted as the randomness inherited by the GP from the (random) time at which the jump occurred. A single term \(\left\langle\psi(t_{i})\right|K_{-}\left|\psi(t_{i})\right\rangle\) in Eq. (9), denoting a contribution to the GP from a jump at time \(t_{i}\), successfully accounts for the background when considering all possible jump times. The peak in the distribution agrees well with both the adiabatic and the no-jump values. The average phase, on the other hand, is a bit off due to the small and poorly structured background, broadly distributed over \(2\pi\). This clearly demonstrates that even a single jump occurring at a random time leads to very large fluctuations in the accumulated GP. In the case with slower driving shown in panel (b), the mean number of jumps over the set of trajectories is \(\bar{N}_{J}=1.77\). This means that the state of the system is much more likely to undergo an abrupt change, or even more than one, in each realization of the cycle. As expected, the distribution of GPs becomes much wider, and a sharp peak around \(\phi_{0}\) is no longer visible. Higher-order cumulants become necessary to understand the dynamics. The three lines, corresponding to the adiabatic, no-jump, and average GPs, do not provide clear information on the dynamics of the monitored system.
The rate \(\Omega/\omega\) at which the magnetic field is rotated thus has a direct impact on the distribution of GPs. For larger rates, the system is exposed to the environment for a shorter period of time, but deviations from the adiabatic regime become non-negligible. On the other hand, lowering the driving frequency might result in the system being exposed to environmental effects for too long, implying strong corrections to \(\phi_{\mathcal{R}}\) from \(\phi_{a}^{+}\). Fig. 3 shows the distribution of GP values obtained along a range of different \(\Omega/\omega\) rates which includes the cases presented in Fig.2. For high enough frequency, the distribution shows a sharp peak around the no-jump value of the GP and almost no background counts. On the other hand, this no-jump value deviates considerably from the Berry phase. The broad background visible in panel (a) of Fig. 2 develops as the frequency rate is lowered, that is, as the relative period grows. Further on, the background turns into a second peak, while the one at the no-jump value decreases. For the smaller rate values, the distribution shows the behavior depicted by panel (b) of Fig. 2, that is, a broad single-peaked distribution. This regime shows non-negligible environmental effects also on the GP associated with the no-jump evolution, which deviates from the adiabatic result even though the driving is performed slowly. We refer to Appendix D for an analytical expression for the dependence of this deviation on the different parameters involved. The broadening exhibited by the distribution as the frequency rate decreases is reflected in the growth of the distribution's variance. This is shown in Fig. 4.
We conclude this section by analyzing the distribution of GP values when \(\gamma_{z}\neq 0\). As already discussed, a non-zero value of \(\gamma_{z}\) induces jumps to states that are not instantaneous eigenstates of the Hamiltonian and thus allows us to consider a wider class of cases. The resulting phenomenology depends only quantitatively on the choice of the Lindblad operator \(L_{z}\). Specifically, we take \(\gamma_{z}=0.1\,\Gamma\) and consider, as we did before, two different values of the speed at which the system is cyclically driven. The results are shown in Fig.5, with the qualitative features of the distribution closely resembling those obtained in the case with \(\gamma_{z}=0\).
Panel (a) of Fig.5 corresponds to the faster case. The mean number of jumps \(\bar{N}_{J}=0.69\) is slightly above the one obtained in the \(\gamma_{z}=0\) case. The additional jumps generated by \(K_{z}\) are not sufficient to modify the distribution qualitatively, which continues to show a well-defined peak (arising from the occurrence of smooth evolution with no jumps) plus a broad small background. In panel (b), showing the case in which the system is driven slower,
Figure 3: Probability distribution \(P[\phi]\) of GPs as a function of the ratio \(\Omega/\omega\). The field is oriented with \(\theta=0.34\pi\) and the environment is characterized by the dissipation rate \(\Gamma=10^{-3}\omega\) and a \(\gamma_{z}=0\) amplitude for the fourth Lindblad operator. The GP values are displayed on the y-axis, while their probability is indicated by the intensity of the count color. The solid red line depicts the adiabatic (Berry) phase \(\phi_{a}^{+}\), the black dashed line indicates the GP \(\phi_{0}\) accumulated along smooth trajectories with no jumps, and the black dotted line shows the first moment of the distribution \(\bar{\phi}\).
the mean number of jumps is also slightly increased from the \(\gamma_{z}=0\) case due to the additional presence of \(\gamma_{z}\) jumps, reaching a value \(\bar{N}_{J}=2.66\).
The cases discussed above contain the first message of the present work. The stochastic nature of the GP in monitored dynamics needs to be taken into account, and it is not possible to characterize it only through a single value. This raises the additional question of how this fact is reflected in the experimental outcomes. To address this question, we will consider in the next Section a spin-echo protocol and see how, when, and whether the distribution of the interference fringes is affected by the randomness of the process.
### Distribution of interference fringes in a spin-echo protocol
If the system is prepared in an eigenstate of the Hamiltonian and subsequently driven in a cycle, adiabatically and in absolute isolation from the environment, then the quantum state accumulates a Berry phase that can be measured by implementing a spin-echo protocol [74]. It goes as follows. The system is initially prepared in a superposition state \(|\psi(0)\rangle\) which reads \((1/\sqrt{2})(|\psi_{+}(0)\rangle+|\psi_{-}(0)\rangle)\) in terms of the ground and excited instantaneous eigenstates of \(H(0)\). Then, it is driven for a period \(T\), causing each eigenstate to acquire both a dynamical and a geometric phase \(\phi_{\rm a}^{\pm}\). A spin-flip operation and a second cycle in the opposite direction lead to a cancellation of the dynamical phases, resulting in a purely geometric relative phase. The Berry phase can thus be extracted through state tomography [25; 75; 27] or by realizing that the probability for the system to be back in the initial state once the full evolution is completed, the persistence probability, is related to the Berry phase as \(|\langle\psi(0)|\psi(2\,T)\rangle|^{2}=\cos^{2}(2\,\phi_{\rm a}^{+})\)[76]. The relation between the persistence probability and the GP given above relies on two factors: the adiabatic regime preventing the transitions between eigenstates and the exact cancellation of the dynamical phases during the protocol. If an echo experiment is performed on a system that is exposed to the effect of the environment and continuously monitored, the persistence probability will retain its dependence on the dynamical evolution. Nevertheless, it is worth understanding to what extent it is possible to learn features of GPs in a monitored system through an echo protocol.
For each realization of the protocol, characterized by a sequence of jumps \(\mathcal{R}(2\,T,N_{J})\),
Figure 5: Probability distribution \(P[\phi]\) of the GPs for a magnetic field oriented with \(\theta=0.34\pi\) and driven in a loop at frequencies (a) \(\Omega=5\times 10^{-3}\omega\) and (b) \(\Omega=5\times 10^{-4}\omega\). The environment is characterized by the dissipation rate \(\Gamma=10^{-3}\omega\) and \(\gamma_{z}=0.1\,\Gamma\). In both panels, a blue solid contour indicates (for comparison) the \(\gamma_{z}=0\) distributions. The solid red line depicts the adiabatic (Berry) phase \(\phi_{\rm a}^{+}\), and the black dashed and dot-dashed lines mark the GPs \(\phi_{0}\) and \(\phi_{u}\) associated with no-jump and general unitary evolution. The black dotted line shows the first moment of the distribution \(\bar{\phi}\). The inset in panel (a) zooms in on a range where the differences in the positions of the lines and the peak of the distribution are visible.
we parameterize the persistence probability \(\mathcal{P}_{\mathcal{R}}\) through an associated angle \(\varphi_{\mathcal{R}}\):
\[\mathcal{P}_{\mathcal{R}}=|\langle\psi(0)|\psi(2T)\rangle|^{2}\equiv\cos^{2} \left(2\,\varphi_{\mathcal{R}}\right). \tag{13}\]
Both the persistence probability and the parameter \(\varphi_{\mathcal{R}}\) inherit the stochastic character of the trajectories, with the probability of measuring a given value \(\varphi\) related to the probability of the trajectories as
\[P[\varphi]=\sum_{\mathcal{R}/\varphi_{\mathcal{R}}=\varphi}P[\mathcal{R}]. \tag{14}\]
In the limiting case in which the persistence probability approaches its adiabatic value, \(\varphi\) will approach \(\phi_{\mathrm{a}}^{+}\). Away from that particular regime, \(\varphi_{\mathcal{R}}\) is _not_ equal to the GP \(\phi_{\mathcal{R}}=\phi[\mathcal{R}]\) but is, as mentioned previously, a convenient parametrization of the spin-echo interference fringes.
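In practice, inverting Eq. (13) determines \(\varphi\) only up to the ambiguities just mentioned; a minimal sketch (the branch choice is ours):

```python
def echo_angle(persistence_prob):
    """Invert Eq. (13), P = cos^2(2*varphi). Since varphi is defined
    modulo pi/2 and up to a sign, the branch in [0, pi/4] is returned
    as a representative."""
    p = float(np.clip(persistence_prob, 0.0, 1.0))
    return 0.5 * np.arccos(np.sqrt(p))
```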
The non-adiabatic and environment-induced deviations from \(\phi_{\mathrm{a}}^{+}\) can be analyzed by examining the ensemble \(\{\varphi_{\mathcal{R}}\}=\varphi_{\{\mathcal{R}\}}\) that is obtained by computing Eq. (13) for each individual realization of the protocol. This study will also allow seeking possible relations, if any, between the stochastic behavior of the GPs and that of the experimental outcomes (note that \(\varphi_{\mathcal{R}}\) is defined modulo \(\pi/2\) and up to a sign; therefore, any relation between the distribution of GPs and the distribution of the experimental results should take this into account). The frequency \(\Omega\) at which the magnetic field is rotated is expected, once again, to have a direct impact on the distribution [36; 37]. On increasing the relative value of \(\Omega\), the system will be exposed to the disruptive influence of the environment for shorter times, allowing to a larger extent a partial cancellation of the dynamical phases. At the same time, in this regime, non-negligible deviations from the adiabatic results will be unavoidable. On the other hand, smaller values of \(\Omega\) might result in the system being exposed to environmental effects for too long, leading to strong deviations of the echo-parameter values \(\varphi\) from \(\phi_{\mathrm{a}}^{+}\).
In analogy with what we did in section V.1, we examine first the case \(\gamma_{z}=0\) and present, in Fig.6, two representative cases in which the hypothetical _unitary_ evolution would either be faster or slow enough to be considered within the adiabatic regime. These are shown in panels (a) and (b) of Fig.6 respectively. In both panels, we also display the adiabatic Berry phase \(\phi_{\mathrm{a}}\) (which does not depend on \(\Omega\)), the GP \(\phi_{0}\) obtained in a protocol with no jumps, and the GP \(\phi_{u}\) obtained in general unitary evolution. The \(\varphi\) value obtained from an echo experiment which is completed without detecting jumps is also shown. For the parameters chosen, both panels show very small deviations of the \(\varphi\) extracted in a protocol with no jumps from the Berry phase (see the insets in Fig.6). It should be noted, however, that the probability of registering this specific trajectory is different in the two cases, as can be seen in the differences between the full \(P[\varphi]\) distributions.
The first striking feature that comes out is the presence of three distinct sharp peaks. The broad distribution observed in the GP values completely disappears in the spin-echo. This behavior originates from the fact that when \(\gamma_{z}=0\), only jumps between instantaneous eigenstates are possible. This particular aspect of the unravelling leads, when combined with the properties of the persistence probability, to a distribution of interference fringes qualitatively different from that of the GPs. Each of the peaks shown in panel (a) of Fig. 6 can be understood as arising from a different set of quantum
Figure 6: Probability distribution \(P[\varphi]\) obtained in the echo protocol for a magnetic field oriented with \(\theta=0.34\pi\) and driven in a loop at frequencies (a) \(\Omega=5\times 10^{-3}\omega\) and (b) \(\Omega=5\times 10^{-4}\omega\). The environment remains the same, characterized by the dissipation rate \(\Gamma=10^{-3}\omega\) and \(\gamma_{z}=0\). In both panels, the solid red line depicts the adiabatic (Berry) phase \(\phi_{\rm a}^{+}\), and the black dashed and dash-dotted lines mark the GPs obtained in no-jump and unitary evolution respectively. Furthermore, the black dash double-dotted line indicates the \(\varphi\) value obtained in an echo protocol with no jumps. The insets in both panels show a range in which the result of a smoothly performed echo experiment is distinguishable from the Berry phase.
trajectories, in the following way. For the parameters chosen in panel (a) of Fig. 6, trajectories with at most one jump are possible. The three peaks correspond to protocols with no jumps, protocols with one jump of type \(L_{\pm}\), and protocols with one jump of type \(L_{d}\), respectively. We refer to Appendix B for a detailed justification of this identification. Trajectories that remain smooth along the whole protocol induce the right peak in Fig.6 (closest to the no-jump result, \(\varphi\sim 1.475\pi\) for this choice of parameters). The central peak, centred at the value \(\varphi\sim 1.375\pi\) and trivially associated via Eq. (13) with a persistence probability taking the value \(1/2\), builds up from all those cases in which the state of the system is, at some given time, projected into an eigenstate of \(H(t)\). In those trajectories, all the information about the accumulated phase before the jump is lost. As a consequence, immediately after a jump \(L_{\pm}\), and regardless of both the previous evolution and the time at which the jump occurred, the persistence probability takes the exact value \(1/2\). The third peak, the left one, is due to trajectories in which a jump \(L_{d}\) occurs. This type of jump has the effect of introducing a \(\pi\)-shift in the relative phase of the echo state, which corresponds to the position of the left peak in Fig.6. Therefore, the interference-fringes distribution shows three peaks, out of which two encode the same information, namely, the \(\varphi\) value of a smoothly driven protocol, while the central peak contains almost no information. Furthermore, the distribution is quite sharp because, for the parameters chosen, the described classes of trajectories are all detected, while more complex quantum trajectories are highly improbable (see Appendix B). In panel (b) of Fig. 6 the two peaks located at the sides have almost vanished. This reveals that when the system is driven at lower relative frequencies, a decay jump or a spontaneous excitation will be detected in almost every trajectory. A similar effect is obtained if the decay rate \(\Gamma/\omega\) increases while keeping the ratio \(\Omega/\omega\) fixed.
A second aspect of the distribution \(P[\varphi]\) is which features of GPs in open systems it captures. In panel (a) of Fig. 6, the fast-driving regime, the \(\varphi\) value obtained from protocols with no jumps agrees well with the adiabatic (Berry) phase, and both of these show small but visible deviations from the no-jump GP. The \(\varphi\) value is more closely related to the adiabatic case than to the actual GP accumulated in smoothly drifted dynamics. For the slower driving shown in panel (b) of Fig.6, the no-jump \(\varphi\) value remains a good indicator of the adiabatic phase, even though registering a smooth protocol is in this case less probable. Under these conditions, most of the experimental realizations will contribute to the central peak, which is not related to any characteristic GP.
Inspection of Fig.6 suggests that, as in the case of the GP distribution, the interplay between non-adiabatic corrections and environmentally induced jumps is better revealed when the distribution \(P[\varphi]\) is analyzed as a function of the rate \(\Omega/\omega\). This is shown in Fig. 7, which includes the two paradigmatic cases of Fig. 6. The Berry phase \(\phi_{\rm a}\), the GP \(\phi_{0}\) associated with smooth trajectories, and the first moment \(\bar{\phi}\) of the GP distribution are also given for reference. In the non-adiabatic regime, \(\Omega/\omega\gtrsim 0.1\), the \(\varphi\) value is most of the time the one arising in a protocol with no jumps, and shows appreciable but still small deviations from the adiabatic phase. A trajectory with a single jump might be observed, albeit with less probability. If this is the case, the mixing of the eigenstates due to non-adiabatic transitions will produce slightly broadened distributions around the other two peaks, revealing the stochastic nature of the jump times. Non-adiabatic corrections have a much stronger impact on \(\phi_{0}\) (for an analytical expression of this scaling, see Appendix D); its behaviour completely disconnects from that of the distribution of echo protocols. On the other hand, approaching the adiabatic regime, the three peaks get sharper. This behavior is accompanied by a sharp decrease in the height of the side peaks and an enhancement of counts on the trivial, middle peak. Along the full range, there is a region in which the interplay between environmentally-induced and non-adiabatic effects allows for good agreement between the GP accumulated in smooth non-unitary evolution and the value of \(\varphi\). The behavior displayed by both the GP and the echo "phase" in smooth non-unitary evolution is further analyzed in Appendix D. In contrast to the no-jump values, which display reasonable agreement, the inset in Fig. 7 shows that the (consistently re-ranged) first moment of the GP distribution \(\bar{\phi}\) remains, along the whole frequency range, completely uncorrelated with both the \(\varphi\) distribution and all echo characteristic values.
Figure 7: Probability distribution \(P[\varphi]\) of \(\varphi\) (as determined in an echo experiment) as a function of the ratio \(\Omega/\omega\). The field is oriented with \(\theta=0.34\pi\) and the environment is characterized by the dissipation rate \(\Gamma=10^{-3}\omega\) and \(\gamma_{z}=0\). The \(\varphi\) values are displayed on the y-axis, while the intensity of the count color indicates their probability. The solid red line depicts the adiabatic (Berry) phase \(\phi_{\rm a}^{+}\), while the black dashed line marks the no-jump GP \(\phi_{0}\). The inset shows the probability distribution \(P[\varphi]\) accompanied by the first moment of the GP distribution \(\bar{\phi}\).
The distribution changes radically when \(\gamma_{z}\neq 0\). In what follows we discuss the case \(\gamma_{z}=0.1\,\Gamma\) with \(\Gamma=10^{-3}\omega\). We start by reconsidering the two representative cases of fast and slower driving, displayed in panels (a) and (b) of Fig. 8 respectively. The first noticeable aspect is that, while the three peaks observed in Fig. 6 (indicated here by the blue contours) can still be detected, they now coexist with a broad distribution.
As visible in panel (a) of Fig. 8, the heights of the three peaks discussed previously decrease in the presence of \(\gamma_{z}\). The suppression of the peaks is accompanied by the appearance of a broad background distribution covering the entire range. Panel (b) of Fig. 8 addresses the slow-driving situation, in which the probability to have a jump, or even several, along each trajectory grows. The inclusion of the \(L_{z}\) jump modifies the sharp-peaked distribution into a broad one, which covers the entire range of \(\varphi\) values. In particular, the two peaks connected to the no-jump trajectory have disappeared. This happens because the inclusion of this term in the Lindbladian induces jumps into states other than the eigenstates of the Hamiltonian. In this sense, we may consider the results quite generic, not specifically dependent on the choice of the Lindblad operator. In order to get a more complete view of the effect of a finite \(\gamma_{z}\), Fig. 9 shows the distribution of \(\varphi\)-values as a function of \(\Omega/\omega\). For a non-adiabatic evolution in which almost no jumps are detected, the behavior exhibited by the distribution is similar to that observed in the \(\gamma_{z}=0\) case. When the velocity of the driving is reduced, gradually favouring the occurrence of jumps, the effect of introducing a finite \(\gamma_{z}\) value becomes more relevant. The \(L_{z}\) jumps lead to \(\varphi\) values that also depend on the times at which the different jumps occurred and hence to the broad background.
Summarizing, while the distribution of interference fringes is, in general, quite different from that of the phase accumulated along a single trajectory, the analysis of a spin-echo protocol allows one to extract reliable information on both the no-jump trajectories and the adiabatic (Berry) phase in some regimes of parameters. In the following Section we will concentrate on the no-jump
Figure 8: Probability distribution \(P[\varphi]\) for a magnetic field oriented with \(\theta=0.34\pi\) and driven in a loop at frequencies (a) \(\Omega=5\times 10^{-3}\omega\) and (b) \(\Omega=5\times 10^{-4}\omega\). The environment is characterized by the dissipation rate \(\Gamma=10^{-3}\omega\), and finite \(\gamma_{z}=0.1\,\Gamma\). In both panels, a blue solid contour indicates the \(\gamma_{z}=0\) distributions. The solid red line depicts the adiabatic (Berry) phase \(\phi_{\mathrm{a}}^{+}\), and the black dashed and dash-dotted lines mark the GPs obtained in no-jump and unitary evolution respectively. Finally, the black dash double-dotted line indicates the \(\varphi\) value obtained in an echo protocol with no jumps. The insets zoom in on a range in which differences between the reference values, panel (a), and the full magnitude of the central peak, panel (b), are visible.
Figure 9: Probability distribution \(P[\varphi]\) as a function of the rate \(\Omega/\omega\) between the frequency \(\Omega\) at which the magnetic field is rotated and its amplitude \(\omega\). The field is oriented with \(\theta=0.34\pi\) and the environment is characterized by the dissipation rate \(\Gamma=10^{-3}\omega\) and \(\gamma_{z}=0.1\Gamma\). The \(\varphi\) values are displayed on the y-axis, while the intensity of the count color indicates their probability. Additional lines mark reference GP values. The solid red line indicates the adiabatic (Berry) phase \(\phi_{\mathrm{a}}^{+}\), while the black dashed line is the value \(\phi_{0}\) extracted from evolution with no jumps.
trajectory (corresponding to the side peaks of the persistence probability in the echo protocol) and show that it undergoes a topological transition as a function of the coupling to the environment.
### Topological transitions
As already anticipated, we conclude this analysis of GPs in monitored systems by focusing on the no-jump trajectory. We will show, following in spirit the work in Ref. [55], that the jump-free drift dynamics encodes a topological transition. We would like to emphasize that, although the setting is very different from that of [55], we believe that the nature of the transition is the same. Our analysis strongly hints at the conjecture that this type of transition is rather generic for monitored systems.
_Phase diagram -_ The GP \(\phi_{0}\) given by Eq. (10) depends, for every fixed \(\theta\), on the ratios \(\Omega/\omega\) and \(\Gamma/\omega\). We recall that the no-jump trajectory, and therefore the GP associated with it, has no dependence on \(\gamma_{z}\). Plotted as a function of the above-mentioned parameters, the GP shows discrete singularities at critical points, around which it makes a \(2\pi\) winding. Meanwhile, the probability associated with this particular trajectory vanishes at these points. We refer to Appendix D for details of the analytical derivation. Fig.10 shows a color plot of the GP in the \(\Gamma-\Omega\) diagram at a fixed value of the angle \(\theta\). The range of the parameters is shown to highlight the singular point and the \(2\pi\) winding of the GP around it. The white lines indicate the probability for the no-jump trajectory, which approaches zero on reaching the singularity. We will show that the collection of these singular points delimits regions of the parameter space associated with different topological classes of evolution. This will be done by defining a topological invariant \(\mathrm{n}\in\mathbb{Z}\) (see below) and explicitly showing that it takes different values over different regions of the parameter-rates plane.
_Topological transition in the no-jump trajectory -_ Direct inspection of the effective drift Hamiltonian shows that if the magnetic field points in the z-direction, the excited eigenstate \(|\psi_{+}\rangle\) of \(H(t)\) remains fixed at a pole of the Bloch sphere, independently of the values taken by the parameter rates \(\Omega/\omega\) and \(\Gamma/\omega\). Therefore, the GP associated with the no-jump trajectory identically vanishes (mod \(2\pi\)) for \(\theta=0\) and \(\theta=\pi\). Without loss of generality, the mod \(2\pi\) freedom can be eliminated from the GP by simultaneously setting \(\phi_{0}(\theta=0)=0\) and demanding continuity. In this way, \(\phi_{0}(\theta=\pi)\) is completely determined by the evolution and acquires a value
\[\phi_{0}(\theta=\pi)=2\pi\,\mathrm{n}, \tag{15}\]
where \(\mathrm{n}\) is an integer number that characterizes the dependence of the GP on \(\theta\) for fixed parameter values. Being an integer, \(\mathrm{n}\) constitutes a topological invariant because it cannot be changed by smoothly deforming \(\phi_{0}(\theta)\). As a consequence, if the GP is characterized by different values of \(\mathrm{n}\) as a function of the various parameters, this forces the GP to undergo a non-smooth transformation, such as the singular behavior exhibited in Fig.10. Indeed, points in the parameter space slightly to the right and slightly to the left of the singularity (indicated with crosses in Fig. 10) give rise to no-jump evolutions associated with topological invariants \(\mathrm{n}=0\) and \(\mathrm{n}=1\) respectively, thus identifying different topological classes. To explicitly show this, Fig. 11 compares the behavior as a function of \(\theta\) of these GPs by showing the difference \(\Delta(\theta)\) between them. Given two points, say (1) and (2), labelled by the crosses in Fig. 10, \(\Delta(\theta)\) is defined as
\[\Delta(\theta)=\frac{1}{2\pi}\left[\phi_{0}^{(\Gamma_{1},\Omega_{1})}-\phi_{0 }^{(\Gamma_{2},\Omega_{2})}\right]. \tag{16}\]
This difference is seen to vanish (up to some small smooth deviations) up to \(\theta=0.34\pi\), that is, until the angle of the singularity. At this specific \(\theta\) value the GP obtained from each parameter rate abruptly deviates, so that their difference shows a step and settles around \(\Delta=1\) for the remaining range. The difference in topological numbers \(\mathrm{n}\) is reflected in the value \(\Delta(\pi)=1\) at \(\theta=\pi\).
Over the full parameter space, the GP shows several singularities, with locations that depend on the value of \(\theta\). The set of singular points composes two counter-phase oscillating curves that define a chain of concatenated closed regions and split the parameter-rate space into an upper and a lower region. This is shown in panel (a) of Fig. 12. Parameters within each sector lead to the same \(\mathrm{n}\) value. The area below the sequence of closed regions is characterized by \(\mathrm{n}=-1\). The points given by parameter values \(\Gamma=0\) and \(\Omega/\omega\ll 1\), defining the adiabatic regime, belong to this region. The regions in between the lines are topologically trivial sectors with \(\mathrm{n}=0\), while the upper one is characterized by \(\mathrm{n}=1\). It is worth pointing out that these topological sectors are not equally probable. Besides the singular points of vanishing probability, the probability of attaining a trajectory with no jumps increases as \(\Gamma\) is reduced. This implies that the upper topological sector is less probable than the others.

Figure 10: Geometric phase associated with the no-jump trajectory, displayed over a limited region of the parameters plane defined by the ratios \(\Omega/\omega\) and \(\Gamma/\omega\). The value of the GP is given by color, as indicated by the bar on the right. The direction of the field is fixed to \(\theta=0.34\pi\). A singularity is observed at \(\Omega/\omega=4.8082\times 10^{-3}\) and \(\Gamma/\omega=0.0306\). The crosses indicate points slightly to the left of the singularity (\(\Omega/\omega=4.8\times 10^{-3}\)) and slightly to the right of it (\(\Omega/\omega=4.8084\times 10^{-3}\)), which will be shown to belong to different topological sectors.
_Topological transition in the echo experiment -_ With the aim of seeking experimentally detectable signatures of the topological transition, we perform a close inspection of the echo experiment that is completed without any jump event. In Section V.2, the \(\varphi\) value extracted in this case was observed to show good agreement with the adiabatic (Berry) phase for a wide range of frequencies. However, the close agreement of \(\varphi\) with \(\phi_{\mathrm{a}}\) will not hold for arbitrarily small frequency values, and it will deviate when the ratio \(\Gamma/\Omega\) becomes sufficiently large. Fig. 13 shows the \(\varphi\) value as a function of the frequency ratio. For easy reference and comparison, we consider an environment characterized by the dissipation rate \(\Gamma/\omega=0.0306\), which is included in the ranges exhibited by Figs. 10 to 12.
For large frequency ratio, the no-jump \(\varphi\) value shows the behavior described in Section V.2. However, approaching smaller frequencies, it shows a highly oscillating step and finally settles at the constant value \(\varphi\sim 1.375\pi\), associated with a persistence probability \(1/2\). This regime is accessed when the state at the end of the protocol coincides, up to a global phase, with \(|\psi_{-}(0)\rangle\); this happens when the smooth drift suppresses the occupancy of the excited eigenstate within a cycle. Full population transfer from the excited to the ground eigenstate within the evolution cycle requires the system to be driven at a slow frequency, smaller than that leading to a singular point. This requirement establishes a connection between the value of the echo phase and the topological classes of evolution, as distinctive regimes of \(\varphi\) are accessed on one and the other side of the singular points. We refer to Appendix D for details on this point. The limits of the range along which \(\varphi\) shows the step and turns from \(\sim\phi_{\mathrm{a}}\) into the central value are marked in Fig. 13 with two light-blue dotted lines. The rightmost region of the plot corresponds to evolutions characterized by the topological number \(\mathrm{n}=-1\). The range between the light-blue lines corresponds to the densely packed sequence of topological sectors illustrated by panel (a) of Fig. 12. Finally, once on the left of the last vertical line, the evolution is associated with a value \(\mathrm{n}=1\) of the topological number.

Figure 11: \(\Delta(\theta)\) between GPs computed for points slightly to the right and slightly to the left of the singularity, indicated in Fig. 10 with x’s. The GP is, in each case, characterized by a different value of the topological invariant \(\mathrm{n}\). This is reflected in the fact that they differ by \(2\pi\) for \(\theta=\pi\).

Figure 12: Critical lines dividing the parameters’ plane into different topological classes of the no-jump evolution. The classes are characterized by different \(\mathrm{n}\) values. The critical angle \(\theta_{c}\) at which each singular point is found is indicated by a color as described by the bar on the right. Panels (a) and (b) display different ranges for the rates \(\Omega/\omega\) and \(\Gamma/\omega\).
The inset in Fig. 13 shows \(\varphi\) as a function of the dissipation rate. In this plot, for easy reference and comparison, the value of the frequency ratio is kept fixed at \(\Omega/\omega=4.8\times 10^{-3}\), also included in the ranges exhibited by Figs. 10 to 12. Once again, the \(\varphi\) value shows good agreement with the adiabatic phase up to some critical \(\Gamma/\Omega\) relation, at which it shows a decreasing step, finally landing at \(\varphi\sim 1.375\pi\). As in the main plot, light-blue dotted lines mark the limits of the step and split the plot into three distinctive sectors. The left of the first line corresponds to \(\mathrm{n}=-1\) evolution, while the right side of the plot corresponds to \(\mathrm{n}=1\). The space between the lines, once again, can be associated with the intermediate zone, which here is a single region (see Fig. 12 (a)), thus leading to no oscillations.
In summary, a measurement of the persistence probability in an echo protocol carries clear indications of the topological transition. The peak structure discussed in Section V.2 allows one to identify the no-jump trajectory. The subsequent analysis of this peak, as summarized in Fig. 13, is sufficient to capture the topological transition.
## VI Conclusions
In this paper, we have studied geometric phases in a continuously monitored quantum system. In the absence of any coupling to the environment, the cyclic time-dependence of the Hamiltonian leads, in the adiabatic regime, to the Berry phase, and to its consistent generalization for a generic unitary evolution. The presence of an environment induces quantum jumps, so that in a single realization of the dynamics the wave function, following a given quantum trajectory, accumulates a GP that is itself a stochastic quantity. We have analyzed the distribution of GPs by highlighting the interplay between non-adiabatic effects and the influence of the external environment. We have shown that for slow drivings the distribution of phases is broad, because jumps occur at random times in many different configurations. On speeding up the driving, the number of jumps decreases and the distribution becomes peaked around the no-jump trajectory (still deviating from the Berry phase because of the non-adiabatic correction and the non-Hermitian drift term). A first quantitative measure of the distribution has been given by the variance, discussed in Fig. 4.
In order to have experimental access to the GPs along a given trajectory, we have also analyzed a spin-echo protocol. The structure provided by the jump operators, together with the possibility of level transitions due to non-adiabaticity and the characteristics of the persistence probability, can lead either to broad distributions or to extremely sharp peaks. This interplay must therefore be taken into account for the protocol to be exploited as a tool; otherwise, the experiment is rendered uninformative.
We have finally concentrated on the no-jump trajectory, showing that it undergoes a topological transition as a function of the dissipation strength. Interestingly, this transition is not necessarily connected to singularities occurring in the dynamics of the density matrix. Indeed, for the model considered herein, at the transition point occurring in the no-jump trajectory the behavior of the density matrix is smooth. Despite the striking differences shown between the GP and the interference fringes of an echo experiment, traces of this transition can be observed in the behavior of the interference fringes.
In this work, we have considered a specific model for the jump operators corresponding to a well-defined type of monitoring. However, it is important to understand to what extent the properties we have discussed here depend on the type of unravelling. This question might be of particular relevance, especially if one wants to define topological properties associated with Markovian systems starting from the properties of their trajectories (there are infinitely many ways of unravelling the same Lindblad dynamics). A glimpse of this question is summarized in Appendix C, where we consider an unravelling corresponding to homodyne detection. As far as the distribution is concerned, the qualitative picture we have outlined in the body of the paper remains valid, although important quantitative differences may arise.
Figure 13: Dependence of \(\varphi\) (black dashed line) obtained in a protocol with no jump events, as a function of the ratio \(\Omega/\omega\). The field is oriented with \(\theta=0.34\pi\) and the environment is characterized by the dissipation rate \(\Gamma=0.0306\omega\), included in the ranges displayed in Figs. 10 - 12. The adiabatic (Berry) phase is also indicated for reference, with a red solid line. The inset shows the \(\varphi\) value as a function of the rate \(\Gamma/\omega\), with the magnetic field characterized by the same angle \(\theta\) and \(\Omega/\omega=4.8\times 10^{-3}\), coinciding as well with the values used in the previous plots.
###### Acknowledgements.
We would like to acknowledge Alessandro Romito for very useful discussions and critical reading of the manuscript. The work of R.F. has been supported by the ERC under grant agreement n.101053159 (RAVE) and by a Google Quantum Research Award. The work of L.V., F.L., and P.I.V. is supported by Agencia Nacional de Promocion Cientifica y Tecnologica (ANPCyT), Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET), and Universidad de Buenos Aires (UBA). P.I.V. acknowledges the ICTP-Trieste Associate Program. R.F. acknowledges that his research has been conducted within the framework of the Trieste Institute for Theoretical Quantum Technologies (TQT).
## Appendix A Pancharatnam phase along a quantum trajectory
As stated in Section II, the quantum trajectory emerging in a single monitored evolution of the system can be understood as intervals of smooth dynamics interrupted at random times by quantum jumps. Considered in this way, evolution in a time interval \(t\in[0,T]\) is characterized by an array of jumps of type \(\alpha_{i}\) occurring at times \(t_{i}\) of the form given by Eq. (4), and the parameter \(t\) is a continuous variable within the intervals delimited by the \(t_{i}\)'s. In the quantum jumps approach, the algorithm applied in constructing the trajectories goes as follows [53]. The time interval \([0,T]\) is discretized into N steps of length \(\delta\,t\), and the state is updated at each time step according to a randomly decided non-hermitian operator, as described in Eq. (2). Hence, each quantum trajectory can also be thought of, from an algorithmic point of view, as the ordered collection of states generated by the action of a specific sequence of operators \(K_{0,\alpha}\) given by Eq. (3), and is in this way a discrete set of states.
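As a concrete illustration of this algorithm, a minimal Python sketch is given below (our own construction, not code from any reference): it evolves the state by the non-hermitian drift between jumps and applies a randomly selected jump operator otherwise. We take a time-independent Hamiltonian for brevity; the time-dependent \(H(t)\) of the main text is recovered by evaluating the Hamiltonian at each step.

```python
import numpy as np

def quantum_jump_trajectory(psi0, H, jump_ops, T, dt, rng=None):
    """Generate one quantum trajectory by the standard quantum-jump algorithm:
    at each step the state is updated either by the non-hermitian drift
    K_0 ~ 1 - i dt H_eff or by one of the jump operators."""
    rng = rng or np.random.default_rng()
    # Effective non-hermitian Hamiltonian H_eff = H - (i/2) sum_a L_a^dag L_a
    H_eff = H - 0.5j * sum(L.conj().T @ L for L in jump_ops)
    psi = psi0.astype(complex).copy()
    states, jump_record = [psi.copy()], []
    for step in range(int(T / dt)):
        # Jump probabilities for this step (first order in dt)
        probs = np.array([dt * np.real(psi.conj() @ (L.conj().T @ L) @ psi)
                          for L in jump_ops])
        if rng.random() < probs.sum():
            # A jump occurred: pick which one and apply it
            alpha = rng.choice(len(jump_ops), p=probs / probs.sum())
            psi = jump_ops[alpha] @ psi
            jump_record.append((step * dt, alpha))
        else:
            # Smooth non-hermitian drift for one step (first order in dt)
            psi = psi - 1j * dt * (H_eff @ psi)
        psi = psi / np.linalg.norm(psi)  # renormalize
        states.append(psi.copy())
    return states, jump_record
```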
For a sequence of N discrete pure states, the suitable GP expression is the Pancharatnam phase [5; 44; 45], given by
\[\phi_{P}[\psi]=\arg\left\langle\psi_{1}|\psi_{\mathrm{N}}\right\rangle-\arg( \left\langle\psi_{1}|\psi_{2}\right\rangle...\left\langle\psi_{\mathrm{N-1}} |\psi_{\mathrm{N}}\right\rangle). \tag{10}\]
The Pancharatnam phase is independent of the \(U(1)\) gauge choice and does not require the sequence to close; it neither relies on an adiabaticity condition nor demands unitarity, allowing for non-normalized states in the sequence as long as none of them vanishes. Exhibiting these characteristics, it becomes a natural definition of the GP to be applied to monitored dynamics, in which evolution is generated by non-hermitian operators. It equals the unitary GP associated with the trajectory built up by joining consecutive states in the sequence through the shortest geodesic in Hilbert space.
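For illustration, a direct transcription of Eq. (10) reads as follows (a sketch, with names of our choosing); fed with the list of states produced by a trajectory generator such as the one sketched above, it returns the GP of a single monitored realization, defined mod \(2\pi\).

```python
import numpy as np

def pancharatnam_phase(states):
    """Pancharatnam phase of a discrete sequence of (possibly non-normalized)
    pure states, Eq. (10): arg<psi_1|psi_N> minus the accumulated arguments
    of consecutive overlaps. Ill-defined if any overlap vanishes."""
    overlaps = [np.vdot(states[k], states[k + 1]) for k in range(len(states) - 1)]
    if any(np.isclose(abs(o), 0.0) for o in overlaps):
        raise ValueError("a consecutive overlap vanishes: phase ill-defined")
    return np.angle(np.vdot(states[0], states[-1])) - sum(np.angle(o) for o in overlaps)
```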
While this definition does not by itself constrain the number of states in the sequence, when applied in the context of quantum jumps the number N of states is constrained from below as a consequence of the condition governing the time step. An evolution in the time interval \([0,T]\) consists of \(\mathrm{N}=T/\delta\,t\gg 1\) states. Splitting the sequence of states \(\{\left|\psi_{1}\right\rangle\left|\psi_{2}\right\rangle...\left|\psi_{\mathrm{N}}\right\rangle\}\) into sets starting and ending at those corresponding to the specific times \(t_{i}\) where a jump is registered sets a bridge between these two different descriptions of a quantum trajectory. Each time interval \([t_{i},t_{i+1}]\), discretized in time steps of length \(\delta t\), consists of a number of steps that depends on the specific values of \(t_{i}\) and \(t_{i+1}\). From a given jump time \(t_{i}\), any time step in the consecutive interval can be found as \(t_{i}+k_{i}\,\delta t\), that is, by adding some number \(k_{i}\in\mathbb{N}\) of increments \(\delta t\), up to some maximum value \(k_{i}^{*}\) that satisfies \(t_{i+1}=t_{i}+k_{i}^{*}\,\delta t\) (see Fig. 14).
At each given time, the outcome of a measurement performed on the environment will be associated with the corresponding Kraus operator acting on the system and the state generated by its action. Therefore, there is a one-to-one correspondence between the discrete set forming the time interval and the array of states forming the trajectory. The splitting at jump times \(t_{i}\) can thus be mapped into the trajectory as
\[\bigcup_{i=0}^{N_{J}}\left\{\left|\psi(t_{i}+k_{i}\,\delta t)\right\rangle:\;k_{i}=0,...,k_{i}^{*}-1\right\} \tag{11}\]
with \(N_{J}\) the number of jumps occurring in the trajectory and the out-of-bounds indices \(i=0\) and \(i=N_{J}+1\) signaling the entire time-interval limits \(t_{0}=0\) and \(t_{N_{J}+1}=T\).
Introducing such a decomposition into the formula for the Pancharatnam phase, Eq. (10) can be rewritten as
\[\phi_{P}= \arg\left\langle\psi(0)|\psi(T)\right\rangle\] \[- \sum_{i=0}^{N_{J}}\sum_{k_{i}=1}^{k_{i}^{*}-1}\arg\left\langle \psi(t_{i}+k_{i}\,\delta t)|\psi(t_{i}+\left(k_{i}+1\right)\delta t)\right\rangle\] \[- \sum_{i=0}^{N_{J}}\arg\left\langle\psi(t_{i})|\,K_{\alpha_{i}}\,| \psi(t_{i})\right\rangle. \tag{12}\]
The formula in Eq. (9) for the GP associated with a single trajectory is thus derived by taking the continuous limit \(\delta t/T\to 0\) within the intervals of smooth evolution [44; 45]. This expression, more suitable for the analysis performed in this work, inherits all the properties of the Pancharatnam phase from which it is obtained.

Figure 14: Illustrative diagram depicting the time interval \([0,T]\). Both the discretization in \(\delta t\) steps and the splitting at jump times \(t_{i}\) are indicated. The relation between times and states is represented as well.
## Appendix B Interference fringes distribution
As discussed in Sec. V.2, the distribution of interference fringes from an echo experiment, which we parameterize with \(\varphi\), shows three (sometimes sharp) peaks. When \(\gamma_{z}=0\) only jumps between instantaneous energy eigenstates are possible, and the three peaks emerge from sets of trajectories of different character, as follows.
1. Smooth trajectories with no jumps generate the piling up at the no-jump value \(\varphi_{0}\sim 1.43\pi\).
2. Trajectories in which at least one decay or spontaneous excitation jump occurred, projecting the state into an eigenstate \(\left|\psi_{\pm}(t_{i})\right\rangle\) of \(H(t)\), give rise to the peak at \(\varphi\sim 1.375\pi\).
3. Trajectories in which only dephasing jumps took place give rise to the peak at \(\varphi\sim 1.275\pi\).
In this appendix, we provide a detailed justification of this observation. With the aim of providing an accessible presentation of the qualitative aspects of the phenomena, we will generally disregard the non-hermiticity of the smooth evolution between jumps, thinking of those intervals as unitary (slowly or rapidly driven) evolution. Hence, this presentation should not be taken as a rigorous quantitative analysis.
We begin with the consideration of peak (1.), coinciding with the no-jump value \(\varphi_{0}\sim 1.43\pi\). As presented in Sec. II, this smooth trajectory is unique, and therefore the exact same value of \(\varphi\) is expected in every case in which this trajectory is obtained.
We thus turn to the case in which jumps are indeed detected, with special attention to the counterintuitive narrowing of the distribution in the slower regime, in which more jumps are detected. When \(\gamma_{z}=0\), three jumps are possible within our unravelling of the Lindblad equation. Two out of these three jumps project the state into an energy eigenstate, namely, decay jumps and spontaneous excitations. Whenever a jump of this kind takes place at some instant of time \(t_{i}\), immediately after the jump the state of the system turns into
\[\left|\Psi(t_{i})\right\rangle=e^{i\,\xi(t_{i})+i\,\phi(t_{i})}\left|\psi_{\pm }(t_{i})\right\rangle \tag{10}\]
with \(\xi(t_{i})\) the dynamical phase and \(\phi(t_{i})\) the geometric phase, given by Eq. (9), accumulated up to the occurrence of the jump. If the protocol ends immediately after, the persistence probability \(\mathcal{P}_{\mathcal{R}}=\left|\,\langle\psi(0)|\psi(2T)\rangle\,\right|^{2}=1/2\) preserves no information on either the GP or the specific characteristics of the jump. If, on the other hand, the system continues to evolve, the possibility of obtaining any information on a phase or the jump time will rely on the interplay between the non-adiabatic transitions and the existence of further jumps. If the evolution continues from the first jump on, it will do so smoothly until either the protocol is finished or another jump takes place. Different regimes of \(\Omega/\omega\) give rise to smooth evolutions of different natures. If the protocol is performed slowly enough, this smooth evolution is (almost) transition-free and \(\left|\psi(t)\right\rangle\sim e^{i\,\xi(t>t_{i})+i\,\phi(t>t_{i})}\left|\psi_{\pm}(t>t_{i})\right\rangle\), so the persistence probability remains \(\mathcal{P}_{\mathcal{R}}=1/2\). Moreover, this regime favors the occurrence of further jumps, thus reinforcing the erasure of information by re-projecting onto eigenstates of \(H(t)\). The complete independence of the result from the times \(t_{i}\) of the jumps makes this peak (2.) extremely sharp in the slow regime. On the other hand, if the system is driven faster, along the smooth evolution after the jump the state develops contributions from the other eigenstate due to non-adiabatic effects, favoring the emergence of relative phases and becoming
\[\left|\psi(t>t_{i})\right\rangle=A_{\pm}(t>t_{i})\left|\psi_{\pm}(t>t_{i})\right\rangle+A_{\mp}(t>t_{i})\left|\psi_{\mp}(t>t_{i})\right\rangle \tag{11}\]
with \(A_{\pm}(t)\) the amplitudes for each eigenstate. In such a situation, the persistence probability depends on \(t_{i}\), leading to the broadening observed in the central peak of Fig. 7 for faster driving, while still not being trivially connected to the GP. As anticipated in the previous paragraphs, each jump of this kind will erase all information on the phases and any dependence on previous jump times. The possibility of further erasing events is mitigated in faster protocols by the reduced exposure to the environment.
The third peak (3.) observed in the distribution, at \(\varphi\sim 1.275\pi\), can be understood by adding dephasing jumps to the previous discussion. A dephasing jump has the effect of introducing a \(\pi\) shift in the relative phase of the state. If the evolution afterward remains transitionless (and no erasing jumps occur at any point), the evolution resembles that of the adiabatic echo experiment up to corrections that can be disregarded, so the persistence probability takes the value \(\mathcal{P}\sim\sin^{2}(2\phi_{a})\) (with cos replaced by sin due to the relative \(\pi\) shift). This situation leads to a well-defined single \(\varphi\) value which is independent of the time \(t_{i}\) at which the jump took place. Therefore, in the slow-driving range, a well-defined peak emerges, which might however be small, as in this regime decay jumps are likely. As the magnetic field is rotated faster, non-adiabatic effects induce a dependence of the persistence probability on \(t_{i}\). This dependence on \(t_{i}\) is inherited by the extracted "phases", and is thus responsible for the broadening of the distribution observed in Fig. 7 for larger \(\Omega/\omega\) values.
The inclusion of a jump operator \(\propto\sigma_{z}\) modifies this three-peaked distribution by producing a broad background, which is present even when it is not the dominant process. The \(K_{z}\) jumps promote the development of relative phases, as they mix eigenstates of the Hamiltonian. Even if the system has, at some time, transitioned to an eigenstate, a \(\sigma_{z}\) jump suddenly drags it away into a superposition state.
## Appendix C Dependence on the unravelling: field displacement
Another paradigmatic quantum-trajectory scheme arising from a different unravelling of the master equation is that of the so-called diffusive trajectories, in which the monitored quantities produce continuously fluctuating signals instead of discontinuous jumps [77]. This is the prototypical scheme of continuous or ideal homodyne detection, which can be theoretically obtained as a limiting case of discrete homodyne detection [63, 64, 78, 73, 79].
The master equation Eq. (1) is invariant under the transformation
\[H(t)\to H^{\prime}(t)=H(t)-\sqrt{\lambda}\frac{i}{2}\sum_{ \alpha}(K_{\alpha}-K_{\alpha}^{\dagger})\] \[K_{\alpha}\to K_{\alpha}^{\prime}=K_{\alpha}+\sqrt{\lambda}\, \mathbb{I}, \tag{20}\]
where \(\sqrt{\lambda}\in\mathbb{R}\). Therefore it is possible to substitute \(K_{\alpha}\) and \(H(t)\) in Eq. (1) by \(K_{\alpha}^{\prime}\) and \(H^{\prime}(t)\) without modifying the averaged dynamics of the system, and to unravel it using the standard direct-detection (quantum jumps) scheme applied before. When the reservoir is assumed to be made of harmonic modes, like electromagnetic radiation, adding the displacement \(\sqrt{\lambda}\) to the Lindblad operators corresponds to the implementation of homodyne detection [64, 78, 80]. In this case, taking \(\sqrt{\lambda}\) suitably large leads to a measurement of the quadrature of the system dipole \(K_{\alpha}+K_{\alpha}^{\dagger}\). However, in order to keep the collapse probability per step small, it would be necessary to reduce the time step and hence increase the simulation time by the same order. For this reason, we refrain from considering large finite \(\sqrt{\lambda}\) values in this section and focus on the modifications suffered by \(P[\phi]\) for smaller \(\sqrt{\lambda}\) values.
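The invariance stated in Eq. (20) is easy to verify numerically. The following sketch (with randomly generated qubit operators of our own choosing) checks that the Lindblad right-hand side is unchanged under the displacement of the jump operator:

```python
import numpy as np

def lindblad_rhs(rho, H, Ks):
    """Right-hand side of the Lindblad master equation, Eq. (1)."""
    out = -1j * (H @ rho - rho @ H)
    for K in Ks:
        KdK = K.conj().T @ K
        out += K @ rho @ K.conj().T - 0.5 * (KdK @ rho + rho @ KdK)
    return out

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2                      # random hermitian Hamiltonian
K = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])

lam_sqrt = 0.3
Kp = K + lam_sqrt * np.eye(2)                 # K' = K + sqrt(lambda) * 1
Hp = H - 0.5j * lam_sqrt * (K - K.conj().T)   # H' from Eq. (20)

# Same averaged dynamics before and after the displacement
assert np.allclose(lindblad_rhs(rho, H, [K]), lindblad_rhs(rho, Hp, [Kp]))
```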
In Fig. 15 we present, also for the case of this different unravelling of the Lindblad equation, the two cases in which the driving is performed fast or slowly enough for the hypothetical _unitary_ dynamics to be considered adiabatic. As in the previous cases, the environment remains fixed with \(\Gamma=10^{-3}\omega\) and \(\gamma_{z}=0\), and we have taken \(\lambda=2.5\times 10^{-5}\omega<\Gamma\). The two cases are shown in panels (a) and (b) of Fig. 15, respectively, where we also plot the no-jump and unitary GPs, and the average of the distribution for reference. Striking differences from the case of direct detection arise. For this \(\lambda/\omega\ll 1\), the reference values displayed remain close to those obtained in the \(\lambda=0\) case, while the distributions behave differently. In the fast-driven case displayed in panel (a), the expected increase in jumps is reflected by the decrease of the sharp peak piling up from no-jump trajectories. However, the formerly broad but still uneven background has now turned into a completely uniform distribution in which each phase value (except the no-jump one) is equally probable. The described behavior is reinforced when the system is driven at slower frequency rates. The previously broad yet single-peaked distribution lacks any structure in a system monitored through the operators \(K_{\alpha}^{\prime}\) forming the new basis.
Figure 15: Probability distribution \(P[\phi]\) of GP values obtained in an unravelling with \(K_{\alpha}^{\prime}\) and \(H^{\prime}(t)\) operators. The magnetic field is oriented at \(\theta=0.34\pi\) and driven in a cycle at frequencies (a) \(\Omega=5\times 10^{-3}\omega\) and (b) \(\Omega=5\times 10^{-4}\omega\). The environment is characterized by the dissipation rate \(\Gamma=10^{-3}\omega\) and \(\gamma_{z}=0\). We have taken \(\lambda=2.5\times 10^{-5}\omega\). In both panels, a blue solid contour recalls the distributions obtained in the original unravelling considered in this work. Extra lines indicate the new reference GP values. The black dashed and dot-dashed lines indicate the GPs \(\phi_{0}\) and \(\phi_{u}\) associated with no-jump and general unitary evolution. The black dotted line shows the first moment of the distribution \(\bar{\phi}\).
## Appendix D Smooth evolution with no jumps: Analytic approach
We provide in this Appendix some additional analytical results for the no-jump evolution. As previously mentioned, this particular case can be thought of as generated by the non-hermitian Hamiltonian in Eq. (8), in such a way that a non-normalized state \(\left|\tilde{\psi}(t)\right\rangle\) follows the Schrodinger equation
\[i\frac{d}{dt}\left|\tilde{\psi}(t)\right\rangle=H_{o}(t)\,\left|\tilde{\psi}(t )\right\rangle \tag{101}\]
where \(H_{o}(t)\) is not only non-hermitian but also explicitly time-dependent due to the function \(f(t)\). The effective drift Hamiltonian shares eigenstates with \(H(t)\), but the eigenvalues associated with these eigenstates are now complex and time-dependent, given by \(\pm\omega/2\)\([1-i\,\Gamma/(2\omega)f(t)]\).
The dynamics of the normalized state of the system
\[\left|\psi(t)\right\rangle=\frac{\left|\tilde{\psi}(t)\right\rangle}{\sqrt{ \left\langle\tilde{\psi}(t)\right|\tilde{\psi}(t)\right\rangle}} \tag{102}\]
will be governed by the more involved, nonlinear equation which is found by jointly differentiating Eq. (102) and making use of Eq. (101).
The non-normalized state can be expanded in the instantaneous eigenstates of \(H_{o}(t)\) as \(\left|\tilde{\psi}(t)\right\rangle=\tilde{c}_{+}\left|\psi_{+}(t)\right\rangle+\tilde{c}_{-}\left|\psi_{-}(t)\right\rangle\). Explicit computation of Eq. (101) leads to the following differential equations for the coefficients \(\tilde{c}_{\pm}(t)\)
\[\dot{\tilde{c}}_{\pm}=\left(\mp i\frac{\omega}{2}-i\frac{\Omega} {2}(1\mp\cos(\theta))\mp\frac{\Gamma}{4}\,f(t)\right)\tilde{c}_{\pm}(t)\\ +\,i\frac{\Omega}{2}\sin(\theta)\,\tilde{c}_{\mp}(t), \tag{103}\]
where the real term \(\sim-\Gamma\tilde{c}_{+}(t)\) indicates that, even in the case with no jumps, the presence of the environment favors state transitions, as the amplitude of the excited eigenstate is suppressed. Taking into account the normalization procedure involved in turning the non-normalized state into the physical, normalized one, this suppression implies a population transfer from the excited eigenstate into the ground state. As a consequence, any trivial implementation of the adiabatic approximation is prevented. A second feature observed in Eq. (103) is that, for the parameters chosen in this work, a good agreement can be obtained by replacing \(f(t)\) with its mean value \(f(t)\sim 1-\sin^{2}(\theta)/2\). With this replacement, the dynamics becomes easily solvable in the rotating frame. The smooth evolution of each eigenstate of the system is, within this approximation, given by
\[\left|\psi_{(\pm)}(t)\right\rangle=\mathcal{N}_{\pm}\,e^{-i \Omega/2\,t}\\ \left\{\left[\pm(\nu+\varepsilon)e^{-i\varepsilon/2\,t}\mp(\nu- \varepsilon)e^{i\varepsilon/2\,t}\right]\left|\psi_{\pm}(t)\right\rangle\\ -\Omega\sin(\theta)\left|\psi_{\mp}(t)\right\rangle\right\}, \tag{104}\]
where both \(\nu\) and \(\varepsilon\) are complex quantities, given by \(\nu=\omega-\Omega\cos(\theta)-i\,\Gamma/2(1-\sin^{2}(\theta)/2)\) and \(\varepsilon=\sqrt{\nu^{2}+\Omega^{2}\sin^{2}(\theta)}\), and \(\mathcal{N}_{\pm}\) is a normalization factor. At this point, it should be stressed that Eq. (104) explicitly shows that the state \(\left|\psi_{(\pm)}(t)\right\rangle\) obtained when evolving an eigenstate will _not_, in the general case, be equal to the instantaneous eigenstate at a later time.
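As a numerical complement, the coupled equations (103) can be integrated directly under the same mean-value replacement \(f(t)\to 1-\sin^{2}(\theta)/2\); the sketch below (function names are ours, parameter values those of Figs. 10-13) tracks the environment-induced population transfer from \(|\psi_{+}\rangle\) to \(|\psi_{-}\rangle\) over one driving cycle:

```python
import numpy as np
from scipy.integrate import solve_ivp

def drift_coefficients(T, omega, Omega, Gamma, theta):
    """Integrate Eq. (103) for the unnormalized coefficients c_+(t), c_-(t)
    of the no-jump evolution, with f(t) ~ 1 - sin(theta)**2 / 2 (mean value)."""
    f_mean = 1.0 - np.sin(theta) ** 2 / 2.0
    off = 1j * Omega / 2.0 * np.sin(theta)   # off-diagonal coupling

    def rhs(t, c):
        cp, cm = c
        dp = (-1j * omega / 2 - 1j * Omega / 2 * (1 - np.cos(theta))
              - Gamma / 4 * f_mean) * cp + off * cm
        dm = (+1j * omega / 2 - 1j * Omega / 2 * (1 + np.cos(theta))
              + Gamma / 4 * f_mean) * cm + off * cp
        return [dp, dm]

    # Start in the excited eigenstate |psi_+(0)>
    return solve_ivp(rhs, (0.0, T), [1.0 + 0j, 0.0 + 0j], rtol=1e-8, atol=1e-10)

omega = 1.0
Omega, Gamma, theta = 4.8e-3, 0.0306, 0.34 * np.pi
sol = drift_coefficients(2 * np.pi / Omega, omega, Omega, Gamma, theta)
cp, cm = sol.y[:, -1]
# Excited-state population of the normalized state after one cycle
print(abs(cp) ** 2 / (abs(cp) ** 2 + abs(cm) ** 2))
```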
_Geometric phase -_ The GP associated with a trajectory in which no jumps occur can be explicitly computed from Eq. (10). While the general expression is quite involved, it takes, for small ratios \(\Omega/\omega\sim\Gamma/\omega\) of the driving frequency and the dissipation rate to the amplitude of the magnetic field, the form
\[\phi_{0}\sim -\pi(1-\cos(\theta)) \tag{105}\] \[-\pi\sin^{2}(\theta)\left(\frac{\Omega}{\omega}+\cos(\theta) \frac{\Omega^{2}}{\omega^{2}}\right)\] \[-\frac{\sin^{2}(\theta)}{4}\left(\frac{\Omega}{\omega}+\cos( \theta)\frac{\Omega^{2}}{\omega^{2}}\right)\frac{e^{-4\pi\,\text{Im}(\nu)/ \Omega}-1}{2\,\text{Im}(\nu)/\Omega},\]
where the first term on the r.h.s. is the Berry phase. The term in the second line of the equation is the main correction originating exclusively from non-adiabaticity, in otherwise unitary evolution. The third line accounts for the non-trivial effect of the environment on the no-jump evolution. As \(\Gamma\to 0\) this term turns into a further contribution due to non-adiabaticity.
_Phase diagram singularities -_ When computing the accumulated GPs analyzed in Sections V.1 and V.3, we have taken \(\left|\psi_{+}(0)\right\rangle\) as our initial state. Thus, a vanishing probability for observing this particular trajectory, of the kind observed at the GP singular points, requires \(\left|\psi(T)\right\rangle\propto\left|\psi_{-}(0)\right\rangle\). Considering the cyclic character of the instantaneous eigenstates, this means a singular point will take place whenever full population transfer is achieved in exactly one time period. It was already inferred from the differential equations governing the evolution of the \(\tilde{c}_{\pm}\) coefficients that the dynamics generated by the effective drift Hamiltonian \(H_{o}(t)\) favors transitions from the excited to the ground instantaneous eigenstate. As long as the original approximation remains accurate, the singular points of the GP will be defined through the equation \((\nu+\varepsilon)-(\nu-\varepsilon)e^{2i\pi\varepsilon/\Omega}=0\).
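A sketch of how the singular points can be located numerically from this condition is given below; whether the solver lands on the physical root depends on the initial guess and on the poor scaling of the exponential term, so a brute-force scan of the modulus of the left-hand side over the \((\Omega,\Gamma)\) plane, as in Fig. 12, is the robust alternative. The starting guess is taken near the singularity reported in Fig. 10.

```python
import numpy as np
from scipy.optimize import fsolve

omega, theta = 1.0, 0.34 * np.pi

def singularity_condition(x):
    """Real and imaginary parts of (nu + eps) - (nu - eps) exp(2*pi*i*eps/Omega),
    whose zeros locate the GP singular points (full population transfer
    in exactly one driving period)."""
    Omega, Gamma = x
    nu = omega - Omega * np.cos(theta) - 0.5j * Gamma * (1 - np.sin(theta) ** 2 / 2)
    eps = np.sqrt(nu ** 2 + Omega ** 2 * np.sin(theta) ** 2)
    g = (nu + eps) - (nu - eps) * np.exp(2j * np.pi * eps / Omega)
    return [g.real, g.imag]

# Initial guess near the singularity reported in Fig. 10
Omega_c, Gamma_c = fsolve(singularity_condition, x0=[4.8e-3, 0.031])
print(f"singular point: Omega/omega = {Omega_c:.6g}, Gamma/omega = {Gamma_c:.6g}")
```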
_No-jump interference fringe -_ In Section V.2, we have studied the interference fringes of an echo experiment. For this purpose, we have defined the convenient parameter \(\varphi\) given by Eq. (13). Restricting to the case of a protocol performed without registering any jump, it was shown that the value of \(\varphi\) generally displays better agreement
with the Berry phase than with the GP \(\phi_{0}\) accumulated by the state of the system under equal conditions, that is, when it is smoothly driven along one period of time. As long as the no-jump value \(\varphi\sim\phi_{\mathrm{a}}\), good agreement between this "phase" and the GP will be obtained when the second and third lines in Eq. (105) are sufficiently small. However, it is worth noting that the \(\varphi\) value will not remain close to the Berry phase for arbitrarily small driving frequency. While the protocol has been shown to be less sensitive to both non-adiabatic and environmentally induced effects than the GP, it still registers the non-ideal conditions. It was already shown that the environment induces population transfer from the excited to the ground state. The asymmetry between the smooth evolutions of the two eigenstates should be expected to prevent, at some point, the cancellation of the dynamical contributions. Figure 13 illustrates this situation by showing the \(\varphi\) value as a function of the ratio between the driving frequency and the field amplitude. While for larger ratios \(\gtrsim 0.01\) the phase reproduces the behavior discussed in Section V.2, this situation does not hold if the ratio is lowered enough. At some critical value, the parameter extracted from the echo protocol starts deviating from the adiabatic phase.
A rather singular situation arises when the state at the end of the protocol coincides, up to a global phase, with \(|\psi_{-}(0)\rangle\), so that the persistence probability becomes \(\mathcal{P}=1/2\). As a consequence, the \(\varphi\sim 1.375\pi\) value observed in Fig. 13, trivially associated with \(\mathcal{P}\sim 1/2\) by Eq. (13), is obtained. In this case, the three peaks observed in the distribution \(\mathcal{P}[\varphi]\) (see Section V.2) merge into a single, central peak. This regime is accessed when full population transfer occurs within a cycle and the system reaches a steady state \(\sim|\psi_{-}(t)\rangle\). As we have discussed above, the parameters leading to full population transfer at exactly the time of a cycle correspond to singular points of the GP. Then, full population transfer within a cycle implies evolutions performed either at higher dissipation rates or at slower frequencies than those for which the singularity occurs. This requirement establishes a connection between the value of the echo phase and the topological classes of evolution, as distinctive regimes of \(\varphi\) are accessed on one and the other side of the singular points.
|
2308.08113 | Enhanced super-Heisenberg scaling precision by nonlinear coupling and
postselection | In quantum precision metrology, the famous result of Heisenberg limit scaling
as $1/N$ (with $N$ the number of probes) can be surpassed by considering
nonlinear coupling measurement. In this work, we consider the most
practice-relevant quadratic nonlinear coupling and show that the metrological
precision can be enhanced from the $1/N^{\frac{3}{2}}$ super-Heisenberg scaling
to $1/N^2$, by simply employing a pre- and post-selection (PPS) technique, but
not using any expensive quantum resources such as quantum entangled state of
probes. | Lupei Qin, Jialin Li, Yazhi Niu, Xin-Qi Li | 2023-08-16T02:57:22Z | http://arxiv.org/abs/2308.08113v1 | # Enhanced super-Heisenberg scaling precision by nonlinear coupling and postselection
###### Abstract
In quantum precision metrology, the famous result of the Heisenberg limit scaling as \(1/N\) (with \(N\) the number of probes) can be surpassed by considering nonlinear coupling measurement. In this work, we consider the most practice-relevant quadratic nonlinear coupling and show that the metrological precision can be enhanced from the \(1/N^{\frac{3}{2}}\) super-Heisenberg scaling to \(1/N^{2}\), by simply employing a pre- and post-selection (PPS) technique, without using any expensive quantum resources such as entangled states of the probes.
_Introduction.--_ Quantum precision measurement (or, quantum metrology) is one of the main quantum technology frontiers under exploration. It can reach unprecedented metrological precision owing to the use of quantum resources such as entanglement and squeezing [1; 2; 3]. However, the precision of quantum metrology is also limited by measurement strategies and some fundamental quantum principles. For instance, the quantum nature of the underlying state, the interacting evolution, and the final measurement make the measurement outcomes suffer strong statistical uncertainty. Fundamentally, the various metrological precisions must be bounded by the Heisenberg uncertainty principle or, equivalently, the Heisenberg inequality [4]. For metrological schemes taking only classical strategies, i.e., not exploiting any quantum correlations between the probes, the precision can at best scale with \(N\) (the number of probes) as \(1/\sqrt{N}\)[1; 2; 3; 4]. This is actually the shot-noise-caused scaling limit (owing to the absence of correlation), which is also, more often, referred to as the standard quantum limit (SQL), owing to taking standard classical measurement strategies. However, the standard classical strategies are not optimal. If quantum correlation is introduced, e.g., by quantum entanglement of the probes, the SQL can be surpassed by achieving the better precision of \(1/N\) scaling, which has a \(\sqrt{N}\) precision enhancement over the SQL [1; 2; 3]. This scaling is usually referred to as the Heisenberg limit (HL). For a long time, the HL was believed to be the ultimate limit, e.g., the ultimate bound on the precision of phase measurements in the case of interferometry.
However, the above conclusions on the SQL and HL hold only for linear coupling measurement. The so-called HL (scaling as \(1/N\), but not really limited by the Heisenberg principle) can be surpassed by considering nonlinear coupling measurements [4; 5; 6; 7; 8; 9; 10]. In this work, following some references, we dub metrological precision better than \(1/N\) scaling the super-Heisenberg limit (or super-Heisenberg scaling). For a general \(k_{\rm th}\)-order nonlinear coupling measurement, the main conclusions are [4; 5; 6; 7; 8; 9; 10]: _(i)_ an optimal sensitivity that scales as \(1/N^{k}\) can be achieved, if an entangled initial probe state is used as well; _(ii)_ sensitivity that scales as \(1/N^{k-1/2}\) is possible if the probe is initially prepared in a product state. In practice, nonlinear coupling can be realized in such systems as quantum optical and condensed matter systems. For instance, following the pioneering works, subsequent studies consider specific realizations by introducing a nonlinear Kerr medium in the Mach-Zehnder interferometer (MZI), and by exploring spin-based atom ensembles and condensed matter systems [11; 12; 13; 14]. Moreover, experimental demonstrations have also been carried out [15; 16; 17; 18]. For nonlinear coupling measurement, the most practice-relevant case is the second-order nonlinearity with \(k=2\). Both results, scaling as \(1/N^{2}\) (for an entangled initial probe state) and \(1/N^{\frac{3}{2}}\) (for a product initial probe state), are super-Heisenberg scalings and are thus of great interest.
In this work, we consider introducing the strategy of pre- and post-selection (PPS), which was proposed by Aharonov, Albert, and Vaidman (AAV) in the quantum weak value (WV) measurement [19; 20]. We will show that, without the need for additional quantum resources such as an entangled state of the probes, the PPS strategy can help to enhance the metrological precision scaling from \(1/N^{\frac{3}{2}}\) to \(1/N^{2}\), for the quadratic nonlinear coupling measurement. Actually, the weak-value-amplification (WVA) technique has been successfully demonstrated in quantum precision measurement in various contexts [21; 22; 23; 24; 25; 26; 27; 28]. The basic conclusion achieved by the WVA community is twofold. On the one hand, the WVA technique can indeed amplify small signals beyond the resolution of the detector, can suppress the technical noise in some practical situations, and can greatly outperform the conventional measurement (CM) in the presence of power saturation of detectors [29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. On the other hand, owing to the data discarded in the postselection, the Fisher information encoded in the postselected data (and thus the metrological precision) cannot surpass the result of conventional measurement (without involving the postselection procedure) [39; 40; 41; 42; 43; 44].
Very recently, by considering a two-state system (qubit) linearly coupled to an optical coherent state (probe meter), it was found that the PPS strategy employed in the WVA measurement can lead to some new results [45]: (i) with the increase of the measurement strength (violating the AAV limit), the WVA scheme can outperform the conventional approach; (ii) the WVA scheme can make a mixture of coherent states work better than a pure coherent state with identical average photon numbers, while the opposite conclusion was claimed in the context of conventional measurement (without using the PPS strategy) [46]. In the present work, we extend the study of Ref. [45] from linear to quadratic nonlinear coupling measurement,
and obtain new results beyond the conventional measurement by means of the PPS strategy of WVA.
_Formulation of the measurement scheme._-- Following Ref. [45], let us consider a quantum two-state (qubit) system, with states denoted as \(|1\rangle\) and \(|2\rangle\), coupled to an optical probe (meter) system prepared in the coherent state \(|\alpha\rangle\). However, in this work, rather than the linear coupling, we consider a quadratic nonlinear coupling described by \(\hat{H}^{\prime}=-\lambda\hat{\sigma}_{z}\hat{n}^{2}\), where the Pauli operator \(\hat{\sigma}_{z}=|2\rangle\langle 2|-|1\rangle\langle 1|\) describes the qubit system, and the number operator \(\hat{n}=\hat{a}^{\dagger}\hat{a}\) is for the optical meter, with \(\hat{a}^{\dagger}\) (\(\hat{a}\)) the creation (annihilation) operator of a photon. Accordingly, the evolution of the whole qubit-plus-meter system is governed by the unitary operator \(U=\exp(i\chi\hat{\sigma}_{z}\hat{n}^{2})\), where \(\chi=\int_{0}^{\tau}dt\lambda=\lambda\tau\) is the integrated strength of interaction over time \(\tau\). For quantum precision measurement, we take \(\chi\) as the parameter to be estimated. Starting with a product state of the qubit and meter, \(|\Psi_{J}^{(i)}\rangle=|i\rangle|\alpha\rangle\), where \(|i\rangle=\cos\frac{\theta_{i}}{2}|1\rangle+\sin\frac{\theta_{i}}{2}e^{i\varphi_{i}}|2\rangle\), the coupling interaction will evolve the entire system into an entangled state as
\[|\Psi_{J}\rangle=\cos\frac{\theta_{i}}{2}|1\rangle|\phi_{-}\rangle+\sin\frac{ \theta_{i}}{2}e^{i\varphi_{i}}|2\rangle|\phi_{+}\rangle\,, \tag{1}\]
where the meter's state is affected differently by the qubit states \(|1\rangle\) and \(|2\rangle\), being given by
\[|\phi_{\mp}\rangle=e^{-|\alpha|^{2}/2}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{ \sqrt{n!}}e^{\mp i\chi n^{2}}|n\rangle\,. \tag{2}\]
Then, through the coupling interaction, the parameter \(\chi\) under estimation is encoded into the meter states. One can perform various types of measurement to extract the value of \(\chi\). In this work, following the idea of WVA, we consider applying the strategy of postselection to amplify the signal of \(\chi\). To be specific, let us assume a postselection measurement of the qubit state with \(|f\rangle=\cos\frac{\theta_{f}}{2}|1\rangle+\sin\frac{\theta_{f}}{2}e^{i\varphi_{f}}|2\rangle\), and keep the final measurement results (e.g., photon numbers) of the meter only after successful postselection of the qubit state. Mathematically, the postselection is simply described as \(|\widetilde{\Phi}_{f}\rangle=\langle f|\Psi_{J}\rangle\), thus resulting in a state for the meter as
\[|\widetilde{\Phi}_{f}\rangle=\cos\frac{\theta_{i}}{2}\cos\frac{\theta_{f}}{2 }|\phi_{-}\rangle+\sin\frac{\theta_{i}}{2}\sin\frac{\theta_{f}}{2}e^{i\varphi _{0}}|\phi_{+}\rangle\,. \tag{3}\]
Here \(\varphi_{0}\) is introduced to denote the phase difference of the PPS states \(|i\rangle\) and \(|f\rangle\), i.e., \(\varphi_{0}=\varphi_{i}-\varphi_{f}\). We further normalize the postselected meter state as \(|\Phi_{f}\rangle=|\widetilde{\Phi}_{f}\rangle/\sqrt{p_{f}}\), with \(p_{f}=\langle\widetilde{\Phi}_{f}|\widetilde{\Phi}_{f}\rangle\). We may mention that \(p_{f}\) is precisely the success probability of postselection, which is explicitly given by
\[p_{f}=A+Be^{-|\alpha|^{2}}\sum_{n=0}^{\infty}\frac{|\alpha|^{2n}}{n!}\cos(2 \,\chi\,n^{2}+\varphi_{0})\,, \tag{4}\]
where \(A=\frac{1}{2}(1+\cos\theta_{i}\cos\theta_{f})\) and \(B=\frac{1}{2}\sin\theta_{i}\sin\theta_{f}\). Based on the state \(|\Phi_{f}\rangle\), one can consider a specific measurement on it, e.g., the photon-number measurement. Straightforwardly, the probability of obtaining \(n\) photons is given by \(P_{f}(n)=|\langle n|\Phi_{f}\rangle|^{2}\). Explicitly, we find
\[P_{f}(n)=\frac{e^{-|\alpha|^{2}}}{p_{f}}\left(\frac{|\alpha|^{2n}}{n!}\left[A +B\cos(2\,\chi\,n^{2}+\varphi_{0})\right]\right)\,. \tag{5}\]
In order to characterize the metrological precision of the parameter \(\chi\), we can calculate the Fisher information (FI) \(F_{f}\) of \(\chi\) encoded in the distribution function \(P_{f}(n)\), through
\[F_{f}=\sum_{n}\frac{1}{P_{f}(n)}\left(\frac{\partial P_{f}(n)}{\partial\chi} \right)^{2}\,. \tag{6}\]
Then, the estimation precision for \(\chi\) is bounded as \(\delta(\chi)\geq 1/\sqrt{p_{f}F_{f}}\), following the Cramer-Rao bound (CRB) inequality. This is the result associated with the specific scheme of photon-number measurement on \(|\Phi_{f}\rangle\). One can also consider the so-called quantum Fisher information (QFI) encoded in the state \(|\Phi_{f}\rangle\), which is given by
\[Q_{f}=4\left(\frac{d\langle\Phi_{f}|}{d\chi}\frac{d|\Phi_{f}\rangle}{d\chi}- \left|\langle\Phi_{f}|\frac{d|\Phi_{f}\rangle}{d\chi}\right|^{2}\right)\,. \tag{7}\]
Unlike the FI associated with a specific measurement scheme, the QFI represents the maximum amount of information over all possible measurement schemes. Thus, we obtain the following result for the WVA scheme
\[p_{f}Q_{f} =4\,\left[A\langle\hat{n}^{4}\rangle-\frac{1}{p_{f}}\left(C \langle\hat{n}^{2}\rangle\right)^{2}\right. \tag{8}\] \[-\left.Be^{-|\alpha|^{2}}\sum_{n=0}^{\infty}\frac{|\alpha|^{2n}}{n! }\,n^{4}\,\cos(2\,\chi\,n^{2}+\varphi_{0})\right.\] \[-\left.\frac{1}{p_{f}}B^{2}\left(e^{-|\alpha|^{2}}\sum_{n=0}^{ \infty}\frac{|\alpha|^{2n}}{n!}\,n^{2}\,\sin(2\,\chi\,n^{2}+\varphi_{0}) \right)^{2}\right]\,.\]
In this result, \(\langle\bullet\rangle\) represents the expectation over the optical coherent state \(|\alpha\rangle\). Thus we have \(\langle\hat{n}^{4}\rangle=N^{4}+6N^{3}+7N^{2}+N\), and \(\langle\hat{n}^{2}\rangle=N^{2}+N\). Here we also introduced \(C=(\cos\theta_{i}+\cos\theta_{f})/2\).
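The classical quantities of Eqs. (4)-(6) are straightforward to evaluate by truncating the Fock sums; a minimal sketch (function and variable names are ours) is:

```python
import numpy as np
from scipy.stats import poisson

def wva_fisher_info(chi, N, theta_i, theta_f, phi0, n_max=300):
    """Postselection probability p_f [Eq. (4)] and Fisher information F_f
    [Eq. (6)] of the photon-number distribution P_f(n) [Eq. (5)],
    evaluated by truncating the Fock sums at n_max."""
    n = np.arange(n_max)
    A = 0.5 * (1 + np.cos(theta_i) * np.cos(theta_f))
    B = 0.5 * np.sin(theta_i) * np.sin(theta_f)
    w = poisson.pmf(n, N)                       # coherent-state photon statistics
    c = np.cos(2 * chi * n**2 + phi0)
    s = np.sin(2 * chi * n**2 + phi0)
    pf = np.sum(w * (A + B * c))                # Eq. (4)
    Pf = w * (A + B * c) / pf                   # Eq. (5)
    dpf = np.sum(w * (-2 * B * n**2) * s)       # d p_f / d chi
    dPf = w * (-2 * B * n**2) * s / pf - Pf * dpf / pf
    mask = Pf > 1e-16
    Ff = np.sum(dPf[mask] ** 2 / Pf[mask])      # Eq. (6)
    return pf, Ff

pf, Ff = wva_fisher_info(0.01, 8, np.pi / 2, np.pi / 2, np.pi)
print("WVA-FI p_f * F_f =", pf * Ff)
```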
In this work, we term the FI (QFI) \(p_{f}F_{f}\) (\(p_{f}Q_{f}\)) the WVA-FI (WVA-QFI), and in the numerical demonstrations we especially choose the PPS parameters as \(\theta_{i}=\frac{\pi}{2}\), \(\varphi_{0}=\pi\), and \(\theta_{f}=\frac{\pi}{2}\). This choice places the WVA in the AAV regime, i.e., it makes the AAV-WV \(\sigma_{z}^{w}=\langle f|\sigma_{z}|i\rangle/\langle f|i\rangle\) divergent. Note that, for finite coupling strength, the postselected average of readout results is not divergent, but maximally amplified [44]. In the AAV limit, one can prove that the meter's state after postselection is \(\widetilde{\Phi}_{f}\sim\langle f|i\rangle e^{i\chi\sigma_{z}^{w}\hat{n}^{2}}|\alpha\rangle\), from which we find that the coupling strength is amplified as \(\widetilde{\chi}=\chi\sigma_{z}^{w}\). Actually, this reveals the basic idea of WVA.
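The divergence of the AAV weak value for this choice of PPS parameters, and the large amplification slightly away from it, can be checked with a few lines (a sketch, with the small detuning \(0.01\) chosen arbitrarily for illustration):

```python
import numpy as np

def weak_value_sigma_z(theta_i, phi_i, theta_f, phi_f):
    """AAV weak value <f|sigma_z|i>/<f|i> for the PPS states of the text."""
    i = np.array([np.cos(theta_i / 2), np.sin(theta_i / 2) * np.exp(1j * phi_i)])
    f = np.array([np.cos(theta_f / 2), np.sin(theta_f / 2) * np.exp(1j * phi_f)])
    sz = np.diag([-1.0, 1.0])   # sigma_z = |2><2| - |1><1| in the {|1>,|2>} basis
    return np.vdot(f, sz @ i) / np.vdot(f, i)

# phi_0 = phi_i - phi_f slightly detuned from pi: large amplification |sigma_z^w| >> 1
print(weak_value_sigma_z(np.pi / 2, 0.0, np.pi / 2, -np.pi + 0.01))
```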
In order to carry out a comparison with the conventional measurement, let us consider encoding the parameter \(\chi\) into the probe state \(|\alpha\rangle\) by coupling through a single state \(|1\rangle\) or \(|2\rangle\) of the qubit, but not through their superposition. Actually, in the absence of postselection, it can be proved that using the
superposed state of \(|1\rangle\) and \(|2\rangle\) will result in the same QFI as using the single state \(|1\rangle\) or \(|2\rangle\) [45]. To be specific, let us consider coupling the qubit through \(|1\rangle\) to the probe field. In this case, after the interaction, the joint state of the entire qubit-plus-meter system is \(|\Psi\rangle_{\rm cm}=|1\rangle|\phi_{-}\rangle\). Then, the QFI of \(\chi\) in the state \(|\phi_{-}\rangle\) is simply obtained as
\[Q_{\rm cm}=4\left(4\,N^{3}+6\,N^{2}+N\right)\,. \tag{9}\]
In this result, we also use \(N=|\alpha|^{2}\) to denote the mean photon number of the probe field.
_Results and discussion.--_ For linear coupling measurement, as studied in Ref. [45], the interaction-strength-encoded meter states \(|\phi_{\pm}\rangle\) have simple analytic solutions. However, for the quadratic nonlinear coupling measurement considered in this work, we are unable to find analytic solutions for \(|\phi_{\pm}\rangle\), and can only present the formal expression in terms of a power-series expansion, as shown by Eq. (2). Therefore, we resort to numerical results and the following discussion.
In Fig. 1 we plot the numerical results of \(p_{f}F_{f}\) and \(p_{f}Q_{f}\) for the WVA measurement. Indeed, as expected, we find that \(p_{f}F_{f}\) is always bounded by \(p_{f}Q_{f}\). Only at some special postselection angles, e.g., at \(\theta_{f}=3\pi/2\), can \(p_{f}F_{f}\) reach the maximum value \(p_{f}Q_{f}\). Since the WVA-FI and WVA-QFI are periodic functions of \(\theta_{f}\) with period \(2\pi\), here we only show the results of a single period. We notice that the postselection dependence varies drastically with the measurement strength \(\chi\). In Fig. 2 we show the particular \(\chi\) dependence which, non-monotonically oscillating in the stronger coupling regime, is quite different from that of the linear coupling measurement (see Fig. 2 in Ref. [45]).
Importantly, we find that, with the increase of \(\chi\), both \(p_{f}Q_{f}\) and \(p_{f}F_{f}\) can exceed the QFI \(Q_{\rm cm}\) of the conventional scheme, which is free of \(\chi\), as observed in Fig. 1 (c) and (d). This result goes somewhat beyond the usual claim among the WVA community [31; 39; 40; 41; 42; 43; 44]. That is, it was generally believed that, from either the perspective of Fisher information or the signal-to-noise ratio, the _intrinsic_ overall estimation precision cannot be enhanced by the WVA technique. The basic reason is that, despite the enhanced signal after postselection, the postselection discards a large number of measurement data and thus results in larger uncertainty fluctuations. Therefore, the WVA technique was conceived as having, at best, no theoretical advantages [39; 40; 41], but only some technical advantages in practice, such as not being bounded by the saturation limit of photo-detectors [27; 28].
We notice that the analysis leading to that conclusion was largely based on using the transverse spatial wavefunction of a light beam as the probe meter, for instance, in Ref. [43], and in many other references. In Ref. [45] and in the present work, the analysis is fully along the lines of Ref. [43]. The only difference is using an optical coherent state as the probe meter. Therefore, the claim that the WVA scheme cannot exceed the conventional measurement is not a result imposed by fundamental physics. The result displayed in Fig. 1 indicates that the nonlinear coupling WVA measurement can reach a precision better than the conventional approach in the absence of postselection. This is a further example, subsequent to the previous investigation of linear coupling measurement [45].
We also notice that the conventional nonlinear coupling measurement (without postselection) can realize super-Heisenberg scaling metrological precision. For instance, for quadratic nonlinear coupling measurement, it was found [4; 5; 6; 7; 8; 9; 10] that the precision can scale with the (average) photon number of the probe as \(1/N^{\frac{3}{2}}\). Indeed, this is in agreement with the result of the QFI shown in Eq. (9), which scales with the photon number dominantly as \(N^{3}\) in the large \(N\) limit. In this context, we may raise an interesting and practically important question: Is it possible to surpass this super-Heisenberg scaling limit, under the restriction of quadratic nonlinear coupling measurement and without using expensive quantum resources such as entangled states of the probe photons? In the following, we will show that the answer is _yes_. That is, by involving the simple PPS strategy of WVA, we can boost the metrological precision scaling with the probe photon number from \(1/N^{\frac{3}{2}}\) to \(1/N^{2}\). Actually, preliminary insight from the WVA-QFI given by Eq. (8) indicates that this is possible, owing to the leading term proportional to \(N^{4}\) in the large \(N\) limit. The only unclear point is how fast this term becomes the dominant one, after accounting for the summed pre-coefficients, when compared with the other lower-order terms in the relatively complicated expression of Eq. (8).
In Fig. 1, we have already shown that the WVA-FI can exceed the QFI of the conventional scheme for a fixed average photon number \(N\) of the probe field, by properly (even slightly) increasing the measurement coupling strength \(\chi\). Now, we show in Fig. 3 that for a fixed coupling strength \(\chi\), the WVA-FI \(p_{f}F_{f}\) can increase very fast with the average probe photon number. Actually, the increase is highly nonlinear, owing to the nonlinear measurement coupling. This implies that the precision enhancement boosted by the PPS technique of WVA will be very efficient for the nonlinear coupling measurement, by increasing the photon number of the probe field. We also notice that the WVA-FI enhancement with the increase of \(N\) is more efficient for stronger nonlinear coupling strength (while remaining restricted to the weak coupling regime). This can be understood jointly with the help of the result in Fig. 2.

Figure 1: The WVA-FI \(p_{f}F_{f}\) (solid red lines) and WVA-QFI \(p_{f}Q_{f}\) (dashed black lines) are compared with each other, and with the QFI \(Q_{\rm cm}\) of conventional measurement (dash-dotted blue lines, without using the PPS technique). The results shown in (a)-(d) correspond to measurement strengths \(\chi=0.001\), 0.005, 0.01, and 0.1, respectively. Other parameters used for the numerical calculations are: \(\theta_{i}=\frac{\pi}{2}\), \(\varphi_{0}=\pi\), and \(|\alpha|^{2}=N=8\).
In Fig. 4, we re-plot the result for measurement strength \(\chi=0.01\) in Fig. 3 to show the super-Heisenberg scaling with the average photon number (for a larger range of \(N\), roughly \(N\gtrsim 10\)). We explicitly demonstrate the dominant overall \(N^{4}\)-scaling behavior of the WVA-FI \(p_{f}F_{f}\), in comparison with the \(N^{3}\)-scaling of the QFI \(Q_{\rm cm}\) of the conventional scheme. The WVA-FI scaling behavior shown in Fig. 4 simply means that the metrological precision can be enhanced by the PPS strategy of WVA from the \(1/N^{\frac{3}{2}}\) to the \(1/N^{2}\) scaling limit, based on the CRB inequality. In this context, we may mention the long-standing issue of achieving the HL (beyond the SQL) in quantum metrology, which is extremely demanding, e.g., needing expensive resources such as quantum entanglement or squeezing. Noting that what is gained from the SQL to the HL is a \(\sqrt{N}\) precision enhancement, it thus seems extremely desirable to achieve this same precision enhancement by means of the simple PPS strategy of WVA.
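The slope comparison of Fig. 4 can be reproduced, in sketch form, by reusing the `wva_fisher_info` helper defined above and fitting the log-log dependence on \(N\) (the range of \(N\) and the fit are ours, for illustration):

```python
import numpy as np

# Reusing wva_fisher_info from the sketch above
Ns = np.arange(10, 60, 5)
chi = 0.01
wva = np.array([np.prod(wva_fisher_info(chi, n, np.pi / 2, np.pi / 2, np.pi))
                for n in Ns])                   # p_f * F_f for each N
qcm = 4.0 * (4 * Ns**3 + 6 * Ns**2 + Ns)        # Eq. (9)

# Log-log slopes correspond to the exponents k-tilde fitted in Fig. 4
slope_wva = np.polyfit(np.log(Ns), np.log(wva), 1)[0]
slope_cm = np.polyfit(np.log(Ns), np.log(qcm), 1)[0]
print(f"WVA-FI slope ~ {slope_wva:.2f} (Fig. 4 reports 4), "
      f"Q_cm slope ~ {slope_cm:.2f} (Fig. 4 reports 3)")
```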
The quadratic nonlinear coupling, in turn, can be realized in various ways based on available state-of-the-art experimental platforms. For instance, in the Mach-Zehnder interferometer (MZI), inserting a nonlinear optical Kerr medium into the interference arms can lead to quadratic nonlinear coupling with the probe laser field, while the laser field is well described by a coherent state. Since the interference paths can be described as the 'system' states \(|1\rangle\) and \(|2\rangle\), the nonlinear MZI falls into the theoretical description of our present work. Recently, based on the nonlinear MZI, interesting studies were carried out for parameter estimation of gravitational effects, such as estimating the characteristic parameters of black holes and wormholes [47; 48; 49]. In particular, the super-Heisenberg scaling \(\sim 1/N^{\frac{3}{2}}\) was highlighted [47; 48; 49], viewing that current techniques (e.g. atomic interferometry) for gravitational measurements are limited to the SQL. Even if a quantum entanglement or squeezing resource is used, only the HL precision can be achieved. Our present study, remarkably, indicates that, if the PPS strategy of WVA is further applied, the performance of the nonlinear MZI can be improved to achieve the super-Heisenberg scaling \(\sim 1/N^{2}\).
In addition to the realization in terms of the nonlinear optical MZI mentioned above, it is also possible to realize the quadratic nonlinear coupling considered in this work by employing optical cavity QED or solid-state circuit QED systems. For instance, consider a three-level atom (or artificial atom) coupled to a single-mode cavity field. For the three-level atom, besides the ground state \(|g\rangle\) and excited state \(|e\rangle\), we assume the existence of a third intermediate state \(|e^{\prime}\rangle\) that mediates a two-photon virtual transition between \(|g\rangle\) and \(|e\rangle\), which thus results in the nonlinear dispersive coupling Hamiltonian \(H^{\prime}=\chi\sigma_{z}(a^{\dagger}a)^{2}\), where \(\sigma_{z}=|e\rangle\langle e|-|g\rangle\langle g|\). The nonlinear coupling strength reads \(\chi=|g_{1}g_{2}|^{2}/(\Delta_{1}^{2}\Delta_{2})\), obtained using 4th-order perturbation theory. In this result, \(g_{1}\) and \(g_{2}\) are the single-photon transition coupling strengths
Figure 3: Average-photon-number dependence of the WVA-FI \(p_{f}F_{f}\) and the conventional QFI \(Q_{\rm cm}\). Results are shown for measurement strengths \(\chi=0.001\), 0.01, and 0.1; the \(\chi=0.001\) result has been amplified by \(10^{3}\) for visual clarity. Note also that the conventional QFI \(Q_{\rm cm}\) is independent of \(\chi\). The WVA-related PPS parameters are the same as in Fig. 2.
Figure 2: Measurement-strength dependence of the WVA-FI \(p_{f}F_{f}\) (solid red line) and WVA-QFI \(p_{f}Q_{f}\) (dashed blue line). The WVA-related PPS parameters are chosen as \(\theta_{i}=\frac{\pi}{2}\), \(\varphi_{0}=\pi\), and \(\theta_{f}=\frac{\pi}{2}\); the average photon number is taken as \(|\alpha|^{2}=N=8\).
Figure 4: Super-Heisenberg scaling with the average photon number, re-plotted using the result for measurement strength \(\chi=0.01\) in Fig. 3 (but over a larger range of \(N\)). The numerical results are fitted by the solid straight lines with slopes \(\widetilde{k}=4\) and \(3\), respectively. The fitted slope corresponds to the precision scaling \(1/N^{k}\) with \(k=\widetilde{k}/2\), according to the CRB inequality. It is then evident that the WVA-FI \(p_{f}F_{f}\) has a better precision scaling than the conventional QFI \(Q_{\rm cm}\), i.e., a \(\sqrt{N}\) precision enhancement purely owing to employing the PPS strategy in WVA.
between the states \(|g\rangle\) and \(|e^{\prime}\rangle\), and between \(|e^{\prime}\rangle\) and \(|e\rangle\), respectively. The energy detunings are \(\Delta_{1}=\omega-(E_{e^{\prime}}-E_{g})\) and \(\Delta_{2}=2\omega-(E_{e}-E_{g})\). We also mention that the interplay of cavity driving and photon loss results in a coherent steady state of the cavity field. Then, in the so-called 'bad-cavity' limit, the coupling of the cavity field to the atom and the subsequent measurement of the field [50; 51; 52; 53] are completely captured by the treatment in the present work.
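As a rough numerical illustration of the 4th-order result, the sketch below evaluates \(\chi=|g_{1}g_{2}|^{2}/(\Delta_{1}^{2}\Delta_{2})\) for hypothetical circuit-QED parameter values; only the formula comes from the text, and the numbers are chosen merely to display the weak-coupling hierarchy.

```python
import math

# Evaluate chi = |g1*g2|^2 / (Delta1^2 * Delta2), the 4th-order dispersive
# coupling strength quoted in the text. All parameter values below are
# hypothetical, chosen only to illustrate chi << g1, g2 << Delta1, Delta2.

MHz = 2 * math.pi * 1e6      # angular-frequency unit

g1 = 50 * MHz                # |g> <-> |e'> single-photon coupling (assumed)
g2 = 50 * MHz                # |e'> <-> |e> single-photon coupling (assumed)
delta1 = 2000 * MHz          # Delta1 = omega - (E_e' - E_g) (assumed)
delta2 = 1000 * MHz          # Delta2 = 2*omega - (E_e - E_g) (assumed)

chi = (g1 * g2) ** 2 / (delta1 ** 2 * delta2)
print(f"chi = 2*pi x {chi / MHz:.2e} MHz")   # ~2*pi x 1.56e-03 MHz: deeply weak coupling
```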
_Summary.--_ For the purpose of quantum metrology, we have shown that, without the need for expensive quantum resources such as entanglement of probes or squeezing, applying the PPS strategy of WVA can enhance the metrological precision scaling with the probe photon number from \(1/N^{\frac{3}{2}}\) to \(1/N^{2}\) for quadratic nonlinear coupling measurement. The basic reason for this enhancement is that the PPS technique of WVA encodes the parameter \(\chi\) into the superposed state of Eq. (3), thus raising the dominant \(N\)-scaling of the QFI from \(N^{3}\) to \(N^{4}\), a \(\sqrt{N}\) precision enhancement over the conventional approach. This better scaling means that the metrological precision can be enhanced more efficiently by increasing the photon number of the probe field, while the overall WVA-FI is also larger than the QFI of the conventional approach. The metrology scheme analyzed in this work also enjoys the general technical advantages of WVA, such as overcoming the resolution limit of the detector, owing to the amplified 'signal' (from \(\chi\) to \(\widetilde{\chi}\)).
The nonlinear metrology scheme is quite relevant to phase measurement based on the Mach-Zehnder interferometer (MZI), realized by inserting a piece of nonlinear optical Kerr medium into one of the interference arms; this setup also resembles the laser interferometers used for gravitational-wave detection, such as LIGO. Thus, LIGO may consider the use of nonlinear media and may even consider employing the PPS strategy of WVA. The PPS procedure of WVA in the MZI can be implemented by simply introducing a controllable phase shift in one of the arms, before collecting the probe light from one output port to perform photon-number (intensity) or quadrature measurement. We anticipate that the present work will motivate further exploration along this research line.
_Acknowledgements.--_ This work was supported by the NNSF of China (Nos. 11675016, 11974011 & 61905174).
|
2303.12239 | Data driven analysis of cosmic rays in the heliosphere: diffusion of
cosmic protons | Understanding the time-dependent relationship between the Sun's variability
and cosmic rays (GCR) is essential for developing predictive models of
energetic radiation in space. When traveling inside the heliosphere, GCRs are
affected by magnetic turbulence and solar wind disturbances which result in the
so-called solar modulation effect. To investigate this phenomenon, we have
performed a data-driven analysis of the temporal dependence of the GCR flux
over the solar cycle. With a global statistical inference of GCR data collected
in space by AMS-02 and PAMELA on monthly basis, we have determined the rigidity
and time dependence of the GCR diffusion mean free path. Here we present our
results for GCR protons, we discuss their interpretation in terms of basic
processes of particle transport and their relations with the dynamics of the
heliospheric plasma. | N. Tomassetti, E. Fiandrini, B. Bertucci, F. Donnini, M. Graziani, B. Khiali, A. Reina Conde | 2023-03-22T00:08:18Z | http://arxiv.org/abs/2303.12239v1 | # Data driven analysis of cosmic rays in the heliosphere: diffusion of cosmic protons
###### Abstract
Understanding the time-dependent relationship between the Sun's variability and cosmic rays (GCR) is essential for developing predictive models of energetic radiation in space. When traveling inside the heliosphere, GCRs are affected by magnetic turbulence and solar wind disturbances, which result in the so-called solar modulation effect. To investigate this phenomenon, we have performed a data-driven analysis of the temporal dependence of the GCR flux over the solar cycle. With a global statistical inference of GCR data collected in space by AMS-02 and PAMELA on a monthly basis, we have determined the rigidity and time dependence of the GCR diffusion mean free path. Here we present our results for GCR protons and discuss their interpretation in terms of basic processes of particle transport and their relation to the dynamics of the heliospheric plasma.
## I Introduction
When entering the heliosphere, Galactic Cosmic Rays (GCRs) are subject to the solar modulation effect, which causes a significant modification of the energy spectrum of their flux in comparison with the local interstellar spectrum (LIS) outside the heliosphere. To understand solar modulation, it is crucial to model the transport processes of GCRs in the solar wind and its embedded magnetic field. The main processes are diffusion, drift, advection, and adiabatic deceleration. All these processes are time dependent and follow the quasiperiodical 11-year solar cycle. Solar modulation is very important in GCR physics and heliophysics [1; 2]. Modeling the temporal evolution of GCRs in interplanetary space is also important for assessing radiation risks and hazards in long-duration crewed space missions. In this respect, the recent high-precision and time-resolved data from the AMS-02 [4; 5] and PAMELA [6; 7] experiments offer a unique possibility to study GCR modulation over a long period of time.
## II Methodology
### The model
We implemented a 2D description of the heliosphere, modeled as a spherical bubble centered on the Sun. The wind flows radially from the Sun, with a speed \(V_{sw}(r,\theta,t)\) that depends on helioradius \(r\), heliolatitude \(\theta\), and time \(t\) [11]. The solar wind drops to a subsonic speed across the termination shock at \(r_{\rm TS}=85\,\)AU and vanishes at the heliopause at \(r_{\rm HP}=122\,\)AU. The Earth is placed in the equatorial plane, at \(r_{0}=1\,\)AU. The interplanetary magnetic field (IMF) \(\vec{B}\) is wound up in a rotating spiral whose angular aperture depends on the wind speed. The heliospheric current sheet (HCS) is modeled on that structure: it is a rotating layer which divides the IMF into two hemispheres of opposite polarity. The angular amplitude of the HCS depends, in particular, on the tilt angle \(\alpha\) between the magnetic and solar rotational axes. The tilt angle is time dependent, ranging from \(\sim 10^{\circ}\) during solar minimum (flat HCS) to \(\sim 80^{\circ}\) during maximum and reversal (wavy HCS). Measurements of the tilt angle have been provided by the Wilcox Solar Observatory (WSO) on a 10-day basis since the 1970s [12].
The transport of GCRs in the heliosphere is described by the Parker equation [1]:
\[\frac{\partial f}{\partial t}=\nabla\cdot[{\bf K}^{S}\cdot\nabla f]-(\vec{V}_{sw}+\vec{V}_{D})\cdot\nabla f+\frac{1}{3}(\nabla\cdot\vec{V}_{sw})\frac{\partial f}{\partial(\ln R)} \tag{1}\]
where \(f\) is the phase space density of GCR particles, \(R=p/Z\) is their rigidity (momentum/charge ratio), \({\bf K}^{S}\) is the symmetric part of the diffusion tensor, \(\vec{V}_{sw}\) is the solar wind speed, and \(\vec{V}_{D}\) is the drift speed
\[\vec{V}_{D}=\frac{\beta R}{3}\nabla\times\frac{\vec{B}}{B^{2}}\,. \tag{2}\]
The GCR flux \(J=J(t,R)\) is given by \(J=\frac{\beta c}{4\pi}n\), where \(\beta c\) is their speed and \(n=4\pi R^{2}f\) is their number density. In this work, we solved Eq. 1 by means of the _stochastic differential equation_ method in steady-state conditions (\(\partial/\partial t=0\)) [8].
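To make the stochastic differential equation approach concrete, the sketch below integrates a deliberately simplified, spherically symmetric (1D) toy version of Eq. 1 backward in time: pseudo-particles diffuse in helioradius, are advected by the wind, change rigidity adiabatically (gaining it along the backward trajectory), and are re-weighted by the LIS once they reach the heliopause. This is a pedagogical caricature of the 2D production code, which includes drifts, the HCS, and the full diffusion tensor; the toy diffusion coefficient, LIS power law, and step sizes are all assumed.

```python
import math
import random

# 1D (spherically symmetric) toy of the backward stochastic method for Eq. 1.
# Drifts, the HCS, and the full tensor K are omitted; all numbers are illustrative.

R_HP = 122.0     # heliopause radius [AU]
V_SW = 0.231     # wind speed, ~400 km/s expressed in AU/day (assumed constant)
K0_AU = 0.05     # toy diffusion normalization [AU^2/day per GV] (assumed)

def K(r, rig):
    """Toy diffusion coefficient, linear in rigidity and flat in radius."""
    return K0_AU * rig

def lis(rig):
    """Toy LIS weight: a pure power law in rigidity (assumed)."""
    return rig ** (-2.7)

def sample_weight(rig0, r0=1.0, dt=0.5):
    """Trace one pseudo-particle backward from Earth to the heliopause and
    return its LIS weight; averaging many weights estimates f(r0, rig0)."""
    r, rig = r0, rig0
    while r < R_HP:
        k = K(r, rig)
        drift = V_SW + 2.0 * k / r                    # advection + (1/r^2) d(r^2 K)/dr
        r += drift * dt + math.sqrt(2.0 * k * dt) * random.gauss(0.0, 1.0)
        r = max(r, 0.1)                               # inner reflecting boundary (assumed)
        rig *= 1.0 + (2.0 * V_SW / (3.0 * r)) * dt    # adiabatic term, backward in time
    return lis(rig)

f_est = sum(sample_weight(1.0) for _ in range(500)) / 500
print(f"toy phase-space density at 1 AU and 1 GV: {f_est:.3e}")
```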
The diffusion of GCR particles arises from their scattering off the small-scale irregularities of the turbulent IMF. Drift motion is caused by the gradient and curvature of the regular component of the IMF, and in particular across the HCS. Diffusion and drift can be formally incorporated in the diffusion tensor \({\bf K}\) as symmetric and antisymmetric parts, respectively: \({\bf K}={\bf K}^{S}+{\bf K}^{A}\), with \(K^{S}_{ij}=K^{S}_{ji}\) and \(K^{A}_{ij}=-K^{A}_{ji}\). However, in Eq. 1, drift is explicitly accounted for by the \(\vec{V}_{D}\)-term, and thus only the symmetric part of the diffusion tensor appears in the \({\bf K}\)-term [1]. The \({\bf K}^{S}\) tensor can also be split into parallel and perpendicular diffusion components \(K_{\parallel}\) and \(K_{\perp}\), where we assume \(K_{\perp}=\xi K_{\parallel}\), with \(\xi\cong 0.02\) [13]. The corresponding _mean free paths_ are \(\lambda_{\parallel}\) and \(\lambda_{\perp}\), respectively, such that
\(K_{\parallel}=\beta c\lambda_{\parallel}/3\), where \(\beta=v/c\) is the particle speed. A large compilation of observational constraints on the parallel mean free path in the \(\sim\!0.5\,\mathrm{MV}\) - \(5\,\mathrm{GV}\) rigidity range was reported by Palmer [28]. The mean free path, however, is rigidity and time dependent. From the condition of cyclotron resonance, the scattering of GCRs occurs when their Larmor radius \(r_{L}=r_{L}(R)\) is comparable with the typical size of the irregularities \(\hat{\lambda}\). From the condition \(r_{L}\sim\hat{\lambda}\), it turns out that GCRs with rigidity \(R\) resonate at wave number \(k_{\mathrm{res}}\sim 1/R\). The IMF irregularities follow a distribution of the type \(w(k)\propto k^{-\eta}\), which is the spectrum of interplanetary turbulence expressed in terms of wave numbers \(k=2\pi/\lambda\). An important parameter is the index \(\eta\), according to which different regimes can be distinguished in the IMF power spectrum [14]. The resulting rigidity dependence of the diffusion mean free path (or coefficient) is \(\lambda_{\parallel}\sim R^{2-\eta}\). To account for different regimes in the IMF power spectrum [14], the mean free paths are often parameterized as a _double power-law_ function of the particle rigidity. For the parallel component, we have adopted the following description:
\[\lambda_{\parallel}=K_{0}\left(\frac{B_{0}}{B}\right)\left(\frac{R}{R_{0}}\right)^{a}\times\left[\frac{(R/R_{0})^{h}+(R_{k}/R_{0})^{h}}{1+(R_{k}/R_{0})^{h}}\right]^{\frac{b-a}{h}}\,, \tag{3}\]
where \(R_{0}\equiv 1\,\mathrm{GV}\) sets the rigidity scale, \(B_{0}\) is the local value of the IMF \(B\) at \(r_{0}=1\,\)AU, and the normalization factor \(K_{0}\) is given in units of \(10^{23}\,\mathrm{cm^{2}\,s^{-1}}\). The spectral indices \(a\) and \(b\) set the slopes of the rigidity dependence of \(\lambda_{\parallel}\) below and above \(R_{k}\), respectively. The parameter \(h\) sets the smoothness of the transition. The perpendicular component follows from \(\lambda_{\perp}\equiv\xi\lambda_{\parallel}\), with the addition of small corrections in the polar regions [15].
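A direct transcription of Eq. 3 is straightforward. The sketch below implements it in Python; the parameter values in the example call are placeholders for illustration, not the paper's fitted results.

```python
def lambda_parallel(R, K0, a, b, B, B0, R0=1.0, Rk=4.0, h=3.0):
    """Double power law of Eq. 3 for the parallel mean free path.

    R: rigidity [GV]; K0: normalization (10^23 cm^2/s in the text);
    a, b: spectral indices below/above the break Rk; h: break smoothness;
    B, B0: IMF intensity and its 1 AU reference value. The default Rk and h
    here are placeholders, not fitted values; the result carries the
    normalization units of K0.
    """
    brk = ((R / R0) ** h + (Rk / R0) ** h) / (1.0 + (Rk / R0) ** h)
    return K0 * (B0 / B) * (R / R0) ** a * brk ** ((b - a) / h)

# Slope ~a below the break and ~b above it (placeholder parameters):
for R in (0.5, 1.0, 4.0, 20.0, 100.0):
    print(R, lambda_parallel(R, K0=1.0, a=1.2, b=0.5, B=5.0, B0=5.0))
```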
### The key parameters
In general, all parameters entering Eq. 3 might be time-dependent [16]. We identify a minimal set of _diffusion parameters_ \(\{K_{0},a,b\}\). These parameters and their temporal dependence will be determined using time-resolved GCR proton data from AMS-02 and PAMELA [6; 7; 4]. Along with the diffusion parameters, we define a minimal set of _heliospheric parameters_ \(\{\alpha,B_{0},A\}\) that describe the time-dependent conditions of the heliosphere in a given epoch: the HCS tilt angle \(\alpha\), the local IMF intensity \(B_{0}\), and its polarity \(A\). Magnetic polarity is defined as the sign of the IMF in the outward (inward) direction from the Sun's North (South) pole. To obtain the solution of Eq. 1 for a given GCR species, the LIS has to be specified as a boundary condition. For GCR protons, our LIS model is obtained from Galactic propagation calculations and GCR flux data [17; 18; 19; 20]. The data used to constrain the GCR propagation model are from the Voyager-1 spacecraft at \(\sim\!100\,\)-\(500\,\mathrm{MeV}\) of kinetic energy [3] and from the AMS-02 experiment at \(E\sim\!100\,\mathrm{GeV}\) - \(2\,\mathrm{TeV}\) [4; 21; 22]. Our proton LIS agrees fairly well with other recent models [23; 24; 25; 10; 26; 27]. A compilation of proton LIS models is shown in Fig. 1, along with the data from Voyager-1 and AMS-02.
### The analysis
We determine the time-dependent GCR diffusion parameters by means of a statistical inference on the monthly measurements of the GCR proton fluxes reported by AMS-02 and PAMELA [6; 7; 4]. For every month, however, the heliospheric parameters have to be specified as well. They are evaluated from observations of the WSO observatory (\(\alpha\), \(A\)) and from _in situ_ measurements of the ACE space probe (\(B_{0}\)). For a given epoch \(t\), a backward moving average is calculated within a time window \([t-\Delta T,t]\), with \(\Delta T=6-12\,\mathrm{months}\). This ensures that the average values \(\hat{\alpha}\), \(\hat{A}\), and \(\hat{B}_{0}\) reflect the average IMF conditions sampled by GCRs arriving at Earth at epoch \(t\) [25; 11]. Hence, for each epoch, the diffusion parameters \(K_{0}\), \(a\), and \(b\) can be determined with a global fit to the GCR proton measurements from AMS-02 and PAMELA. In practice, to perform the fit, we built a 6D parameter grid where each node corresponds to a parameter configuration \(\vec{q}=(\alpha,B_{0},A,K_{0},a,b)\). The grid has 938,400 nodes. With the stochastic method, the GCR proton spectrum \(J_{m}(E,\vec{q})\) was calculated for each node of the grid at several values of kinetic energy between 20 MeV and 200 GeV. The simulation was highly CPU consuming: it required the simulation of 14 billion pseudo-particle trajectories, backward-propagated from Earth to the heliopause and then re-weighted according to their LIS. Once the proton grid was fully sampled, the parameters were determined as follows. The model calculation \(J_{m}(E,\vec{q})\) was evaluated as a function of the diffusion parameters, with the heliospheric parameters \(\hat{\alpha},\hat{B}_{0},\hat{A}\) kept fixed at their values for epoch \(t\). Then, for a GCR flux measurement \(J_{d}(E,t)\), as a function of energy \(E\) and observed at epoch
Figure 1: Compilation of proton LIS models from various works: long-dashed red [25], dot-dashed black [24], dotted blue [10], dotted green [23], dashed pink [20], solid orange line [27]. Data are from Voyager-1 [3] and AMS-02 [21].
\(t\), the diffusion parameters are determined by the minimization of the function:
\[\chi^{2}(K_{0},a,b)=\sum_{i}\frac{[J_{d}(E_{i},t)-J_{m}(E_{i},\vec{q})]^{2}}{ \sigma^{2}(E_{i},t)}\,, \tag{4}\]
where the errors are given by \(\sigma^{2}(E_{i},t)=\sigma_{d}^{2}(E_{i},t)+\sigma_{mod}^{2}(E_{i},t)\). The errors account for various contributions: experimental uncertainties in the data, theoretical uncertainties of the model, and uncertainties associated with the minimization procedure. The projections of the \(\chi^{2}\) surfaces calculated for two flux measurements are illustrated in Fig. 2 as functions of the GCR diffusion parameters \(K_{0}\), \(a\), and \(b\). The figure shows two distinct epochs of solar minimum (March 2009) and solar maximum (April 2014). The best-fit parameter is shown in each curve together with its uncertainty band. The data come from PAMELA (March 2009) and from the AMS-02 experiment (April 2014). It can be seen that the parameters \(K_{0}\) and \(b\) are tightly constrained by the AMS-02 data, while the parameter \(a\) is sensitive to low-rigidity data and thus is better constrained by PAMELA. In general, AMS-02 gives larger \(\chi^{2}\)-values than PAMELA, but the convergence of the fit is overall good.
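In its simplest form, the minimization of Eq. 4 over the pre-computed grid is a brute-force search over the \((K_{0},a,b)\) nodes at fixed heliospheric parameters. A minimal sketch of this step is shown below; the model grid stands in for interpolation over the pre-computed spectra, and all array shapes are illustrative.

```python
import numpy as np

def chi2_grid(J_data, sigma, J_model_grid):
    """Eq. 4 evaluated at every (K0, a, b) node at once.

    J_data, sigma:  measured fluxes and total errors, shape (n_E,)
    J_model_grid:   pre-computed model fluxes, shape (nK0, na, nb, n_E)
    Returns the chi^2 values, shape (nK0, na, nb).
    """
    resid = (J_data - J_model_grid) / sigma
    return np.sum(resid ** 2, axis=-1)

def best_fit(J_data, sigma, J_model_grid, K0_ax, a_ax, b_ax):
    """Brute-force minimizer of Eq. 4 over the diffusion-parameter grid."""
    c2 = chi2_grid(J_data, sigma, J_model_grid)
    iK, ia, ib = np.unravel_index(np.argmin(c2), c2.shape)
    return (K0_ax[iK], a_ax[ia], b_ax[ib]), c2[iK, ia, ib]
```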
## III Results and Discussion
Along with the two epochs of Fig. 2, the fits have been performed for the whole time series of CR proton flux measurements of AMS-02 and PAMELA. The AMS-02 time series consists of 79 proton fluxes measured on a 27-day basis between May 2011 and May 2017. The PAMELA series consists of 83 proton fluxes measured on a 27-day basis between June 2006 and January 2014. These data, provided on a monthly basis, cover large fractions of solar cycles 23 and 24. With the least-squares minimization described in Sect. II.3, we obtained time series of the best-fit diffusion parameters \(K_{0},a,b\), along with their uncertainties. These results are shown in Fig. 3. The fit covers epochs of solar activity that include minimum, maximum, and the IMF reversal. From the fit, we found that the diffusion parameters show a distinct time dependence associated with solar activity [11]. The parameter \(K_{0}\) is anti-correlated with the monthly sunspot number, which can be easily understood within the force-field model, where the modulation parameter is \(\phi\propto 1/K_{0}\) [10]. Smaller \(K_{0}\) values imply slower diffusion and a more significant modulation effect, _i.e._, a stronger suppression of the low-energy GCR flux. In contrast, larger \(K_{0}\) values imply faster GCR diffusion, causing a smaller modification of the LIS. We found, for instance, a positive correlation between \(K_{0}(t)\) and the flux \(J(E_{0},t)\) evaluated at a given reference energy [11]. Our findings are in good agreement with other works [24; 25; 16], in all of which GCR transport is dominated by parallel diffusion. We also find that the parameter \(b\) shows a remarkable time dependence, reflecting the connection between solar variability and the spectrum of magnetic turbulence in the inertial range. In this range, the associated spectral index evolves from \(\eta=0.74\pm 0.08\) at
Figure 3: Best-fit results for the diffusion parameters \(K_{0},a,b\) obtained with the monthly flux measurements of CR protons made by PAMELA (blue triangles) and AMS-02 (pink circles). The greenish band indicates the magnetic reversal epoch. During this period, the IMF polarity is not well defined.
Figure 2: One-dimensional projections of the \(\chi^{2}\) surfaces as functions of the transport parameters \(K_{0}\), \(a\), and \(b\), evaluated with CR proton flux data in two epochs: April 2014, corresponding to solar maximum (pink dashed lines, from AMS-02), and March 2009, corresponding to solar minimum (blue solid lines, from PAMELA).
solar minimum to \(\eta\approx 1.3\pm 0.15\) during solar maximum. This shows that the IMF turbulence is subject to variations [29; 30]. Regarding the parameter \(a\), we found a milder temporal dependence, _i.e._, a nearly constant spectral index of \(\eta=0.79\pm 0.13\). In both ranges, our findings are in agreement with direct measurements of the IMF power spectrum [14].
Once the full time series of the diffusion parameters \(K_{0},a,b\) is reconstructed, from their best-fit values of Fig. 3 it is possible to calculate the time- and rigidity-dependent diffusion mean free path \(\lambda_{\parallel}(t,R)\) using Eq. 3. The result is shown in Fig. 4, where we plot the envelope of all mean free paths as a function of GCR rigidity inferred for the periods examined with AMS-02 (pink circles) and PAMELA (blue triangles). The two bands largely overlap. The resulting mean free path for parallel diffusion is in good accordance with the so-called Palmer consensus on the observations of \(\lambda_{\parallel}\), shown in the figure as the green shaded box [28].
## IV Acknowledgements
The present work has been developed in the framework of the joint research program between the University of Perugia and the Italian Space Agency (ASI) under agreement ASI-UniPG 2019-2-HH.0. B. K., M. G., and F. D. acknowledge support from agreement ASI-INFN 2019-19-HH.0. We also acknowledge the support of the Fondo Ricerca di Base of the University of Perugia. The GCR data used in this work have been retrieved through the cosmic-ray database of the ASI Space Science Data Center.
## V Declarations
All authors have approved this manuscript, agree to the order in which their names are listed, and declare that no conflict of interest exists. The authors are responsible for the content and writing of this article.
|
2310.04563 | Modeling the Risk of In-Person Instruction during the COVID-19 Pandemic | During the COVID-19 pandemic, safely implementing in-person indoor
instruction was a high priority for universities nationwide. To support this
effort at the University, we developed a mathematical model for estimating the
risk of SARS-CoV-2 transmission in university classrooms. This model was used
to evaluate combinations of feasible interventions for classrooms at the
University during the pandemic and optimize the set of interventions that would
allow higher occupancy levels, matching the pre-pandemic numbers of in-person
courses. Importantly, we determined that requiring masking in dense classrooms
with unrestricted seating with more than 90% of students vaccinated was easy to
implement, incurred little logistical or financial cost, and allowed classes to
be held at full capacity. A retrospective analysis at the end of the semester
confirmed the model's assessment that the proposed classroom configuration
would be safe. Our framework is generalizable and was used to support reopening
decisions at Stanford University. In addition, our framework is flexible and
applies to a wide range of indoor settings. It was repurposed for large
university events and gatherings and could be used to support planning indoor
space use to avoid transmission of infectious diseases across various
industries, from secondary schools to movie theaters and restaurants. | Brian Liu, Yujia Zhang, Shane G. Henderson, David B. Shmoys, Peter I. Frazier | 2023-10-06T20:00:43Z | http://arxiv.org/abs/2310.04563v2 | # Modeling the Risk of In-Person Instruction during the COVID-19 Pandemic
###### Abstract
During the COVID-19 pandemic, implementing in-person indoor instruction in a safe manner was a high priority for universities nationwide. To support this effort at the University, we developed a mathematical model for estimating the risk of SARS-CoV-2 transmission in university classrooms. This model was used to design a safe classroom environment at the University during the COVID-19 pandemic that supported the higher occupancy levels needed to match pre-pandemic numbers of in-person courses, despite a limited number of large classrooms. A retrospective analysis at the end of the semester confirmed the model's assessment that the proposed classroom configuration would be safe. Our framework is generalizable and was also used to support reopening decisions at Stanford University. In addition, our methods are flexible; our modeling framework was repurposed to plan for large university events and gatherings. We found that our approach and methods work in a wide range of indoor settings and could be used to support reopening planning across various industries, from secondary schools to movie theaters and restaurants.
decision analysis, education systems operations, epidemiology, stochastic model applications, simulation applications
## Introduction
During the initial period of the COVID-19 pandemic, from March 2020 to May 2021, many universities switched entirely to virtual instruction because of a fear that a large outbreak in the student population could quickly overwhelm local healthcare capacity and endanger students, employees, and residents who live near campus (Walke et al. 2020, Cipriano et al. 2021). These interventions were not without costs, as they harmed the
social well-being and educational outcomes of college students (Lee et al. 2021, Dorn et al. 2020) and damaged the local economies of college towns (Payne 2020, Sullivan 2020). Prolonged campus shutdowns negatively impacted student learning (Dorn et al. 2020) and the livelihoods of those who work around campus (Sullivan 2020). Therefore, safely reopening college campuses for in-person instruction was important for universities nationwide, and will remain so in potential future pandemics.
The University in Ithaca, New York (name removed to support blinding) was a leader in safely re-opening for residential instruction (Frazier et al. 2022). In the Fall of 2020, over 75% of all students enrolled at the Ithaca campus returned for in-person instruction (Rosenberg 2020b), and extensive testing, contact tracing, and classroom de-densification protocols resulted in fewer than 200 COVID-19 cases throughout the semester out of a population of over 18,000 students (Cornell COVID-19 Modeling Team 2021b). During this semester, however, while two-thirds of students had at least one in-person class (Rosenberg 2020b), only 30% of all courses were held in-person (Srivastava and Rosenberg 2020). A mandated six-foot distancing requirement, set by the New York State Department of Health (New York State Department of Health 2021b), constrained the number of students that could be accommodated in each classroom. For example, a class with 200 students required a classroom that normally seated 1600 people. This mandate dramatically reduced the number of rooms on campus that could accommodate a large class of students. Furthermore, rooms with poor air circulation were excluded from usage, and only a limited number of classrooms could be retrofitted with HVAC to enhance ventilation due to high operational and energy costs. As a result, it was impossible to schedule many classes in person.
While the Fall 2020 semester proved that the University could safely reopen campus, and the level of in-person instruction during that semester was substantially above that offered by many other universities at the time (Patel and Lee 2022), the number of in-person classes remained significantly below pre-pandemic levels. This continued in Spring 2021: the number of in-person courses offered remained lower than pre-pandemic levels and again most students who returned to campus enrolled in hybrid schedules and took the majority of their classes virtually (Rosenberg 2020a).
The University started to gauge the possibility of offering the full roster of courses in-person when planning the Fall 2021 semester, since much of the community would have been vaccinated at the onset of the semester. However, the University faced considerable
uncertainty about the safety of offering a full roster of in-person courses. The level of safety associated with using all classrooms, not just rooms with high-quality ventilation, and filling them at greater density was not well understood. Adding to this challenge, the SARS-CoV-2 Delta variant emerged in the summer of 2021 with increased infectivity compared to the original strain and resistance to vaccines (Callaway 2021). Figure 1 shows a timeline of how the emergence of the Delta variant coincided with our planning period for the Fall 2021 semester.
To respond to this uncertainty, our team developed a modeling framework to estimate the risk of COVID-19 transmission in classrooms during the Delta wave of the pandemic. Using this framework, we determined that fully dense classrooms with mandatory masking and without special ventilation or restrictive seating plans would result in minimal risk to students, graduate student instructors, faculty, and teaching staff throughout the semester. Thus, safe in-person instruction could be offered without further enhancing ventilation in classrooms or developing fixed seating plans for each class, interventions that would have been difficult to implement. Following our recommendations, the University proceeded with dense in-person classes in Fall 2021, and empirical evidence aggregated at the end of the semester suggested that classroom transmission was extremely rare (Cornell University COVID-19 Response 2021).
Our modeling framework can be used to support the design of interventions during respiratory disease outbreaks in any context with indoor seating, from K-12 schools to restaurants and movie theaters. Our framework is flexible and allows a user to estimate the
Figure 1: Timeline of significant events during the planning period for the Fall 2021 Semester.
risk of virus transmission in rooms with various configurations. In addition, our framework can evaluate the effectiveness of various interventions, such as vaccines, masking, and ventilation. These functionalities make our framework a valuable tool for modeling indoor transmission.
**Contributions to the University**
Our modeling framework and analysis guided decision makers at the University in planning for the difficult task of resuming normal teaching operations for the Fall 2021 semester. We used our framework to recommend an implementable classroom configuration that allowed classes to meet in full density while ensuring safety for students and instructors. We also communicated our modeling approach with transparency and rigor through published analyses and town hall meetings (Cornell COVID-19 Modeling Team 2021a, Cornell University 2021a). As a result, the university was able to more effectively communicate and inform instructors, teaching assistants, students, and the broader community that returning to normal teaching operations had minimal risk.
The classroom policies that we recommended, namely mandatory masking with no distancing or additional ventilation requirements, sufficiently prevented COVID-19 transmission in classrooms. Our retrospective analyses (including contact tracing of COVID-positive students and employees, adaptive testing of students in classrooms with positive cases, and genetic sequencing of viral samples) at the end of the semester found that student travel and social events were much more influential drivers of COVID-19 spread on campus compared to classroom transmission (Cornell University 2021b).
Beyond classrooms, we also used our modeling framework to evaluate the risk of holding and attending other university events, such as Homecoming, concerts, holiday events, sporting events, and graduation. These analyses informed executive-level decisions on which events to hold throughout the Fall 2021 semester. As a result, we found our modeling approach to be useful for evaluating the risk of virus transmission in many indoor settings.
Our modeling framework and analyses were widely distributed and influenced return-to-campus decisions at other universities. Notably, Stanford University cited our analyses in their decision to return to on-campus instruction for Fall 2021 (Stanford University 2021). We believe that our success, along with the flexibility and generalizability of our modeling approach, makes our framework a useful tool for managing indoor operations during respiratory disease outbreaks and a valuable contribution towards mitigating the impacts of pandemics.
#### Related Work
Various universities leveraged mathematical models to support reopening decisions during the pandemic. In Summer 2020, the Georgia Institute of Technology used an integer programming framework to assign courses to instruction modalities (virtual, hybrid, or in-person) and courses to classrooms under social-distancing constraints (Navabi-Shirazi et al. 2022). In Fall 2020, Clemson University developed a rotational cohort-based schedule for courses, which aimed to minimize interactions between students across cohorts (Gore et al. 2022). Oklahoma State University, that same semester, developed an optimization tool in Excel to assign courses to classrooms to maximize the number of in-person courses offered (Johnson and Wilson 2022).
These modeling approaches allowed universities to resume in-person instruction, in limited capacities, in accordance with social distancing regulations. Our work is novel in that we assessed the risk of in-person instruction when social distancing regulations were relaxed. It was important for a university to understand this risk when returning to pre-pandemic levels of in-person instruction.
The rest of this paper is organized as follows. First, we describe in detail the challenges faced by the University when planning for the Fall 2021 semester. We then explain our framework for estimating the risk of COVID-19 transmission in classrooms, which includes mathematical modeling and a computer simulation. We apply our framework to evaluate different interventions and develop a strategy to safely operate dense in-person classrooms that was recommended to university leadership. We conclude with a retrospective evaluation of our model's validity and discuss its broader impact beyond modeling transmission in classrooms. Further details of our model are presented in the appendix.
#### Problem Statement
While planning for the Fall 2021 semester, the University aimed to offer as many in-person classes as possible while maintaining classroom safety. For the Fall 2020 and Spring 2021 semesters, the University had held only a limited number of de-densified classes, where the students were spaced six feet apart. The constraint of having a finite number of classrooms on campus posed a challenge, as expanding in-person classes would elevate student density in classrooms, potentially heightening the risk of indoor COVID-19 transmission. To mitigate this potential for elevated transmission risk, the University would need to
implement classroom interventions. Interventions under consideration included requiring masking, improving ventilation, increasing social distancing, and assigning seats randomly. (Assigning seats randomly would reduce the risk that unvaccinated students, who are more vulnerable to infection and have higher transmission when infected, would sit together in socially connected groups.) At the time, we had a limited understanding of the effectiveness of these interventions in preventing disease spread, whether deployed individually or combined. Amidst such uncertainty, one major goal of our modeling work was to identify a combination of interventions that could efficiently curb disease transmission within classrooms, all while maintaining a reasonable cost.
The University also faced additional concerns in the months leading up to the Fall 2021 semester. The more infectious Delta variant of COVID-19 was spreading globally and was responsible for a deadly second wave in India (Tareq et al. 2021). There was concern that the variant would spread to the United States and quickly become the dominant strain. Though many students were fully vaccinated, the vaccine's efficacy against Delta was uncertain. In addition, there was concern among the broader community, especially from faculty and student instructors, about the risk from teaching and attending classes.
Thus, the goal of our modeling work was to understand what classroom interventions would be needed to safely hold dense in-person classes, and to assess and communicate how these interventions would address the concerns that we faced heading into the Fall 2021 semester. We further discuss classroom density, classroom interventions, the Delta variant, and community concerns below.
#### Classroom Density
During the Fall 2020 semester, only 20% of courses were offered in-person. While the majority of students returned to Ithaca, few students were in classrooms on any given day during the semester. As such, the University could aggressively de-densify classrooms to reduce the potential for in-person transmission. All classrooms were configured to be socially distanced, where students were seated six feet apart (Cornell University CTRO 2020). Figure 2 shows the floor plan of a socially distanced classroom, Olin Hall 155, that can normally accommodate 287 students during normal university operations. In Fall 2020, with social distancing, the hall could only accommodate a maximum class size of 36 students, a 90% reduction from the pre-pandemic capacity.
Overall, social distancing reduced campus-wide classroom capacity by 87%. This reduced capacity was sufficient for the Fall 2020 semester, where only a fraction of courses were offered in-person under reduced schedules. However, maintaining the same distancing level for the increased in-person course schedule for Fall 2021 would have required each room to be used for more than 24 hours each day. Thus, further analysis was necessary to assess the safety of increasing classroom density.
Figure 2: Floor plan of Olin 155, a large lecture hall at the University.
#### Classroom Interventions
The potential interventions that the University could take included requiring masking, improving ventilation, increasing social distancing, and assigning seats randomly in classrooms, all of which faced implementation difficulties. Requiring masks in classrooms was the easiest intervention to execute. Under random seat assignment, students would be assigned to seats once at the beginning of the semester and required to sit in their assigned seats throughout the semester. This would reduce the chance that unvaccinated students, who have higher susceptibility and transmissibility when infected, would sit together in groups. To implement randomly assigned seats, seating charts would need to be planned in advance for each class. Increasing social distancing in classrooms would reduce classroom capacity and limit the number of in-person courses offered. Classes would also need to meet at inconvenient times (late night or early morning) to accommodate reduced classroom capacity. Finally, increasing ventilation in classrooms was the most difficult intervention to implement, due to the cost of retrofitting all classrooms with HVAC equipment. Such improvements were only made in Summer 2020 for the largest classrooms at the University (classrooms with more than 100 seats that were used for socially-distanced instruction in Fall 2020 and Spring 2021).
In Appendix Section A.2, we describe how we modeled classroom interventions to estimate their efficacy before the start of the semester. This analysis informed the university on interventions needed to ensure safety.
#### Delta Variant Uncertainties
Figure 3 shows daily COVID-19 case counts in New York State in 2021. The dot-dashed line indicates the first date where the majority of cases in New York City were determined to be from the Delta variant (New York State Department of Health 2021a). The total daily case count in the state rose steadily from that date until the start of the Fall 2021 semester, indicated by the red dashed line in the figure. In retrospect, it is apparent that the semester started during the peak of the Delta wave of the pandemic.
The emerging Delta wave presented challenges when planning for the Fall 2021 semester. First, while the Delta variant drove an increase in cases during the summer of 2021, the exact increase in Delta's infectivity compared to the original strain was not well understood at the time when the University needed to decide on classroom density. In addition, the literature on vaccine efficacy against the Delta variant was sparse. Preliminary reports
from the United Kingdom and Israel were not encouraging; early studies from the UK NHS (Andrews et al., 2021) and Israel Health Ministry (Israel Ministry of Health, 2021) estimated BNT162b2 Pfizer vaccine efficacy of 88% and 39% respectively against symptomatic illness from the Delta variant. In context, the BNT162b2 Pfizer vaccine achieved vaccine efficacy of 95% against the original strain in clinical trials (Polack et al., 2020).
Appendix Section A2 explains how we estimated the Delta variant's increased infectivity and decreased vaccine efficacy to produce models of classroom risk that are robust to uncertainty in both parameters. We used these models to determine if the University could safely hold in-person classes during the Delta wave of the pandemic.
#### Community Concerns
Prior to the start of the Fall 2021 semester, many in the community expressed concern about the safety of in-person classes. In August 2021, the University Chapter of the American Association of University Professors expressed in a letter to the University President concerns about the risk of teaching in person due to the increased transmissibility of the Delta variant (Lieberwitz, 2021). During a faculty and staff town hall the same month, multiple questions were asked about the risk of transmission from teaching class and holding office hours, masking in classrooms, and the efficacy of vaccines against the Delta variant
Figure 3: Daily COVID-19 cases counts for New York State based on reports from state and local health agencies (The New York Times, 2021).
(Cornell University 2021a). In addition, multiple faculty members emphasized that the University needed to be more transparent about how classroom safety was assessed (Cornell Faculty 2021). We developed and communicated our modeling framework to reassure the community that in-person instruction was safe using transparent, data-driven methods.
### Modeling Framework
The modeling framework we developed consists of two parts: a mathematical model used to estimate transmission risk between individuals under different conditions and a simulation tool used to evaluate overall classroom risk. We sketch the main ideas here and provide a full description of our methodologies in the appendix.
#### Main Assumptions and Parameters
Our models rely on a set of parameters, the values of which are key to the predictions. We estimated parameter values from the literature available at the time of our analysis. For parameters with high uncertainty, we imposed reasonably chosen prior distributions on their values rather than using point estimates. Our assumptions were influenced by our previous work on developing epidemiological models for COVID-19 at the University (Frazier et al. 2022).
We assumed that the Delta variant would be dominant at the University at the start of the Fall 2021 semester and would be 2.4 times as transmissible as the original COVID-19 strain (Washington et al. 2021, Callaway 2021). We conservatively estimated that 90% of the undergraduate population would be fully vaccinated at the start of the semester. Among the vaccinated population, we estimated the distributions of vaccine efficacy (VE) against infection and VE against transmission to be centered around 52% and 51%, respectively. These estimates were obtained by weighting the results from several different studies by their sample sizes. We estimated that if either the source or the susceptible person were masked, the transmission probability would be reduced by 50% to 80%. Finally, we assume perfect compliance with any masking guidelines given by the University, since compliance with COVID-19 regulations was very high in previous semesters.
#### Mathematical Model of Transmission
Given an infectious person in a classroom, we decompose the risk of transmission into a short-range and a long-range component, each representing a major mode of SARS-CoV-2
transmission (Centers for Disease Control and Prevention 2021b). The short-range component models transmission due to the deposition of virus-containing respiratory droplets onto exposed mucous membranes; the long-range component models transmission due to the inhalation of virus-containing aerosols or fine droplets. In both components, we use an exponential dose-response model (Watanabe et al. 2010), where the dose is the amount of virus a susceptible individual is exposed to. The likelihood a susceptible individual becomes infected, given dose \(D\) and a positive constant \(c\), can be expressed as
\[\mathbb{P}(\text{transmission})=1-\exp(-c\cdot D). \tag{1}\]
As the dose increases, the likelihood of transmission increases as well.
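In code, the exponential dose-response model of Eq. (1) is a one-liner. The sketch below is a minimal version; the constant `c` would in practice be calibrated as described in the appendix, and the value in the example loop is a placeholder.

```python
import math

def p_transmission(dose, c):
    """Exponential dose-response model of Eq. (1): P = 1 - exp(-c * dose)."""
    return 1.0 - math.exp(-c * dose)

# Monotonicity in the dose, with a placeholder calibration constant:
for dose in (0.1, 1.0, 10.0):
    print(dose, round(p_transmission(dose, c=0.05), 4))
```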
**Short-range Transmission** In short-range transmission, the source exhales virus-containing droplets, which are large, heavy particles that tend to deposit on the ground or other surfaces. As the droplets are heavy and cannot travel far, the concentration of droplets in the air decreases with the distance from the source case (Mittal et al. 2020).
To model the fact that students mostly face the instructor, who typically stands in the front of the classroom, we assume the source case emits virus particles in a cone of directions towards the front; we call this set of directions the source case's _cone of exposure_. We model the transmission probability in two dimensions, accounting for the distance and angle of the susceptible individual relative to the source case.
We use maximum likelihood estimation to fit the model parameters (including the angle of the cone of exposure) based on a large dataset on COVID-19 transmission aboard high-speed trains in China (Hu et al. 2021), assuming that all secondary infections in the data were due to short-range transmission. The dataset gives the relative positions between infectious index cases on the train and nearby susceptible passengers, as well as the subsequent case incidence rates among the susceptible passengers. The seating configuration of the train car is similar to a lecture hall, where all individuals face the same direction and are spaced apart by rows of seats. To the best of our knowledge, this dataset was the best available at the time we fit our model.
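To illustrate the structure of the short-range component, the sketch below encodes one plausible functional form: the dose decays with distance from the source and is nonzero only inside the cone of exposure. Both the decay shape and all parameter values are stand-ins; the actual fitted form and maximum-likelihood estimates are given in the appendix.

```python
import math

def short_range_dose(dist_m, angle_rad, cone_half_angle, d0=1.0, scale=1.0):
    """Illustrative short-range dose: exponential decay with distance,
    restricted to the source's cone of exposure. The functional form and
    parameters are assumptions, not the paper's fitted model."""
    if abs(angle_rad) > cone_half_angle:
        return 0.0
    return scale * math.exp(-dist_m / d0)

def p_short_range(dist_m, angle_rad, cone_half_angle, c=0.1):
    """Plug the dose into the exponential dose-response model of Eq. (1)."""
    return 1.0 - math.exp(-c * short_range_dose(dist_m, angle_rad, cone_half_angle))

# Risk falls off with distance and vanishes outside the cone:
print(p_short_range(0.5, 0.0, math.pi / 6))           # directly in front, close
print(p_short_range(2.0, 0.0, math.pi / 6))           # in front, farther away
print(p_short_range(0.5, math.pi / 2, math.pi / 6))   # outside the cone -> 0
```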
**Long-range Transmission** We use the model and parameters in Schijven et al. (2021) and model long-range transmission by quantifying the concentration of virus-containing aerosols or fine droplets suspended in the air. (Hereafter, we call them "aerosols".) The
model assumes that aerosols are distributed uniformly across the room. As a result, the probability of transmission does not depend on distance or angle from the source and only depends on the rate of aerosol emission from the infectious source, duration of exposure, room volume, and level of ventilation.
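The long-range component can be sketched as a standard well-mixed room calculation: a constant emission rate feeds a uniformly distributed aerosol concentration, the susceptible person inhales at a fixed breathing rate, and the cumulative inhaled dose enters Eq. (1). The structure below follows this description; the emission, breathing, and dose-response constants are placeholders rather than the calibrated values of Schijven et al. (2021).

```python
import math

def long_range_p(emission_rate, room_volume_m3, hours,
                 breathing_m3_per_h=0.5, c=1e-3, vent_factor=1.0):
    """Well-mixed sketch of the long-range (aerosol) component.

    emission_rate: virus emitted per hour by the source (placeholder units)
    vent_factor:   multiplicative dose reduction from ventilation (1 = none)
    Assumes aerosols spread instantly and uniformly through the room.
    """
    # With constant emission and no removal, the concentration grows linearly
    # in time, so its average over the exposure is half the final value:
    mean_conc = 0.5 * emission_rate * hours / room_volume_m3
    dose = mean_conc * breathing_m3_per_h * hours * vent_factor
    return 1.0 - math.exp(-c * dose)

# Larger rooms dilute the dose; longer exposures increase it:
print(long_range_p(emission_rate=1e3, room_volume_m3=500.0, hours=1.0))
print(long_range_p(emission_rate=1e3, room_volume_m3=2000.0, hours=1.0))
```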
**Overall Risk** We combine the estimated short-range and long-range transmission risk by taking the larger of the two.
When estimating the parameters for the short-range model, we assumed that all secondary infections in Hu et al. (2021) were due to short-range transmission, while in reality some cases may have arisen from long-range transmission. Therefore, the estimates for the short-range model may implicitly include some effect of long-range transmission. Setting the overall risk to the maximum, rather than the sum, of the two risks prevents overestimation. In fact, the simulated short-range risk is usually one to two orders of magnitude larger than the long-range risk within three meters, so it dominates the overall risk for those exposed to it. This is consistent with Public Health Ontario (2022), which finds that shorter distance usually implies higher transmission risk.
We assume that instructors are sufficiently distanced from the students that short-range transmission is not possible. In our model, the risk from short-range transmission is negligible beyond six feet of distance, and we assume, based on prior semesters, that most instructors spend the majority of their time more than six feet away from students.
We do not explicitly model an infectious instructor, because case investigations in the 2020-21 academic year did not reveal any faculty or student infections that were linked to classroom-based transmission (Cornell University 2021a), and faculty prevalence was much lower than that of students. In addition, the number of students in a class is typically much larger than the number of instructors. Moreover, even if the instructor is infectious in addition to an infectious student in the classroom, this merely approximately doubles the risk due to long-range transmission for each susceptible student. For the susceptible students most at risk, i.e., those sitting in the proximity of the infectious student, the risk from short-range transmission dominates that from long-range transmission by two orders of magnitude. Therefore, the expected number of secondary transmissions remains almost the same regardless of the instructor's infection status.
#### Reflections
We developed this modeling framework in Summer 2021 to support reopening decisions at the University for the Fall 2021 semester. As the body of COVID-19 related literature expands, we recommend these modifications to our framework for future use.
1. We would evaluate and compare other theoretical models for estimating the risk of COVID-19 transmission through droplets and aerosols (Mirzaei et al. 2021, Bazant and Bush 2021). In addition, we would calibrate the model to more datasets that shed light on COVID-19 transmission in enclosed spaces, such as in restaurants (Cheng et al. 2022) and theaters (Adzic et al. 2022), as well as adjust for more recent variants such as Omicron (Ji et al. 2022).
2. We would update our estimates of virus transmissibility and vaccine efficacy based on the most up-to-date findings (Ciotti et al. 2022, Wan et al. 2023).
#### Classroom Simulation Tool
In conjunction with our mathematical classroom model, our simulation tool allows us to estimate the risk of classroom transmission along with the effectiveness of various interventions, such as masking, social distancing, and increased ventilation. Figure 4 presents an illustration of the classroom simulation tool for a large lecture hall.
For each parameter setting (density level, vaccination rate, vaccine efficacy), we estimate the expected number of secondary infections in the classroom over a 1-hour period given one infectious source case among 50 students, averaged over 500 trials. (We omit the scenarios where there are two or more source cases in the same classroom at the same time. Such scenarios are unlikely compared to scenarios with one source case because prevalence is low, so they contribute little to overall risk. Further discussion is given in Appendix A1.) For each trial, we randomly generate a seating configuration and vaccination statuses among the students and randomly draw a student to be the source case. We repeat this for all combinations of density level, vaccination rate, and vaccine efficacy. We assume everyone is unmasked in the simulation. The effect of masking, modeled as an uncertain parameter with a normal prior, can be directly imposed on the results above through multiplication.
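The trial structure described above can be summarized in a few dozen lines. The sketch below reproduces the bookkeeping (random seating, random vaccination statuses, one random source case, risk accumulated over the susceptible students) but substitutes a simplified pairwise risk function for the full short-range/long-range model; the seat geometry, vaccine-effect factors, and risk kernel are all placeholders.

```python
import math
import random

def pair_risk(dist_m, src_vax, sus_vax, ve_t=0.51, ve_i=0.52, c=0.05):
    """Placeholder pairwise 1-hour risk: exponential decay with distance,
    scaled by VE against transmission (source) and infection (susceptible)."""
    dose = math.exp(-dist_m) * (1 - ve_t if src_vax else 1.0)
    p = 1.0 - math.exp(-c * dose)
    return p * (1 - ve_i if sus_vax else 1.0)

def classroom_trial(n_students=50, vax_rate=0.9, seat_pitch_m=0.6, cols=10):
    """One simulation trial: random seats and vaccination, one random source;
    returns the expected number of secondary infections in the class."""
    seats = random.sample([(r, c) for r in range(n_students // cols + 1)
                           for c in range(cols)], n_students)
    vax = [random.random() < vax_rate for _ in range(n_students)]
    src = random.randrange(n_students)
    total = 0.0
    for i in range(n_students):
        if i == src:
            continue
        d = seat_pitch_m * math.dist(seats[src], seats[i])
        total += pair_risk(d, vax[src], vax[i])
    return total

est = sum(classroom_trial() for _ in range(500)) / 500
print(f"mean secondary infections per infectious hour: {est:.3f}")
```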
#### Interventions and Scenarios Evaluated
Combining the mathematical model and classroom simulation tool, we evaluate several interventions (masking, seating policy, distancing, ventilation) across different scenarios.
_Notes_: The purple \(\mathbf{X}\) indicates the infectious source case and the purple cone indicates the cone of exposure, the set of directions in which the source case is modeled as emitting virus particles. Unvaccinated and vaccinated students are represented with red and blue dots respectively. The instructor is located on a stage sufficiently distanced from the class, far above the top margin of the illustration.
Table 1 summarizes the possible interventions along with their effectiveness against short-range and long-range transmission. We discuss these interventions and scenarios in further detail below.
**Masking:** Based on experimental and observational studies, we assume the masking effectiveness against transmission to range from 50% to 80% if either the infectious or the susceptible individual is masked (see details in Appendix A2). If both of them are masked, the risk of transmission is reduced by 75% to 96%.
| Intervention | Reduces short-range transmission | Reduces long-range transmission |
| --- | --- | --- |
| Masking | ✓ | ✓ |
| Seating policy | ✓ | - |
| Distancing | ✓ | - |
| Ventilation | - | ✓ |

Table 1: Effectiveness of intervention methods at reducing short- and long-range transmission.
Figure 4: Example illustration of the classroom simulation tool.
We evaluate the intervention of masking for the entirety of the Fall 2021 semester, and we assume that there is perfect compliance with the masking mandate, consistent with the high compliance observed in previous semesters (Cornell University 2021a).
**Seating policy:** We consider two different seating policies: (1) randomly assign students to seats and enforce that students always sit in their assigned seats (_fixed seating_); (2) allow students to sit wherever they want (_unrestricted seating_).
Unrestricted seating could be more risky in that unvaccinated students could potentially group together. This would lead to a higher expected number of transmissions since unvaccinated students are more susceptible to COVID-19 infection and have a higher transmissibility if infected (de Gier et al. 2021, Lopez Bernal et al. 2021).
On the other hand, fixed seating is operationally difficult to implement. Our initial simulations showed that the fixed and unrestricted seating policies lead to comparable risk (see Figure 10 in Appendix A1). As a result, the University adopted the unrestricted seating policy; all simulation results shown here are thus based on unrestricted seating.
**Social Distancing:** We evaluate three social distancing options. In _fully dense_ seating, the default spacing for lecture halls pre-pandemic, students are distanced one foot apart from each other in the classroom. In _moderately dense_ seating, students are distanced three feet apart. In _distanced_ seating, students are seated six feet apart. This last configuration was used in the 2020-2021 academic year during the pandemic.
**Ventilation:** For an infectious source case, we assume that aerosol viral particles are emitted continuously over the hour at a constant rate and are immediately distributed evenly across the room once emitted. We quantify ventilation rate by measuring how often air is exchanged from the room, in the unit of air exchanges per hour (ACH), and we assume that air exchanges happen evenly over time. According to the University's Facilities Department, most classrooms have a ventilation rate of 1 ACH. We assume that this rate reduces the amount of viral aerosols accumulated in the classroom over an hour by half relative to having no ventilation (American Society of Heating, Refrigerating and Air Conditioning Engineers 2002).
We evaluate the worst case, where a poorly ventilated room can have 0 ACH and the risk of aerosol transmission is not reduced at all by ventilation. In addition, we evaluate the intervention where ventilation is improved to 3 ACH, which reduces the overall dose of transmission from aerosols by a factor of 4 relative to no ventilation.
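The stated reductions (no reduction at 0 ACH, a factor of 2 at 1 ACH, a factor of 4 at 3 ACH) are all consistent with a simple multiplicative dose factor of \(1/(1+\mathrm{ACH})\). The snippet below encodes that form as one possible reading of the assumption; it can be used as the `vent_factor` in the long-range sketch above.

```python
def ventilation_factor(ach: float) -> float:
    """Multiplicative aerosol-dose reduction from ventilation, under the
    assumed form 1 / (1 + ACH): 0 ACH -> 1, 1 ACH -> 1/2, 3 ACH -> 1/4."""
    return 1.0 / (1.0 + ach)

print([ventilation_factor(a) for a in (0, 1, 3)])  # [1.0, 0.5, 0.25]
```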
**Class Type:** The type of class determines the intensity of respiratory activity that occurs in the room, which corresponds to different rates of viral aerosol emission. We assume breathing is the dominant type of respiratory activity for students attending lectures and that the effect of occasional speaking (e.g., asking and answering questions) is negligible. However, our simulation is able to handle activities such as talking and singing.
### Risk Over the Semester
Given a set of interventions, we adopt the following procedure to assess the risk of transmission for students and instructors across the entire semester. More details are given in Appendix A2. We only consider undergraduate students, but our results easily translate to graduate or professional students, who typically take fewer classes.
We first use our classroom simulation tool to estimate the risk of transmission per hour spent in the classroom \(\eta\), conditioned on the class having an infectious source case. Given average class size \(n_{0}\) and prevalence of infectious individuals at the University \(p\), the probability that a susceptible student attends a class with an infectious student is given by
\[1-(1-p)^{n_{0}-1}\approx(n_{0}-1)\cdot p.\]
This approximation is justified since \(p\) is small at the University under our projections for the Fall 2021 semester (we also impose a prior on \(p\) to incorporate uncertainty in its estimated value). Multiplying the above expression by \(\eta\) gives the unconditional probability of infection per hour of class. Assuming a student's average time spent in class per semester to be \(T\) hours, the probability of infection in class over a semester is given by
\[1-(1-\eta\cdot(n_{0}-1)\cdot p)^{T}\approx\eta\cdot(n_{0}-1)\cdot p\cdot T.\]
Lastly, we generate a distributional estimate using 100,000 samples, with each sample representing the semester-wise risk associated with a specific parameter configuration drawn from the priors.
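A minimal sketch of this sampling procedure is below, assuming the log-normal prevalence prior given in Appendix A2; the per-hour conditional risk `eta` is a placeholder standing in for the classroom simulation output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

n0, T = 50, 315   # average class size; class hours per student per semester
eta = 1e-5        # placeholder: per-hour risk given an infectious classmate

# Log-normal prior on prevalence (parameters from Appendix A2).
p = rng.lognormal(mean=-6.157, sigma=0.413, size=n_samples)

# Semester-wise risk per sample: exact form; it is well approximated by
# eta * (n0 - 1) * p * T when the hourly risk is small.
hourly = eta * (n0 - 1) * p
risk = 1.0 - (1.0 - hourly) ** T

print(f"median semester risk: {np.median(risk):.4%}")
```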
A similar procedure can be applied to faculty and graduate student instructors. We assume that the instructor is sufficiently distanced from the students so that risk only arises from transmission over long distances. We adjust the unconditional probability of infection per hour of class to account for their population sizes relative to the undergraduate population size. We also assume the \(T\) hours of classes are divided between faculty and graduate student instructors with a 2:1 ratio.
### Results and Recommendation for the Fall 2021 Semester
Here, we summarize the results and recommendations of our modeling analysis. Among the combinations of interventions deemed to be safe, enforced masking in dense classrooms with unrestricted seating is the easiest to implement. Masking reduces the risk of short-range transmission, which allows for dense seating in classrooms. As a result, more classes can be held in person given limited classroom space. Our model results show that fixed and unrestricted seating policies result in comparable risk, while the latter is easier to implement. In addition, the model shows that the reductions in transmission from masking and from distancing are similar, and both interventions are more effective than increasing ventilation.
We next present the simulated distributions of infection risk for students and instructors. We assume 15,000 undergraduate students, 850 faculty instructors, and 3,120 graduate student instructors (details are given in Appendix A2).
#### Student Classroom Risk:
We project that the median risk of infection per student due to lecture transmission in Fall 2021 is 0.5% at 90% vaccination rate. (An earlier version of this analysis (Cornell COVID-19 Modeling Team 2021a) predicted this number to be 0.4% due to outdated parameters.) Figure 5 shows the estimated distribution of this risk across 100,000 simulated outcomes; the median is indicated by the red dashed line. The right tail of the estimated risk distribution mainly results from the right tail of the log-normal prior over the prevalence parameter.
#### Instructor Classroom Risk:
We project that the median risk of infection per instructor due to lecture transmission in Fall 2021 is 0.02% for vaccinated faculty instructors and 0.003% for vaccinated graduate student instructors. The estimated distribution of risk across simulated outcomes is presented in Figures 6 and 7.
The risk is approximately doubled for an unvaccinated instructor. Since over 99% of professorial faculty had been vaccinated by the start of the semester (Rosenberg 2021), and those who chose not to be vaccinated would be highly cautious, we only show the estimated risk for the vaccinated instructors here.
The projected risk for instructors is much lower than that for students. This is mainly due to the modeling choice that they are not subject to short-distance transmission, based on the natural distancing between instructors and students in classrooms. In addition, instructors spend less time in class over a semester compared to students.
Figure 5: Distribution of risk of lecture transmission for a student across the entire Fall 2021 semester, over \(10^{5}\) simulation trials.
Figure 6: Distribution of risk of lecture transmission for a vaccinated faculty instructor across the entire Fall 2021 semester, over \(10^{5}\) simulation trials.
#### Recommendations
Based on our analysis, we believed that fully dense in-person classes, with masking enforced, could be safely implemented for the Fall 2021 semester. We estimated the total risk of classroom transmission per student across the entire semester to be around 0.5%, or roughly 1 in 200. For faculty and graduate students, the estimated risk of classroom transmission was even lower, roughly 1 in 5,000 to 40,000 across the entire semester. An individual's odds of being struck by lightning in their lifetime are on the order of 1 in 10,000 (National Weather Service 2019), which is comparable.
Under the assumption that 15,000 students would return to the University for the Fall 2021 semester, we anticipated an additional 75 cases due to classroom transmission. We did not expect these additional cases to strain the testing and quarantine capacity of the University; the University's testing infrastructure was able to handle tens of thousands of tests per week, and the University had the capacity to quarantine hundreds of students at a time. In addition, given the estimated COVID-19 hospitalization rate for college-age students of 0.005% (Centers for Disease Control and Prevention 2021a), we did not expect that any students would be hospitalized from an infection due to classroom transmission.
Figure 7: Distribution of risk of lecture transmission for a vaccinated graduate student instructor across the entire Fall 2021 semester, over \(10^{5}\) simulation trials.
Finally, assuming 850 faculty members and 3,120 graduate students serve as instructors in the Fall 2021 semester, we did not expect to observe any instructor cases linked to classroom transmission.
### Evaluation
To evaluate our modeling framework and recommendations, we retrospectively investigate COVID-19 cases from August 26, 2021 to December 7, 2021. We exclude a spike in cases due to importation of the Omicron variant in mid-December for two reasons: it happened after the end of the instruction period, when no classes were in session, and our modeling analyses and recommendations were specific to the Delta variant.
#### Student Transmission
We present the following body of evidence that minimal classroom transmission occurred among students during the Fall 2021 semester (Cornell University 2021b).
1. When a student tested positive during the Fall 2021 semester, the University tested all students attending the same class to the extent feasible. In addition, genetic sequences of positive cases were compared to determine whether cases were related. These investigations did not yield evidence of classroom transmission.
2. We collected seating data for a class held in a lecture hall that contained over 1000 students. When a student in that class tested positive, we investigated to see if any students seated near the infected student subsequently tested positive. While this data is sparse, there were 20 instances of an infected student sitting within 3 seats of susceptible students. None of these cases was associated with a susceptible student testing positive.
3. Throughout the semester, the weeks with the highest rates of on-campus transmission corresponded to breaks when classes were not held. This is consistent with travel and social gatherings, rather than classes, driving COVID-19 transmission on campus, as was also observed in previous semesters (Cornell COVID-19 Modeling Team 2021b). Figure 8 shows the daily count of new cases for undergraduate students. The outbreaks occurred right after students returned to campus from breaks.
4. Contact tracing revealed that most positive cases can be linked by social gatherings, cohabitation, or travel.
This collection of evidence strongly suggests that classroom transmission was rare during the Delta wave of the Fall 2021 semester at the University.
#### Instructor Transmission
Throughout the Fall 2021 semester, only a single faculty member tested positive for COVID-19 at the University. In addition, the prevalence of positive cases among graduate students was 4 times lower than the prevalence among undergraduate students. In all, infection rates among faculty and graduate students were much lower compared to the rest of the University population, which suggests that in-person teaching did not appreciably increase the risk of contracting COVID-19 during the Fall 2021 semester relative to other sources of transmission.
### Extensions
#### Beyond the Classroom
While the main focus of our work was to model the risk of COVID-19 transmission in lectures and classrooms, we received many requests from the University administration to assess the risk of holding extra-curricular events and gatherings during the Fall 2021 semester. We were able to modify our modeling framework to accommodate these requests; we used the same model structure but updated our parameters to model eating, singing, socializing, and other activities that occur at social gatherings. Our modeling analysis influenced the following decisions. For Homecoming weekend, we determined that the homecoming football game and the Class of 2020's belated graduation ceremony were relatively low-risk events. However, we found that parties and festivities that occur after formal events incur a substantially higher risk of COVID-19 transmission. As a result, we recommended canceling post-homecoming festivities such as the fireworks and light shows on campus; these recommendations were accepted by the administration. We did not observe a large spike in cases on campus after homecoming weekend.
Figure 8: Undergraduate daily case count during the Fall 2021 semester.
We were also asked by executive-level decision-makers at the University to assess the risk of the campus-sponsored Rosh Hashanah dinner. We determined that given the high vaccination rate on campus, this event would be safe to attend. Finally, we were asked to evaluate the risk of indoor physical education classes and music and choir classes. We modified our model to account for the increased aerosol emission due to these activities and determined that it was safe to hold these classes with dense seating configurations. No cases were linked to these courses at the end of the semester.
The flexibility of our framework in accommodating these _ad hoc_ situations indicates that our framework can be applied to other industries besides higher education, supporting reopening decisions for indoor activities across a wide range of applications.
#### Beyond COVID-19
In addition, by re-fitting the models of short-range and long-range transmission and by re-estimating the parameters for vaccine and mask efficacy, we can easily adapt our framework to model other respiratory diseases or COVID-19 variants. As such, our modeling framework can be used to assess infection risk in future pandemics across various settings.
### Conclusion
Our modeling framework for COVID-19 transmission in classrooms allowed the University to analyze the risk of holding in-person classes and compare the effectiveness of interventions. Using the recommendations provided by our modeling framework, the University was able to return to pre-pandemic levels of in-person instruction for the Fall 2021 semester, improving the educational experience of students compared to previous semesters while ensuring safety. Post-hoc analysis at the end of the semester confirmed that classroom transmission was rare and that teaching in-person classes was a low-risk activity. Finally, our modeling framework is flexible and can be adapted to model infection risk for respiratory diseases across a wide range of applications.
## Acknowledgments
This work was conducted with support from Cornell University while the authors served on the Cornell COVID-19 mathematical modeling team. This work was partially supported by National Science Foundation grant DMS2230023.
## References
* Adzic et al. (2022) Adzic F, Roberts BM, Hathway EA, Matharu RK, Ciric L, Wild O, Cook M, Malki-Epshtein L (2022) A post-occupancy study of ventilation effectiveness from high-resolution CO2 monitoring at live theatre events to mitigate airborne transmission of SARS-CoV-2. _Building and Environment_ 223:109392.
* American Society of Heating (2002) American Society of Heating, Refrigerating and Air Conditioning Engineers (2002) _ANSI-ASHRAE Standard 129-1997 (RA 2002): Measuring Air Change Effectiveness_ (ASHRAE).
* Andrews et al. (2021) Andrews N, Stowe J, Kirsebom F, Toffa S, Rickeard T, Gallagher E, Gower C, Kall M, Groves N, O'Connell AM, Simons D, Blomquist PB, Zaidi A, Nash S, Aziz NIBA, Thelwall S, Dabrera G, Myers R, Amirthalingam G, Gharba S, Barrett JC, Elson R, Ladhani SN, Ferguson N, Zambon M, Campbell CN, Brown K, Hopkins S, Chand M, Ramsay M, Bernal JL (2021) Effectiveness of COVID-19 vaccines against the Omicron (B.1.1.529) variant of concern. _MedRxiv_ 2021-12.
* Bazant and Bush (2021) Bazant MZ, Bush JW (2021) A guideline to limit indoor airborne transmission of COVID-19. _Proceedings of the National Academy of Sciences_ 118(17):e2018995118.
* Brown et al. (2021) Brown CM, Vostok J, Johnson H, Burns M, Gharpure R, Sami S, Sabo RT, Hall N, Foreman A, Schubert PL, Gallagher GR, Fink T, Madoff LC, Gabriel SB, MacInnis B, Park DJ, Siddle KJ, Harik V, Arvidson D, Brock-Fisher T, Dunn M, Kearns A, Laney AS (2021) Outbreak of SARS-CoV-2 infections, including COVID-19 vaccine breakthrough infections, associated with large public gatherings -- Barnstable County, Massachusetts, July 2021. _Morbidity and Mortality Weekly Report_ 70(31):1059.
* Callaway (2021) Callaway E (2021) Delta coronavirus variant: scientists brace for impact. _Nature_ 595(7865):17-18.
* Centers for Disease Control and Prevention (2021a) Centers for Disease Control and Prevention (2021a) COVID-net: COVID-19-associated hospitalization surveillance network. [https://gis.cdc.gov/grasp/covidnet/covid19_3.html](https://gis.cdc.gov/grasp/covidnet/covid19_3.html), Accessed: August 21, 2022.
* Centers for Disease Control and Prevention (2021b) Centers for Disease Control and Prevention (2021b) Scientific brief: SARS-CoV-2 transmission. [https://www.cdc.gov/coronavirus/2019-ncov/science/science-briefs/sars-cov-2-transmission.html](https://www.cdc.gov/coronavirus/2019-ncov/science/science-briefs/sars-cov-2-transmission.html), Accessed: June 11, 2022.
* Cheng et al. (2022) Cheng VCC, Lung DC, Wong SC, Au AKW, Wang Q, Chen H, Xin L, Chu AWH, Ip JD, Chan WM, Tsoi HW, Tse H, Ng KHL, Kwan MYW, Chuang SK, To KKW, Li Y, Yuen KY (2022) Outbreak investigation of airborne transmission of Omicron (B.1.1.529)-SARS-CoV-2 variant of concern in a restaurant: Implication for enhancement of indoor air dilution. _Journal of Hazardous Materials_ 430:128504.
* Ciotti et al. (2022) Ciotti M, Ciccozzi M, Pieri M, Bernardini S (2022) The COVID-19 pandemic: Viral variants and vaccine efficacy. _Critical Reviews in Clinical Laboratory Sciences_ 59(1):66-75.
* Cipriano et al. (2021) Cipriano LE, Haddara WM, Zaric GS, Enns EA (2021) Impact of university re-opening on total community COVID-19 burden. _PloS ONE_ 16(8):e0255782.
* Cornell COVID-19 Modeling Team (2021a) Cornell COVID-19 Modeling Team (2021a) Analysis of Fall 2021 Classroom COVID-19 Transmission. [https://covid.cornell.edu/assets/files/classroom_analysis.pdf](https://covid.cornell.edu/assets/files/classroom_analysis.pdf), Accessed: January 11, 2023.
* Cornell COVID-19 Modeling Team (2021b) Cornell COVID-19 Modeling Team (2021b) Mathematical Modeling for Cornell's Spring Semester. [https://covid.cornell.edu/assets/files/general-audience-spring-modeling-20210216.pdf](https://covid.cornell.edu/assets/files/general-audience-spring-modeling-20210216.pdf), Accessed: October 31, 2022.
* Cornell Faculty (2021) Cornell Faculty (2021) Letter to Provost Kotlikoff and Vice Provost Nishii. [https://cornellsun.com/2021/08/29/guest-room-letter-to-provost-kotlikoff-and-vice-provost-nishii/](https://cornellsun.com/2021/08/29/guest-room-letter-to-provost-kotlikoff-and-vice-provost-nishii/), Accessed: August 21, 2022.
* Cornell University (2021a) Cornell University (2021a) COVID-19 Faculty and Staff Town Hall, August 11, 2021. [https://www.cornell.edu/video/covid-19-faculty-staff-town-hall-august-11-2021](https://www.cornell.edu/video/covid-19-faculty-staff-town-hall-august-11-2021), Accessed: August 21, 2022.
* Cornell University (2021b) Cornell University (2021b) What is the university doing to track classroom transmission? [https://web.archive.org/web/20211221180155/https://covid.cornell.edu/students/academic-policies/](https://web.archive.org/web/20211221180155/https://covid.cornell.edu/students/academic-policies/), Accessed: August 21, 2022.
* Cornell University COVID-19 Response (2021) Cornell University COVID-19 Response (2021) End of semester guidance for instructors. [https://covid.cornell.edu/updates/20211211-end-semester-guidance.cfm](https://covid.cornell.edu/updates/20211211-end-semester-guidance.cfm), Accessed: October 31, 2022.
* Cornell University CTRO (2020) Committee on Teaching Reactivation Options (C-TRO). [https://covid.cornell.edu/assets/files/ctro-final-report.pdf](https://covid.cornell.edu/assets/files/ctro-final-report.pdf), Accessed: August 21, 2022.
* de Gier et al. (2021) de Gier B, Andeweg S, Joosten R, Ter Schegget R, Smorenburg N, van de Kassteele J, Hahne SJ, van den Hof S, de Melker HE, Knol MJ (2021) Vaccine effectiveness against SARS-CoV-2 transmission and infections among household and other close contacts of confirmed cases, the Netherlands, February to May 2021. _Eurosurveillance_ 26(31):2100640.
* Dorn et al. (2020) Dorn E, Hancock B, Sarakatsannis J, Viruleg E (2020) COVID-19 and student learning in the United States: The hurt could last a lifetime. _McKinsey & Company_ 1.
* Doung-Ngern et al. (2020) Doung-Ngern P, Suphanchaimat R, Panjangampathhana A, Janekrongtham C, Ruampoom D, Daochaeng N, Eungkanit N, Pisitpayat N, Srisong N, Yasopa O, Plernprom P, Promduangsi P, Kumphon P, Suangtho P, Peeriya Watakulin SC, Kripattanapong S, Chantian T, Bloss E, Namwat C, Limmathurotsakul D (2020) Case-control study of use of personal protective measures and risk for SARS-CoV-2 infection, Thailand. _Emerging Infectious Diseases_ 26(11):2607.
* Fowlkes et al. (2021) Fowlkes A, Gaglani M, Groover K, Thiese MS, Tyner H, Ellingson K, Cohorts HR (2021) Effectiveness of COVID-19 vaccines in preventing SARS-CoV-2 infection among frontline workers before and during B.1.617.2 (Delta) variant predominance -- eight US locations, December 2020-August 2021. _Morbidity and Mortality Weekly Report_ 70(34):1167.
* Frazier et al. (2022) Frazier PI, Cashore JM, Duan N, Henderson SG, Janmohamed A, Liu B, Shmoys DB, Wan J, Zhang Y (2022) Modeling for COVID-19 college reopening decisions: Cornell, a case study. _Proceedings of the National Academy of Sciences_ 119(2):e2112532119.
* Gore et al. (2022) Gore AB, Kurz ME, Saltzman MJ, Splitter B, Bridges WC, Calkin NJ (2022) Clemson University's rotational attendance plan during COVID-19. _INFORMS Journal on Applied Analytics_ 52(6):553-567.
* Harris et al. (2021) Harris RJ, Hall JA, Zaidi A, Andrews NJ, Dunbar JK, Dabrera G (2021) Effect of vaccination on household transmission of SARS-CoV-2 in England. _New England Journal of Medicine_ 385(8):759-760.
* Howard et al. (2021) Howard J, Huang A, Li Z, Tufekci Z, Zdimal V, van der Westhuizen HM, von Delft A, Price A, Fridman L, Tang LH, Tang V, Watson GL, Bax CE, Shaikh R, Questier F, Hernandez D, Chu LF, Ramirez CM, Rimoin AW (2021) An evidence review of face masks against COVID-19. _Proceedings of the National Academy of Sciences_ 118(4).
* Hu et al. (2021) Hu M, Lin H, Wang J, Xu C, Tatem AJ, Meng B, Zhang X, Liu Y, Wang P, Wu G, Xie H, Lai S (2021) Risk of coronavirus disease 2019 transmission in train passengers: an epidemiological and modeling study. _Clinical Infectious Diseases_ 72(4):604-610.
* Israel Ministry of Health (2021) Israel Ministry of Health (2021) Vaccine efficacy. [https://www.gov.il/BlobFolder/reports/vaccine-efficacy-safety-follow-up-committee/he/files_publications_corona_two-dose-vaccination-data.pdf](https://www.gov.il/BlobFolder/reports/vaccine-efficacy-safety-follow-up-committee/he/files_publications_corona_two-dose-vaccination-data.pdf), Accessed: June 13, 2022.
* Ji et al. (2022) Ji S, Xiao S, Wang H, Lei H (2022) Increasing contributions of airborne route in SARS-CoV-2 Omicron variant transmission compared with the ancestral strain. _Building and Environment_ 221:109328.
* Johnson and Wilson (2022) Johnson C, Wilson RL (2022) Practice summary: A multiobjective assignment model for optimal socially distanced classrooms for the Spears School of Business at Oklahoma State University. _INFORMS Journal on Applied Analytics_ 52(3):295-300.
* Konda et al. (2020) Konda A, Prakash A, Moss GA, Schmoldt M, Grant GD, Guha S (2020) Aerosol filtration efficiency of common fabrics used in respiratory cloth masks. _ACS Nano_ 14(5):6339-6347.
* Kumar et al. (2020) Kumar V, Nallamothu S, Shrivastava S, Jadeja H, Nakod P, Andrade P, Doshi P, Kumaraswamy G (2020) On the utility of cloth facemasks for controlling ejecta during respiratory events. _arXiv preprint arXiv:2005.03444_.
* Lee et al. (2021) Lee J, Solomon M, Stead T, Kwon B, Ganti L (2021) Impact of COVID-19 on the mental health of US college students. _BMC Psychology_ 9(1):1-10.
* Levine-Tiefenbrun et al. (2021) Levine-Tiefenbrun M, Yelin I, Katz R, Herzel E, Golan Z, Schreiber L, Wolf T, Nadler V, Ben-Tov A, Kuint J, Gazit S, Patalon T, Chodick G, Kishony R (2021) Initial report of decreased SARS-CoV-2 viral load after inoculation with the BNT162b2 vaccine. _Nature Medicine_ 27(5):790-792.
* Lieberwitz (2021) Lieberwitz R (2021) Cornell AAUP chapter letter to President Pollack. [https://cornellsun.com/2021/08/31/guest-room-cornell-aaup-chapter-letter-to-president-pollack/](https://cornellsun.com/2021/08/31/guest-room-cornell-aaup-chapter-letter-to-president-pollack/), Accessed: August 21, 2022.
* Lopez Bernal et al. (2021) Lopez Bernal J, Andrews N, Gower C, Gallagher E, Simmons R, Thelwall S, Stowe J, Tessier E, Groves N, Dabrera G, Myers R, Campbell CN, Amirthalingam G, Edmunds M, Zambon M, Brown KE, Hopkins S, Chand M, Ramsay M (2021) Effectiveness of Covid-19 vaccines against the B.1.617.2 (Delta) variant. _New England Journal of Medicine_ 385(7):585-594.
* Mirzaei et al. (2021) Mirzaei PA, Moshfeghi M, Motamedi H, Sheikhnejad Y, Bordbar H (2021) A simplified model to estimate COVID19 transport in enclosed spaces. _Journal of Physics: Conference Series_, volume 2069, 012191 (IOP Publishing).
* Mittal et al. (2020) Mittal R, Ni R, Seo JH (2020) The flow physics of COVID-19. _Journal of Fluid Mechanics_ 894.
* Morawska et al. (2009) Morawska L, Johnson G, Ristovski Z, Hargreaves M, Mengersen K, Corbett S, Chao CYH, Li Y, Katoshevski D (2009) Size distribution and sites of origin of droplets expelled from the human respiratory tract during expiratory activities. _Journal of Aerosol Science_ 40(3):256-269.
* National Weather Service (2019) National Weather Service (2019) How dangerous is lightning? [https://www.weather.gov/safety/lightning-odds](https://www.weather.gov/safety/lightning-odds), Accessed: June 13, 2022.
* Navabi-Shirazi et al. (2022) Navabi-Shirazi M, El Tonbari M, Boland N, Nazzal D, Steimle LN (2022) Multicriteria course mode selection and classroom assignment under sudden space scarcity. _Manufacturing & Service Operations Management_ 24(6):3252-3268.
* New York State Department of Health (2021a) New York State Department of Health (2021a) Covid-19: Data on variants. [https://coronavirus.health.ny.gov/covid-19-variant-data](https://coronavirus.health.ny.gov/covid-19-variant-data), Accessed: June 13, 2022.
* New York State Department of Health (2021b) New York State Department of Health (2021b) Interim guidance for higher education during the COVID-19 public health emergency. [https://www.governor.ny.gov/sites/default/files/atoms/files/Higher_Education_Detailed_Guidelines.pdf](https://www.governor.ny.gov/sites/default/files/atoms/files/Higher_Education_Detailed_Guidelines.pdf), Accessed: March 8, 2023.
* Patel and Lee (2022) Patel K, Lee I (2022) Here's what the Ivy League is deciding for fall 2020. [https://www.thedp.com/article/2020/04/penn-ivy-league-fall-semester-2020-coronavirus](https://www.thedp.com/article/2020/04/penn-ivy-league-fall-semester-2020-coronavirus), Accessed: December 12, 2022.
* Payne (2020) Payne A (2020) College towns feel financial impact of pandemic. [https://wfl.org/college-towns-feel-financial-impact-of-pandemic/](https://wfl.org/college-towns-feel-financial-impact-of-pandemic/), Accessed: October 31, 2022.
* Polack et al. (2020) Polack FP, Thomas SJ, Kitchin N, Absalon J, Gurtman A, Lockhart S, Perez JL, Perez Marc G, Moreira ED, Zerbini C, Bailey R, Swanson KA, Roychoudhury S, Koury K, Li P, Kalina WV, Cooper D, Frenck RW, Hammitt LL, Tureci O, Nell H, Schaefer A, Unal S, Tresnan DB, Mather S, Dormitzer PR, Sahin U, Jansen KU, Gruber WC (2020) Safety and efficacy of the BNT162b2 mRNA Covid-19 vaccine. _New England Journal of Medicine_ 383(27):2603-2615.
* Pouwels et al. (2021) Pouwels KB, Pritchard E, Matthews PC, Stoesser N, Eyre DW, Vihta KD, House T, Hay J, Bell JI, Newton JN, Farrar J, Crook D, Cook D, Rourke E, Studley R, Peto TEA, Diamond I, Walker AS (2021) Effect of Delta variant on viral burden and vaccine effectiveness against new SARS-CoV-2 infections in the UK. _Nature Medicine_ 27(12):2127-2135.
* Public Health Ontario (2022) Public Health Ontario (2022) COVID-19 transmission through short and long-range respiratory particles. [https://www.publichealthontario.ca/~/media/Documents/nCoV/phm/2022/01/covid-19-respiratory-transmission-range.pdf?sc_lang=en](https://www.publichealthontario.ca/~/media/Documents/nCoV/phm/2022/01/covid-19-respiratory-transmission-range.pdf?sc_lang=en).
* Puranik et al. (2021) Puranik A, Lenehan PJ, Silvert E, Niesen MJ, Corchado-Garcia J, O'Horo JC, Virk A, Swift MD, Halamka J, Badley AD, Venkatakrishnan AJ, Soundararajan V (2021) Comparison of two highly-effective mRNA vaccines for COVID-19 during periods of Alpha and Delta variant prevalence. _MedRxiv_.
* Rosenberg (2020a) Rosenberg M (2020a) Most courses will be taught online this spring. Here's how Cornell decided the roster. [https://cornellsun.com/2020/12/10/most-cornell-courses-will-be-taught-online-this-spring-heres-how-cornell-decided-the-roster/](https://cornellsun.com/2020/12/10/most-cornell-courses-will-be-taught-online-this-spring-heres-how-cornell-decided-the-roster/), Accessed: November 1, 2022.
* Rosenberg (2020b) Rosenberg M (2020b) Three-quarters of enrolled students are zooming from Ithaca this fall. [https://cornellsun.com/2020/11/08/three-quarters-of-enrolled-students-are-zooming-from-ithaca-this-fall/](https://cornellsun.com/2020/11/08/three-quarters-of-enrolled-students-are-zooming-from-ithaca-this-fall/), Accessed: October 31, 2022.
* Rosenberg (2021) Rosenberg M (2021) Vaccination, testing, masking: Here's what you need to know for Fall 2021. [https://cornellsun.com/2021/08/16/vaccination-testing-masking-heres-what-you-need-to-know-for-fall-2021/](https://cornellsun.com/2021/08/16/vaccination-testing-masking-heres-what-you-need-to-know-for-fall-2021/), Accessed: August 28, 2022.
* Schijven et al. (2021) Schijven J, Vermeulen LC, Swart A, Meijer A, Duizer E, de Roda Husman AM (2021) Quantitative microbial risk assessment for airborne transmission of SARS-CoV-2 via breathing, speaking, singing, coughing, and sneezing. _Environmental Health Perspectives_ 129(4):047002.
* Sheikh et al. (2021) Sheikh A, McMenamin J, Taylor B, Robertson C (2021) SARS-CoV-2 Delta VOC in Scotland: demographics, risk of hospital admission, and vaccine effectiveness. _The Lancet_ 397(10293):2461-2462.
* Srivastava and Rosenberg (2020) Srivastava M, Rosenberg M (2020) Two-thirds of Cornell classes are online, despite hybrid semester. [https://cornellsun.com/2020/09/06/two-thirds-of-cornell-classes-are-online-despite-hybrid-semester/](https://cornellsun.com/2020/09/06/two-thirds-of-cornell-classes-are-online-despite-hybrid-semester/), Accessed: November 1, 2022.
* Stanford University (2021) Stanford University (2021) COVID-19 update and new protocols. [https://healthalerts.stanford.edu/covid-19/2021/09/02/covid-19-update-and-new-protocols/](https://healthalerts.stanford.edu/covid-19/2021/09/02/covid-19-update-and-new-protocols/), Accessed: November 1, 2022.
* Sullivan (2020) Sullivan R (2020) College towns and COVID-19: the impact on New England. _New England Public Policy Center Regional Briefs_.
* Sun and Zhai (2020) Sun C, Zhai Z (2020) The efficacy of social distance and ventilation effectiveness in preventing COVID-19 transmission. _Sustainable Cities and Society_ 62:102390.
* Tareq et al. (2021) Tareq AM, Emran TB, Dhama K, Dhawan M, Tallei TE (2021) Impact of SARS-CoV-2 delta variant (B.1.617.2) in surging second wave of COVID-19 and efficacy of vaccines in tackling the ongoing pandemic. _Human Vaccines & Immunotherapeutics_ 17(11):4126-4127.
* The New York Times (2021) The New York Times (2021) Nytimes/COVID-19-data: A repository of data on coronavirus cases and deaths in the U.S. URL [https://github.com/nytimes/covid-19-data](https://github.com/nytimes/covid-19-data).
* Walke et al. (2020) Walke HT, Honein MA, Redfield RR (2020) Preventing and responding to COVID-19 on college campuses. _JAMA_ 324(17):1727-1728.
* Wan et al. (2023) Wan J, Cazer CL, Clarkberg ME, Henderson SG, Lee SE, Meredith GR, Osman M, Shmoys DB, Frazier PI (2023) Booster vaccination protection against SARS-CoV-2 infections in young adults during an Omicron BA.1-predominant period: A retrospective cohort study. _PLoS Medicine_ 20(1):e1004153.
* Wang et al. (2020) Wang Y, Tian H, Zhang L, Zhang M, Guo D, Wu W, Zhang X, Kan GL, Jia L, Huo D, Liu B, Wang X, Sun Y, Wang Q, Yang P, MacIntyre CR (2020) Reduction of secondary transmission of SARS-CoV-2 in households by face mask use, disinfection and social distancing: a cohort study in Beijing, China. _BMJ Global Health_ 5(5):e002794.
* Washington et al. (2021) Washington NL, Gangavarapu K, Zeller M, Bolze A, Cirulli ET, Schiabor Barrett KM, Larsen BB, Anderson C, White S, Cassens T, Jacobs S, Levan G, Nguyen J, Ramirez JM, Rivera-Garcia C, Sandoval E, Wang X, Wong D, Spencer E, Robles-Sikisaka R, Kurzban E, Hughes LD, Deng X, Wang C, Servellita V, Valentine H, De Hoff P, Seaver P, Sathe S, Gietzen K, Sickler B, Antico J, Hoon K, Liu J, Harding A, Bakhtar O, Basler T, Austin B, MacCannell D, Isaksson M, Febbo PG, Becker D, Laurent M, McDonald E, Yeo GW, Knight R, Laurent LC, de Feo E, Worobey M, Chiu CY, Suchard MA, Lu JT, Lee W, Andersen KG (2021) Emergence and rapid transmission of SARS-CoV-2 B.1.1.7 in the United States. _Cell_ 184(10):2587-2594.e7, ISSN 0092-8674.
* Watanabe et al. (2010) Watanabe T, Bartrand TA, Weir MH, Omura T, Haas CN (2010) Development of a dose-response model for SARS coronavirus. _Risk Analysis: An International Journal_ 30(7):1129-1138.
## Appendix. Modeling and Simulation Details
We describe the details of our modeling and simulation for estimating secondary infections in classrooms and projecting to the risk over the entire semester. In Section A1, we introduce the mechanism behind our simulation tool. In Section A2, we state the assumptions and parameter estimates in different components of the simulation. In Section A3, we develop the mathematical model for the transmission probability between a source case and a susceptible person over different distances, which is an important component in the simulation. Throughout the appendix, we use \(\boldsymbol{\#}\) to denote "number of" and \(\boldsymbol{\%}\) to denote "fraction of".
### A1. Simulation
We implement a simulation tool in Python to simulate classrooms under different scenarios. We investigate the effect of different intervention measures (masking, increasing distancing, ventilation) on transmission risk under varying simulation parameters (vaccination rate among students, vaccine efficacy). We describe the simulation mechanism in this section. In Section A2, we discuss the assumptions and values for these parameters in more detail.
Our simulation has two stages. In the first stage, we estimate the conditional probability that a student or instructor is infected in a one-hour class given there is one positive source case in the same class. In the second stage, we extrapolate the probability that a student or instructor becomes infected due to attending or teaching classes over the entire semester.
We omit scenarios in which there are two or more source cases in the same classroom at the same time. Such scenarios are relatively unlikely because prevalence is low. Moreover, when multiple positives do occur in the same classroom at the same time, reality is slightly more optimistic than our estimates: in our estimates, everyone else in the classroom is susceptible, while in reality the other infectious individual cannot be infected again. In addition, the increase in the risk of infection created by adding a second positive is smaller than the increase created by adding a first positive1. The level of optimism introduced by this fact is extremely small, however, and our assumption produces nearly the same estimate as one that allows multiple positives to be in the same classroom.
Footnote 1: This follows (1) when the dose resulting from each positive is independent and identically distributed and (2) from concavity of the probability of infection as a function of the dose, given in Appendix A3. In particular, let \(V_{1}\) and \(V_{2}\) be the (strictly positive) dose to a given susceptible person associated with the first and second positives in a classroom, so that the dose is \(V_{1}\) if there is one positive in the classroom and \(V_{1}+V_{2}\) if there are two. We assume that \(V_{1}\) and \(V_{2}\) are independent and identically distributed after marginalizing over the random locations of the two positive individuals. Then let \(P(v)\) be the probability of infection given a dose \(v\), as given in Section A3. Since \(P\) is concave, and also using that \(P(0)=0\), then for strictly positive \(V_{1}\) and \(V_{2}\), \(P(V_{1}+V_{2})-P(V_{1})\leq P(V_{2})-P(0)\). Then, because \(V_{1}\) and \(V_{2}\) are identically distributed, \(\mathbb{E}[P(V_{1}+V_{2})-P(V_{1})]\leq\mathbb{E}[P(V_{2})-P(0)]=\mathbb{E}[P( V_{2})]=\mathbb{E}[P(V_{1})]\). The left-hand side \(\mathbb{E}[P(V_{1}+V_{2})-P(V_{1})]\) is the increase in the risk of infection created by adding a second positive, and the right-hand side \(\mathbb{E}[P(V_{1})]\) is the increase in risk from adding the first positive.
#### Generating Classroom Seating Arrangements
We simulate seating at different density levels by assigning a fixed number of individuals to classrooms of different sizes. We assume that an average class contains \(n_{0}=50\) students and one instructor. From university floor plans, we identify three representative rooms, namely Hollister 206, Gates G01, and Rockefeller 201, that correspond to roughly 1', 3', and 6' distancing respectively for 50 students. The corresponding seating capacities are presented in Table 2.
Using the seating plan tool developed by the Committee on Teaching Reactivation Options (C-TRO) team (Greenberg et al. 2021), we identify seats in these classrooms and assign maximally distanced seats such that approximately 50 students can fit in each room. The generated seating plans are displayed in Figure 9. The rooms used have extra space above and beyond what is required to accommodate the social distancing requirements we have assumed. This extra room does not offer additional benefit in our simulations as it is not used. We assume the instructor is standing in the front of the classroom, with at least 6 feet distance from all the students.
We initially consider two seating policies: (1) fixed seating, where students are randomly assigned to their own seats independent of their vaccination statuses; (2) unrestricted seating, where students can freely choose among the allowed seats. For unrestricted seating, we pessimistically assume that unvaccinated students tend to sit together, as the same demographic factors that cause students to be unvaccinated upon arrival to Ithaca may also create social connections. Since fixed seating is operationally difficult to implement, and initial simulations show that these two seating policies lead to comparable risk (Figure 10), the University decided to adopt unrestricted seating. Figure 11 shows examples of unrestricted seating arrangements at different distancing levels.
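As an illustration of seat selection, a greedy farthest-point heuristic can pick approximately maximally distanced seats from a room's candidate seat coordinates. This is a stand-in we supply for exposition; the actual C-TRO seating plan tool may use a different algorithm.

```python
import numpy as np

def pick_distanced_seats(coords, k, seed=0):
    """Greedily select k seats that are approximately maximally spread out.

    Farthest-point heuristic: start from an arbitrary seat, then repeatedly
    add the candidate seat farthest from all seats chosen so far.
    """
    coords = np.asarray(coords, dtype=float)
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(coords)))]
    dist = np.linalg.norm(coords - coords[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))   # candidate farthest from the chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(coords - coords[nxt], axis=1))
    return coords[chosen]

# Example: choose 12 well-spread seats from a 52-seat grid (per Table 2, a
# Hollister 206-like room holds 52 seats pre-COVID and 12 at 6' distancing).
grid = [(2.0 * i, 3.0 * j) for i in range(13) for j in range(4)]
seats = pick_distanced_seats(grid, 12)
```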
\begin{table}
\begin{tabular}{|l|l|l|} \hline Room & Pre-COVID capacity (1’ distancing) & COVID capacity (6’ distancing) \\ \hline Hollister 206 & 52 & 12 \\ \hline Gates G01 & 156 & 22 \\ \hline Rockefeller 201 & 383 & 56 \\ \hline \end{tabular}
\end{table}
Table 2: Seating capacity for Hollister 206, Gates G01, and Rockefeller 201.
Figure 9: Seating plans generated for Hollister 206, Gates G01, and Rockefeller 201. They correspond to 1’, 3’, and 6’ social distancing, respectively. The green dots represent available seats (i.e., seats that are allowed to be occupied) and the empty circles are considered unavailable in our simulation tool.
Figure 10: Comparison of fixed and unrestricted seating policies. We show the average number of student secondary infections over a one-hour lecture with a positive student for different vaccination rates (from 40% to 100%), distancing levels (1, 3, and 6 feet distancing), and masking rates (0% and 100%). The differences between the solid lines (unrestricted seating) and dashed lines (fixed seating) are small and further decrease as vaccination rate increases. The vaccination rate we assume in our subsequent simulations, 90%, is indicated by the grey dotted line.
Figure 11: Example simulated seating arrangement of 50 students at different levels of social distancing. At each distancing level, students are placed into available seats shown in Figure 9. The red dots represent unvaccinated students, the blue dots represent vaccinated students, and the empty circles represent unavailable seats due to the distancing requirement or empty available seats. We assume pessimistically that unvaccinated students tend to sit together.
#### Stage 1: Simulating Infections in a Single Classroom
In the first stage, we estimate the conditional probability that a student or instructor is infected in a one-hour class given there is one positive source case in the same class. We do this by simulating the expected number of secondary infections that occur in a one-hour class with \(n_{0}\!=\!50\) students, where initially a single student is infected. The simulation depends on the following configuration parameters: seating density, percentage of students vaccinated, and vaccine efficacy (VE). In particular, vaccine efficacy \(\mathtt{VE}\!=\!(v_{\rm source},v_{\rm susceptible})\) is a tuple of two parameters characterizing the reduction in infectivity of a vaccinated source case and the reduction in infection probability of a vaccinated susceptible person, respectively. We assume each student is vaccinated independently with probability equal to the vaccination rate among the students.
In this stage, we assume everyone is unmasked. This allows the flexibility to model varying masking rate among students as well as uncertainty in masking effectiveness in the next stage of the simulation.
We loop over all the possible combinations of seating density and VE (values of VE are discussed in detail in Section A2). For each configuration with fixed parameter values, one run of the simulation proceeds in the following steps:
* Simulate a seating arrangement of \(n_{0}=50\) students in the corresponding classroom.
* Choose one student uniformly at random as the source case and simulate their vaccination status.
* For each susceptible student, first simulate their vaccination status, then compute the probability \(p\) that they are infected over the 1-hour lecture depending on their relative location to the source case. The mathematical model that produces this probability is derived in Appendix A3.
* Return the sum of the individual infection probabilities in the last step divided by \((n_{0}-1)\). This is the average probability that a susceptible student becomes infected in a one-hour class given that there is an infectious student in the class, denoted \(\eta_{\text{student}}(\text{density},\text{VE})\).
* We also compute the infection probability for the instructor, who is assumed to be subject to only the risk of aerosol transmission due to sufficient distancing, denoted \(\eta_{\text{instructor}}(\text{density},\text{VE})\).
For each simulation configuration, we perform 500 replications to obtain the average conditional probability of infection over 1 hour given a positive case in the classroom.
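A condensed sketch of one such replication is below. The pairwise transmission kernel `p_transmit` and the toy seating grid are stand-ins we supply for illustration (the real distance model is developed in Appendix A3); only the bookkeeping of vaccination statuses and the averaging over susceptible students mirrors the steps above.

```python
import numpy as np

rng = np.random.default_rng()

def stage1_replication(seats, beta_vacc, v_source, v_susceptible, p_transmit):
    """One Stage 1 replication: average probability that a susceptible student
    is infected over a one-hour class, given one infectious student."""
    n0 = len(seats)
    source = int(rng.integers(n0))
    # Vaccination status of the source uses P(vaccinated | infected),
    # obtained by Bayes' rule (Appendix A2, item 4).
    p_vacc_inf = (1 - v_susceptible) * beta_vacc / (1 - v_susceptible * beta_vacc)
    src_factor = (1 - v_source) if rng.random() < p_vacc_inf else 1.0

    total = 0.0
    for i in range(n0):
        if i == source:
            continue
        sus_factor = (1 - v_susceptible) if rng.random() < beta_vacc else 1.0
        d = np.linalg.norm(seats[i] - seats[source])
        total += p_transmit(d) * src_factor * sus_factor
    return total / (n0 - 1)

# 500 replications with a toy distance kernel standing in for the A3 model.
seats = np.array([(3.0 * i, 3.0 * j) for i in range(10) for j in range(5)])
toy_kernel = lambda d: 0.01 * np.exp(-d / 6.0)
eta_student = np.mean([stage1_replication(seats, 0.90, 0.50, 0.66, toy_kernel)
                       for _ in range(500)])
```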
**Stage 2: Extrapolating Classroom Transmission Risk over the Semester**
In the second stage, we extrapolate the probability of infection over the entire semester based on results in Stage 1 and additional parameters (prevalence, masking effectiveness, and fraction of population masked).
To deal with uncertainty in the parameter values, we impose independent prior distributions on each of them. We sample parameter configurations from the prior and compute the output at each configuration, which produces a distribution of the output. In particular, for each sampled parameter configuration (vaccine efficacy VE = \((v_{\text{source}},v_{\text{susceptible}})\), masking effectiveness \(m\), prevalence \(p\)) and a selected seating density, we perform the following computation for an average student:
* Get \(\eta_{\text{student}}(\text{density},\text{VE})\) for the seating density and sampled vaccine efficacy from Stage 1 simulation results.
* Adjust for masking according to \(\beta_{\text{masked}}\), the fraction of students wearing a mask, and masking effectiveness \(m\), the reduction in transmission probability due to mask-wearing. We obtain the transmission probability adjusted for masking, conditioned on the existence of an infectious student in the class (we omit the terms (density, VE) for brevity): \[\tilde{\eta}_{\text{student}}=\eta_{\text{student}}(\text{density},\text{VE })\cdot[\beta_{\text{masked}}\cdot(1-m)+(1-\beta_{\text{masked}})].\]
This expression is an approximation in that we set \(m\) to be the reduction in transmission probability when both the source and susceptible individuals are masked, and we are not explicitly modeling the case where only one of them is masked. In practice, we assume \(\beta_{\text{masked}}=1\) in our modeling since masking compliance in previous semesters was very high (Cornell University 2021a).
* Under the assumption that infectious students are uniformly distributed across classes, the probability that a susceptible student attends class with an infectious student is given by \[1-(1-p)^{n_{0}-1}\approx(n_{0}-1)\cdot p,\] where the approximation is justified since prevalence, \(p\), is small in practice. Thus, the unconditional infection probability for a student over a one-hour lecture is given by \[\tilde{\eta}_{\text{student}}\cdot(n_{0}-1)\cdot p.\]
* Extrapolate the infection probability over the semester. Suppose an undergraduate student spends on average \(\tau_{\text{UG}}\) hours in class in a semester. The probability of infection over \(\tau_{\text{UG}}\) hours is \[\text{Risk}_{\text{student}}=1-(1-\tilde{\eta}_{\text{student}}\cdot(n_{0}-1) \cdot p)^{\tau_{\text{UG}}}\approx\tilde{\eta}_{\text{student}}\cdot(n_{0}-1) \cdot p\cdot\tau_{\text{UG}}.\] (2)
The above procedure does not eliminate already-infected students from the susceptible pool over time, so the estimated results are more pessimistic than reality. A similar computation can be done for faculty and graduate student instructors.
* We adjust for masking for instructors in a similar way as for students: \[\tilde{\eta}_{\text{instructor}}=\eta_{\text{instructor}}(\text{density}, \text{VE})\cdot[\beta_{\text{masked}}\cdot(1-m)+(1-\beta_{\text{masked}})].\]
* We assume faculty instructors teach a fraction \(\beta_{\text{faculty}}\) of all class hours, while graduate students teach the rest. Let \(n_{\text{UG}}\), \(n_{\text{faculty}}\), and \(n_{\text{graduate}}\) denote the number of undergraduates, faculty instructors, and graduate student instructors, respectively. The average hours a faculty instructor spends in class teaching in a semester is \[\tau_{\text{faculty}}=\frac{n_{\text{UG}}\cdot\tau_{\text{UG}}\cdot\beta_{ \text{faculty}}}{n_{0}\cdot n_{\text{faculty}}},\] where \(n_{\text{UG}}\cdot\tau_{\text{UG}}/n_{0}\) gives the total undergraduate lecture hours to be taught by all instructors. Similarly, the average hours a graduate student instructor spends in class teaching in a semester is \[\tau_{\text{graduate}}=\frac{n_{\text{UG}}\cdot\tau_{\text{UG}}\cdot(1-\beta_ {\text{faculty}})}{n_{0}\cdot n_{\text{graduate}}}.\]
* With the same reasoning as in Equation 2, we derive the risk for instructors over the semester: \[\text{Risk}_{\text{faculty}} = 1-(1-\tilde{\eta}_{\text{instructor}}\cdot(1-(1-p)^{n_{0}}))^{ \tau_{\text{faculty}}}\approx\tilde{\eta}_{\text{instructor}}\cdot n_{0}\cdot p \cdot\tau_{\text{faculty}}.\] \[\text{Risk}_{\text{graduate}} = 1-(1-\tilde{\eta}_{\text{instructor}}\cdot(1-(1-p)^{n_{0}}))^{ \tau_{\text{graduate}}}\approx\tilde{\eta}_{\text{instructor}}\cdot n_{0} \cdot p\cdot\tau_{\text{graduate}}.\]
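The following sketch carries out this Stage 2 arithmetic for a single sampled parameter configuration, using the point values from Table 3; the two `eta` values are placeholders for Stage 1 outputs, and the variable names are ours.

```python
# Point values from Table 3; eta values are placeholders for Stage 1 outputs.
n0, n_UG, n_faculty, n_graduate = 50, 15_000, 850, 3_120
tau_UG, beta_faculty, beta_masked = 315, 2 / 3, 1.0

eta_student = 1e-5       # placeholder: eta_student(density, VE) from Stage 1
eta_instructor = 1e-6    # placeholder: long-range-only instructor risk
m, p = 0.855, 2.1e-3     # sampled masking effectiveness and prevalence

adjust = beta_masked * (1 - m) + (1 - beta_masked)   # masking adjustment
eta_s = eta_student * adjust
eta_i = eta_instructor * adjust

# Semester-wise risks (exact forms; each is close to its linear approximation).
risk_student = 1 - (1 - eta_s * (n0 - 1) * p) ** tau_UG

tau_faculty = n_UG * tau_UG * beta_faculty / (n0 * n_faculty)          # ~74 h
tau_graduate = n_UG * tau_UG * (1 - beta_faculty) / (n0 * n_graduate)  # ~10 h
risk_faculty = 1 - (1 - eta_i * (1 - (1 - p) ** n0)) ** tau_faculty
risk_graduate = 1 - (1 - eta_i * (1 - (1 - p) ** n0)) ** tau_graduate
```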
In the next section we give details on how we obtain estimates for \(\tau_{\text{UG}}\), \(n_{\text{UG}}\), \(n_{\text{graduate}}\), \(n_{\text{faculty}}\), and \(\beta_{\text{faculty}}\).
### A2. Assumptions and Parameters
This section presents our assumptions and estimates for the parameters used in the modeling. A summary is given in Table 3. For parameters with high uncertainty, we design sensible prior distributions for their values rather than using a point estimate. From the joint priors on the parameters, we sample \(10^{5}\) parameter configurations and obtain a distribution for \(\text{Risk}_{\text{student}}\), \(\text{Risk}_{\text{faculty}}\), and \(\text{Risk}_{\text{graduate}}\) respectively. We treat the median as a main point estimate and use the 5% and 95% quantiles as optimistic and pessimistic estimates. Running simulations at a large number of parameter configurations sampled from the priors enables a better understanding of how the possible outcomes are distributed.
1. **The Delta variant**
   1. We assume the Delta variant had dominated all infections by the start of the Fall 2021 semester.
   2. We assume the Delta variant is **2.4** times as transmissible as the original strain. The Alpha variant was approximately 50% more transmissible than the original SARS-CoV-2 (Washington et al. 2021), and Delta is approximately 60% more transmissible than Alpha (Callaway 2021). This gives a multiplicative increase of 1.5 × 1.6 = 2.4.
2. **Type of respiratory activity** Respiratory activities of different intensities are associated with varying transmissibility. We assume breathing is the dominant type of respiratory activity for students attending lectures and that the effect of occasional speaking (e.g., asking and answering questions) is negligible. However, our simulation is able to handle activities such as talking and singing.
3. **Level of vaccination among undergraduates** (\(\beta_{\rm vaccinated}\)) By the start of the Fall 2021 semester, 99% of undergraduate students and professorial faculty were fully vaccinated (Rosenberg, 2021). Erring on the conservative side, we used **90%** for the fraction of vaccinated undergraduates at the time the analysis was performed. We compute the infection probability for vaccinated and unvaccinated instructors separately.
4. **Vaccine efficacy** (\(\mathtt{VE}\!=\!(v_{\rm source},v_{\rm susceptible})\)) We base our estimates of vaccine efficacy on the literature available at the time of our modeling.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Parameter** & **Value / Prior distribution** \\ \hline transmissibility of the Delta variant & 2.4 times that of the original strain \\ \hline respiratory activity & breathing \\ \hline \(\beta_{\rm vaccinated}\), fraction of students vaccinated & 90\% \\ \hline \(v_{\rm source}\), reduction in a vac’d source’s infectivity & see Table 4 \\ \hline \(v_{\rm susceptible}\), reduction in a vac’d susceptible’s infection prob. & see Table 4 \\ \hline \(\beta_{\rm masked}\), fraction of students masked & 100\% \\ \hline \(m\), masking effectiveness & \(\mathcal{N}(0.855,\,0.0536)\), truncated to [0,1] \\ \hline \(p\), prevalence & LogNormal(-6.157, 0.413) \\ \hline \(\tau_{\rm UG}\), avg hours an undergrad spends in class in a semester & 315 \\ \hline \(n_{0}\), class size & 50 \\ \hline \(n_{\rm UG}\), number of undergraduates & 15,000 \\ \hline \(n_{\rm faculty}\), number of faculty instructors & 850 \\ \hline \(n_{\rm graduate}\), number of graduate instructors & 3,120 \\ \hline \(\beta_{\rm faculty}\), fraction of classes taught by faculty instructors & 2/3 \\ \hline
\end{table}
Table 3: Simulation parameters.
1. For \(v_{\rm source}\), we assume vaccination reduces the infectivity of a source case by **0**, **50%**, and **71%** with probabilities proportional to the sample size of the corresponding study (Levine-Tiefenbrun et al. 2021, Harris et al. 2021, Brown et al. 2021). See Table 4.
2. For \(v_{\rm susceptible}\), we assume vaccination reduces the risk of infection for susceptible individuals by **40%**, **42%**, **66%**, **76%**, **79%**, and **88%** with probability proportional to the sample size of the corresponding study (Puranik et al. 2021, Sheikh et al. 2021, Lopez Bernal et al. 2021, Pouwels et al. 2021, Fowlkes et al. 2021). See Table 4.
3. We have also applied uniform discrete distributions for both VE parameters (over three values for the VE in reducing viral load, and over six values for the VE in protecting against infections) and found the outcome to be of the same order of magnitude.
4. Furthermore, our simulation for a single classroom requires specifying the vaccination status of the source case. This is simulated using a Bernoulli random variable with parameter \(\mathbb{P}(\rm vaccinated\mid infected)\), which in turn can be deduced from \(v_{\rm susceptible}\) and \(\beta_{\rm vaccinated}\) using Bayes Rule. By definition, \[\mathbb{P}(\rm infected\mid vaccinated)=(1-v_{\rm susceptible})\cdot\mathbb{P}( \rm infected\mid unvaccinated).\]
By Bayes Rule,
\[\mathbb{P}(\rm vaccinated\mid infected) = \frac{\mathbb{P}(\rm infected\mid vaccinated)\cdot\mathbb{P}(\rm vaccinated)}{ \mathbb{P}(\rm infected\mid vaccinated)\mathbb{P}(\rm vaccinated)+\mathbb{P}(\rm infected \mid unvaccinated)\mathbb{P}(\rm unvaccinated)}\] \[= \frac{(1-v_{\rm susceptible})\cdot\beta_{\rm vaccinated}}{1-v_{\rm susceptible }\cdot\beta_{\rm vaccinated}}.\]
5. **Masking** (\(\beta_{\rm masked}\))
Effective July 30, 2021, the University required all individuals, including fully vaccinated ones, to wear masks indoors (Cornell University COVID-19 Response 2021b). We assume perfect compliance with this mandate. Thus we set \(\beta_{\rm masked}=\)**100%**.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline & Study & Mean & Sample size \\ \hline & Pouwels et al. (2021) & 40\% & 199,411 \\ \cline{2-4} & Puranik et al. (2021) & 42\% & 22,064 \\ \cline{2-4} \(v_{\rm susceptible}\) & Fowlkes et al. (2021) & 66\% & 2,840 \\ \cline{2-4} & Puranik et al. (2021) & 76\% & 21,179 \\ \cline{2-4} & Sheikh et al. (2021) & 79\% & 53,679 \\ \cline{2-4} & Andrews et al. (2021) & 88\% & 15,871 \\ \hline & Brown et al. (2021) & 0\% & 469 \\ \cline{2-4} \(v_{\rm source}\) & Harris et al. (2021) & 50\% & 96,898 \\ \cline{2-4} & Levine-Tiefenbrun et al. (2021) & 71\% & 4,938 \\ \hline \end{tabular}
\end{table}
Table 4: Vaccine efficacy estimates in the literature at the time of the analysis (summer 2021).
Jefferson et al. (2023) conducted a meta-analysis on the effectiveness of masking for reducing the spread of respiratory viruses during the COVID-19 pandemic. The results of the meta-analysis were inconclusive due to the limits of the primary-source evidence included in the review (Soares-Weiser 2023). The included studies were conducted in various settings (e.g., hospitals, communities, households), and in many of these settings masking compliance was low while the intensity and duration of interaction were high. On the other hand, students are usually fully compliant with masking in the University's classrooms. Therefore, the review conducted by Jefferson et al. (2023) would not have altered our decision to enforce masking.
6. **Masking effectiveness** (\(m\))
We define masking effectiveness as the reduction in the unmasked transmission probability due to masking. We assume two-way masking effectiveness, where both the source and the susceptible individual are masked, follows **Normal(0.855, 0.0536)**, derived from studies of one-way masking effectiveness in the literature.
   1. We set 80% as an optimistic estimate for one-way masking effectiveness.
      1. Masking of infectious individuals: Kumar et al. (2020) used fluid dynamics simulation and estimated that 12% of the airflow carrying virus particles leaks around the side of a mask. Wang et al. (2020) observed that masking by the primary case and family contacts before the primary case developed symptoms was 79% effective in reducing transmission.
      2. Masking of susceptible individuals: Konda et al. (2020) experimentally measured the filtration efficiency of masks made from different materials and found that many materials could block particles larger than 0.3 micrometers with at least 96% filtration efficiency. (Morawska et al. (2009) observed that most human expiratory activities generate droplets or aerosols with size larger than this.) Doung-Ngern et al. (2020) reported that people wearing a mask at all times during contact with a COVID-19 patient are 84% less at risk of infection. Howard et al. (2021) noted that wearing masks provides additional protection by preventing touching of the nose and mouth, which is another vector of transmission.
   2. We set 50% as a conservative estimate for one-way masking effectiveness. Konda et al. (2020) observed that improper mask-wearing (e.g., having a gap between the face and the mask) can result in a large decrease in filtration efficiency. Unmasking temporarily to eat or drink would also reduce the protection.
   3. Together, these imply that if masking is enforced, with both the source and the susceptible individual masked, transmission risk is reduced from the no-masking scenario by a multiplicative constant of \((1-80\%)^{2}=0.04\) to \((1-50\%)^{2}=0.25\). Thus, for two-way masking, we set the prior on the risk reduction factor to be Normal(0.855, 0.0536), truncated to \([0,1]\). The untruncated distribution is designed such that [0.75, 0.96] is the 95% symmetric confidence interval (see the sketch at the end of this section).
7. **Prevalence in the University population** (\(p\))
We impose a prior distribution on prevalence, which is computed using the following procedure:
* We sample \(N_{\text{infections}}\), the total number of infections over the entire semester from a distribution (details below).
* We conservatively assume each case is non-isolated and infectious for half a week, because surveillance testing requires that undergraduate students get tested twice a week on average. This is conservative since half a week is the maximum interval between tests, and the university took explicit measures to ensure consistent high compliance to testing (Cornell University COVID-19 Response 2021a).
* There are roughly 14 weeks in a semester. Thus, we approximate the prevalence at any time point during the semester as the total number of infected student-days divided by the total number of student-days in the semester: \[\frac{N_{\rm infections}\cdot 3.5\ {\rm days}}{n_{\rm UG}\cdot(14\cdot 7)\ {\rm days}}.\] This is a simplifying assumption as we do not model the temporal change of prevalence as the semester proceeds.
Before the start of the Fall 2021 semester, we assumed that \(N_{\rm infections}\) follows LogNormal(6.791, 0.413). This is derived by setting the mode to be 750 and the 97.5% quantile to be 2000. Modeling results in early 2021 (Cornell COVID-19 Modeling Team 2021b) show that a variant with more than twice the transmissibility of the original strain would lead to approximately 500 student infections, under the same masking and social distancing conditions as the Fall 2020 semester. To account for the relaxation of social distancing and masking (outside the classroom) in Fall 2021, we increased this number by 50% and set 750 to be the mode.
By Sep. 19, 2021, we observed 488 student cases since the start of the semester and 32 student cases in the past week. Assuming a constant rate of 32 cases/week for the remaining 11 weeks of the semester, we would then expect to see 488 + 32*11 = 840 cases total. We repeated the simulations with \(N_{\rm infections}\) following the LogNormal(6.872, 0.372) distribution, derived by setting the mode to 840 and the 97.5% quantile to be 2000, and got similar results. As of mid-October 2021, the University's public COVID-19 dashboard showed 22 weekly cases (Office of Institutional Research & Planning, Cornell University 2021), so the assumed mode of 840 student cases was conservative.
* Prevalence is assumed to be proportional to \(N_{\rm infections}\) (which follows LogNormal(6.791, 0.413)), so it follows **LogNormal(-6.157, 0.413)**, where the mean is shifted from that of \(N_{\rm infections}\) by the log of the proportionality constant (see the sketch below).
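As a sanity check on these numbers, the following sketch (ours) recovers LogNormal(6.791, 0.413) from the stated mode of 750 and 97.5% quantile of 2000 by solving \(\mathrm{mode}=e^{\mu-\sigma^{2}}\) and \(q_{97.5}=e^{\mu+1.96\sigma}\), and then applies the log-shift to obtain the prevalence prior.

```python
import numpy as np

def lognormal_from_mode_and_q975(mode, q975):
    """Solve mode = exp(mu - sigma^2), q97.5 = exp(mu + 1.96*sigma) for (mu, sigma)."""
    c = np.log(q975 / mode)
    sigma = (-1.96 + np.sqrt(1.96 ** 2 + 4 * c)) / 2   # positive root of the quadratic
    mu = np.log(mode) + sigma ** 2
    return mu, sigma

mu, sigma = lognormal_from_mode_and_q975(750, 2000)    # -> (6.791, 0.413)

# Prevalence = N_infections * 3.5 days / (n_UG * 14 * 7 days), so the log-mean
# shifts by the log of the proportionality constant.
n_ug = 15_000
shift = np.log(3.5 / (n_ug * 14 * 7))                  # ~ -12.948
rng = np.random.default_rng(0)
prevalence = rng.lognormal(mu + shift, sigma, size=10_000)   # LogNormal(-6.157, 0.413)
```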
8. **Average hours an undergraduate spends in class in a semester (\(\tau_{\rm UG}\))** On average, students at the University are enrolled for 15 credits with 45 hours of course-related work per week, including lecture time, non-lecture time in classrooms (e.g., recitations), and time spent outside of class on homework and other coursework. We assume half of the 45 hours is spent in the classroom. Over a 14-week semester, a student spends \(45/2\times 14=315\) hours in the classroom. Thus we set \(\tau_{\rm UG}=\mathbf{315}\).
9. **Population sizes**
    1. \(n_{\rm UG}=15,000\) (Cornell University 2021b)
    2. \(n_{\rm faculty}=850\)
The Ithaca campus has roughly 1,700 faculty members (Cornell University 2021b). We assume half of them teach undergraduate classes in a semester.
    3. \(n_{\rm graduate}=3,120\)
In Fall 2020, there were 6,239 graduate students (Office of Institutional Research & Planning, Cornell University 2022). We assume half of them work as TAs for classes or recitations. Note that some of these TAs may not interact face-to-face with students, e.g., if their primary responsibility is grading, and that the risk calculated is the average risk across all TAs, including these individuals.
    4. \(\beta_{\rm faculty}=2/3\)
On average, a course at the University has three meetings per week with two being lectures taught by faculty and one being a recitation led by a graduate student. Thus, we set this parameter to be \(2/3\).
10. **Sufficient distancing of instructors** We assume that faculty and graduate student instructors are sufficiently distanced from the students during lecture that their risk of infection only arises from long-distance transmission. We assume social distancing is maintained during one-on-one discussions between instructors and students and that they do not add significantly to the total interaction time between instructors and students.
### A3. Mathematical Model for the Risk of Transmission over Short and Long Ranges
In this section, we present the mathematical model for the transmission probability of COVID-19 depending on the relative location of the source case and susceptible individual. This is an important component in simulating infections in a single classroom, as described in Stage 1 in Section A1.
Exposure to respiratory fluids is a major mode of transmission of SARS-CoV-2. An infectious source case releases the virus through exhalation of virus-containing respiratory fluids (e.g., through speaking, coughing, or sneezing). A susceptible person becomes exposed to the virus if they inhale the virus-containing aerosols or fine droplets, if the virus particles deposit on their mucous membranes via larger droplets, or if they touch a contaminated surface and then touch their mucous membranes (Centers for Disease Control and Prevention 2021b). We consider the first two possibilities here. (Hand sanitizers and disinfecting wipes are provided in all classrooms, which reduces the risk from the third mode of transmission.) In particular, we model (1) _long-range transmission_ via aerosols and fine droplets that suspend in the air and (2) _short-range transmission_ via large droplets that eventually deposit after being emitted. Under the assumption that instructors are at least 6 feet away from all students, instructors are only at the risk of long-range transmission, while students are subject to the risk of both. The modeling of both types of transmission relies on the exponential dose-response model, introduced below.
**Exponential Dose-response Model**
A dose-response model calculates the transmission probability as a function of _dose_, the amount of virus particles a susceptible person is exposed to. In the exponential dose-response model (Watanabe et al. 2010), the transmission probability given dose \(D\) takes the form
\[\mathbb{P}(\rm transmission)\,{=}\,1-\exp(-c\cdot D), \tag{3}\]
where \(c\) is a positive constant. Observe that \(\mathbb{P}(\rm transmission)\) is concave in the dose \(D\), a fact used in Section A1 (Simulation Tool) to argue that the increase in risk created by adding a second positive to the classroom is smaller than the increase in risk created by the first positive.
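A two-line numerical illustration of this concavity (ours; the constant \(c=1\) and the dose are arbitrary):

```python
import numpy as np

def p_transmission(dose, c=1.0):
    """Exponential dose-response model (Equation 3)."""
    return 1.0 - np.exp(-c * dose)

# Concavity in the dose: a second source case adds less risk than the first.
d = 0.4
added_by_first = p_transmission(d) - p_transmission(0.0)
added_by_second = p_transmission(2 * d) - p_transmission(d)
assert added_by_second < added_by_first
```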
**Long-range Transmission**
We base our analysis of long-range transmission on the results from Schijven et al. (2021), who developed a model for predicting the transmission risk in an enclosed space due to aerosols only, under the assumption that emitted aerosols are dispersed across the entire room. The estimated risk depends on the aerosol-emitting activity (such as breathing, speaking, and singing), the level of ventilation of the room, the duration
of interaction, and the virus concentration of the source case (measured in the number of virus particles per unit volume). One key property is that the risk due to aerosol transmission is uniform across all locations of the room, since the aerosols disperse quickly across space once emitted.
The risk of aerosol transmission over time \(T\) is estimated by the exponential dose-response model:
\[\mathbb{P}_{\text{aerosol}}(\text{transmission},T)=1-\exp\left(-\frac{D(T)}{144 0}\right),\]
where \(D\) denotes the _dose_, i.e., the amount of virus particles that a susceptible person receives from the infectious person over time, and 1,440 is the estimated average number of virus copies needed to cause illness, according to Schijven et al. (2021). The dose depends on the number of virus particles emitted over time \(T\), which we denote \(N(T)\), as follows
\[D(T)=N(T)\cdot\frac{\text{inhalation rate of the susceptible (volume / time)}}{\text{volume of the room}},\]
where \(N(T)\) depends on the type of aerosol-emitting activity, the viral load of the infectious person, the ventilation condition of the room, and the duration of interaction \(T\). It is estimated that breathing emits aerosols containing 3,300 virus RNA copies per hour (assuming a nominal viral load \(10^{8}\) copies per milliliter), while the value is higher for speaking and singing. We refer the readers to Equations (4) - (14) in Schijven et al. (2021) for details of the calculation.
We implement a few additional calculations when deploying this model for the classroom simulation:
* We average the risk over the distribution of the source viral load, which Schijven et al. (2021) estimated to be log-normal. In particular, we compute the weighted average of transmission risk given that the source case viral load is \(10^{k}\) copies per milliliter, for \(k=5,6,\ldots,11\), with weights 0.12, 0.22, 0.3, 0.23, 0.103, 0.0236, 0.0034 respectively.
* We take into account the effect of vaccination and masking, as described in the previous assumptions.
* We implement different ventilation conditions, namely no ventilation (a conservative estimate of the amount of ventilation in naturally-ventilated spaces), 1 air exchange per hour, and 3 air exchanges per hour. We model the amount of aerosols present in the room per hour as being reduced by a factor of two and four under the latter two conditions, respectively (see the sketch below).
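These pieces can be combined as in the following sketch (ours). The linear scaling of emission with viral load relative to the nominal \(10^{8}\) copies/mL, the inhalation rate of 0.5 m\(^3\)/h, and the room volume are illustrative assumptions; the exact emission term \(N(T)\) is given by Equations (4)-(14) of Schijven et al. (2021).

```python
import numpy as np

# Viral-load mixture from Schijven et al. (2021): P(viral load = 10^k copies/mL).
VL_K = np.arange(5, 12)
VL_W = np.array([0.12, 0.22, 0.3, 0.23, 0.103, 0.0236, 0.0034])
VENT_FACTOR = {"none": 1.0, "1_ach": 0.5, "3_ach": 0.25}   # aerosol reduction factors

def p_aerosol(n_emitted, inhalation_rate_m3h, room_volume_m3):
    """Exponential dose-response with the dose equation above (1,440-copy constant)."""
    dose = n_emitted * inhalation_rate_m3h / room_volume_m3
    return 1.0 - np.exp(-dose / 1440.0)

def p_aerosol_avg(n_emitted_nominal, inhalation_rate_m3h, room_volume_m3,
                  ventilation="none"):
    """Average risk over the viral-load mixture, assuming (hypothetically) that
    emission scales linearly with viral load relative to 10^8 copies/mL."""
    scale = VENT_FACTOR[ventilation]
    risks = [p_aerosol(n_emitted_nominal * 10.0 ** (k - 8) * scale,
                       inhalation_rate_m3h, room_volume_m3) for k in VL_K]
    return float(VL_W @ np.array(risks))

# Breathing for one hour emits ~3,300 RNA copies at nominal load (see text).
print(p_aerosol_avg(3300, inhalation_rate_m3h=0.5, room_volume_m3=200.0,
                    ventilation="1_ach"))
```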
#### Short-range Transmission
In this section, we first derive a mechanical model for the deposition of droplets over two-dimensional space over short distances. Based on the mechanical model, we derive an expression for the amount of droplets that a susceptible person at a certain location relative to the source receives (equivalently, the amount of droplets that reach the susceptible person spatially). We then model the susceptible person's risk of infection due to exposure to viral droplets using the exponential dose-response model (Equation 3). Finally, we estimate the model parameters from a dataset of transmissions on high speed trains in China (Hu et al., 2021). Table 5 summarizes the notation for the functions and parameters used in developing the model.
We first make a fundamental assumption for model tractability.
**Assumption 1**: _The concentration of virus particles in droplets exhaled by a source case is uniform across all droplets of different sizes._
Assumption 1 allows us to use the _volume_ of viral droplets as a proxy for the amount of virus that a susceptible person is exposed to. This simplifies the calculation and allows us to better leverage existing results from the fluid dynamics literature. We next assume that the transmission of droplets is not blocked by any obstacles.
**Assumption 2**: _There are no obstacles between a source and a susceptible person, regardless of their locations. A susceptible person at distance \(r\) from the source receives all droplets that would deposit at distance \(r\) or further._
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline
**Notation** & **Meaning** \\ \hline \(\phi(r)\) & fraction of droplets that deposit at distance \(\geq r\) meters from their location of emission in 1D \\ \hline \(\gamma_{1D}(r)\) & fraction of droplets deposited at distance \(r\) away from the source in 1D \\ \hline \(\gamma_{2D}(r,\theta)\) & fraction of droplets deposited at \((r,\theta)\) away from the source in the 2D model \\ \hline \(\alpha\) & parameter defining cone of exposure over \([-\alpha,\pi+\alpha]\) \\ \hline \(\phi_{2D,ind}(r,\theta;\alpha)\) & fraction of droplets emitted by the source case that reach an individual at \((r,\theta)\) assuming \(\alpha\)-cone of exposure \\ \hline \(D(r,\theta,T)\) & dose of virus that a susceptible person at \((r,\theta)\) away from the source receives throughout interaction duration \(T\) \\ \hline \(N(x,y)\) & number of close contacts seated \(x\) rows and \(y\) columns away from an index case \\ \hline \(Y(x,y)\) & number of close contacts seated \(x\) rows and \(y\) columns away from an index case that were later confirmed as positive \\ \hline \(d(x,y)\) & distance between an index case and a close contact at relative location \((x,y)\), computed from row-wise distance \(d_{r}(x,y)\) and column-wise separation \(d_{c}(x,y)\) \\ \hline \(q_{\alpha}(x,y)\) & expected fraction of close contacts counted at \((x,y)\) that are in the \(\alpha\)-cone of exposure \\ \hline \(p_{c_{2}}((x,y),T\,|\) in cone) & probability that a close contact in the cone of exposure at \((x,y)\) is infected over duration \(T\) \\ \hline \end{tabular}
\end{table}
Table 5: Notation for functions and parameters in the model for transmission over short distances.
Using fluid dynamics modeling and experimental data, Sun and Zhai (2020) estimated a function for the expected fraction of droplets that deposit at distance no less than \(r\) meters from their location of emission, assuming all droplets travel in the same direction, over a distribution of droplet sizes from a typical cough:
\[\phi(r)=-0.1819\cdot\ln(r)+0.43276. \tag{4}\]
This formula is valid for \(r\in[0.04,10.8]\) meters, at the two ends of which \(\phi(r)\) is equal to 1 and 0 respectively. We let \(r_{\min}=0.04\) and \(r_{\max}=10.8\).
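In code (ours), with the function clipped to its valid range:

```python
import numpy as np

R_MIN, R_MAX = 0.04, 10.8

def phi(r):
    """Equation 4: fraction of droplets depositing at distance >= r meters."""
    r = np.clip(r, R_MIN, R_MAX)
    return np.clip(-0.1819 * np.log(r) + 0.43276, 0.0, 1.0)

print(phi(R_MIN), phi(1.0), phi(R_MAX))   # ~1.0, ~0.43, ~0.0
```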
If the source case exhaled all droplets in one direction, then a susceptible person at distance \(r\) away in that exact direction would receive a fraction \(\phi(r)\) of the droplets, while a susceptible person in all other directions would not receive any. In reality, however, the droplets emitted by an infectious individual may travel in multiple directions (Xie et al., 2009), putting surrounding neighbors at different angles at risk. As the droplets are being spread out in space, the one-dimensional model in Equation 4 does not accurately capture the amount of droplets that reach one susceptible person in the vicinity.
Thus, we extend this one-dimensional model to two dimensions. We design our 2D model of droplet deposition such that it depends on both the distance and angle of the susceptible person with respect to the source. To ensure our 2D model is consistent with the 1D model, we make the following assumption:
**Assumption 3**.: _The fraction of droplets traveling beyond the entire radius-\(r\) circle around the source is the same as \(\phi(r)\), the fraction beyond distance \(r\) in the 1D model._
Now we derive the 2D model under Assumption 3. Let \(\gamma_{1D}(r^{\prime})\) denote the fraction of droplets deposited _at_ distance \(r^{\prime}\) away from the source in the 1D model. By definition,
\[\phi(r)=\int_{r}^{r_{\max}}\gamma_{1D}(r^{\prime})dr^{\prime}. \tag{5}\]
Let \(\gamma_{2D}(r^{\prime},\theta)\) denote the fraction of droplets deposited at distance \(r^{\prime}\) and angle \(\theta\) away from the source in the 2D model. By definition and Assumption 3,
\[\phi(r)=\int_{r}^{r_{\max}}\int_{0}^{2\pi}\gamma_{2D}(r^{\prime},\theta)r^{ \prime}d\theta dr^{\prime}=\int_{r}^{r_{\max}}\left[\int_{0}^{2\pi}\gamma_{2D }(r^{\prime},\theta)d\theta\right]r^{\prime}dr^{\prime}. \tag{6}\]
The term in the bracket is the fraction of droplets that deposit over the entire circle of radius \(r^{\prime}\). From Equations 5 and 6, the 1D and 2D models should satisfy the following consistency condition:
\[\int_{0}^{2\pi}\gamma_{2D}(r^{\prime},\theta)d\theta=\frac{\gamma_{1D}(r^{ \prime})}{r^{\prime}}. \tag{7}\]
Our goal is to model the transmission risk that one susceptible person at distance \(r\) and angle \(\theta\) is subject to. Going from the fraction of droplets depositing over the entire circle, we next explicitly model the dependence of \(\gamma_{2D}(r^{\prime},\theta)\) on \(\theta\).
In realistic settings like classrooms or buses, a susceptible person could be seated in different directions from the source. We would naturally expect some directions to be riskier and others to be safer. For example, we would think of seats right behind the source as relatively safe, because the source is most likely facing and exhaling forward and the chair backs may block the droplets. On the other hand, it is possible that
the source may turn their head from left to right when seated, so that droplets may potentially even reach someone sitting in rows behind them.
Based on these observations, we set up a "cone of exposure" model that quantifies the dependence of risk on angle \(\theta\). The cone of exposure, with parameter \(\alpha\) (ranging from \(0\) to \(\pi/2\)), covers an arc of \((\pi/2+\alpha)\) radians on both sides of the direction that the source case is facing. Hereafter, we call this an "\(\alpha\)-cone of exposure". An illustration is given in Figure 12. The source case (purple) is facing up and emits droplets in directions ranging from angle \(\alpha\) behind on their left to angle \(\alpha\) behind on their right. Susceptible cases (blue) sitting within this cone are at the risk of receiving the droplets; susceptible cases sitting outside of this cone are not at risk.
We choose the right-hand direction of the source to be of angle \(0\) and measure \(\theta\) counterclockwise. We let \(\mathbb{1}\left\{\theta\text{ in cone};\alpha\right\}\) be an indicator of whether \(\theta\) is in the \(\alpha\)-cone of exposure, i.e., \(\mathbb{1}\left\{\theta\text{ in cone};\alpha\right\}=\mathbb{1}\left\{ \theta\in[-\alpha,\pi+\alpha]\right\}\). The amount of droplets depositing is positive for those in the cone and zero for those outside the cone. Next, we make a further assumption about droplet distribution within the cone.
**Assumption 4**: _At the same distance, droplets are distributed uniformly over all angles in the cone of exposure._
This simplification aids the analysis, but in reality the distribution of droplets within the cone may depend on the angle in a more complicated way. We lack sufficient data to accurately estimate a complex angle-dependent droplet distribution model, and so a complex model might degrade accuracy rather than improve it. We thus adopt Assumption 4 in the spirit of regularization, understanding that the model we adopt approximates a more complex angle-dependent distribution model by replacing small droplet densities by \(0\) and large droplet densities by a constant.
With this cone-of-exposure model set up, we can model how the transmission risk depends on \(\theta\). We assume all individuals have comparable width. Then, for an individual at distance \(r\) and angle \(\theta\) from the source,
Figure 12: Cone of exposure model. The source case is represented with a purple “X” and the susceptible person is represented with a blue dot. The source emits virus uniformly over the cone extending from \(-\alpha\) to \(\pi+\alpha\). The susceptible person is located at distance \(r\) and angle \(\theta\) away from the source. They occupy an angle of \(\Delta\theta\) that scales inversely with \(r\).
they occupy an arc whose central angle \(\Delta\theta(r)\) is approximately inversely proportional to \(r\) and they receive all the droplets that would land in the sector across \([\theta,\theta+\Delta\theta(r)]\) and \([r,r_{\max}]\). Based on this insight, we derive \(\phi_{2D,ind}(r,\theta;\alpha)\), the fraction of droplets emitted by the source case that reach such an individual. We call this quantity the _droplet reception factor_.
\[\phi_{2D,ind}(r,\theta;\alpha) = \mathbb{1}\left\{\theta\text{ in cone};\alpha\right\}\int_{r}^{r_{\max}}\int_{\theta}^{\theta+\Delta\theta(r)} \gamma_{2D}(r^{\prime},\theta)r^{\prime}d\theta dr^{\prime} \tag{8}\] \[= \mathbb{1}\left\{\theta\text{ in cone};\alpha\right\}\int_{r}^{r_{ \max}}\int_{0}^{2\pi}\gamma_{2D}(r^{\prime},\theta)r^{\prime}d\theta dr^{ \prime}\cdot\frac{\Delta\theta(r)}{\pi+2\alpha}\] \[= \mathbb{1}\left\{\theta\text{ in cone};\alpha\right\}\int_{r}^{r_{ \max}}\gamma_{1D}(r^{\prime})dr^{\prime}\cdot\frac{\Delta\theta(r)}{\pi+2\alpha}\] \[= \mathbb{1}\left\{\theta\text{ in cone};\alpha\right\}\cdot\phi(r) \cdot\frac{\Delta\theta(r)}{\pi+2\alpha}\] \[\propto \mathbb{1}\left\{\theta\text{ in cone};\alpha\right\}\cdot\phi(r) \cdot\frac{1}{r},\]
where the second equality follows from Assumption 4, the third equality follows from Equation 7, and the fourth equality follows from Equation 5. Later, we will show that having an undetermined proportionality constant does not affect our results as long as the dependence on \(r\) and \(\theta\) is modeled correctly.
_Transmission probability calculation._ Given the expression for the droplet reception factor, we translate this to the probability of transmission using the exponential dose-response model in Equation 3. The dose that a susceptible individual at \((r,\theta)\) from the source receives is proportional to the fraction of the source's virus particles that they receive. By Assumption 1, this in turn is proportional to the droplet reception factor \(\phi_{2D,ind}(r,\theta;\alpha)\).
Next, we observe that the dose is larger if the source case and susceptible person maintain the same relative location longer. We call this the _duration of interaction_, denoted \(T\). For example, \(T\) is roughly one hour for a lecture. We make the following assumption about the droplet emission rate over time.
**Assumption 5**: _The amount of droplets a source case emits per unit time is constant._
Under Assumption 5, the amount of droplets emitted, and hence the dose, is proportional to the duration of interaction \(T\). Thus, the dose of virus particles that a susceptible person at \((r,\theta)\) away from the source receives can be expressed as
\[D(r,\theta,T) = c_{1}\cdot\phi_{2D,ind}(r,\theta;\alpha)\cdot T,\]
where \(c_{1}\) captures the proportionality of the dose to the droplet reception factor and the duration of interaction. We further absorb into \(c_{1}\) the proportionality relation within the droplet reception factor (Equation 8), yielding another constant \(c_{2}\), and derive the final expression of the transmission probability for a susceptible person at \((r,\theta)\) away from a source case for a duration of interaction \(T\):
\[\mathbb{P}_{\text{droplet}}(\text{transmission},r,\theta,T;\alpha,c_{2}) = 1-\exp\left(-c_{2}\cdot\mathbb{1}\left\{\theta\text{ in cone};\alpha\right\}\cdot\frac{\phi(r)}{r}\cdot T\right). \tag{9}\]
As a sanity check, we can see that if a susceptible person at \((r,\theta)\) is not in the cone of exposure, the transmission probability is \(0\).
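Equation 9 in code (ours); the parameter values shown are the fitted ones reported in the parameter-estimation paragraph below.

```python
import numpy as np

def phi(r):   # Equation 4, clipped to its valid range [0.04, 10.8] m
    return np.clip(-0.1819 * np.log(np.clip(r, 0.04, 10.8)) + 0.43276, 0.0, 1.0)

def p_droplet(r, theta, T, alpha, c2):
    """Equation 9: short-range transmission probability at (r, theta).

    theta is measured counterclockwise from the source's right-hand side,
    so the alpha-cone of exposure is the interval [-alpha, pi + alpha].
    """
    if not (-alpha <= theta <= np.pi + alpha):
        return 0.0            # outside the cone: no droplet exposure
    return 1.0 - np.exp(-c2 * phi(r) / r * T)

# One hour directly facing the source at 1 m, with the fitted values
# reported below (alpha = 0.26 rad, c2 = 0.0135):
print(p_droplet(1.0, np.pi / 2, T=1.0, alpha=0.26, c2=0.0135))
```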
_Parameter estimation._ The goal of this section is to first derive the likelihood for an empirically observed dataset based on the model above, and then find values of \(\alpha\) and \(c_{2}\) that maximize the likelihood.
Hu et al. (2021) study 2,334 confirmed positive cases ("index cases") and 72,093 close contacts who had co-travel times of 0-8 hours from 12/19/2019 through 3/6/2020 on high-speed trains in China. They examine the association of attack rate with the spatial distance between pairs of index cases and close contacts. Here, a "close contact" was defined as a person who had co-traveled on a train within a 3-row seat distance of an index case within 14 days before symptom onset. We treat a close contact as equivalent to a susceptible individual in our model. Table 6 reports the number of close contacts and, among them, those that were later confirmed as positive, at different seat locations with respect to an index case, within the period of study. We would like to derive the likelihood of observing the data in Table 6.
We now introduce additional notation for formalizing our likelihood model. Let \((x,y)\) denote the seat at \(x\) rows and \(y\) columns away from an index case, where \(x\) ranges from 0 to 3 and \(y\) ranges from 0 to 5. Let \(N(x,y)\) denote the number of close contacts that are seated \(x\) rows and \(y\) columns away from an index case (hereafter we call this "at relative location \((x,y)\)" for abbreviation). Let \(Y(x,y)\) denote the number of close contacts at relative location \((x,y)\) from an index case that were later confirmed as positive. Based on the train cabin layout given in Figure 1 in Hu et al. (2021), we calculate the separation between an index and a close contact at relative location \((x,y)\). In particular, let \(d_{r}(x,y)\) and \(d_{c}(x,y)\) denote the row-wise and column-wise distance in meters. For all \(y\), \(d_{r}(x,y)\!=\!0.9x\); for all \(x\), \(d_{c}(x,y)\) is equal to 0, 0.5, 1.05, 1.6, 2.1, and 2.6 for \(y\!=\!0,\ldots,5\) respectively. We then calculate \(d(x,y)\!=\!\sqrt{d_{r}(x,y)^{2}+d_{c}(x,y)^{2}}\) (Table 7).
The data does not contain information about which _direction_ the close contacts were seated with respect to the index cases. However, directionality information is crucial for our modeling. Thus, we make the following assumption about the symmetry of distribution of close contacts.
**Assumption 6**: _A close contact counted at \((x,y)\) is equally likely to have been seated in all possible directions at location \((x,y)\) away from the source._
With slight abuse of notation, we let \((+x,+y)\) and \((+x,-y)\) denote the seats \(x\) rows in front of the index case and \(y\) columns to the right and left, respectively. Similarly, we let \((-x,\pm y)\) denote the seats \(x\) rows behind the index case and \(y\) columns to the right or left. Because we assume the cone of exposure has an
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Rows apart} & \multicolumn{4}{l|}{Columns apart} \\ \cline{2-7} & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline
0 & – & 92/2605 & 33/1996 & 7/1845 & 7/1825 & 3/1028 \\ \hline
1 & 10/4791 & 12/5084 & 5/3664 & 3/3464 & 1/3525 & 1/1872 \\ \hline
2 & 11/4386 & 8/4751 & 8/3429 & 5/3212 & 3/3250 & 3/1769 \\ \hline
3 & 2/4026 & 2/4395 & 4/3110 & 3/2945 & 3/2970 & 1/1589 \\ \hline \end{tabular}
\end{table}
Table 6: Number of confirmed cases and total number of passengers who co-traveled with an index patient (Hu et al. (2021), Table S1). Entries are \(Y(x,y)/N(x,y)\), where \(x\) is the number of rows apart and \(y\) is the number of columns apart.
angle larger than \(\pi\), the seats \((+x,\pm y)\) are always in the cone of exposure. The seats \((-x,\pm y)\) are in the cone of exposure if and only if \(\arctan\left(d_{r}(x,y)/d_{c}(x,y)\right)\leq\alpha\).
Let \(q(x,y;\alpha)\) denote the expected fraction of close contacts counted at \((x,y)\) (which could be at \((\pm x,\pm y)\)) that are in the \(\alpha\)-cone of exposure. Based on Assumption 6,
\[q(x,y;\alpha)=\frac{1}{2}+\frac{1}{2}\mathbb{1}\left\{\arctan\left(d_{r}(x,y) /d_{c}(x,y)\right)\leq\alpha\right\}. \tag{10}\]
We can calculate \(q(x,y;\alpha)\) for all possible \((x,y)\) pairs using Table 7.
Next, let \(p((x,y),T\mid\text{in cone};c_{2})\) denote the probability that a close contact in the cone of exposure at location \((x,y)\) is infected. Based on Equation 9, we model this as
\[p((x,y),T\mid\text{in cone};c_{2})=1-\exp\left(-c_{2}\cdot\frac{\phi(d(x,y))}{ d(x,y)}\cdot\kappa_{\text{mask}}\cdot T\right), \tag{11}\]
where we include an additional factor of masking effectiveness \(\kappa_{\text{mask}}\), due to the fact that wearing a mask can reduce the amount of virus that a susceptible person is actually _exposed to_ (compared to the virus in the droplets that _reach_ where the person is), and that mask-wearing had been quite prevalent since late January of 2020 in China. Konda et al. (2020) estimated that a poorly fitting mask made of cotton or silk will reduce virus dose by approximately 30%. We heuristically select \(\kappa_{\text{mask}}=0.8\), assuming that the dataset involved a mix of both masked and unmasked passengers2. We keep this constant separate from \(c_{2}\) so that \(c_{2}\) solely captures the way transmission probability depends on the unreduced dose. We set \(T\) to be 2.1 hours. This is the mean co-travel time over all pairs of index cases and close contacts in the data. Unfortunately, no information is given about the co-travel time for each individual pair.
Footnote 2: We constructed this model prior to conducting the simulations described in Appendix A1 and A2, so we used a point estimate for masking effectiveness. Nevertheless, the point estimate of 0.8 is well-aligned with the prior choice for masking effectiveness parameter \(m\), discussed in Appendix A2.
We let \(r(x,y;\alpha,c_{2})\) denote the overall transmission probability at relative location \((x,y)\) under parameters \(\alpha,c_{2}\). This is the product of the probability that a close contact at \((x,y)\) is in the cone of exposure (Equation 10) and the conditional probability that they become infected given that they are in the cone (Equation 11):
\[r(x,y;\alpha,c_{2})=q(x,y;\alpha)\cdot p((x,y),T\mid\text{in cone};c_{2}).\]
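Equations 10 and 11 combine as in this sketch (ours); it is valid for \((x,y)\neq(0,0)\), since the \((0,0)\) cell is the index case itself, and `arctan2` handles the \(d_{c}=0\) column, where the angle is \(\pi/2\).

```python
import numpy as np

COL_DIST = [0.0, 0.5, 1.05, 1.6, 2.1, 2.6]   # d_c(x, y) in meters (see text)

def phi(r):   # Equation 4, clipped to its valid range [0.04, 10.8] m
    return np.clip(-0.1819 * np.log(np.clip(r, 0.04, 10.8)) + 0.43276, 0.0, 1.0)

def r_overall(x, y, alpha, c2, T=2.1, kappa_mask=0.8):
    """r(x, y; alpha, c2) = q(x, y; alpha) * p((x, y), T | in cone; c2)."""
    d_r, d_c = 0.9 * x, COL_DIST[y]
    d = np.hypot(d_r, d_c)
    q = 0.5 + 0.5 * float(np.arctan2(d_r, d_c) <= alpha)    # Equation 10
    p = 1.0 - np.exp(-c2 * phi(d) / d * kappa_mask * T)     # Equation 11
    return q * p
```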
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Rows apart} & \multicolumn{4}{l|}{Columns apart} \\ \cline{2-7} & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline
0 & – & 0.5 & 1.05 & 1.6 & 2.1 & 2.6 \\ \hline
1 & 0.9 & 1.03 & 1.38 & 1.84 & 2.28 & 2.75 \\ \hline
2 & 1.8 & 1.87 & 2.08 & 2.41 & 2.77 & 3.16 \\ \hline
3 & 2.7 & 2.75 & 2.90 & 3.14 & 3.42 & 3.75 \\ \hline \end{tabular}
\end{table}
Table 7: **Distance \(d(x,y)\) in meters, where \(x\) is the number of rows apart and \(y\) is the number of columns apart.**
For the likelihood model, we assume the number of transmissions at each \((x,y)\) independently follows a binomial distribution:
\[Y(x,y)\sim\text{Binomial}(N(x,y),r(x,y;\alpha,c_{2})).\]
Assuming the numbers of transmissions at each \((x,y)\) are independent, the likelihood of all observations \(\mathcal{D}:=\{N(x,y),Y(x,y)\}_{\begin{subarray}{c}x=0,\ldots,3\\ y=0,\ldots,5\end{subarray}}\) is given by:
\[\mathcal{L}(\mathcal{D})=\prod_{(x,y)}\binom{N(x,y)}{Y(x,y)}\cdot r(x,y; \alpha,c_{2})^{Y(x,y)}\cdot(1-r(x,y;\alpha,c_{2}))^{N(x,y)-Y(x,y)}.\]
We compute the log-likelihood and let \(c_{3}\) denote the constant term that does not depend on \(\alpha\) or \(c_{2}\):
\[\ell(\mathcal{D})=\sum_{(x,y)}\left[Y(x,y)\log(r(x,y;\alpha,c_{2}))+(N(x,y)-Y (x,y))\log(1-r(x,y;\alpha,c_{2}))\right]+c_{3}. \tag{12}\]
We next find values of \(\alpha\) and \(c_{2}\) that maximize the log-likelihood \(\ell(\mathcal{D})\). Using a discretized grid search, we find that the log likelihood is maximized at \(c_{2}=0.0135\) and multiple values of \(\alpha\). We choose the largest possible value \(\alpha=15\) degrees, or \(0.26\) radians, with the intention of being conservative.
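The grid search can be reproduced with the sketch below (ours; the grid resolution is our choice). It plugs the Table 6 counts and the seat distances into Equation 12, dropping the constant \(c_{3}\).

```python
import numpy as np

# Table 6 (Y = later-confirmed positives, N = close contacts); the (0, 0)
# cell is the index case itself and is excluded via the mask below.
Y = np.array([[0, 92, 33, 7, 7, 3], [10, 12, 5, 3, 1, 1],
              [11, 8, 8, 5, 3, 3], [2, 2, 4, 3, 3, 1]], float)
N = np.array([[0, 2605, 1996, 1845, 1825, 1028],
              [4791, 5084, 3664, 3464, 3525, 1872],
              [4386, 4751, 3429, 3212, 3250, 1769],
              [4026, 4395, 3110, 2945, 2970, 1589]], float)
D_R = np.repeat(0.9 * np.arange(4)[:, None], 6, axis=1)     # row-wise distance
D_C = np.repeat(np.array([[0.0, 0.5, 1.05, 1.6, 2.1, 2.6]]), 4, axis=0)

keep = N > 0
y, n, d_r, d_c = Y[keep], N[keep], D_R[keep], D_C[keep]
d = np.hypot(d_r, d_c)
phi_d = np.clip(-0.1819 * np.log(d) + 0.43276, 0.0, 1.0)    # Equation 4

def log_lik(alpha, c2, T=2.1, kappa_mask=0.8):
    """Equation 12, dropping the constant term c3."""
    q = 0.5 + 0.5 * (np.arctan2(d_r, d_c) <= alpha)         # Equation 10
    p = 1.0 - np.exp(-c2 * phi_d / d * kappa_mask * T)      # Equation 11
    r = q * p
    return float(np.sum(y * np.log(r) + (n - y) * np.log1p(-r)))

best = max((log_lik(a, c), a, c)
           for a in np.deg2rad(np.arange(0, 91))
           for c in np.arange(0.0005, 0.05, 0.0005))
```

With this setup the maximizer should land near the reported \(c_{2}=0.0135\); note that Python's tuple comparison breaks log-likelihood ties toward larger \(\alpha\), mirroring the conservative choice above.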
**Combining the Short and Long-range Transmission Risk**
In our simulation, the risk for a susceptible individual is
\[\max\left(\mathbb{P}_{\text{droplet}}(\text{transmission}),\mathbb{P}_{\text{ aerosol}}(\text{transmission})\right),\]
i.e., the larger of the predicted risk due to short and long-range transmission. This is justified by the fact that we inferred the parameters for the droplet model assuming all infections in the dataset result from short-range transmission; as such, the inferred model implicitly accounts for long-range transmission in the dataset. In practice, the short-range transmission risk only dominates the long-range transmission risk at short distances (approximately within three meters), while the risk due to aerosol is uniform across all locations.
Finally, we recall that the computed risk so far is based on studies on the original virus strain. Thus, we multiply the calculated risk by 2.4 to account for the increased transmissibility of the Delta variant, as described in Appendix A2.
|
2308.11442 | SDeMorph: Towards Better Facial De-morphing from Single Morph | Face Recognition Systems (FRS) are vulnerable to morph attacks. A face morph
is created by combining multiple identities with the intention to fool FRS and
making it match the morph with multiple identities. Current Morph Attack
Detection (MAD) can detect the morph but are unable to recover the identities
used to create the morph with satisfactory outcomes. Existing work in
de-morphing is mostly reference-based, i.e. they require the availability of
one identity to recover the other. Sudipta et al. \cite{ref9} proposed a
reference-free de-morphing technique but the visual realism of outputs produced
were feeble. In this work, we propose SDeMorph (Stably Diffused De-morpher), a
novel de-morphing method that is reference-free and recovers the identities of
bona fides. Our method produces feature-rich outputs that are of significantly
high quality in terms of definition and facial fidelity. Our method utilizes
Denoising Diffusion Probabilistic Models (DDPM) by destroying the input morphed
signal and then reconstructing it back using a branched-UNet. Experiments on
ASML, FRLL-FaceMorph, FRLL-MorDIFF, and SMDD datasets support the effectiveness
of the proposed method. | Nitish Shukla | 2023-08-22T13:46:12Z | http://arxiv.org/abs/2308.11442v1 | # SDeMorph: Towards Better Facial De-morphing from Single Morph
###### Abstract
Face Recognition Systems (FRS) are vulnerable to morph attacks. A face morph is created by combining multiple identities with the intention of fooling FRS and making it match the morph with multiple identities. Current Morph Attack Detection (MAD) methods can detect the morph but are unable to recover the identities used to create the morph with satisfactory outcomes. Existing work in de-morphing is mostly reference-based, i.e. they require the availability of one identity to recover the other. Sudipta et al. [9] proposed a reference-free de-morphing technique but the visual realism of the outputs produced was feeble. In this work, we propose SDeMorph (Stably Diffused De-morpher), a novel de-morphing method that is reference-free and recovers the identities of the bona fides. Our method produces feature-rich outputs that are of significantly high quality in terms of definition and facial fidelity. Our method utilizes Denoising Diffusion Probabilistic Models (DDPM) by destroying the input morphed signal and then reconstructing it back using a branched-UNet. Experiments on the AMSL, FRLL-FaceMorph, FRLL-MorDIFF, and SMDD datasets support the effectiveness of the proposed method.
## 1 Introduction
Face Recognition Systems (FRS) are widely deployed for person identification and verification in many secure access control applications. Among many, applications like the border control process, where the face characteristics of a traveler are compared to a reference in a passport or visa database in order to verify identity, require FRS to be robust and reliable. As with all such systems, FRS are prone to various attacks such as presentation attacks [7], electronic display attacks, print attacks, replay attacks, and 3D face mask attacks [6, 2, 3, 4, 5]. Besides these, morphing attacks have also emerged as severe threats undermining the capabilities of FRS [1, 22]. In this paper, we focus on morph attacks.
A morph attack refers to generating a composite image that closely resembles the identities it is created from. The morphed image preserves the biometric features of all participating identities [23, 24]. Morph attacks allow multiple identities to gain access using a single document [25, 26], as they can go undetected through manual inspection and are capable of confounding automated FRS. In the recent past, deep learning techniques have been applied successfully to generate morphs. In particular, Generative Adversarial Networks (GAN) have shown tremendous success [27, 28, 29, 30, 31]. Most morph generation techniques rely on facial landmarks, where morphs are created by combining faces based on their corresponding landmarks [32, 33, 34]. Deep learning methods simply eliminate the need for landmarks.
Morph Attack Detection (MAD) is crucial for the integrity and reliability of FRS. Broadly, MAD can be either a reference-free single-image technique [35, 36, 37] or a reference-based differential-image technique [38, 39, 40]. Reference-free methods utilize the facial features obtained from the input to detect whether the input is morphed or not, whereas reference-based techniques compare the input image to a reference image, which is typically a trusted live capture of the individual taken under a trusted acquisition scenario.
MAD is essential from the security point of view, but it does not reveal any information about the identities of the individuals involved in the making of the morph. From a forensics standpoint, determining the identity of the persons participating in morph creation is essential and can help with legal proceedings. Limited work exists on face de-morphing, and most of it is reference-based. In this paper, our objective is to decompose a single morphed image into the participating face images, without requiring any prior information on the morphing technique or the identities involved. We also do not assume that the input is necessarily a morphed image. Our work builds upon [9] and aims to improve the results both visually and quantitatively. Overall, our contributions are as follows:
* We propose SDeMorph to extract face images from a morphed input without any assumptions on the prior information. To the best of our knowledge, this is
the first attempt to exploit DDPMs for facial image restoration in face morphing detection.
* A symmetric branched network architecture, that shares the latent code between its outputs is designed to de-morph the identity features of the bona fide participants hidden in the morphed facial image.
* We experimentally establish the efficacy of our method through extensive testing on various datasets. Results clearly show the effectiveness in terms of reconstruction quality and restoration accuracy.
The rest of the paper is organized as follows: Section 2 gives a brief background on the diffusion process and formulates face de-morphing. Section 3 introduces the rationale and the proposed method. Section 4 outlines the implementation details, experiments, and results. Finally, Section 5 concludes the paper.
## 2 Background
### Denoising Diffusion Probabilistic Models (DDPM)
At a high level, DDPMs [8] are latent variable generative models that learn to produce or recreate a fixed Markov chain \(x_{1},x_{2},...,x_{T}\). The forward Markov transition, given the initial data distribution \(x_{0}\sim q(x_{0})\), gradually adds Gaussian noise to the data according to a variance schedule \(\beta_{1},\beta_{2},....,\beta_{T}\), that is,
\[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}\,x_{t-1},\beta_{t}\mathbb{I}) \tag{1}\]
The conditional probability \(q(x_{t}|x_{0})\) (diffusion) and the posterior \(q(x_{t-1}|x_{t},x_{0})\) (sampling) can be expressed using Bayes' rule and the Markov property as

\[q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}\,x_{0},(1-\bar{\alpha}_{t})\mathbb{I}),\;\;t=1,2,...,T \tag{2}\]

\[q(x_{t-1}|x_{t},x_{0})=\mathcal{N}(x_{t-1};\tilde{\mu}(x_{t},x_{0}),\tilde{\beta}_{t}\mathbb{I}),\;\;t=1,2,...,T \tag{3}\]

where \(\alpha_{t}=1-\beta_{t}\), \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\), \(\tilde{\beta}_{t}=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t}\) and \(\tilde{\mu}(x_{t},x_{0})=\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}x_{0}+\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}x_{t}\).
DDPMs generate the Markov chain using the reverse process, which has prior distribution \(p(x_{T})=\mathcal{N}(x_{T};0,\mathbb{I})\) and Gaussian transition distribution

\[p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\Sigma_{\theta}(x_{t},t)),\;\;t=T,..,1 \tag{4}\]

The parameters \(\theta\) are learned to make sure that the generated reverse process closely mimics the noise added during the forward process. The training aims to optimize an objective function which has a closed form given as the KL divergence between Gaussian distributions. The objective can be simplified as \(\mathbb{E}_{x_{0},\epsilon\sim\mathcal{N}(0,\mathbb{I}),t}\left[\|\epsilon-\epsilon_{\theta}(x_{t},t)\|_{2}^{2}\right]\).
Figure 1: Illustration of the noise schedule and reconstruction performed by the proposed method. The scheduler adds noise to input (Left column) until the input signal is destroyed. (Middle, Right column) The model aims to predict the noise schedule to reconstruct the bona fides. The final outputs are extracted at \(t=0\).
### Face Morphing
Face morphing refers to the process of combining two faces, denoted \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\), to produce a morphed image \(\mathcal{X}\) by aligning the geometric landmarks as well as blending the pixel-level attributes. The morphing operator \(\mathcal{M}\), defined as
\[\mathcal{X}=\mathcal{M}(\mathcal{I}_{1},\mathcal{I}_{2}) \tag{5}\]
aims to produce \(\mathcal{X}\) such that the biometric similarity of the morph and the bona fides is high, i.e., for the output to be called a successful morph attack, \(\mathcal{B}(\mathcal{X},\mathcal{I}_{1})>\tau\) and \(\mathcal{B}(\mathcal{X},\mathcal{I}_{2})>\tau\) should hold, where \(\mathcal{B}\) is a biometric comparator and \(\tau\) is the threshold value.
Initial work on de-morphing [40] used the reference of one identity to recover the identity of the second image. The authors also assumed prior knowledge about the landmark points used in morphing and the parameters of the morphing process. FD-GAN [20] also uses a reference to recover identities, as in the previous method. It uses a dual architecture and attempts to recover the first image from the morphed input using the second identity's image. It then tries to recover the second identity using the network's output for the first identity. This is done to validate the effectiveness of their generative model.
## 3 Methodology
### Rationale
In [9], the authors decompose the morphed image into output images using a GAN that is composed of a generator, a decomposition critic, and two Markovian discriminators. Inspired by that work, we propose a novel method that takes a morphed image \(\mathcal{X}\) and decomposes it into output images \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\). The goal of the method is to produce outputs similar to the bona fides (BF), \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). The method also works with non-morphed images, i.e., if the input is a non-morphed image, the method generates outputs very similar to the input (\(\mathcal{O}_{1}\approx\mathcal{O}_{2}\approx\mathcal{X}\)). The task of decomposing morphed images can be equated with the well-studied problem of separating two signals. Among many approaches, we can mention independent component analysis (ICA) [10, 11], morphological component analysis (MCA) [12, 13, 14], and robust principal component analysis [15, 16]. These methods rely on strong prior assumptions such as independence, low rankness, sparsity, etc. However, applying these techniques to de-morphing faces is difficult because such strong prior assumptions are typically not met. Motivated by the above-mentioned issues, we propose a novel method that is reference-free, i.e., it takes a single morphed image and recovers the bona fide images used in creating the morph. In this paper, we closely follow the methodology in [8] with two changes: 1) a _branched_-UNet is used instead of a regular UNet, and 2) a cross-road loss is implemented. We explain both in Section 3.2.
### Proposed Method
The morphing operator \(\mathcal{M}(\cdot,\cdot)\), typically involves highly intricate and non-linear warping image editing functions which make de-morphing from a single image an ill-posed problem. We adopt a generative diffusion probabilistic model that iteratively adds noise to the input until the input signal is destroyed. The reverse process learns to recover the input signal from the noise.
#### 3.2.1 Forward diffusion Process
The forward diffusion process consists of adding a small amount of Gaussian noise to the input in steps ranging from \(0\) to \(T\). This results in the input sequence \(x_{0},x_{1},x_{2},....,x_{T}\), where \(x_{0}\) is the unadulterated sample. As \(T\rightarrow\infty\), \(x_{T}\) becomes equivalent to an isotropic Gaussian distribution. The step size is controlled by the variance schedule \(\{\beta_{t}\in(0,1)\}_{t=0}^{T}\). The forward process is typically fixed and pre-defined. During the forward process, we add the aforementioned noise schedule to the morphed image until the signal degenerates into pure noise, as illustrated in Figure 1 (first column).
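As a minimal sketch (ours, not the authors' released code), the closed-form sample from \(q(x_{t}|x_{0})\) in Equation 2, with the linear schedule reported in Section 4.2, can be written in PyTorch as:

```python
import torch

T = 300                                  # 400 for MorDIFF (Section 4.2)
betas = torch.linspace(1e-4, 0.02, T)    # linear variance schedule beta_1..beta_T
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

def q_sample(x0, t, noise=None):
    """Sample x_t ~ q(x_t | x_0) in closed form (Equation 2)."""
    if noise is None:
        noise = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, 1, 1, 1)  # broadcast over a (B, C, H, W) batch
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise, noise

# e.g., noising a batch of morphs at random time steps:
x0 = torch.rand(8, 3, 64, 64)
t = torch.randint(0, T, (8,))
x_t, eps = q_sample(x0, t)
```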
#### 3.2.2 Reverse sampling process
The goal of the learning is to estimate the noise schedule, i.e., the amount of noise added at time step \(t\). We follow a similar setup as in [8]. A deep learning network is used to realize \(p_{\theta}\) in Equation 4. In this paper, we have employed a _branched_-UNet which is trained to predict the parameters used during the forward process. A branched-UNet shares the same latent code between both of its outputs. This enables the model to output images that are semantically closer to the input. Figure 1 illustrates this: at time \(t\), the UNet takes the noisy image as input and tries to reconstruct the noisy versions of the ground truth (second and third columns). Finally, the clean outputs are extracted at \(t=0\).
#### 3.2.3 Loss function
The sampling function \(f\) used to estimate the reverse process is trained with the "cross-road" loss defined as
\[\begin{split}\mathcal{L}=\sum_{t}\min[&\mathcal{L}_{ 1}^{t}(\mathcal{I}_{1}^{t},\mathcal{O}_{1}^{t})+\mathcal{L}_{1}^{t}(\mathcal{I }_{2}^{t},\mathcal{O}_{2}^{t}),\\ &\mathcal{L}_{1}^{t}(\mathcal{I}_{1}^{t},\mathcal{O}_{2}^{t})+ \mathcal{L}_{1}^{t}(\mathcal{I}_{2}^{t},\mathcal{O}_{1}^{t})]\end{split} \tag{6}\]
where \(\mathcal{L}_{1}\) is the per-pixel loss and \(\mathcal{I}_{i}^{t},\mathcal{O}_{i}^{t}\), \(i=1,2\), are the noisy inputs and outputs of the sampling process, respectively, at time step \(t\). The reason for using the cross-road loss is that the outputs of the reverse process lack any order. Hence,
it is not guaranteed that \(\mathcal{O}_{1}\) corresponds to \(\mathcal{I}_{1}\) and \(\mathcal{O}_{2}\) to \(\mathcal{I}_{2}\). Therefore, we consider the two possible pairings and incorporate both into the loss. Taking the minimum of the two cases ensures that the correct pairing is used. The loss encourages the sampling process to estimate the noise added during the forward process by forcing the outputs to be visually similar to the noisy inputs at time \(t\).
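A sketch of Equation 6 at a single time step (ours, in PyTorch); summing this quantity over the sampled time steps yields \(\mathcal{L}\):

```python
import torch
import torch.nn.functional as F

def crossroad_loss(o1, o2, i1, i2):
    """Pair outputs with ground truth in whichever ordering gives the
    smaller total L1 loss (the min in Equation 6, per time step)."""
    direct = F.l1_loss(o1, i1) + F.l1_loss(o2, i2)
    swapped = F.l1_loss(o1, i2) + F.l1_loss(o2, i1)
    return torch.minimum(direct, swapped)
```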
## 4 Experiments and Results
### Datasets and Preprocessing
We perform our experiment with 4 different morphing techniques on the following datasets:
**AMSL face morph dataset** The dataset contains 2,175 morphed images belonging to 102 subjects captured with neutral and smiling expressions. Not all of the images are used to create morphed images. We randomly sample \(80\%\) of the data as our training set and the remainder is used as the test set. This setting is maintained throughout the datasets used in this paper.
**FRLL-Morphs** The dataset is constructed from the Face Research London Lab dataset. The morphs are created using several morphing techniques, including OpenCV (OCV), FaceMorpher (FM), StyleGAN 2 (SG), and WebMorpher (WM). We conduct our experiments on FaceMorpher morphs. Each morph method contains 1,222 morphed faces generated from 204 bona fide samples. All the images are generated using only frontal face images.
**MorDIFF** The MorDIFF dataset is an extension of the SYN-MAD 2022 dataset using the same morphing pairs. Both SYN-MAD and MorDIFF are based on the FRLL dataset. The dataset contains 1,000 attack images generated from 250 BF samples, each categorized on the basis of gender (male/female) and expression (neutral/smiling).
**SMDD** The dataset consists of 25k morphed images and 15k BF images constructed from 500k synthetic images generated by StyleGAN2-ADA trained on the Flickr-Faces-HQ (FFHQ) dataset. The evaluation dataset also has an equal number of morphed and bona fide images.
In this paper, we have used morphed images for training but testing is done on both morphed and unmorphed images.
### Implementation Details
Throughout our experiments, we set \(T=400\) for MorDIFF and \(T=300\) for the remaining datasets. Our method does not generate data, so a smaller value of \(T\) is preferred. The beta schedule for variances in the forward process is scaled linearly from \(\beta_{0}=10^{-4}\) to \(\beta_{T}=0.02\). With this schedule, the forward process produces \(\mathcal{X}^{t},\mathcal{I}_{1}^{t}\) and \(\mathcal{I}_{2}^{t}\), \(t=0,1,..,T\), the noisy morphed image and corresponding bona fides, respectively.
The reverse process is realized by a _branched_-UNet. The UNet has an encoder consisting of convolution layers
Figure 2: Illustration of the architecture of the proposed method. A diffusion process incrementally adds noise to the input. A branched-UNet is used to estimate the noise schedule.
followed by batch normalization [17]. To embed the time information, we use the Transformer sinusoidal position embedding [18]. The UNet contains 2 identical decoders consisting of transpose convolutions and batch norm layers. Both decoders share the same latent space and identical skip connections from the encoder. The training was done using Adam optimization with an initial learning rate of \(10^{-3}\) for \(300\) epochs. To quantitatively compare the generated faces and ground truth, we use an open-source implementation of the ArcFace [19] network as a biometric comparator with cosine distance as the similarity measure.
### Results
We evaluate our method on both morphed and non-morphed images. In ideal conditions, the method outputs bona fides when the input is indeed a morphed image and
Figure 4: Faces reconstructed by the proposed method on (Top Left) FRLL-Facemorph, MorDIFF (Top Right) and SMDD datasets (Bottom). The reconstructed images have very high similarity with their corresponding BF samples.
Figure 3: Illustration of the de-morphed images produced using the proposed method on AMSL dataset. Here, “input” refers to the morphed image obtained from \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). The outputs are denoted as \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\). (Left) The generated images achieve remarkably high similarity and visual realism when the input is indeed a morphed image. (Right) The model replicates the input when given an un-morphed image, i.e. \(\mathcal{X}=\mathcal{I}_{1}=\mathcal{I}_{2}\)
replicates the input twice when an unmorphed image is given as input. We evaluate our de-morphing method both quantitatively and qualitatively. We visualize the reconstructions of bona fides by the proposed method on the AMSL dataset in Figure 3. The first column, "input", is the morphed image, and the next two columns (\(\mathcal{I}_{1},\mathcal{I}_{2}\)) are the bona fides used to construct \(\mathcal{X}\). Finally, the remaining two columns (\(\mathcal{O}_{1},\mathcal{O}_{2}\)) are the outputs produced by the method at \(t=0\). We observe that our method produces realistic images that visually resemble the ground truth. The method learns not only the facial features but also features like hairstyle (first row) and skin features like vitiligo (last row). The produced images are also significantly sharper compared to existing methods, as illustrated in Figure 7. The right set in Figure 3 shows reconstructions
Figure 5: Distribution of ArcFace cosine similarity on the AMSL dataset. (Top Row) The proposed method achieves high facial similarity on the input-output pairs \((\mathcal{I}_{i},\mathcal{O}_{i}),i=1,2\) (Columns 4, 5) in the morphed case. (Bottom Row) A similar trend is observed in the unmorphed case, when \(\mathcal{X}=\mathcal{I}_{1}=\mathcal{I}_{2}\).
Figure 6: Distribution of ArcFace cosine similarity on the FRLL-FaceMorph (top), MorDIFF (middle), and SMDD (bottom) datasets. In all three cases, the proposed method outputs high-quality images with near-perfect similarity to BF samples.
on unmorphed images (\(\mathcal{X}=\mathcal{I}_{1}=\mathcal{I}_{2}\)). We observe that the method successfully replicates the input. Note that the outputs produced are not identical to each other but mere variations of the same unmorphed input. The method manages to replicate the unmorphed image with high facial fidelity despite having never been trained on one (the model is only trained on morphed images). We visualize similar results on the FRLL-Morph, MorDIFF, and SMDD datasets in Figure 4. We observe similar visual results on these datasets, with minor artifacts produced on MorDIFF. We believe that this is because the MorDIFF attack uses a diffusion autoencoder, which makes our method a direct inverse of the attack.
We also compare the generated faces using a biometric comparator \(\mathcal{B}\) to validate that our method is not generating faces with arbitrary features (i.e., arbitrary faces). We employ ArcFace as the comparator \(\mathcal{B}\) and cosine distance as the measure of similarity; larger scores indicate higher facial similarity. We compute the ArcFace similarity between the following input-output pairs: \((\mathcal{X},\mathcal{O}_{1}),(\mathcal{X},\mathcal{O}_{2}),(\mathcal{O}_{1},\mathcal{O}_{2}),(\mathcal{O}_{1},\mathcal{I}_{1})\) and \((\mathcal{O}_{2},\mathcal{I}_{2})\). On the AMSL dataset, Figure 5 visualizes the cosine similarity plots; the \(X\) axis represents the cosine similarity score between the pair of images, and the \(Y\) axis is the percentage of test pairs attaining that similarity score. The top row represents the morphed case, whereas the bottom row contains plots pertaining to unmorphed images. We observe that the similarity plots of \((\mathcal{O}_{i},\mathcal{I}_{i}),i=1,2\) are heavily skewed towards a similarity of \(1\) (columns 4, 5). This indicates that the generated image and the corresponding bona fide sample belong to the same person. Moreover, we observe that the distribution of \((\mathcal{O}_{1},\mathcal{O}_{2})\) is centered around \(0.5\), indicating that our model outputs images that are facially distinct from each other. The bottom row of Figure 5 contains the cosine similarity plots on unmorphed images (\(\mathcal{X}\approx\mathcal{I}_{1}\approx\mathcal{I}_{2}\)) from the AMSL dataset. In this case, we see that all the similarity plots are identical and skewed towards \(1\). This indicates that the method replicates the input image as both of its outputs. Similar plots on the FRLL-FaceMorph, MorDIFF, and SMDD datasets are presented in Figure 6.
Finally, to quantitatively measure the efficacy of our method, we compute the restoration accuracy [20], defined as the fraction of generated images that correctly match their corresponding bona fide but do not match the other bona fide (i.e., each output has exactly one matching bona fide), relative to the total number of test samples.
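For illustration, a sketch of the metric (ours); the similarity scores and match threshold would come from a verification API such as Face++, and the tuple interface below is hypothetical.

```python
def restoration_accuracy(score_tuples, threshold):
    """score_tuples: iterable of (s(O1,I1), s(O1,I2), s(O2,I2), s(O2,I1)).

    An output counts as restored when it matches its own bona fide
    (score >= threshold) but not the other one (score < threshold).
    """
    hits1 = hits2 = total = 0
    for s11, s12, s22, s21 in score_tuples:
        hits1 += (s11 >= threshold) and (s12 < threshold)
        hits2 += (s22 >= threshold) and (s21 < threshold)
        total += 1
    return hits1 / total, hits2 / total   # (Subject 1, Subject 2)
```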
We use the publicly available Face++ [21] API to compare the bona fides and generated faces. The restoration accuracy is reported in Table 1. (i) **AMSL**: Our method achieves restoration accuracy of \(97.70\%\) for Subject 1 and \(97.24\%\) for Subject 2. This means that over \(97\%\) of generated images correctly matched their corresponding BF but did not match the other BF. (ii) **FRLL-FaceMorph**: \(96.00\%\) for Subject 1 and \(99.50\%\) for Subject 2. (iii) **FRLL MorDIFF**: \(78.00\%\) for Subject 1 and \(74.00\%\) for Subject 2, and finally (iv) **SMDD**: \(96.57\%\) for Subject 1 and \(99.37\%\) for Subject 2. The results indicate that our method performs well in terms of restoration accuracy.
**MAD Performance:** Apart from restoration accuracy, we also perform MAD experiments and measure the performance using the metric APCER@5%BPCER. We report the results in Table 2. We observe a value of 2.08% for the AMSL dataset, 4.12% for FaceMorph, 12.18% for MorDIFF, and 6.41% for the SMDD dataset. Lower values indicate that our method significantly separates the distributions of morphs and bona fides.
## 5 Summary
In this paper, we have proposed a novel de-morphing method to recover the identities of the bona fides used to create the morph. Our method is reference-free, i.e., the method
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Restoration & AMSL & FRLL & FRLL & SMDD \\ Accuracy & & Face- & MorDIFF & \\ & & Morph & & \\ \hline Subject 1 & 97.70\% & 96.00\% & 78.00\% & 96.57\% \\ Subject 2 & 97.24\% & 99.50\% & 74.00\% & 99.37\% \\ \hline \end{tabular}
\end{table}
Table 1: We compute the restoration accuracy between bona fide and generated samples (\(\mathcal{O}_{i},\mathcal{I}_{i}\)), \(i=1,2\).
\begin{table}
\begin{tabular}{|l|c|} \hline Dataset & APCER @5\%BPCER \\ \hline AMSL & 2.08 \\ FRLL FaceMorph & 4.12 \\ FRLL MorDIFF & 12.18 \\ SMDD & 6.41 \\ \hline \end{tabular}
\end{table}
Table 2: Morph Attack Detection accuracy. We compute APCER@5%BPCER
Figure 7: We compare the visual quality of outputs from our method to our closest competitor [9]. The first column is the morphed input and the second column shows the original identities used to create the morph. The remaining two columns are the outputs generated from [9] and our method. In terms of facial features, our method generates face images that are much more similar to the ground truth. Moreover, the images generated are also of superior quality.
does not require any prior information on the morphing process, which is typically a requirement for existing de-morphing techniques. We use a DDPM to iteratively destroy the signal in the input morphed image and, during reconstruction, learn the noise schedule for each of the participating bona fides. To train, we employ an intuitive "cross-road" loss that automatically matches the outputs to the ground truth. We evaluate our method on the AMSL, FRLL-FaceMorph, MorDIFF, and SMDD datasets, obtaining visually compelling reconstructions and excellent biometric verification performance with original face images. We also show that our method outperforms its competitors in terms of quality of outputs (i.e., produces sharper, feature-rich images) while maintaining high restoration accuracy.
|
2305.01875 | Rationality of proper holomorphic maps between bounded symmetric domains
of the first kind | Let $D_{p,q}$ and $D_{p',q'}$ be irreducible bounded symmetric domains of the
first kind with rank $q$ and $q'$, respectively and let $f:D_{p,q}\to
D_{p',q'}$ be a proper holomorphic map that extends $C^2$ up to the boundary.
In this paper we show that if $q, q'\geq 2$ and $f$ maps Shilov boundary of
$D_{p,q}$ to Shilov boundary of $D_{p',q'}$, then $f$ is of the form $f =
\imath\circ F$, where $$
F=F_1\times F_2\colon D_{p,q}\to \Omega_1'\times \Omega_2', $$ $\Omega_1'$
and $\Omega_2'$ are bounded symmetric domains, $F_1 \colon D_{p,q}\to
\Omega_1'$ is a proper rational map, $F_2:D_{p,q}\to \Omega_2'$ is not proper
and $\imath: \Omega_1' \times \Omega'_2 \hookrightarrow D_{p',q'}$ is a
holomorphic totally geodesic isometric embedding of a reducible bounded
symmetric domain $\Omega_1' \times \Omega_2'$ into $D_{p',q'}$ with respect to
canonical K\"ahler-Einstein metrics. Moreover, if $p>q$, then $f$ is a rational
map. As an application we show that a proper holomorphic map $f:D_{p,q}\to
D_{p',q'}$ that extends $C^\infty$ up to the boundary is a rational map or a
totally geodesic isometric embedding with respect to the Kobayashi metrics, if
$3\leq q \leq q'\leq 2q-1.$ | Sung-Yeon Kim | 2023-05-03T03:34:00Z | http://arxiv.org/abs/2305.01875v1 | # Rationality of proper holomorphic maps between bounded symmetric domains of the first kind
###### Abstract.
Let \(D_{p,q}\) and \(D_{p^{\prime},q^{\prime}}\) be irreducible bounded symmetric domains of the first kind with rank \(q\) and \(q^{\prime}\), respectively and let \(f:D_{p,q}\to D_{p^{\prime},q^{\prime}}\) be a proper holomorphic map that extends \(C^{2}\) up to the boundary. In this paper we show that if \(q,q^{\prime}\geq 2\) and \(f\) maps Shilov boundary of \(D_{p,q}\) to Shilov boundary of \(D_{p^{\prime},q^{\prime}}\), then \(f\) is of the form \(f=\imath\circ F\), where
\[F=F_{1}\times F_{2}\colon D_{p,q}\to\Omega_{1}^{\prime}\times\Omega_{2}^{ \prime},\]
\(\Omega_{1}^{\prime}\) and \(\Omega_{2}^{\prime}\) are bounded symmetric domains, \(F_{1}\colon D_{p,q}\to\Omega_{1}^{\prime}\) is a proper rational map, \(F_{2}:D_{p,q}\to\Omega_{2}^{\prime}\) is not proper and \(\imath:\Omega_{1}^{\prime}\times\Omega_{2}^{\prime}\hookrightarrow D_{p^{ \prime},q^{\prime}}\) is a holomorphic totally geodesic isometric embedding of a reducible bounded symmetric domain \(\Omega_{1}^{\prime}\times\Omega_{2}^{\prime}\) into \(D_{p^{\prime},q^{\prime}}\) with respect to canonical Kahler-Einstein metrics. Moreover, if \(p>q\), then \(f\) is a rational map. As an application we show that a proper holomorphic map \(f:D_{p,q}\to D_{p^{\prime},q^{\prime}}\) that extends \(C^{\infty}\) up to the boundary is a rational map or a totally geodesic isometric embedding with respect to the Kobayashi metrics, if \(3\leq q\leq q^{\prime}\leq 2q-1\).
Key words and phrases:Proper holomorphic map, Bounded symmetric domain, Rational extension of holomorphic maps 2010 Mathematics Subject Classification: 32H35, 32M15, 14M15, 32V40
## 1. Introduction
The purpose of this paper is to investigate the possibility of extending proper holomorphic maps between bounded symmetric domains to their compact duals, which are Hermitian symmetric manifolds of compact type. Hermitian symmetric manifolds of compact type are the cominuscule rational homogeneous projective varieties. In particular, they are birational to projective spaces. Therefore it is interesting to find conditions under which a proper holomorphic map between two bounded symmetric domains can be extended to a rational map between their compact duals.
Proper holomorphic maps from a unit disc to itself are the finite Blaschke products. In higher dimensional case, there are certain rigidity phenomena. These phenomena were first discovered by Poincare ([P7]), who proved that any biholomorphic map between two connected open pieces of the unit sphere in \(\mathbb{C}^{2}\) is a restriction of an automorphism of its compact dual. Later, Alexander ([Al74]) and Henkin-Tumanov ([HeTum82]) generalized his result to higher dimensional unit balls and higher rank bounded symmetric domains respectively.
For proper holomorphic maps between balls of different dimensions, Webster ([We79]) used the CR geometry of the sphere to show that a proper holomorphic map \(f:\mathbb{B}^{n}\to\mathbb{B}^{N}\) that is three times continuously differentiable up to the boundary is a restriction of projective linear embedding of \(\mathbb{P}^{n}\) into \(\mathbb{P}^{N}\) if \(N=n+1\) and \(n\geq 3\). This result has been generalized by many
mathematicians under certain conditions on the dimension difference and the regularity of \(f\) on the boundary. We refer to the works of Cima-Suffridge [10], Faran [14], Ebenfelt [1], Forstneric [15, 16], Globevnik [17], Huang [18, 19], Huang-Ji [15], Huang-Ji-Xu [18, 19], Stensones [19], D'Angelo [20, 21, 22], D'Angelo-Kos-Riehl ([1]), D'Angelo-Lebl [1, 23] and the references therein.
When the dimension difference is arbitrary, Forstneric [16] proved the rational extendability of \(f:\mathbb{B}^{n}\to\mathbb{B}^{N}\). More precisely, he proved:
**Theorem 1.1**.: _Let \(U\) be an open ball centered at a point \(P\in\partial\mathbb{B}^{n}\), and let \(M=U\cap\partial\mathbb{B}^{n}\). If \(N>n>1\) and \(f:\overline{\mathbb{B}}^{n}\cap U\to\mathbb{C}^{N}\) is a mapping of class \(C^{N-n+1}\) that is holomorphic on \(\mathbb{B}^{n}\cap U\) and takes \(M\) to the unit sphere \(\partial\mathbb{B}^{N}\), then \(f\) is rational, \(f=(p_{1},\ldots,p_{N})/q\), where the \(p_{j}\) and \(q\) are holomorphic polynomials of degree at most \(N^{2}(N-n+1)\). The extended map is holomorphic on \(\mathbb{B}^{n}\), it maps \(\mathbb{B}^{n}\) to \(\mathbb{B}^{N}\), and it has no poles on \(\partial\mathbb{B}^{n}\)._
In contrast to the case of complex unit balls, which are precisely the bounded symmetric domains of rank \(1\), much less is known for proper holomorphic maps between bounded symmetric domains \(\Omega,\Omega^{\prime}\) of higher rank. By using the geometric properties of Hermitian symmetric spaces with respect to the canonical Kahler-Einstein metrics, Tsai ([11]) proved that a proper holomorphic map \(f:\Omega\to\Omega^{\prime}\) is a restriction of a totally geodesic isometric embedding of the compact dual, if \(\operatorname{rank}(\Omega)\geq\operatorname{rank}(\Omega^{\prime})\geq 2\). When \(\operatorname{rank}(\Omega)<\operatorname{rank}(\Omega^{\prime})\), total geodesy of \(f\) fails in general. But when the rank difference or dimension difference is sufficiently small, classification or nonexistence results for proper holomorphic maps were obtained for certain pairs of irreducible bounded symmetric domains of rank \(\geq 2\). We refer the readers to Chan [1, 20], Henkin-Novikov [1], Kim-Mok-Seo [10], Kim-Zaitsev [11, 12], Mok [13], Mok-Ng-Tu [14], Ng [11, 12, 13], Seo [14, 15, 16] and Tu [17, 18].
As for the rational extendability of \(f\), it is shown in [10] that if \(\Omega\) and \(\Omega^{\prime}\) are of the same type, or of type three and type one, respectively, and if the rank difference is sufficiently small (\(2\leq\operatorname{rank}(\Omega^{\prime})<2\operatorname{rank}(\Omega)-1\)), then \(f\) is of the form \(f=\imath\circ F\), where
\[F=F_{1}\times F_{2}\colon\Omega\to\Omega^{\prime}_{1}\times\Omega^{\prime}_{2},\]
\(\Omega^{\prime}_{1}\) and \(\Omega^{\prime}_{2}\) are bounded symmetric domains, \(F_{1}\colon\Omega\to\Omega^{\prime}_{1}\) is a totally geodesic isometric embedding and \(\imath:\Omega^{\prime}_{1}\times\Omega^{\prime}_{2}\hookrightarrow\Omega^{\prime}\) is a holomorphic totally geodesic isometric embedding of a reducible bounded symmetric domain \(\Omega^{\prime}_{1}\times\Omega^{\prime}_{2}\) into \(\Omega^{\prime}\) with respect to canonical Kahler-Einstein metrics. As a consequence \(f\) has a factor \(F_{1}\) that extends to a totally geodesic isometric embedding of the compact dual of \(\Omega\) into the compact dual of \(\Omega^{\prime}_{1}\). If \(\Omega\) is of rank one, there are proper holomorphic maps which do not have the above property. Reiter-Son [14] and Xiao-Yuan [11] classified proper holomorphic maps from \(\mathbb{B}^{n}\) to type four bounded symmetric domains. Those maps are algebraic but do not necessarily have the above property.
In this paper, we generalize the result of Forstneric. More precisely, we show the rational extendability of proper holomorphic maps \(f\) between bounded symmetric domains of type one with rank \(\geq 2\) under a certain condition on the boundary values. If a proper holomorphic map \(f\) extends continuously to the boundary, \(f\) maps the boundary of \(\Omega\) into the boundary of \(\Omega^{\prime}\). The topological
boundary of \(\Omega\) is a disjoint union of the \(G\)-orbits \(S_{r}(\Omega)\), \(r=0,\ldots,\operatorname{rank}(\Omega)-1\), where \(G\) is the identity component of \(\operatorname{\mathsf{Aut}}(\Omega)\) such that each \(S_{r}(\Omega)\) is foliated by bounded symmetric domains of rank \(r\). Among those orbits, Shilov boundary \(S_{0}(\Omega)\) is the unique closed orbit and contains no complex manifold of positive dimension. In this paper, we impose a condition that \(f\) maps Shilov boundary of \(\Omega\) to Shilov boundary of \(\Omega^{\prime}\), which is a generalization of the condition that \(f:\overline{\mathbb{B}}^{n}\cap U\to\mathbb{C}^{N}\) maps \(\partial\mathbb{B}^{n}\cap U\) to \(\partial\mathbb{B}^{N}\).
**Theorem 1.2**.: _Let \(D_{p,q}\), \(D_{p^{\prime},q^{\prime}}\) be irreducible bounded symmetric domains of type one with rank \(q\), \(q^{\prime}\), respectively and let \(f:D_{p,q}\to D_{p^{\prime},q^{\prime}}\) be a proper holomorphic map. Suppose there exist a point \(P\in S_{0}(D_{p,q})\) and an open neighborhood \(U\) of \(P\) such that \(f\) extends \(C^{2}\) to \(U\). Suppose further that \(q,q^{\prime}\geq 2\) and \(f\) maps \(S_{0}(D_{p,q})\cap U\) to \(S_{0}(D_{p^{\prime},q^{\prime}})\). Then \(f\) is of the form \(f=\imath\circ F\), where_
\[F=F_{1}\times F_{2}\colon D_{p,q}\to\Omega^{\prime}_{1}\times\Omega^{\prime}_ {2},\]
\(\Omega^{\prime}_{1}\) _and \(\Omega^{\prime}_{2}\) are bounded symmetric domains, \(F_{1}\colon D_{p,q}\to\Omega^{\prime}_{1}\) is a proper rational map, \(F_{2}:D_{p,q}\to\Omega^{\prime}_{2}\) is not proper and \(\imath:\Omega^{\prime}_{1}\times\Omega^{\prime}_{2}\hookrightarrow D_{p^{ \prime},q^{\prime}}\) is a holomorphic totally geodesic isometric embedding of a reducible bounded symmetric domain \(\Omega^{\prime}_{1}\times\Omega^{\prime}_{2}\) into \(D_{p^{\prime},q^{\prime}}\) with respect to canonical Kahler-Einstein metrics. Moreover, if \(p>q\), then \(f\) is a rational map._
We remark that if \(q=1\), i.e., \(D_{p,q}=\mathbb{B}^{p}\) and if \(f\) maps Shilov boundary to Shilov boundary, then \(f\) is a proper holomorphic map from \(\mathbb{B}^{p}\) to \(\mathbb{B}^{\dim D_{p^{\prime},q^{\prime}}}\). Therefore by Theorem 1.1, \(f\) is a rational map if \(f\) is sufficiently smooth up to the boundary and \(p>1\). If \(q^{\prime}=1\), i.e., \(D_{p^{\prime},q^{\prime}}=\mathbb{B}^{p^{\prime}}\), then there is no proper holomorphic map \(f:D_{p,q}\to\mathbb{B}^{p^{\prime}}\) that extends continuously to an open neighborhood of a boundary point if \(q>1\).
If \(F_{1}\) in Theorem 1.2 is a totally geodesic isometric embedding with respect to the canonical Kahler-Einstein metrics, then by [10], \(f\) is a totally geodesic isometric embedding with respect to the Kobayashi metric. With this result and [11], we obtain the following corollary.
**Corollary 1.3**.: _Let \(f:D_{p,q}\to D_{p^{\prime},q^{\prime}}\) be a proper holomorphic map that extends \(C^{\infty}\) up to the boundary. Suppose_
\[3\leq q\leq q^{\prime}\leq 2q-1.\]
_Then \(f\) is a rational map or of the form \(f=\imath\circ F\), where_
\[F=F_{1}\times F_{2}\colon D_{p,q}\to\Omega^{\prime}_{1}\times\Omega^{\prime} _{2},\]
\(\Omega^{\prime}_{1}\) _and \(\Omega^{\prime}_{2}\) are bounded symmetric domains, \(F_{1}\colon D_{p,q}\to\Omega^{\prime}_{1}\) is a proper rational map, \(F_{2}:D_{p,q}\to\Omega^{\prime}_{2}\) is not proper and \(\imath:\Omega^{\prime}_{1}\times\Omega^{\prime}_{2}\hookrightarrow D_{p^{ \prime},q^{\prime}}\) is a holomorphic totally geodesic isometric embedding of a reducible bounded symmetric domain \(\Omega^{\prime}_{1}\times\Omega^{\prime}_{2}\) into \(D_{p^{\prime},q^{\prime}}\) with respect to canonical Kahler-Einstein metrics. As a consequence \(f\) extends rationally to the compact dual or a totally geodesic isometric embedding with respect to the Kobayashi metrics._
Our method is to combine Kahler geometry of Hermitian symmetric spaces and CR geometry on the boundary orbits of bounded symmetric domains. We first generalize the strategy of characteristic bundles over a Hermitian symmetric space and of moduli maps induced by \(f\), first used in the work of Mok-Tsai ([12]). Using the properness of \(f\), we define a moduli map \(f^{\sharp}_{r}\) between
the moduli spaces of invariantly totally geodesic subdomains of \(\Omega\) and \(\Omega^{\prime}\), which is meromorphic. As in [10], the pseudoconcavity of the moduli space of invariantly totally geodesic subdomains forces the moduli map to extend globally to a rational map between moduli spaces of invariantly totally geodesic subspaces of the compact duals (cf. [11]).
For each point \(P\) in the compact dual \(X\) of \(\Omega\), we define a complex variety \(\mathscr{Z}_{P}^{r}\subset X\) by the union of all characteristic subspaces of rank \(r\) passing through \(P\). If \(P\) is a point in the Shilov boundary, then we can define a subset \(\mathscr{S}_{P}^{r}\subset\mathscr{Z}_{P}^{r}\) by the union of all boundary components of rank \(r\) with \(P\) in their closure. We show that \(\mathscr{S}_{P}^{r}\) is a CR submanifold in \(S_{r}(\Omega)\) and \(\mathscr{Z}_{P}^{r}\) is the smallest compact complex variety that contains \(\mathscr{S}_{P}^{r}\). Moreover, if \(f\) maps Shilov boundary to Shilov boundary, then the image of \(\mathscr{S}_{P}^{r}\) under \(f\) is determined by the second jet of \(f\) restricted to the rank \(r\) boundary orbit \(S_{r}(\Omega)\). We use the CR geometry on the boundary orbits of \(\Omega\) and \(\Omega^{\prime}\) to analyse the CR second fundamental form of \(f\) and then show that \(f\) decomposes into a product of two holomorphic maps, where one of the factors is extended rationally to a neighborhood of \(P\) via lifting and pushing down the moduli map \(f_{r}^{\sharp}\) through double fibration of the universal family of invariantly totally geodesic subspaces.
The organization of the current article is as follows. In Section 2, we describe the moduli spaces and the universal family of invariantly totally geodesic subspaces for Hermitian symmetric spaces of type one. In Section 3, we define moduli maps induced by \(f\). Then we obtain a condition for the moduli maps to be lifted to the universal family of invariantly totally geodesic subspaces. In Section 4, we investigate the CR structures of \(S_{r}(\Omega)\) and \(\mathscr{S}_{P}^{r}\). Then, in Section 5, we define the CR second fundamental form of \(f\), which describes the image of \(\mathscr{S}_{P}^{r}\) via \(f\). Throughout Sections 4 and 5, we use the Einstein summation convention unless stated otherwise. Finally, in Section 6, we prove Theorem 1.2 and Corollary 1.3.
**Acknowledgement** The author was supported by the Institute for Basic Science (IBS-R032-D1-2021-a00).
## 2. Preliminaries
In this section, we present some basic notions concerning bounded symmetric domains of type one and prove some of their basic properties. We refer to [11], [12] and [10] as references.
### Hermitian symmetric spaces of type one
For positive integers \(p\geq q\geq 1\), define a basic form \(\langle\,\ \rangle=\langle\,\ \rangle_{p,q}\) on \(\mathbb{C}^{p+q}\) by
\[\langle z,w\rangle=-\sum_{1\leq j\leq q}z_{j}\overline{w}_{j}+\sum_{q+1\leq j \leq p+q}z_{j}\overline{w}_{j}.\]
The noncompact Hermitian symmetric space \(D_{p,q}\) of type one is the set of all \(q\)-planes \(V\subset\mathbb{C}^{p+q}\) such that the restriction \(\langle\,\ \rangle\big{|}_{V}\) is negative definite. In Harish-Chandra coordinates, \(D_{p,q}\) in the complex Grassmannian \(Gr(p,q)\) is realized as follows:
Write \(M^{\mathbb{C}}(p,q)\) for the set of \(p\times q\) matrices with coefficients in \(\mathbb{C}\), and denote by \(\{e_{1},\ldots,e_{p+q}\}\) the standard basis of \(\mathbb{C}^{p+q}\). For \(Z\in M^{\mathbb{C}}(p,q)\), denoting by \(v_{k}\), \(1\leq k\leq q\), the \(k\)-th column vector
of \(Z\) as a vector in \(\mathbb{C}^{p}=\operatorname{Span}_{\mathbb{C}}\{e_{1+q},\ldots,e_{p+q}\}\) we identify \(Z\) with the \(q\)-plane in \(\mathbb{C}^{p+q}\) spanned by \(\{e_{k}+v_{k}:1\leq k\leq q\}\). Then we have
\[D_{p,q}=\left\{Z\in M^{\mathbb{C}}(p,q):I_{q}-Z^{*}Z>0\right\},\]
where \(Z^{*}\) denotes the conjugate transpose of \(Z\). Throughout the paper, we always assume that \(D_{p,q}\) is defined as above.
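For instance, when \(q=1\) the condition \(I_{q}-Z^{*}Z>0\) reads \(1-\|z\|^{2}>0\) for \(z\in\mathbb{C}^{p}\), so that

\[D_{p,1}=\{z\in\mathbb{C}^{p}:\|z\|<1\}=\mathbb{B}^{p},\]

recovering the complex unit ball as the rank one case.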
The topological boundary \(\partial D_{p,q}\) of \(D_{p,q}\) is a disjoint union of the boundary orbits \(S_{p,q,r}\), \(r=0,\ldots,q-1\), where each \(S_{p,q,r}\) consists of all \(q\)-planes \(V\subset\mathbb{C}^{p+q}\) such that the restriction \(\left\langle\,\ \right\rangle\big{|}_{V}\) has \(r\) negative and \(q-r\) zero eigenvalues. The connected identity component \(G\) of the biholomorphic automorphism group \(\mathsf{Aut}(D_{p,q})\) is identified with the identity component of the group of all linear transformations of \(\mathbb{C}^{p+q}\) preserving \(\left\langle\,\ \right\rangle\). Therefore each \(S_{p,q,r}\) is a \(G\)-orbit in \(Gr(p,q)\). The rank \(q-1\) boundary orbit \(S_{p,q,q-1}\) is called the hypersurface boundary and the rank \(0\) boundary orbit \(S_{p,q,0}\) is called _Shilov boundary_. We remark that \(S_{p,q,0}\) is the set of extremal points of \(\overline{D}_{p,q}\).
For a pair of linear subspaces \((V_{1},V_{2})\) such that \(V_{1}\subset V_{2}\), denote by \([V_{1},V_{2}]_{q}\) the set of all elements \(x\in Gr(p,q)\) such that
\[V_{1}\subset V_{x}\subset V_{2}, \tag{2.1}\]
where \(V_{x}\) is the \(q\)-dimensional subspace of \(\mathbb{C}^{p+q}\) corresponding to \(x\in Gr(p,q)\). For a linear subspace \(A\subset\mathbb{C}^{p+q}\), denote by \(A^{*}\) the space of all vectors \(v\) in \(\mathbb{C}^{p+q}\) such that
\[\left\langle\cdot,v\right\rangle\big{|}_{A}=0.\]
\(A\) is called a _null space_ if
\[\left\langle A,A\right\rangle=0\]
or equivalently,
\[A\subset A^{*}.\]
If \(A\) is a \((q-r)\)-dimensional null space, then \(A^{*}\) is a \((p+r)\)-dimensional subspace containing \(A\) and \([A,A^{*}]_{q}\cap S_{p,q,r}\) is a maximal complex submanifold in \(\partial D_{p,q}\) biholomorphic to \(D_{p-q+r,r}\), called a _boundary component_ of rank \(r\). Up to the action of \(\mathsf{Aut}(D_{p,q})\), a boundary component of rank \(r\) is equivalent to a standard boundary component
\[F_{r}=\left\{\begin{pmatrix}I_{q-r}&0\\ 0&Z\end{pmatrix},Z\in D_{p-q+r,r}\right\}.\]
We remark that \([A,A^{*}]_{q}\) is the compact dual of \([A,A^{*}]_{q}\cap S_{p,q,r}\). Conversely, every maximal complex submanifold in \(\partial D_{p,q}\) is of this form. See [20].
### Invariantly totally geodesic subspaces
An invariantly totally geodesic subspace of subdiagram type in \(X=Gr(p,q)\) is a subgrassmannian \([V_{1},V_{2}]_{q}\) for some linear subspaces \(V_{1}\) and \(V_{2}\) such that \(V_{1}\subset V_{2}\). Hence, for fixed positive integers \(a\leq b\), the moduli space of invariantly totally geodesic subspaces of subdiagram type with \(\dim V_{1}=a\), \(\dim V_{2}=b\) is the flag variety
\[\mathcal{F}_{a,b}(X)=\{(V_{1},V_{2}):\{0\}\subset V_{1}\subset V_{2}\subset \mathbb{C}^{p+q},\dim V_{1}=a,\,\dim V_{2}=b\}. \tag{2.2}\]
For \(\sigma=(V_{1},V_{2})\in\mathcal{F}_{a,b}(X)\) we sometimes denote the corresponding subgrassmannian \([V_{1},V_{2}]_{q}\) by \(X_{\sigma}\).
A subgrassmannian \([V_{1},V_{2}]_{q}\subset Gr(p,q)\) with \(\dim V_{1}=q-r\) and \(\dim V_{2}=p+r\) is called a _characteristic subspace_ of rank \(r\). We denote the moduli space of characteristic subspaces of rank \(r\) for \(r=1,\ldots,q-1\) by \(\mathcal{D}_{r}(X)\), i.e.,
\[\mathcal{D}_{r}(X)=\{(V_{1},V_{2}):\{0\}\subset V_{1}\subset V_{2}\subset \mathbb{C}^{p+q},\dim V_{1}=q-r,\,\dim V_{2}=p+r\}. \tag{2.3}\]
Then \(\mathcal{D}_{r}(X)\) is biholomorphic to \(G/P_{r}\) for a parabolic subgroup \(P_{r}\) of \(G\) and the automorphism group is \(SL(p+q,\mathbb{C})\) for \(r>0\) (see section 3.3 in [1]). In particular, \(\mathcal{D}_{r}(X)\) is a rational homogeneous manifold.
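For orientation, we record a standard identification: since the \(q\)-planes \(V\) with \(V_{1}\subset V\subset V_{2}\) correspond to the \((q-a)\)-planes in \(V_{2}/V_{1}\), where \(a=\dim V_{1}\) and \(b=\dim V_{2}\), one has

\[[V_{1},V_{2}]_{q}\simeq Gr(b-q,q-a).\]

In particular, a characteristic subspace of rank \(r\) is biholomorphic to \(Gr(p-q+r,r)\), the compact dual of the rank \(r\) boundary component \(D_{p-q+r,r}\).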
For a given \(\Omega=D_{p,q}\) and its compact dual \(X=Gr(p,q)\), define
\[\mathcal{D}_{r}(\Omega):=\{\sigma\in\mathcal{D}_{r}(X)\colon\Omega_{\sigma}:= X_{\sigma}\cap\Omega\neq\emptyset\}.\]
Here \(\Omega_{\sigma}\) is a bounded symmetric domain of type one with rank \(r\). We call \(\Omega_{\sigma}\) a characteristic subdomain of rank \(r\). For a rank \(k\) boundary orbit \(S_{k}=S_{k}(\Omega)=S_{p,q,k}\) of \(\Omega\) with \(k\geq r\), define
\[\mathcal{D}_{r}(S_{k}):=\{\sigma\in\mathcal{D}_{r}(X)\colon\Omega_{\sigma}:=X _{\sigma}\cap S_{k}\text{ is open in }X_{\sigma}\}.\]
Then \(\mathcal{D}_{r}(\Omega)\) and \(\mathcal{D}_{r}(S_{k})\), \(k=r,\ldots,q-1\) are \(G\)-orbits in \(\mathcal{D}_{r}(X)\) such that \(\mathcal{D}_{r}(S_{k})\subset\partial\mathcal{D}_{r}(\Omega)\). Furthermore \(\mathcal{D}_{r}(S_{r})\) is the unique closed orbit of \(G\) such that \(\sigma\in\mathcal{D}_{r}(S_{r})\) if and only if \(\Omega_{\sigma}\) is a boundary component of rank \(r\).
### Associated characteristic bundle
A unit vector \(v\in T_{x}X\) is called a _characteristic vector_ if it realizes the maximum of the holomorphic sectional curvature of \(X\) with respect to the canonical Kahler-Einstein metric. Characteristic fiber bundle \(\mathscr{C}(X)\) is the bundle
\[\mathscr{C}(X)=\bigcup_{x\in X}\{[v]\in\mathbb{P}T_{x}X:v\text{ is a characteristic vector}\}\]
which is a holomorphic fiber bundle over \(X\). When the noncompact dual of \(X\) is realized as a bounded symmetric domain \(\Omega\) via the Harish-Chandra embedding in \(X\), the fibers \(\mathscr{C}_{x}\) over \(x\in\Omega\) are parallel with respect to Harish-Chandra coordinates.
For each characteristic vector \(v\in T_{x}X\), there is an orthogonal decomposition
\[T_{x}X=\mathbb{C}v\oplus\mathscr{H}_{v}\oplus\mathscr{N}_{v}\]
into eigenspaces of the Hermitian form
\[\mathcal{R}_{v}(\xi,\eta):=R_{v\bar{v}\xi\bar{\eta}}\]
corresponding to eigenvalues \(R_{v\bar{v}v\bar{v}}\), \(\tfrac{1}{2}R_{v\bar{v}v\bar{v}}\) and \(0\), where \(R\) is the curvature tensor of the canonical Kahler-Einstein metric on \(X\). Let \(\mathbb{P}_{v}\) be the projective line in \(X\) tangent to \(v\) that realizes the maximum of holomorphic sectional curvature. Then there exists a unique characteristic subspace \(X_{\sigma}\) of rank \(q-1\) passing through \(x\) such that the tangent space of \(X_{\sigma}\) at \(x\) is given by
\[T_{x}X_{\sigma}=\mathscr{N}_{v}.\]
Conversely, for each characteristic subspace \(X_{\sigma}\) of rank \(q-1\) passing through \(x\), up to a constant multiple, there exists a unique characteristic vector \(v_{x}\) at \(x\) such that
\[T_{x}X_{\sigma}=\mathscr{N}_{v_{x}}.\]
Moreover, for \(x,y\in X_{\sigma}\cap\overline{\Omega}\), \(v_{x}\) and \(v_{y}\) are parallel in Harish-Chandra coordinates. In either case, \(\mathbb{P}_{v}\times X_{\sigma}\) is a totally geodesic submanifold in \(X\) for \(v=v_{x}\) or \(v=v_{y}\). Define
\[\mathscr{N}_{q-1}(X)=\bigcup_{[v]\in\mathscr{C}(X)}\mathscr{N}_{v}=\{T_{x}X_ {\sigma}:\sigma\in\mathcal{D}_{q-1}(X),x\in X_{\sigma}\}.\]
It is a complex homogeneous fiber bundle over \(X\), called the _associated characteristic bundle_ of rank \(q-1\). Likewise we can define associated characteristic bundles of rank \(r\) by
\[\mathscr{N}_{r}(X)=\{T_{x}X_{\sigma}:\sigma\in\mathcal{D}_{r}(X),x\in X_{ \sigma}\},\quad r=1,\ldots q-1,\]
which are complex homogeneous fiber bundles over \(X\).
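To fix ideas, in the Harish-Chandra chart one has \(T_{x}X\simeq M^{\mathbb{C}}(p,q)\) and the characteristic vectors at \(x\) are, up to scale, the rank one matrices. For \(v=ab^{t}\) with \(a\in\mathbb{C}^{p}\), \(b\in\mathbb{C}^{q}\), one checks (up to conjugation conventions) that

\[\mathscr{N}_{v}=\{w\in M^{\mathbb{C}}(p,q):a^{*}w=0,\ w\bar{b}=0\},\]

a subspace of dimension \((p-1)(q-1)\), consistent with the fact that the rank \(q-1\) characteristic subspace tangent to \(\mathscr{N}_{v}\) is biholomorphic to \(Gr(p-1,q-1)\).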
Let \(A\subset\mathscr{C}_{x}(X)\) be a set of characteristic directions at \(x\). Denote
\[\mathscr{N}_{A}=\bigcap_{[v]\in A}\mathscr{N}_{v}.\]
Then there exists a unique invariantly totally geodesic subspace \(X_{A}\) passing through \(x\) such that
\[T_{x}X_{A}=\mathscr{N}_{A}.\]
Furthermore, there exists a unique invariantly totally geodesic subspace \(Y_{A}\) of subdiagram type passing through \(x\) such that
\[\mathscr{N}_{A}=\mathscr{N}_{\mathscr{C}_{x}(X)\bigcap\mathbb{P}T_{x}Y_{A}}.\]
Let
\[\mathscr{C}_{x}(X_{A}):=\{[v]\in\mathscr{C}_{x}(X):v\in T_{x}X_{A}\}.\]
Then
\[T_{x}Y_{A}=\mathscr{N}_{\mathscr{C}_{x}(X_{A})}.\]
We will denote by \(\mathscr{H}_{A}\) the orthogonal complement of \(T_{x}Y_{A}+\mathscr{N}_{A}\). Then we have an orthogonal decomposition
\[T_{x}X=T_{x}Y_{A}\oplus\mathscr{H}_{A}\oplus\mathscr{N}_{A}. \tag{2.4}\]
### Universal family of characteristic subspaces
For a given \(r>0\), define
\[\mathcal{U}_{r}(X):=\{(x,\sigma)\in X\times\mathcal{D}_{r}(X)\colon x\in X_{ \sigma}\}.\]
Then there is a canonical double fibration
\[\rho_{r}\colon\mathcal{U}_{r}(X)\to\mathcal{D}_{r}(X),\quad\pi_{r}\colon \mathcal{U}_{r}(X)\to X\]
given by
\[\rho_{r}(x,\sigma)=\sigma,\quad\pi_{r}(x,\sigma)=x.\]
Note that
\[X_{\sigma}=\pi_{r}\left(\rho_{r}^{-1}(\sigma)\right).\]
Define \(\imath_{r}\colon\mathcal{U}_{r}(X)\to\mathcal{G}(n_{r},TX)\) with \(n_{r}=\dim X_{\sigma}\) by \(\imath_{r}(x,\sigma)=T_{x}X_{\sigma}\), where \(\mathcal{G}(n_{r},TX)\) is the bundle of \(n_{r}\)-dimensional subspaces of the fibers of \(TX\). Then \(\imath_{r}\) is a \(G\)-equivariant holomorphic embedding. Hence we may regard \(\mathcal{U}_{r}(X)\) as a complex manifold in \(\mathcal{G}(n_{r},TX)\). For each \(x\in X\), define
\[\mathcal{Z}_{x}^{r}:=\rho_{r}(\pi_{r}^{-1}(x)).\]
Similarly, we can define a \(G\)-equivariant holomorphic embedding \(j_{r}\colon\mathcal{U}_{r}(X)\to\mathcal{G}(m_{r},T\mathcal{D}_{r}(X))\) with \(m_{r}=\dim\mathcal{Z}_{x}^{r}\) by \(j_{r}(x,\sigma)=T_{\sigma}\mathcal{Z}_{x}^{r}\) and we may regard \(\mathcal{U}_{r}(X)\) as a complex manifold in \(\mathcal{G}(m_{r},T\mathcal{D}_{r}(X))\)(See [10]).
For a complex submanifold \(M\subset X\), define
\[\mathcal{Z}_{M}^{r}:=\{\sigma\in\mathcal{D}_{r}(X):M\subset X_{\sigma}\}, \quad\mathcal{S}_{M}^{r}:=\mathcal{Z}_{M}^{r}\cap\mathcal{D}_{r}(S_{r}),\]
\[\mathscr{Z}_{M}^{r}:=\pi_{r}\left(\rho_{r}^{-1}(\mathcal{Z}_{M}^{r})\right), \quad\mathscr{S}_{M}^{r}:=\mathscr{Z}_{M}^{r}\cap S_{r}.\]
For \(M=X_{\sigma}\) and \(M=\{x\}\), we will denote \(\mathcal{Z}_{M}^{r}\) by \(\mathcal{Z}_{\sigma}^{r}\) and \(\mathcal{Z}_{x}^{r}\), respectively, for simplicity. We remark that if \(M=[A,B]_{q}\), then
\[\mathcal{Z}_{M}^{r}=\{\sigma=(V,W)\in\mathcal{D}_{r}(X):V\subset A,\ B\subset W\}\]
and
\[\mathscr{Z}_{M}^{r}=\{x\in X:\dim V_{x}\cap A\geq q-r,\ \dim(V_{x}+B)\leq p+r\}. \tag{2.5}\]
For a given \(r\), we will omit superscript \(r\) if there is no confusion.
**Lemma 2.1**.: _Let \(M\subset X\) be a subgrassmannian. Then \(\mathcal{Z}_{M}^{r}\) is a projective algebraic manifold covered by a finite union of Euclidean coordinate charts and \(\mathscr{Z}_{M}^{r}\) is a complex variety in \(X\). Moreover, if \(M\) is a compact dual of a rank \(s\) boundary component, i.e. \(M=X_{\sigma}\) with \(\sigma\in\mathcal{D}_{s}(S_{s})\), then \(\mathcal{S}_{M}^{r}\) and \(\mathscr{S}_{M}^{r}\) are smooth manifolds._
Proof.: Let \(M=[V_{M},W_{M}]_{q}\) be a subgrassmannian in \(X\). Then \(\sigma=(A,B)\in\mathcal{D}_{r}(X)\) is contained in \(\mathcal{Z}_{M}^{r}\) if and only if
\[A\subset V_{M},\quad W_{M}\subset B.\]
Therefore \(\mathcal{Z}_{M}^{r}\) is biholomorphic to \(Gr(q-r,V_{M})\times Gr(r-s,\mathbb{C}^{p+q}/W_{M})\). Since \(\pi_{r}\) is a proper holomorphic map, \(\mathscr{Z}_{M}^{r}=\pi_{r}(\rho_{r}^{-1}(\mathcal{Z}_{M}^{r}))\) is a complex analytic variety in \(X\).
Now suppose \(M\) is of the form
\[M=[V_{M},V_{M}^{*}]_{q}\]
for some \((q-s)\)-dimensional null space \(V_{M}\). Then
\[\mathcal{S}_{M}^{r}=\{(A,A^{*}):A\in Gr(q-r,V_{M})\}\]
and
\[\mathscr{S}_{M}^{r}=\{x\in S_{r}:\dim V_{x}\cap V_{M}=q-r\},\]
which completes the proof.
**Lemma 2.2**.: _Suppose \(\Omega\) is of tube type, i.e., \(\Omega=D_{q,q}\). Then for a Shilov boundary point \(P\in S_{0}\), \(\mathcal{S}_{P}^{r}\) is a maximal totally real submanifold of \(\mathcal{Z}_{P}^{r}\)._
Proof.: Let \(\sigma=(A,B)\in\mathcal{Z}_{P}^{r}\). Then \(A\) is a subspace of \(V_{P}\), implying that
\[\langle A,A\rangle=0\]
and \(\sigma\in\mathcal{D}_{r}(S_{r})\) if and only if
\[B=A^{*}.\]
Hence under a biholomorphic equivalence \(\mathcal{Z}_{P}\simeq Gr(q-r,V_{P})\times Gr(q-r,\mathbb{C}^{2q}/V_{P})\), \(\mathcal{S}_{P}\) is identified with
\[\{(A,A^{*}/V_{P}):A\in Gr(q-r,V_{P})\}\subset Gr(q-r,V_{P})\times Gr(q-r, \mathbb{C}^{2q}/V_{P}).\]
Therefore there exists a totally real embedding \(\imath:Gr(q-r,V_{P})\to\mathcal{Z}_{P}\) defined by \(\imath(A)=(A,A^{*})\) whose image coincides with \(\mathcal{S}_{P}\), which completes the proof.
We regard \(X=Gr(p,q)\) as a submanifold in a projective space \(\mathbb{P}^{N}\) via the first canonical embedding \(\imath:X\to\mathbb{P}^{N}\). Then for each invariantly totally geodesic subspace \(X_{\sigma}\) of subdiagram type, there exists a unique \(\ell\)-dimensional projective space \(\mathbb{P}_{\sigma}^{\ell}\subset\mathbb{P}^{N}\) such that
\[\imath(X_{\sigma})=\imath(X)\cap\mathbb{P}_{\sigma}^{\ell}.\]
\(\mathbb{P}_{\sigma}^{\ell}\) is the smallest projective space in \(\mathbb{P}^{N}\) that contains \(\imath(X_{\sigma})\). Therefore the manifold
\[\mathcal{U}_{a,b}(X):=\{(x,\sigma)\in X\times\mathcal{F}(a,b;\mathbb{C}^{p+q}) :x\in X_{\sigma}\}\]
can be regarded as a submanifold in \(\mathbb{P}^{N}\times\mathscr{G}(\ell,N)\), where \(\mathscr{G}(\ell,N)\) is the set of all \(\ell\)-dimensional projective spaces in \(\mathbb{P}^{N}\).
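For instance, for \(X=Gr(p,q)\) the first canonical embedding is the Plucker embedding \(\imath:Gr(p,q)\hookrightarrow\mathbb{P}(\Lambda^{q}\mathbb{C}^{p+q})\), so that \(N=\binom{p+q}{q}-1\), and for \(\sigma=(V_{1},V_{2})\) with \(a=\dim V_{1}\), \(b=\dim V_{2}\) the span of \(\imath(X_{\sigma})\) is

\[\mathbb{P}^{\ell}_{\sigma}=\mathbb{P}\big{(}\omega_{V_{1}}\wedge\Lambda^{q-a}V_{2}\big{)},\qquad\ell=\binom{b-a}{q-a}-1,\]

where \(\omega_{V_{1}}\) is a generator of \(\Lambda^{a}V_{1}\).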
Let
\[\mathcal{M}=\{(x,L)\in\mathbb{P}^{N}\times\mathscr{G}(\ell,N):x\in L\}\]
be the tautological \(\mathbb{P}^{\ell}\)-bundle over \(\mathscr{G}(\ell,N)\). Then \(\mathcal{M}\) is defined locally by a quadratic equation
\[Q(x,L)=(Q_{1},\ldots,Q_{N-\ell})(x,L)=0\]
with the property
\[\{x\in\mathbb{P}^{N}:Q(x,L)=0\}=L.\]
For a point \(P\in\mathbb{P}^{N}\), define
\[\mathcal{M}_{P}:=\{P\}\times\{L\in\mathscr{G}(\ell,N):P\in L\}.\]
Then \(\mathcal{M}_{P}\) is a projective algebraic manifold defined locally by
\[Q(P,\cdot)=0.\]
Choose an open set \(O\subset\mathbb{P}^{N}\times\mathscr{G}(\ell,N)\) on which \(Q=0\) can be expressed by
\[Q_{j}(x,L)=\sum_{k=0}^{N}a_{j,k}x_{k}=0,\quad j=1,\ldots,N-\ell,\]
where \(x=[x_{0};x_{1};\cdots;x_{N}]\in\mathbb{P}^{N}\) and \((a_{j,k})_{j,k}\) is a matrix of size \((N-\ell)\times(N+1)\). We may assume \(x_{0}=1\) and \((a_{j,k})_{j,k=1}^{N-\ell}\) is an identity matrix. A complex manifold \(\mathcal{N}\subset\mathcal{M}_{P}\cap O\) is defined locally
by \(\{(P,h(\zeta))\in\mathcal{M}_{P}:\zeta\in U\}\) for some holomorphic embedding \(h:U\to\mathscr{G}(\ell,N)\), where \(U\) is a connected open set in a complex Euclidean space. Then \(h(\zeta)=(h_{j,k}(\zeta))\) satisfies
\[A_{j}(P;\zeta):=Q_{j}(P,h(\zeta))=\sum_{k=0}^{N}h_{j,k}(\zeta)P_{k}=0,\quad j=1,\ldots,N-\ell.\]
By taking the derivatives with respect to \(\zeta\) at \(\zeta_{0}\in U\), we obtain a system of affine equations \(A^{(m)}(\cdot;\zeta_{0})=((\partial_{\zeta}^{\alpha}A_{j})(\cdot;\zeta_{0}):j =1,\ldots,N-\ell,\ |\alpha|\leq m)\) defined by
\[(\partial_{\zeta}^{\alpha}A_{j})(P;\zeta_{0})=\sum_{k=0}^{N}(\partial_{\zeta }^{\alpha}h_{j,k})(\zeta_{0})P_{k},\quad j=1,\ldots,N-\ell\]
whose coefficients depend holomorphically on \(\zeta_{0}\), where for \(\alpha\in\mathbb{N}^{d}\) with \(d=\dim U\),
\[\partial_{\zeta}^{\alpha}=\partial_{\zeta_{1}}^{\alpha_{1}}\cdots\partial_{ \zeta_{d}}^{\alpha_{d}}\]
and
\[|\alpha|=\alpha_{1}+\cdots+\alpha_{d}.\]
By linear algebra, we obtain the following lemma.
**Lemma 2.3**.: _Let \(\mathcal{N}\subset\mathcal{M}_{P}\cap O\) be a complex manifold. Then_
\[\bigcap_{L\in\mathcal{N}}L=\{P\}\]
_if and only if there exists an integer \(m\) and a point \((P,h(\zeta_{0}))\in\mathcal{N}\) such that \(A^{(m)}(\cdot;\zeta_{0})\) is of maximal rank._
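In more detail, a point \(P^{\prime}\in\mathbb{P}^{N}\) lies in every \(L\in\mathcal{N}\) if and only if the holomorphic functions \(\zeta\mapsto A_{j}(P^{\prime};\zeta)\) vanish identically on \(U\), that is, if and only if \((\partial_{\zeta}^{\alpha}A_{j})(P^{\prime};\zeta_{0})=0\) for all \(j\) and all \(\alpha\). The latter is an affine system in the coordinates of \(P^{\prime}\) whose ranks stabilize at some finite order \(m\), and its solution set reduces to the single point \(P\) precisely when the rank is maximal.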
## 3. Induced moduli maps
We identify \(X^{\prime}=Gr(p^{\prime},q^{\prime})\) with its image in a projective space \(\mathbb{P}^{N^{\prime}}\) via the first canonical embedding. For each \(r>0\), there exists a pair of integers \((a,b)=(a_{r},b_{r})\) depending only on \(r\) such that for general \(\sigma\in\mathcal{D}_{r}(\Omega)\), the smallest subgrassmannian of \(X^{\prime}\) that contains \(f(\Omega_{\sigma})\) is of the form \([A_{\sigma},B_{\sigma}]_{q^{\prime}}\) with \(\dim A_{\sigma}=a,\ \dim B_{\sigma}=b\). Define a map \(f_{r}^{\sharp}:\mathcal{D}_{r}(\Omega)\to\mathcal{F}_{a,b}(X^{\prime})\) by
\[f_{r}^{\sharp}(\sigma)=(A_{\sigma},B_{\sigma}).\]
Then as in [KiMSe22], \(f_{r}^{\sharp}\) extends to a rational map \(f_{r}^{\sharp}:\mathcal{D}_{r}(X)\to\mathcal{F}_{a,b}(X^{\prime})\). Define
\[\mathcal{U}^{\prime}_{a,b}:=\{(y,L)\in X^{\prime}\times\mathcal{F}_{a,b}(X^{ \prime}):y\in L\}.\]
Then there exists a canonical double fibration
\[\rho^{\prime}_{a,b}\colon\mathcal{U}^{\prime}_{a,b}\to\mathcal{F}_{a,b}(X^{ \prime}),\quad\pi^{\prime}_{a,b}\colon\mathcal{U}^{\prime}_{a,b}\to X^{ \prime}.\]
The map \(\mathcal{F}_{r}\colon\mathcal{U}_{r}(\Omega)\to\mathcal{U}^{\prime}_{a,b}\) defined by \(\mathcal{F}_{r}(x,\sigma)=(f(x),f_{r}^{\sharp}(\sigma))\) preserves the double fibrations, i.e.
\[\pi^{\prime}_{a,b}\circ\mathcal{F}_{r}=f\circ\pi_{r},\quad\rho^{\prime}_{a,b }\circ\mathcal{F}_{r}=f_{r}^{\sharp}\circ\rho_{r},\]
where
\[\rho_{r}:\mathcal{U}_{r}(X)\rightarrow\mathcal{D}_{r}(X),\quad\pi_{r}:\mathcal{U} _{r}(X)\to X\]
is the canonical double fibration of the universal characteristic bundle over \(X.\)
We will regard \(\mathcal{U}^{\prime}_{a,b}\) as a submanifold of \(\mathbb{P}^{N^{\prime}}\times\mathscr{G}(\ell,N^{\prime})\) for some \(\ell\) and \(N^{\prime}.\) Similar to the case of universal characteristic bundles, for a complex submanifold \(M\subset X^{\prime}\) we define
\[\mathcal{Z}^{\prime\;\;a,b}_{M}:=\{(A,B)\in\mathcal{F}_{a,b}(X^{\prime}):M \subset[A,B]_{q^{\prime}}\}\]
and
\[\mathscr{Z}^{\prime\;\;a,b}_{M}=\pi^{\prime}_{a,b}\left(\left(\rho^{\prime}_{ a,b}\right)^{-1}\left(\mathcal{Z}^{\prime\;\;a,b}_{M}\right)\right).\]
We will omit superscript \(a,b\) if there is no confusion.
In the rest of this section, we will prove the following lemma, whose proof can be given by a slight modification of the proof of Proposition 2.6 in [10]. To make this paper self-contained, we will include the proof.
**Lemma 3.1**.: _Let \(f:\Omega\rightarrow\Omega^{\prime}\) be a proper holomorphic map. Suppose there exists a point \(x\in\Omega\) and a finite collection \(\Omega_{\sigma_{1}},\ldots,\Omega_{\sigma_{k}}\) of characteristic subdomains of rank \(1\) passing through \(x\) with \(\sigma_{j}\in Dom(f_{1}^{\sharp})\), \(j=1,\ldots,k\) such that_
\[\bigcap_{j}X^{\prime}_{f_{1}^{\sharp}(\sigma_{j})}=\{f(x)\}, \tag{3.1}\]
_where_
\[X^{\prime}_{f_{1}^{\sharp}(\sigma)}=\pi^{\prime}_{a,b}\left((\rho^{\prime}_{a,b})^{-1}(f_{1}^{\sharp}(\sigma))\right)\]
_for \((a,b)=(a_{1},b_{1})\). Then \(f\) has a rational extension \(\widehat{f}:X\to X^{\prime}.\)_
We may assume that \(x=0\in\Omega.\) Let \(\mathcal{N}\) be the regular locus of the closure of \(f_{1}^{\sharp}(\mathcal{Z}_{0}\cap Dom(f_{1}^{\sharp}))\). Then \(\mathcal{N}\) is a complex manifold in \(\mathcal{Z}^{\prime}_{f(0)}.\) Since \(f_{1}^{\sharp}\) is rational, \(\mathcal{N}\cap f_{1}^{\sharp}(\mathcal{Z}_{0}\cap Dom(f_{1}^{\sharp}))\) is dense in \(f_{1}^{\sharp}(\mathcal{Z}_{0}\cap Dom(f_{1}^{\sharp})).\) Therefore by condition (3.1), we obtain
\[\bigcap_{L\in\mathcal{N}}L=\{f(0)\}.\]
By Lemma 2.3, there exists an integer \(m>0\) and \(\sigma_{0}\in Dom(f_{1}^{\sharp})\cap\mathcal{Z}_{0}\) such that \(A^{(m)}(\cdot;\sigma_{0})\) is of maximal rank, where \(A^{(m)}(\cdot;\cdot)\) is the system defined in Section 2.4 for \(h=f_{1}^{\sharp}\big|_{\mathcal{Z}_{0}}\). We will show that \(f\) extends meromorphically to \(X_{\sigma_{0}}\subset X\).
Let \(y\in X_{\sigma_{0}}.\) Then \(\sigma_{0}\in\mathcal{Z}_{y}.\) Consider the equation
\[Q(u,f_{1}^{\sharp}(\sigma))=0,\quad(u,\sigma)\in\mathbb{P}^{N^{\prime}}\times Dom (f_{1}^{\sharp}),\]
where \(Q\) is a quadratic defining equation of the tautological \(\mathbb{P}^{\ell}\)-bundle over \(\mathscr{G}(\ell,N^{\prime}).\) Then the derivatives of the above equation with respect to \(\sigma\in\mathcal{Z}_{y}\cap Dom(f_{1}^{\sharp})\) at \(\sigma_{0}\) depend holomorphically on a finite jet of \(f_{1}^{\sharp}\) at \(\sigma_{0}\) and on tangent vectors of \(\mathcal{Z}_{y}\) at \(\sigma_{0}.\) Let
\[h_{y}:=f_{1}^{\sharp}\big{|}_{\mathcal{Z}_{y}}\]
and let \(A_{y}^{(m)}(\cdot,\sigma_{0})\) be the affine system defined in Section 2.4 with respect to \(h_{y}\). Note that the solution of the system \(A_{y}^{(m)}(\cdot;\sigma_{0})=0\) depends meromorphically on the coefficients of the system \(A_{y}^{(m)}(\cdot;\sigma_{0})\). Since \(\mathcal{U}_{1}(X)\) is biholomorphic to
\[\{(\sigma,T_{\sigma}\mathcal{Z}_{y}):(y,\sigma)\in\mathcal{U}_{1}(X)\}\subset Gr (m,T\mathcal{D}_{1}(X)),\quad m=\dim\mathcal{Z}_{y},\]
\(T_{\sigma_{0}}\mathcal{Z}_{y}\) depends holomorphically on \(y\). Furthermore, \(f_{1}^{\sharp}\big{|}_{\mathcal{Z}_{y}}\) depends meromorphically on a finite jet of \(f\) at \(y\). Therefore the coefficients of \(A_{y}^{(m)}(\cdot;\sigma_{0})\) depend meromorphically on \(y\), implying that the solution to
\[A_{y}^{(m)}(\cdot,\sigma_{0})=0\]
depends meromorphically on \(y\in X_{\sigma_{0}}\).
Note that the condition on the rank of \(A_{y}^{(m)}(\cdot;\sigma)\) is generic. Therefore there exists a dense open set \(\mathscr{W}_{0}\subset\mathcal{D}_{1}(\Omega)\) such that \(f\) extends meromorphically on \(X_{\sigma}\) for all \(\sigma\in\mathscr{W}_{0}\).
**Lemma 3.2**.: \(f\) _extends rationally to \(X\)._
Proof.: Let
\[W_{1}:=\bigcup_{\sigma\in\mathscr{W}_{0}}X_{\sigma}.\]
First we will show that \(f\) extends meromorphically to \(W_{1}\). Fix a point \(\sigma_{0}\in\mathscr{W}_{0}\) and choose a point \(y_{0}\in\Omega^{c}\cap X_{\sigma_{0}}\). It is enough to show that the solution to
\[Q(\cdot,L)=0,\quad L\in f_{1}^{\sharp}(\mathscr{W}_{0}\cap\mathcal{Z}_{y})\]
depends meromorphically on \(y\) on an open neighborhood of \(y_{0}\). Choose a small ball \(B\subset X\) centered at \(y_{0}\) and choose a holomorphic section \(s:B\to\mathcal{U}_{1}(X)\) such that \(s(y_{0})=(y_{0},\sigma_{0})\). After shrinking \(B\) if necessary, we may assume \(\rho_{1}\circ s(B)\subset\mathscr{W}_{0}\). Write
\[\sigma(y)=\rho_{1}\circ s(y),\quad y\in B.\]
Since \(f_{1}^{\sharp}\) is rational, the \(m\)-jet of \(f_{1}^{\sharp}\) at \(s(y)\) depends meromorphically on \(y\). Hence the coefficients of the system \(A_{y}^{(m)}(\cdot;\sigma)\) at \(\sigma=\sigma(y)\) depend meromorphically on \(y\). It implies that the unique solution of the system
\[Q(\cdot,L)=0,\quad L\in f_{1}^{\sharp}(\mathscr{W}_{0}\cap\mathcal{Z}_{y})\]
should be the meromorphic extension of \(f\) on \(B\).
Suppose \(W_{i-1}\subset X\) and \(\mathscr{W}_{i-1}\subset\mathcal{D}_{1}(X)\) are defined such that \(f\) extends meromorphically on \(W_{i-1}\) and \(A^{(m)}(\cdot;\sigma)\) has maximal rank for all \(\sigma\in\mathscr{W}_{i-1}\), where we let \(W_{0}=\Omega\). Define
\[W_{i}:=\bigcup_{\sigma\in\mathscr{W}_{i-1}}X_{\sigma}.\]
Then as above, \(f\) extends meromorphically on \(W_{i}\). Define \(\mathscr{W}_{i}\) to be the set of points \(\sigma\) in \(\mathcal{Z}_{y},\ y\in W_{i}\) such that \(A_{y}^{(m)}(\cdot;\sigma)\) has maximal rank at \(\sigma\). Since \(y\in W_{i}\), by definition, there exists a point in
\(\mathcal{Z}_{y}\) at which \(A_{y}^{(m)}(\cdot;\sigma)\) is of maximal rank. Since the rank condition is generic, \(\mathscr{W}_{i}\cap\mathcal{Z}_{y}\) is dense in \(\mathcal{Z}_{y}\). In particular,
\[\mathscr{W}_{i-1}\subsetneq\mathscr{W}_{i},\]
if \(W_{i-1}\) is a proper subset of \(W_{i}\).
Finally we will show that there exists \(i\) such that \(W_{i}=X\). Observe that by the Polysphere Theorem ([10]), for any two points \(x,y\in X\), there exists a chain of \(X_{\sigma},\ \sigma\in\mathcal{D}_{1}(X)\) of length \(\leq q\) that connects \(x\) and \(y\). Let \(x\in\Omega,\ y\in\Omega^{c}\). Choose a chain \(X_{\sigma_{1}},\ldots,X_{\sigma_{a}}\) connecting \(x\) and \(y\). Let
\[\mathscr{W}:=\{\sigma\in\mathcal{D}_{1}(X):A_{z}^{(m)}(\cdot;\sigma)\text{ is of maximal rank for some }z\in X_{\sigma}\}.\]
Then \(\mathscr{W}\) is Zariski dense in \(\mathcal{D}_{1}(X)\). Note that the projection map in [11] is a holomorphic surjection. Therefore, by perturbing each \(\sigma_{j}\) slightly, we may choose characteristic subspaces \(\widetilde{\sigma}_{1},\ldots,\widetilde{\sigma}_{a}\) in \(\mathscr{W}\) whose chain connects \(y\) and a point \(\widetilde{x}\in\Omega\), i.e. \(y\in W_{a}\). Therefore \(f\) extends meromorphically on a neighborhood of \(y\), which completes the proof.
## 4. CR structures of boundary orbits
The rank \(r\) boundary orbit \(S_{r}=S_{r}(\Omega)\) of a bounded symmetric domain \(\Omega\) is a homogeneous CR manifold foliated by boundary components of rank \(r\). In this section, we investigate the CR structures of \(S_{r}=S_{r}(\Omega)\) and of submanifolds in it for type one bounded symmetric domains \(\Omega=D_{p,q}\). We refer to [10] as a reference.
### CR structure of \(S_{r}\)
For each \(x\in S_{r}=S_{p,q,r}\), there exists a unique maximal subspace \(Z_{0}(x)\subset V_{x}\) of dimension \(q-r\) such that
\[\langle Z_{0}(x),Z_{0}(x)\rangle=0.\]
We call \(Z_{0}(x)\) the _maximal null space_ of \(V_{x}\). Choose a complementary subspace \(\widetilde{Z}\subset V_{x}\) of \(Z_{0}(x)\). We will denote \(x\) by
\[x=Z_{0}(x)\oplus\widetilde{Z}.\]
An _adapted \(S_{r}\)-frame_ or simply an \(S_{r}\)-_frame_ is a set of vectors
\[Z_{1},\ldots,Z_{q-r},\widetilde{Z}_{1},\ldots,\widetilde{Z}_{r},X_{1},\ldots, X_{p-q+r},Y_{1},\ldots,Y_{q-r}\]
in \(\mathbb{C}^{p+q}\) for which the basic form \(\langle\,\ \rangle=\langle\,\ \rangle_{p,q}\) is given by the matrix
\[\begin{pmatrix}0&0&0&I_{q-r}\\ 0&-I_{r}&0&0\\ 0&0&I_{p-q+r}&0\\ I_{q-r}&0&0&0\end{pmatrix}.\]
Thus we have
\[Z_{0}(x)=\operatorname{span}\{Z_{1},\ldots,Z_{q-r}\},\quad V_{x}=Z_{0}\oplus \operatorname{span}\{\widetilde{Z}_{1},\ldots,\widetilde{Z}_{r}\}.\]
Denote
\[\widetilde{Z}:=\operatorname{span}\{\widetilde{Z}_{1},\ldots,\widetilde{Z}_{r} \},\quad X:=\operatorname{span}\{X_{1},\ldots,X_{p-q+r}\},\quad Y:= \operatorname{span}\{Y_{1},\ldots,Y_{q-r}\}.\]
Then the basic form \(\langle\,\ \rangle\) defines the natural duality pairings
\[Z_{0}(x)\times Y\to\mathbb{C},\quad\widetilde{Z}\times\widetilde{Z}\to\mathbb{C},\quad X\times X\to\mathbb{C}.\]
Denote by \(\mathcal{B}_{r}=\mathcal{B}_{p,q,r}\to S_{r}\) the adapted \(S_{r}\)-frame bundle and by \(\pi\) the Maurer-Cartan (connection) form on \(\mathcal{B}_{r}\) satisfying the structure equation \(d\pi=\pi\wedge\pi.\) Then we can write
\[\begin{pmatrix}dZ_{\alpha}\\ d\widetilde{Z}_{u}\\ dX_{k}\\ dY_{\alpha}\end{pmatrix}=\pi\begin{pmatrix}Z_{\beta}\\ \widetilde{Z}_{v}\\ X_{j}\\ Y_{\beta}\end{pmatrix}=\begin{pmatrix}\psi_{\alpha}^{\ \beta}&\widetilde{\theta}_{\alpha}^{\ v}&\theta_{\alpha}^{\ j}&\phi_{\alpha}^{\ \beta}\\ \widetilde{\sigma}_{u}^{\ \beta}&\widetilde{\omega}_{u}^{\ v}&\delta_{u}^{\ j}&\widetilde{\theta}_{u}^{\ \beta}\\ \alpha_{k}^{\ \beta}&\delta_{k}^{\ v}&\omega_{k}^{\ j}&\theta_{k}^{\ \beta}\\ \xi_{\alpha}^{\ \beta}&\widetilde{\sigma}_{\alpha}^{\ v}&\sigma_{\alpha}^{\ j}&\widetilde{\psi}_{\alpha}^{\ \beta}\end{pmatrix}\begin{pmatrix}Z_{\beta}\\ \widetilde{Z}_{v}\\ X_{j}\\ Y_{\beta}\end{pmatrix}. \tag{4.1}\]
In the sequel, as in [10], we will identify forms on \(\mathcal{B}_{r}\) with their pullbacks to \(S_{r}\) via local sections of the frame bundle \(\mathcal{B}_{r}\to S_{r}.\) With that identification in mind, the forms \(\phi_{\alpha}^{\ \beta}\) give a basis in the space of all contact forms, i.e.,
\[T_{x}^{1,0}S_{r}=\{\phi_{\alpha}^{\ \beta}=0,\forall\alpha,\beta\}\]
and the upper right block forms
\[\begin{pmatrix}\theta_{\alpha}^{\ j}&\phi_{\alpha}^{\ \beta}\\ \delta_{u}^{\ k}&\widetilde{\theta}_{u}^{\ \beta}\end{pmatrix}\]
form a basis in the space of all \((1,0)\) forms on \(S_{r}.\) We denote by \(\phi,\,\theta,\,\delta\) the spaces of one-forms spanned by \(\{\phi_{\alpha}^{\ \beta},\ \forall\alpha,\beta\}\), \(\{\widetilde{\theta}_{u}^{\ \beta},\theta_{\alpha}^{\ j},\ \forall\alpha,\beta,u,j\}\) and \(\{\delta_{u}^{\ j},\ \forall u,j\}\), respectively.
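For instance, expanding \(d\pi=\pi\wedge\pi\) in the \((1,4)\) entry of (4.1) yields

\[d\phi_{\alpha}^{\ \beta}=\psi_{\alpha}^{\ \gamma}\wedge\phi_{\gamma}^{\ \beta}+\widetilde{\theta}_{\alpha}^{\ v}\wedge\widetilde{\theta}_{v}^{\ \beta}+\theta_{\alpha}^{\ j}\wedge\theta_{j}^{\ \beta}+\phi_{\alpha}^{\ \gamma}\wedge\widetilde{\psi}_{\gamma}^{\ \beta},\]

so that, modulo the ideal generated by the contact forms,

\[d\phi_{\alpha}^{\ \beta}\equiv\widetilde{\theta}_{\alpha}^{\ v}\wedge\widetilde{\theta}_{v}^{\ \beta}+\theta_{\alpha}^{\ j}\wedge\theta_{j}^{\ \beta}\mod\phi,\]

which expresses the Levi form of \(S_{r}\) through the forms in \(\theta\).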
There are several types of frame changes.
**Definition 4.1**.: We call a change of frame
1. change of position if \[Z_{\alpha}^{\prime}=W_{\alpha}^{\ \beta}Z_{\beta},\quad\widetilde{Z}_{u}^{\prime}=W_{u}^{\ \beta}Z_{\beta}+W_{u}^{\ v}\widetilde{Z}_{v},\quad Y_{\alpha}^{\prime}=V_{\alpha}^{\ \beta}Y_{\beta}+V_{\alpha}^{\ v}\widetilde{Z}_{v},\quad X_{j}^{\prime}=X_{j},\] where \(W_{0}=(W_{\alpha}^{\ \beta})\) and \(V_{0}=(V_{\alpha}^{\ \beta})\) are \((q-r)\times(q-r)\) matrices satisfying \(V_{0}^{*}W_{0}=I_{q-r}\), \(\widetilde{W}=(W_{u}^{\ v})\) is an \(r\times r\) matrix satisfying \(\widetilde{W}^{*}\widetilde{W}=I_{r}\) and \(V_{\alpha}^{\ \beta}W^{*\ \gamma}_{\ \beta}+V_{\alpha}^{\ v}W^{*\ \gamma}_{\ v}=0\);
2. change of real vectors if \[Z_{\alpha}^{\prime}=Z_{\alpha},\quad\widetilde{Z}_{u}^{\prime}=\widetilde{Z}_ {u},\quad X_{j}^{\prime}=X_{j},\quad Y_{\alpha}^{\prime}=Y_{\alpha}+H_{\alpha }^{\ \beta}Z_{\beta},\] where \(H=(H_{\alpha}^{\ \beta})\) is a skew hermitian matrix;
3. dilation if \[Z_{\alpha}^{\prime}=\lambda_{\alpha}^{-1}Z_{\alpha},\quad\widetilde{Z}_{u}^{ \prime}=\widetilde{Z}_{u},\quad Y_{\alpha}^{\prime}=\lambda_{\alpha}Y_{ \alpha},\quad X_{j}^{\prime}=X_{j},\] where \(\lambda_{\alpha}>0;\)
4. rotation if \[Z_{\alpha}^{\prime}=Z_{\alpha},\quad\widetilde{Z}_{u}^{\prime}=\widetilde{Z}_ {u},\quad Y_{\alpha}^{\prime}=Y_{\alpha},\quad X_{j}^{\prime}=U_{j}^{\ \ k}X_{k},\] where \((U_{j}^{\ \ k})\) is a unitary matrix.
The remaining frame change is given by
\[Z^{\prime}_{\alpha}=Z_{\alpha},\quad\widetilde{Z}^{\prime}_{u}=\widetilde{Z}_{u},\quad X^{\prime}_{j}=X_{j}+C_{j}^{\ \beta}Z_{\beta},\quad Y^{\prime}_{\alpha}=Y_{\alpha}+A_{\alpha}^{\ \beta}Z_{\beta}+B_{\alpha}^{\ j}X_{j},\]
such that
\[C_{j}^{\ \alpha}+B_{j}^{\ \alpha}=0\]
and
\[(A_{\alpha}^{\ \beta}+\overline{A_{\beta}^{\ \alpha}})+B_{\alpha}^{\ j}B_{j}^{\ \beta}=0,\]
where
\[B_{j}^{\ \alpha}:=\overline{B_{\alpha}^{\ j}}.\]
The change of connection one form \(\pi\) under each frame change is described in [10].
A boundary component \(\Omega_{\sigma}\) contains \(x\in S_{r}\) if and only if
\[\Omega_{\sigma}=[Z_{0}(x),Z_{0}^{*}(x)]_{q}\cap S_{r}.\]
Therefore \(\Omega_{\sigma}\subset S_{r}\) is a maximal integral manifold of
\[dZ_{\alpha}=0\mod Z_{0}\oplus\widetilde{Z},\quad\forall\alpha\]
or equivalently, a maximal integral manifold of
\[\phi_{\alpha}^{\ \beta}=\widetilde{\theta}_{u}^{\ \beta}=\theta_{\alpha}^{\ j}=0, \quad\forall\alpha,\beta,u,j.\]
**Lemma 4.2**.: _Let \(A_{x}\subset\mathscr{C}_{x}(X)\) be the set of characteristic directions orthogonal to \(T^{1,0}_{x}S_{r}\) with respect to the canonical Kahler-Einstein metric. Then_
\[T_{x}[Z_{0}(x),Z_{0}(x)^{*}]_{q}=\mathscr{N}_{A_{x}},\]
_where \(\mathscr{N}_{A}\) is the space defined in Section 2.3._
Proof.: Let \(\{Z_{\alpha},\widetilde{Z}_{u},X_{j},Y_{\alpha}\}\) be an \(S_{r}\)-frame at \(x\). Since \(\phi_{\alpha}^{\ \beta}\), \(\alpha,\beta=1,\ldots,q-r\) span the space of contact forms, after a suitable frame change, we may assume that
\[\operatorname{span}\{A_{x}\}=\operatorname{Hom}(Z_{0}(x),Y).\]
Choose a \(q-r\) dimensional polysphere \(P_{x}=\prod_{\alpha=1}^{q-r}\mathbb{P}_{\alpha}\) passing through \(x\), where \(\mathbb{P}_{\alpha}\) is a projective line such that
\[T_{x}\mathbb{P}_{\alpha}=\operatorname{Hom}(Z_{\alpha},Y_{\alpha}).\]
Then for any \(q-r\) dimensional null space
\[V_{0}=\operatorname{span}\{Z_{\alpha}+c_{\alpha}Y_{\alpha},\ \alpha=1,\ldots,q-r\},\quad c_{\alpha}\in\sqrt{-1}\,\mathbb{R},\]
and \(r\) dimensional negative definite space
\[\widetilde{V}\subset\operatorname{span}\{\widetilde{Z}_{u},X_{j},\ u=1, \ldots,r,\ j=1,\ldots,p-q+r\},\]
the point \(V_{0}\oplus\widetilde{V}\) is contained in \(S_{r}\). Hence \(\prod_{\alpha=1}^{q-r}\Delta_{\alpha}\) is a totally geodesic polydisc in \(\Omega\) such that
\[\prod_{\alpha=1}^{q-r}\partial\Delta_{\alpha}\times\Omega_{\sigma}\subset S_{ r},\]
where \(\Delta_{\alpha}=\mathbb{P}_{\alpha}\cap\Omega\) and \(\Omega_{\sigma}=[Z_{0}(x),Z_{0}(x)^{*}]_{q}\cap S_{r}\), completing the proof.
Note that by (2.4), \(T^{1,0}S_{r}\) has an orthogonal decomposition
\[T^{1,0}_{x}S_{r}=\mathscr{H}_{A_{x}}\oplus\mathscr{N}_{A_{x}}.\]
The spaces \(\operatorname{span}\{A_{x}\}\), \(\mathscr{H}_{A_{x}}\) and \(\mathscr{N}_{A_{x}}\) are parallel along \(x\in\Omega_{\sigma}\) for each fixed boundary component \(\Omega_{\sigma}\subset S_{r}\) in Harish-Chandra coordinates (cf. [10]).
### CR structure of \(\mathscr{S}_{M}^{r}\)
Let \(M\) be a subgrassmannian of the form \(M=[V_{M},V_{M}^{*}]_{q}\) for some \(q-s\) dimensional null space \(V_{M}\) with \(s<r.\) Then for any \(x\in S_{r}\), \(x\) is contained in \(\mathscr{S}_{M}=\mathscr{S}_{M}^{r}\) if and only if \(Z_{0}(x)\subset V_{M}\), i.e.,
\[\mathscr{S}_{M}=\{x\in S_{r}:\langle Z_{0}(x),V_{M}^{*}\rangle=0\}.\]
Hence
\[\operatorname{span}\{Z_{0}(x):x\in\mathscr{S}_{M}\}=V_{M}\]
and \(\mathscr{S}_{M}\subset S_{r}\) is a maximal integral manifold of
\[\langle dZ_{\alpha},V_{M}^{*}\rangle=0,\quad\alpha=1,\dots,q-r.\]
By (2.5), \(\mathscr{Z}_{M}=\mathscr{Z}_{M}^{r}\) is a Schubert variety of the form
\[\{x\in X:\dim V_{x}\cap V_{M}\geq q-r,\ \dim V_{x}\cap V_{M}^{*}\geq q-r+s\}\]
when \(s>0\) and of the form
\[\{x\in X:\dim V_{x}\cap V_{M}\geq q-r\}\]
when \(s=0\).
**Lemma 4.3**.: \(\mathscr{S}_{M}\) _is a CR manifold such that_
\[T^{1,0}_{x}\mathscr{S}_{M}=T^{1,0}_{x}[Z_{0}(x),Z_{0}(x)^{*}]_{q}.\]
_Moreover, if \(\Omega\) is of tube type and \(P\in S_{0}\), then for \(x\in\mathscr{S}_{P},\)_
\[\left\{v-\sqrt{-1}J(v):v\in T_{x}\mathscr{S}_{P}\right\}=T^{1,0}_{x}\mathscr{ Z}_{P}=T^{1,0}_{x}S_{r},\]
_where \(J\) is the complex structure of \(X\)._
Proof.: Let \(\{Z_{\alpha},\widetilde{Z}_{u},X_{j},Y_{\alpha}\}\) be an adapted \(S_{r}(\Omega)\)-frame at \(x\) so that
\[dZ_{\alpha}=\widetilde{\theta}_{\alpha}^{\ \ u}\widetilde{Z}_{u}+\theta_{ \alpha}^{\ \ j}X_{j}+\phi_{\alpha}^{\ \beta}Y_{\beta}\mod Z_{0}(x).\]
Then on \(T_{x}\mathscr{S}_{M}\), we obtain
\[\widetilde{\theta}_{\alpha}^{\ \ u}\langle\widetilde{Z}_{u},V_{M}^{*}\rangle+ \theta_{\alpha}^{\ \ j}\langle X_{j},V_{M}^{*}\rangle+\phi_{\alpha}^{\ \beta}\langle Y_{\beta},V_{M}^{*}\rangle=0,\quad\alpha=1,\dots,q-r. \tag{4.2}\]
Since \(V_{M}\) is a null space containing \(Z_{0}(x)\), after a frame change, we may assume that
\[V_{M}=\operatorname{span}\{Z_{\alpha},\widetilde{Z}_{u}-X_{u},\ \alpha=1,\dots,q-r,\ u=1,\dots,r-s\}.\]
Then
\[V_{M}^{*}=V_{M}+\operatorname{span}\{\widetilde{Z}_{u},X_{j},\ u,j>r-s\}.\]
Hence by (4.2), we obtain
\[T_{x}\mathscr{S}_{M}=\{\phi_{\alpha}^{\ \beta}=\widetilde{\theta}_{\alpha}^{\ u}= \square_{\alpha}^{\ j}=0,\ \alpha,\beta=1,\dots,q-r,\ j=1,\dots,p-q+r,\ u=r-s+1,\dots,r\},\]
where
\[\square_{\alpha}^{\ j}=\widetilde{\theta}_{\alpha}^{\ j}+\theta_{\alpha}^{\ j },\quad j=1,\dots,r-s\]
and
\[\square_{\alpha}^{\ j}=\theta_{\alpha}^{\ j},\quad j=r-s+1,\dots,p-q+r.\]
Let \(v\in T^{1,0}_{x}\mathscr{S}_{M}\). Then \(Re(v)\) and \(Im(v)\) should satisfy
\[\phi_{\alpha}^{\ \beta}=\widetilde{\theta}_{\alpha}^{\ u}=\square_{\alpha}^{\ j}=0,\quad\alpha,\beta=1,\dots,q-r,\ j=1,\dots,p-q+r,\ u=r-s+1,\dots,r,\]
which implies
\[\phi_{\alpha}^{\ \beta}=\widetilde{\theta}_{u}^{\ \beta}=\theta_{\alpha}^{\ j }=0,\quad\forall\alpha,\beta,u,j.\]
Therefore
\[T^{1,0}_{x}\mathscr{S}_{M}=T_{x}[Z_{0}(x),Z_{0}(x)^{*}]_{q}.\]
If \(M=\{P\}\) and \(\Omega\) is of tube type, then \(s=0\) and \(V^{*}_{P}=V_{P}\). Hence on a neighborhood of \(x\in\mathscr{S}_{P}\), \(\mathscr{Z}_{P}\) is defined by
\[\{x\in X:\dim V_{x}\cap V_{P}=q-r\},\]
which implies
\[T^{1,0}_{x}\mathscr{Z}_{M}=\{\phi_{\alpha}^{\ \beta}=0\}=T^{1,0}_{x}S_{r},\]
completing the proof.
In the proof of Lemma 4.3, we showed that there exists an \(S_{r}\)-frame \(\{Z_{\alpha},\widetilde{Z}_{u},X_{j},Y_{\alpha}\}\) at \(x\) such that \(\mathscr{S}_{M}\) is a maximal integral manifold of
\[\phi_{\alpha}^{\ \beta}=\widetilde{\theta}_{\alpha}^{\ u}=\square_{\alpha}^{ \ j}=0,\quad u=r-s+1,\dots,r,\ j=1,\dots,p-q+r,\]
where
\[\square_{\alpha}^{\ j}=\widetilde{\theta}_{\alpha}^{\ j}+\theta_{\alpha}^{\ j },\quad j=1,\dots,r-s\]
and
\[\square_{\alpha}^{\ j}=\theta_{\alpha}^{\ j},\quad j=r-s+1,\dots,p-q+r.\]
We will denote by \(\square_{M}\) the space of \(1\)-forms spanned by \(\{\square_{\alpha}^{\ j},\ \forall\alpha,j\}\). If \(M=P\), we will denote \(\square_{M}\) by \(\square_{P}\) for simplicity. Then for such frames, we obtain the following lemma.
**Lemma 4.4**.: _Let \(\Omega_{\sigma}\) be a rank \(r>s\) boundary component of \(\Omega\) such that \(\Omega_{\sigma}\subset\mathscr{Z}_{M}^{r}\) and let \(x\in\Omega_{\sigma}\) be a smooth point of \(\mathscr{Z}_{M}^{r}\). Then \(T^{1,0}_{x}\mathscr{Z}_{M}^{r}\) is a subspace of \(T^{1,0}_{x}S_{r}\) defined by_
\[\widetilde{\theta}_{u}^{\ \beta}=\theta_{\alpha}^{\ j}=0,\quad u,j>r-s.\]
Proof.: Since \(x\) is a smooth point of \(\mathscr{Z}_{M}\), \(\mathscr{Z}_{M}\) is locally defined by
\[\{y\in X:\dim V_{y}\cap V_{M}=q-r,\ \dim V_{y}\cap V_{M}^{*}=q-r+s\}\]
when \(s>0\) and
\[\{y\in X:\dim V_{y}\cap V_{M}=q-r\}\]
when \(s=0\), which completes the proof.
## 5. Second fundamental form of \(f\)
### Transversality
In this subsection we show the transversality of a holomorphic map \(f\) between type one bounded symmetric domains at general Shilov boundary points. More precisely, we will show the following proposition.
**Proposition 5.1**.: _Let \(f:\Omega\to\Omega^{\prime}\) be a proper holomorphic map between irreducible bounded symmetric domains of type one. Suppose there exist a point \(P\in S_{0}(\Omega)\) and an open neighborhood \(U\) of \(P\) such that \(f\) extends \(C^{2}\) to \(U\). Suppose further that \(f\) maps \(S_{0}(\Omega)\cap U\) to \(S_{0}(\Omega^{\prime})\). Then for general point \(x\in S_{0}(\Omega)\cap U\), the rank of \(f_{*}(v)\) is equal to the rank of \(\Omega^{\prime}\) for general \(v\in T_{x}X\)._
First we will show the following.
**Lemma 5.2**.: _Let \(\Delta\subset\mathbb{C}\) be the unit disc and let \(h:\Delta\to\mathbb{B}^{n}\), \(n\geq 2\), be a proper holomorphic map that is \(C^{2}\) up to a connected open set \(U\) of the boundary. Then the set \(\{P\in U:h_{*}(\nu_{P})\in T^{c}\partial\mathbb{B}^{n}\}\) is a nowhere dense subset of \(\partial\Delta\), where \(\nu_{P}\) is an outward normal vector of \(\Delta\) at \(P\) and \(T^{c}\partial\mathbb{B}^{n}=T\partial\mathbb{B}^{n}\cap J(T\partial\mathbb{B}^{n})\)._
Proof.: Since everything in the proof is purely local, we may assume that \(h\) is holomorphic on the left half plane \(\mathbb{H}=\{\zeta=r+\sqrt{-1}t:r<0\}\) and \(\nu_{P}=\dfrac{\partial}{\partial r}\). Suppose there exists an open set \(U\subset\partial\mathbb{H}\) such that \(h\) is \(C^{2}\) and
\[h_{*}(\nu_{P})\in T^{c}\mathbb{B}^{n},\quad P\in U\]
or equivalently
\[h^{\prime}(P)\cdot\overline{h(P)}=\sum_{j}h^{\prime}_{j}(P)\overline{h_{j}(P) }=0,\quad P\in U, \tag{5.1}\]
where \(h=(h_{1},\dots,h_{n})\) and \(h^{\prime}_{j}=\dfrac{dh_{j}}{d\zeta}\). Differentiating (5.1) along \(\partial\mathbb{H}\) with respect to \(t\), and using \(\dfrac{d}{dt}h(\sqrt{-1}t)=\sqrt{-1}\,h^{\prime}(\sqrt{-1}t)\), we obtain
\[\sum_{j}h^{\prime\prime}_{j}(P)\overline{h_{j}(P)}-\sum_{j}|h^{\prime}_{j}|^{ 2}(P)=0,\quad P\in U. \tag{5.2}\]
Write
\[h(r+it)=A_{0}(t)+A_{1}(t)r+A_{2}(t)r^{2}+o(r^{2}).\]
Since \(h^{\prime}(\sqrt{-1}t)=A_{1}(t)\) and \(h^{\prime\prime}(\sqrt{-1}t)=2A_{2}(t)\), (5.1) and (5.2) imply

\[A_{1}(t)\cdot\overline{A_{0}(t)}=0,\quad A_{2}(t)\cdot\overline{A_{0}(t)}=\frac{1}{2}\|A_{1}(t)\|^{2},\quad t\in U.\]

Therefore for \(t\in U\), using \(\|A_{0}(t)\|=1\),

\[\|h(r+it)\|^{2}=(A_{0}(t)+A_{1}(t)r+A_{2}(t)r^{2})\cdot\overline{(A_{0}(t)+A_{1}(t)r+A_{2}(t)r^{2})}+o(r^{2})=1+2\|A_{1}(t)\|^{2}r^{2}+o(r^{2}).\]
Since \(h\) is proper, \(\|h(r+it)\|<1\) for \(r<0\), which forces

\[\|A_{1}(t)\|^{2}\leq 0,\quad t\in U,\]

i.e. \(A_{1}(t)=0\) on \(U\).
Then
\[\|h^{\prime}(P)\|^{2}=0,\quad P\in U,\]
implying that \(h\) is constant, contradicting the assumption that \(h\) is proper.
_Proof of Proposition 5.1:_ Choose a totally geodesic holomorphic disc \(\phi:\Delta\to\Omega\) such that \(\phi(\partial\Delta)\subset S_{0}(\Omega)\) and \(\phi(\partial\Delta)\cap U\neq\emptyset\). Write \(F:=f\circ\phi=(F_{1},\ldots,F_{q^{\prime}}):\Delta\to\Omega^{\prime}\) in Harish-Chandra coordinates, where \(F_{j}\) is a vector-valued function. Since \(f\) maps Shilov boundary to Shilov boundary, on \(\partial\Delta\), we obtain
\[F_{j}\cdot\overline{F}_{k}=\delta_{jk},\quad j,k=1,\ldots,q^{\prime}. \tag{5.3}\]
By Lemma 5.2, there exists \(P\in\partial\Delta\) such that
\[F_{j}^{\prime}(P)\cdot\overline{F}_{j}(P)\neq 0,\quad j=1,\ldots,q^{\prime}.\]
After composing with an automorphism of \(\Omega^{\prime}\), we may assume that
\[F(P)=(Id_{q^{\prime}};0)^{t}.\]
Write
\[F=(f_{1};f_{2})^{t},\]
where \(f_{1}\) is a \(q^{\prime}\times q^{\prime}\) matrix-valued holomorphic function. Then by differentiating (5.3) along \(\partial\Delta\), we obtain
\[(f_{1}^{\prime})(P)=(\overline{f_{1}^{\prime}(P)})^{t},\]
i.e., \((f_{1}^{\prime}(P))\) is a Hermitian matrix such that each diagonal entry is nonvanishing. This property holds up to any automorphism of \(\Omega^{\prime}\) that fixes \(F(P)\). Hence \(f_{1}^{\prime}\) is of rank \(q^{\prime}\), which completes the proof.
Now let \(f:\Omega\to\Omega^{\prime}\) be as in Theorem 1.2. Then \(f\) restricted to each boundary component is a holomorphic map into a boundary component of \(\Omega^{\prime}\) and extends \(C^{2}\) up to the boundary, sending Shilov boundary to Shilov boundary. Hence by Proposition 5.1, we obtain the following lemma.
From now on, we denote by \(S_{r}(\Omega)\) and \(S_{r}(\Omega^{\prime})\), the rank \(r\) boundary orbits of \(\Omega\) and \(\Omega^{\prime}\), respectively.
**Lemma 5.3**.: _Suppose there exists a nonempty open neighborhood \(U\) of a Shilov boundary point of \(\Omega\) such that \(f\) extends \(C^{2}\) to \(U\) and \(f(S_{1}(\Omega)\cap U)\subset S_{r}(\Omega^{\prime})\). Then there exists \(P\in S_{0}(\Omega)\cap U\) such that \(f_{*}(\nu)\) is of rank \(r\) for general \([\nu]\in\mathscr{C}_{P}(X)\)._
Since \(f\) is holomorphic in \(\Omega\), if \(f\) satisfies the condition in Lemma 5.3, then \(f_{*}(\nu)\) is of rank \(r\) for general \(P\in\overline{\Omega}\) and general \([\nu]\in\mathscr{C}_{P}(X)\). From now on to the rest of this section, we assume that \(\Omega\) is of tube type and \(f_{*}(\nu)\) is of rank \(r\) for general \(P\in\overline{\Omega}\) and general \([\nu]\in\mathscr{C}_{P}(X)\).
Let \(f_{1}^{\sharp}:\mathcal{D}_{1}(X)\rightarrow\mathcal{F}_{a,b}(X^{\prime})\) be the moduli map induced by \(f\). Since \(f_{*}(\nu)\) is of rank \(r\) for general \(\nu\in\mathscr{C}(\Omega)\), for \(\sigma\in\mathcal{D}_{1}(S_{1}(\Omega))\), \(f_{1}^{\sharp}(\sigma)\) is of the form \([A,B]_{q^{\prime}}\) for some \((q^{\prime}-r)\)-dimensional null space \(A\). Since \(f_{1}^{\sharp}\) is rational and \(\mathcal{D}_{1}(S_{1}(\Omega))\) is a CR manifold that is not contained in any complex subvariety of \(\mathcal{D}_{1}(X)\) of positive codimension (See [KiMSe22]), for any \(\sigma\in Dom(f_{1}^{\sharp})\), \(f_{1}^{\sharp}(\sigma)\) is of the form \([A,B]_{q^{\prime}}\) for some \((q^{\prime}-r)\)-dimensional subspace \(A\). That is, \(f_{1}^{\sharp}\) is a map to \(\mathcal{F}_{q^{\prime}-r,b}(X^{\prime})\) for some \(b\).
Let
\[Pr:\mathcal{F}_{q^{\prime}-r,b}(X^{\prime})\to Gr(q^{\prime}-r,\mathbb{C}^{p^{ \prime}+q^{\prime}})\]
be a projection defined by \(Pr((A,B))=A\). For a general point \(P\in S_{0}(\Omega)\), define
\[\mathcal{L}_{P}:=\{(A,A^{*})\in\mathcal{D}_{r}(X^{\prime}):A\in Pr(f_{1}^{\sharp}(\mathcal{S}_{P}\cap Dom(f_{1}^{\sharp})))\}\]
and
\[\mathscr{L}_{P}:=\pi_{r}^{\prime}\left(\left(\rho_{r}^{\prime}\right)^{-1}( \mathcal{L}_{P})\right), \tag{5.4}\]
where \(\pi_{r}^{\prime}:\mathcal{U}_{r}(X^{\prime})\to X^{\prime}\), \(\rho_{r}^{\prime}:\mathcal{U}_{r}(X^{\prime})\rightarrow\mathcal{D}_{r}(X^{ \prime})\) is the canonical double fibration of the universal family of characteristic spaces of rank \(r\) over \(X^{\prime}\) defined in Section 2.4. Since \(f\) is a CR map, for each general boundary component \(\Omega_{\sigma}\subset S_{1}(\Omega)\), there exists a unique boundary component of \(\Omega^{\prime}\) with rank \(r\) that contains \(f(\Omega_{\sigma})\). This boundary component is given by
\[[Z_{0}(f(x)),Z_{0}(f(x))^{*}]_{q^{\prime}}\cap S_{r}(\Omega^{\prime}),\quad x \in\Omega_{\sigma},\]
where we denote by \(Z_{0}(y)\) the maximal null space of \(V_{y}\) for \(y\in\partial\Omega^{\prime}\). Recall that \(Z_{0}(f(x))\) is constant on each boundary component. Therefore
\[Z_{0}(f(\mathscr{S}_{P}\cap U))\subset Pr(f_{1}^{\sharp}(\mathcal{S}_{P}\cap Dom(f_{1}^{\sharp})))\]
and hence
\[f(\mathscr{S}_{P}\cap U)\subset\mathscr{L}_{P}. \tag{5.5}\]
Since each point in the boundary orbit of rank \(r\) has a unique maximal null space of codimension \(r\), we can define a smooth map \(\Pi:S_{r}(\Omega^{\prime})\to Gr(q^{\prime}-r,\mathbb{C}^{p^{\prime}+q^{\prime}})\) by \(\Pi(y)=Z_{0}(y)\). Then for any general \(x\in\mathscr{S}_{P}\cap U\), there exists an open neighborhood \(U^{\prime}\subset S_{r}(\Omega^{\prime})\) of \(f(x)\) such that
\[\mathscr{L}_{P}\cap U^{\prime}=\Pi^{-1}\left(Z_{0}(f(\mathscr{S}_{P}\cap U))\right)\cap U^{\prime}.\]
For a general point \(x\in\mathscr{S}_{P}\cap U\), we may assume that \(\Pi\circ f\) is a surjection at \(x\) onto its image and hence
\[T_{f(x)}\mathscr{L}_{P}=f_{*}(T_{x}\mathscr{S}_{P}^{1}(\Omega))+ker(\Pi_{*}).\]
By taking the complexification, we obtain
\[\{v-\sqrt{-1}Jv:v\in T_{f(x)}\mathscr{L}_{P}\}=f_{*}(T_{x}^{1,0}S_{1})+T_{f(x) }[Z_{0}(f(x)),Z_{0}(f(x))^{*}]_{q^{\prime}}. \tag{5.6}\]
Let \(P\in S_{0}(\Omega)\cap\overline{\Omega}_{\sigma}\) for some rank one boundary component \(\Omega_{\sigma}\) and let \(x=P+\lambda\nu_{\sigma}\in\Omega_{\sigma}\subset S_{1}(\Omega)\) be general points in \(U\), where \(\nu_{\sigma}\in T_{P}X\) is an outward normal vector of \(\partial\Omega_{\sigma}\) in \(X_{\sigma}\) at \(P\) and \(\lambda<0\) is a sufficiently small real number. Then
\[f(P+\lambda\nu_{\sigma})=f(P)+\lambda f_{*}(\nu_{\sigma})+O(\lambda^{2})\in \Omega^{\prime}_{\sigma^{\prime}},\]
where \(\Omega^{\prime}_{\sigma^{\prime}}\) is the rank \(r\) boundary component of \(\Omega^{\prime}\) that contains \(f(\Omega_{\sigma}\cap U)\). Since \(f_{*}(\nu_{\sigma})\) is of rank \(r\) and tangent to \(f(X_{\sigma})\) at \(f(P)\), we may assume that \(f(P+\lambda\nu_{\sigma})\) and \(f(P)+\lambda f_{*}(\nu_{\sigma})\) are contained in the same boundary component \(\Omega^{\prime}_{\sigma^{\prime}}\) for all sufficiently small \(\lambda<0\), i.e.,
\[Z_{0}(f(x))=Z_{0}(f(P)+\lambda f_{*}(\nu_{\sigma})).\]
Since \(T_{y}\mathscr{L}_{P}\) is parallel along \(y\in[Z_{0}(f(x)),Z_{0}(f(x))^{*}]_{q^{\prime}}\) in Harish-Chandra coordinates, it implies
\[T_{f(x)}\mathscr{L}_{P}=T_{f(P)+\lambda f_{*}(\nu_{\sigma})}\mathscr{L}_{P}\]
under parallel translation.
Now consider a one parameter family of rank one boundary components \(\Omega_{\sigma(t)}\), \(t\in(-\epsilon,\ \epsilon)\) such that
\[T_{P}X_{\sigma(t)}=(a+tb)^{*}\otimes(a+tb).\]
That is, after a suitable frame change, we can choose an \(S_{1}(\Omega)\)-frame \(\{e_{1},\ldots,e_{q},e_{1}^{*},\ldots,e_{q}^{*}\}\) of \(\mathbb{C}^{2q}\) with
\[\langle e_{i},e_{j}\rangle=\langle e_{i}^{*},e_{j}^{*}\rangle=\langle e_{i},e_ {j}^{*}\rangle-\delta_{i,j}=0\]
such that
\[V_{P}=\operatorname{span}\{e_{1},\ldots,e_{q}\}\]
and
\[X_{\sigma(t)}=[V_{\sigma(t)},V_{\sigma(t)}^{*}]_{q},\]
where
\[V_{\sigma(t)}=\operatorname{span}\{e_{1}+\sqrt{-1}te_{q},e_{2},\ldots,e_{q-1} \},\quad V_{\sigma(t)}^{*}=\operatorname{span}\{e_{q},e_{q}^{*}+\sqrt{-1}te_{ 1}^{*}\}+V_{\sigma(t)}.\]
Choose two curves
\[x(t):=f(P+\lambda\nu_{\sigma(t)}),\quad t\in(-\epsilon,\ \epsilon) \tag{5.7}\]
and
\[y(t):=f(P)+\lambda f_{*}(\nu_{\sigma(t)})=f(P)+\lambda\left(f_{*}(\nu_{a})+tf_ {*}(v_{a,b})+t^{2}f_{*}(\nu_{b})\right),\quad t\in(-\epsilon,\ \epsilon),\]
where \(\nu_{a},\nu_{b}\) and \(v_{a,b}\) are vectors in \(T_{P}X\) such that the vector field
\[\nu_{\sigma(t)}:=\nu_{a}+tv_{a,b}+t^{2}\nu_{b},\quad t\in(-\epsilon,\ \epsilon)\]
satisfies
\[T_{P}X_{\sigma(t)}=\mathbb{C}\nu_{\sigma(t)}.\]
After shrinking \((-\epsilon,\ \epsilon)\), we may assume that for a sufficiently small fixed \(\lambda<0\),
\[Z_{0}(x(t))=Z_{0}(y(t)),\quad t\in(-\epsilon,\ \epsilon).\]
Set
\[\mathscr{L}_{a,b}:=\bigcup_{t\in(-\epsilon,\ \epsilon)}[Z_{0}(y(t)),Z_{0}(y(t))^{*}]_{ q^{\prime}}.\]
Since \(f(P)\) is a Shilov boundary point, \(Z_{0}(y(t))\) is a subspace of \(V_{f(P)}\), i.e.,
\[Z_{0}(y(t))=V_{y(t)}\cap V_{f(P)}.\]
Therefore \(\mathscr{L}_{a,b}\) is a submanifold of a Schubert variety
\[\mathcal{W}_{P}:=\{y\in Gr(q^{\prime},p^{\prime}):\dim V_{y}\cap V_{f(P)}\geq q ^{\prime}-r\}.\]
Moreover, \(y(0)\) is a smooth point of \(\mathcal{W}_{P}\). On the other hand, since \(y(t)\) is a curve of degree two, \(\{y(t):t\in(-\epsilon,\ \epsilon)\}\) is contained in
\[y(0)+\operatorname{span}\{\dot{y}(0),\ddot{y}(0)\}\subset y(0)+T_{y(0)} \mathcal{W}_{P},\]
where
\[\dot{y}(0)=\frac{dy}{dt}(0)=f_{*}(v_{a,b}),\quad\ddot{y}(0)=\frac{d^{2}y}{dt^{ 2}}(0)=2f_{*}(\nu_{b}).\]
Here we regard \(y(0)+T_{y(0)}\mathcal{W}_{P}\) as a linear subset of \(X^{\prime}\) passing through \(y(0)\). Therefore the curve \(Z_{0}(y(t))\), \(t\in(-\epsilon,\ \epsilon)\) is of degree two in \(t\) and contained in a linear subspace
\[Z_{0}(y(0))+\operatorname{span}\{\Pi_{*}(\dot{y}(0)),\Pi_{*}(\ddot{y}(0))\}.\]
Here we regard \(Z_{0}(y(0))+\operatorname{span}\{\Pi_{*}(\dot{y}(0)),\Pi_{*}(\ddot{y}(0))\}\) as a linear subset of \(Gr(q^{\prime}-r,V_{f(P)})\) passing through \(Z_{0}(y(0))\). Since
\[\Pi_{*}(\dot{y}(0))=\frac{dZ_{0}(y(t))}{dt}(0),\quad\Pi_{*}(\ddot{y}(0))=\frac {d^{2}Z_{0}(y(t))}{dt^{2}}(0),\]
we obtain
\[Z_{0}(x(t))=Z_{0}(y(t))\in Z_{0}(f(x))+\operatorname{span}\left\{(Z_{0})_{*}( \dot{x}(0)),\frac{d^{2}Z_{0}\circ x(t)}{dt^{2}}(0)\right\},\quad t\in(- \epsilon,\ \epsilon).\]
Since \(a,b\) are arbitrary, we obtain
\[Z_{0}(f(\mathscr{S}_{P}\cap U))\subset Z_{0}(f(x))+T_{Z_{0}(f(x))}Z_{0}(f( \mathscr{S}_{P}))+\mathbb{F}\mathbb{F}^{(2)}_{Z_{0}(f(x))}Z_{0}(f(\mathscr{S} _{P})),\]
where \(\mathbb{F}\mathbb{F}^{(2)}_{Z_{0}(f(x))}Z_{0}(f(\mathscr{S}_{P}))\) is the span of the second fundamental form of \(Z_{0}(f(\mathscr{S}_{P}\cap U))\) at \(Z_{0}(f(x))\) with respect to the flat (Euclidean) connection of \(T_{Z_{0}(f(x))}Gr(q^{\prime}-r,V_{f(P)})\subset Gr(q^{\prime}-r,V_{f(P)})\) in a big Schubert cell.
### Second fundamental form of \(f:S_{1}(\Omega)\cap U\to S_{r}(\Omega^{\prime})\)
We will use capital Greek letters \(\Phi_{\alpha}^{\ \beta},\widetilde{\Theta}_{U}^{\ \beta},\Theta_{\alpha}^{\ \ J}, \Delta_{U}^{\ J},\) etc. for connection one forms on \(\mathcal{B}_{r}(\Omega^{\prime})=\mathcal{B}_{p^{\prime},q^{\prime},r}\) pulled back to \(S_{r}(\Omega^{\prime})=S_{p^{\prime},q^{\prime},r}.\) Since \(f\) is a CR map, \(f\) satisfies
\[f^{*}(\Phi_{\alpha}^{\ \beta})=0\mod\phi,\]
\[f^{*}(\Theta_{\alpha}^{\ \ J})=f^{*}(\widetilde{\Theta}_{U}^{\ \beta})=0\mod\phi,\theta,\]
\[f^{*}(\Delta_{U}^{\ J})=0\mod\phi,\theta,\delta. \tag{5.8}\]
Since everything in this section is purely local and \(f\) restricted to \(S_{1}(\Omega)\) is a local embedding on an open set, we may omit \(U\) and identify \(T_{x}^{1,0}S_{1}(\Omega)\) with \(f_{*}(T_{x}^{1,0}S_{1}(\Omega)).\) Then the pull back of one forms via \(f\) is the restriction of one forms to \(T_{f(x)}^{1,0}f(S_{1}(\Omega)).\) In what follows, we will omit \(f^{*}\) if there is no confusion.
For \(x\in S_{1}(\Omega),\)\(f_{*}(T_{x}^{1,0}S_{1}(\Omega))\) is a subspace in \(\operatorname{Hom}(V_{f(x)},\mathbb{C}^{p^{\prime}+q^{\prime}}/V_{f(x)}).\) We define subspaces \(K_{x}\) and \(R_{x}\) in \(V_{f(x)}\) and \(\mathbb{C}^{p^{\prime}+q^{\prime}}/V_{f(x)},\) respectively by
\[K_{x}:=\bigcap\left\{\operatorname{Ker}\left(proj(v)\right):v\in f_{*}(T_{x}^{ 1,0}S_{1}(\Omega))\right\},\quad R_{x}:=\operatorname{Span}_{\mathbb{C}} \left\{\operatorname{Im}\left(proj(v)\right):v\in f_{*}(T_{x}^{1,0}S_{1}( \Omega))\right\}.\]
where \(proj\) is the projection to the orthogonal complement of \(T_{f(x)}[Z_{0}(f(x)),Z_{0}(f(x))^{*}]_{q^{\prime}}\subset T_{f(x)}X^{\prime}\) with respect to the canonical Kähler-Einstein metric. Then
\[Gr_{x}:=\left\{A\in\operatorname{Hom}\left(V_{f(x)},R_{x}\right):\operatorname {Ker}(A)\supset K_{x}\right\}\cap T_{f(x)}^{1,0}S_{r}(\Omega^{\prime})+T_{f( x)}[Z_{0}(f(x)),Z_{0}(f(x))^{*}]_{q^{\prime}}\]
is a linear subspace in \(T_{f(x)}^{1,0}S_{r}(\Omega^{\prime})\) that contains \(f_{*}(T_{x}^{1,0}S_{1}(\Omega)).\)
Let \(P\in S_{0}(\Omega)\) be a point such that \(x\in\mathscr{S}_{P}.\) Then \(f(P)\) is a point in \(S_{0}(\Omega^{\prime})\) such that
\[f(\mathscr{S}_{P})\subset\mathscr{S}_{f(P)}^{\prime}. \tag{5.9}\]
After a rotation and position change, we may assume
\[V_{f(P)}=Z_{0}(f(x))+\operatorname{span}_{\mathbb{C}}\{V_{U}:=\widetilde{Z}_{U }-X_{U},\ U=1,\ldots,r\}\]
and
\[T_{f(x)}\mathscr{Z}_{f(P)}^{\prime}=\{\Phi_{\alpha}^{\ \beta}=\Theta_{\alpha}^{\ J}=0,\ \alpha,\beta=1,\ldots,q^{\prime}-r,\ J>r\}.\]
Since \(\Omega\) is of tube type and therefore
\[f_{*}(T_{x}^{1,0}S_{1}(\Omega))=f_{*}(T_{x}\mathscr{Z}_{P})\subset T_{f(x)} \mathscr{Z}_{f(P)}^{\prime},\]
we obtain
\[\Theta_{\alpha}^{\ \ J}=0\mod\phi,\quad J>r.\]
Hence
\[Gr_{x}\subset\{\Phi_{\alpha}^{\ \beta}=\Theta_{\alpha}^{\ J}=0,\ J>r\},\quad \forall\alpha,\beta,U.\]
After rotation and position change, we may assume
\[Gr_{x}=\{\Phi_{\alpha}^{\ \beta}=\Theta_{\alpha}^{\ \ J}=\widetilde{\Theta}_{U}^{ \ \beta}=0,\quad\alpha,\beta=1,\ldots,q^{\prime}-r,\ J>J_{1},\ U>U_{1}\}\]
for some integers \(U_{1},J_{1}\leq r.\) Then on \(f_{*}(T_{x}^{1,0}S_{1}(\Omega)),\) it holds that
\[\Theta_{\alpha}^{\ \ J}=\widetilde{\Theta}_{U}^{\ \beta}=0\mod\phi,\quad J>J_{1},\ U >U_{1}.\]
On the other hand, by (5.9), we obtain
\[\widetilde{\Theta}_{\alpha}^{\;\;U}+\Theta_{\alpha}^{\;\;U}=0\mod\phi,\square_{P}.\]
Hence if \(U>U_{1}\), then
\[\Theta_{\alpha}^{\;\;U}=0\mod\phi,\square_{P}.\]
Since \(f\) is a CR map, it implies
\[\Theta_{\alpha}^{\;\;J}=0\mod\phi,\quad J>U_{1}.\]
Similarly, we obtain
\[\widetilde{\Theta}_{\alpha}^{\;\;U}=0\mod\phi,\quad U>J_{1}.\]
Therefore,
\[J_{1}=U_{1}\]
and
\[\Theta_{\alpha}^{\;\;J}=\widetilde{\Theta}_{U}^{\;\;\beta}=0\mod\phi,\quad U,J>J_{1}. \tag{5.10}\]
Hence for general \(x\in S_{1}(\Omega)\), there exists a reduction of \(S_{r}(\Omega^{\prime})\)-frame such that
\[Gr_{x}=\{\Phi_{\alpha}^{\;\;\beta}=\Theta_{\alpha}^{\;\;J}=\widetilde{\Theta}_ {U}^{\;\;\beta}=0,\quad\alpha,\beta=1,\ldots,q^{\prime}-r,\;J,U>J_{1}\}.\]
Moreover, by definition of \(K_{x}\) and \(R_{x}\), for each fixed \(j\leq J_{1}\) and \(u\leq J_{1}\), there exist \(\alpha\) and \(\beta\) such that \(\Theta_{\alpha}^{\;\;j}\) and \(\widetilde{\Theta}_{u}^{\;\;\beta}\) modulo \(\phi\) are nonvanishing. From now on, we let the small indices \(u,v\), \(j,k\) run from \(1\) to \(J_{1}\) and the capital indices \(U,V\), \(J,K\) from \(J_{1}+1\) unless specified otherwise.
By differentiating (5.10) using the structure equation, we obtain
\[\widetilde{\Theta}_{\alpha}^{\;\;v}\Delta_{v}^{\;\;J}+\Theta_{\alpha}^{\;\;k} \Omega_{k}^{\;\;J}=\widetilde{\Omega}_{U}^{\;\;v}\widetilde{\Theta}_{v}^{\;\; \beta}+\Delta_{U}^{\;\;k}\Theta_{k}^{\;\;\beta}=0\mod\phi,\theta\wedge \overline{\theta}. \tag{5.11}\]
We will show
\[\Delta_{v}^{\;\;J}=\Delta_{U}^{\;\;k}=0\mod\theta,\phi \tag{5.12}\]
and
\[\Omega_{k}^{\;\;J}=\widetilde{\Omega}_{U}^{\;\;v}=0\mod\phi,\theta,\overline {\theta}. \tag{5.13}\]
Fix \(J>J_{1}\). By induction on \(\alpha\), we can choose a position change which still satisfies (5.10) and a sequence of positive integers \(v_{1}<v_{2}<\ldots<v_{q^{\prime}-r}\) such that the \(\widetilde{\Theta}_{\alpha}^{\;\;v},\;v_{\alpha-1}<v\leq v_{\alpha}\), modulo \(\phi\) are linearly independent and \(\widetilde{\Theta}_{\alpha}^{\;\;v}\) modulo \(\phi\) vanishes for \(v>v_{\alpha}\), where we let \(v_{0}=0\). By the condition on \(J_{1}\), we obtain \(v_{q^{\prime}-r}=J_{1}\). Hence by using (5.11) inductively on \(\alpha\), we obtain
\[\sum_{v_{\alpha-1}<v\leq v_{\alpha}}\widetilde{\Theta}_{\alpha}^{\;\;v}\Delta _{v}^{\;\;J}=0\mod\phi,\theta,\]
which implies, by the Cartan lemma and (5.8),
\[\Delta_{v}^{\;\;J}=0\mod\phi,\theta.\]
The same argument is valid for other cases.
Choose a curve \(x:(-\epsilon,\ \epsilon)\to\mathscr{S}^{\prime}_{f(P)}\) of the form (5.7) such that \(x(0)=f(x).\) We may assume that
\[\dot{x}(0)\not\in T_{f(x)}[Z_{0}(f(x)),Z_{0}(f(x))^{*}]_{q^{\prime}}.\]
Write
\[Z_{\alpha}(x(t))=t\sum_{U=1}^{r}C_{\alpha}^{\ U}V_{U}+t^{2}\sum_{U=1}^{r}D_{\alpha}^{\ U}V_{U}+O(t^{3})\mod Z_{0}(f(x)),\quad\alpha=1,\dots,q^{\prime}-r, \tag{5.14}\]
where
\[V_{U}=\widetilde{Z}_{U}-X_{U},\ U=1,\dots,r\]
are vectors that, together with \(Z_{0}(f(x)),\) span \(V_{f(P)}.\) Write
\[Z_{\alpha}(x(s))=Z_{\alpha}(x(t))+(s-t)V_{\alpha}(t)+O((s-t)^{2}),\quad s,t\in(-\epsilon,\ \epsilon)\]
for some vector field \(V_{\alpha}(t)\) of the form
\[V_{\alpha}(t)=\sum_{U=1}^{r}C_{\alpha}^{\ U}(t)V_{U}\mod Z_{0}(f(x)),\quad t \in(-\epsilon,\ \epsilon).\]
Then in view of (5.14),
\[C_{\alpha}^{\ U}(t)=C_{\alpha}^{\ U}\]
and
\[\sum_{U=1}^{r}C_{\alpha}^{\ U}V_{U}(t)=\sum_{U=1}^{r}C_{\alpha}^{\ U}V_{U}+2t \sum_{U=1}^{r}D_{\alpha}^{\ U}V_{U}+O(t^{2}). \tag{5.15}\]
By (5.10),
\[C_{\alpha}^{\ U}=0,\quad U>J_{1}.\]
By differentiating (5.15) with respect to \(t,\)
\[C_{\alpha}^{\ u}\dot{V}_{u}(0)=2\sum_{U=1}^{r}D_{\alpha}^{\ U}V_{U}. \tag{5.16}\]
On the other hand, since
\[dZ_{\alpha}=\frac{1}{2}(\widetilde{\Theta}_{\alpha}^{\ u}-\Theta_{\alpha}^{\ u})V_{u}\mod\phi,\square_{P},\]
span\(\{\dot{V}_{u}(0),\ u=1,\dots,J_{1}\}\) is obtained from
\[dV_{U}=d\widetilde{Z}_{U}-dX_{U}=\sum_{W=1}^{r}(\widetilde{\Omega}_{U}^{\ W}- \overline{\Delta}_{U}^{\ W})\widetilde{Z}_{W}+\sum_{J=1}^{p^{\prime}-q^{ \prime}+r}(\Delta_{U}^{\ J}-\Omega_{U}^{\ J})X_{J}\mod Z_{0},Y. \tag{5.17}\]
Define
\[\widehat{V}_{U}=\widetilde{Z}_{U}+X_{U},\quad U=1,\dots,r\]
so that
\[\langle V_{U},\widehat{V}_{W}\rangle=-2\delta_{U}^{\ W},\quad U,W=1,\dots,r\]
and
\[\langle\widehat{V}_{U},X_{J}\rangle=0,\quad U=1,\ldots,r,\ J>r.\]
By substituting
\[\widetilde{Z}_{W}=\frac{1}{2}(V_{W}+\widehat{V}_{W}),\quad X_{W}=\frac{1}{2}(V_{ W}-\widehat{V}_{W})\]
to (5.17) we obtain
\[2dV_{u}=\sum_{W=1}^{r}(\widetilde{\Omega}_{u}^{\ W}+\Omega_{u}^{\ W}-\Delta_{u}^{\ W}-\overline{\Delta}_{u}^{\ W})\widehat{V}_{W}+\sum_{J>r}(\Delta_{u}^{\ J}-\Omega_{u}^{\ J})X_{J}\mod V_{f(P)},\]
which implies
\[(\widetilde{\Omega}_{u}^{\ W}+\Omega_{u}^{\ W}-\Delta_{u}^{\ W}-\overline{ \Delta}_{u}^{\ W})=(\Delta_{u}^{\ J}-\Omega_{u}^{\ J})=0,\quad\text{mod }\phi,\square_{P},\quad 1 \leq W\leq r,\ J>r\]
and
\[dV_{u}=\sum_{W=1}^{r}(\widetilde{\Omega}_{u}^{\ W}-\overline{\Delta}_{u}^{\ W})V_{W} \mod\phi,\square_{P}.\]
Let
\[L_{P}:=\text{span}\{Z_{0}(f(y)):y\in\mathscr{S}_{P}\}. \tag{5.19}\]
Then \(L_{P}\) is the smallest subspace of \(V_{f(P)}\) such that
\[\mathscr{L}_{P}\subset\pi_{r}^{\prime}\left((\rho_{r}^{\prime})^{-1}(\{(A,B)\in\mathcal{D}_{r}(X^{\prime}):A\subset L_{P},L_{P}^{*}\subset B\})\right), \tag{5.20}\]
where \(\mathscr{L}_{P}\) is defined in (5.4). By nondegeneracy of \(\widetilde{\Theta}_{u}^{\ \beta},u=1,\ldots,J_{1}\) and (5.14), \(L_{P}\) contains \(\text{span}\{V_{u},\ u=1,\ldots,J_{1}\}\). Therefore we may assume
\[L_{P}=Z_{0}(f(x))+\text{span}\{V_{U},\ U=1,\ldots,J_{P}\}\]
and
\[L_{P}^{*}=L_{P}+\text{span}\{\widetilde{Z}_{U},X_{J},\ U,J>J_{P}\}\]
for some \(J_{1}\leq J_{P}\leq r\). Since \(Z_{0}(f(y))\) is a subspace in a fixed null space \(V_{f(P)}\),
\[L_{P}^{*}:=\bigcap_{y\in\mathscr{S}_{P}}Z_{0}^{*}(f(y)).\]
Define
\[M_{P}:=[L_{P},L_{P}^{*}]_{q^{\prime}}=\bigcap_{y\in\mathscr{S}_{P}}[Z_{0}(f(y )),Z_{0}(f(y))^{*}]_{q^{\prime}}.\]
By definition, for any \(x\in\mathscr{S}_{P}\), \([Z_{0}(f(x)),Z_{0}(f(x))^{*}]_{q^{\prime}}\) contains \(M_{P}\) and (5.20) becomes
\[\mathscr{L}_{P}\subset\pi_{r}^{\prime}\left((\rho_{r}^{\prime})^{-1}(\mathcal{Z}_{M_{P}}^{\prime})\right)=\mathscr{Z}_{M_{P}}^{\prime}.\]
If \(M_{P}\) is a point, then
\[\bigcap_{\sigma\in\mathcal{S}_{P}\cap Dom(f_{1}^{\sharp})}X_{f_{1}^{\sharp}( \sigma)}^{\prime}=\{f(P)\}.\]
Hence by applying Lemma 5.4, we can show that \(f\) has a rational extension. Now assume that \(M_{P}\) is positive dimensional. Since \(L_{P}\) is a null space, \(M_{P}\) is a nontrivial characteristic subspace of \(X^{\prime}\) passing through \(f(P)\) such that \(M_{P}\cap S_{s}(\Omega^{\prime})\) is a boundary component of \(\Omega^{\prime}\) for some \(0<s<r\) and
\[T_{f(P)}M_{P}=\bigcap_{y\in\mathscr{S}_{P}}T_{f(P)}[Z_{0}(f(y)),Z_{0}(f(y))^{*} ]_{q^{\prime}}.\]
By Lemma 4.2,
\[\bigcap_{y\in\mathscr{S}_{P}}T_{f(P)}[Z_{0}(f(y)),Z_{0}(f(y))^{*}]_{q^{\prime }}=\bigcap_{y\in\mathscr{S}_{P}}\mathscr{N}_{A_{y}},\]
where
\[A_{y}=\operatorname{Hom}(Z_{0}(f(y)),Y_{y})\]
for some suitable \(S_{r}(\Omega^{\prime})\)-frame at \(f(y)\). Here we regard \(\mathscr{N}_{A_{y}}\) as a subspace in \(T_{f(P)}X^{\prime}\) by parallel translation in Harish-Chandra coordinates. Let \(\widehat{L}_{P}\) be a subspace in \(\mathbb{C}^{p^{\prime}+q^{\prime}}\) such that
\[\bigcap_{y\in\mathscr{S}_{P}}\mathscr{N}_{A_{y}}=\mathscr{N}_{A}\]
for
\[A=\operatorname{Hom}(L_{P},\widehat{L}_{P}/L_{P}).\]
That is,
\[\widehat{L}_{P}=\operatorname{span}\{Y_{y},\ y\in\mathscr{S}_{P}\}\mod L_{P}.\]
Since \(Y_{y}\) is a dual of \(Z_{0}(y)\) under the basic form \(\langle\,\ \rangle\), we obtain
\[\widehat{L}_{P}=\operatorname{span}\{\widehat{V}_{U},\ U=1,\ldots,J_{P}\} \mod Y_{f(x)}+L_{P}.\]
and
\[T_{f(P)}M_{P}=\sum_{U,J>J_{P}}\Delta_{U}^{\ J}X_{J}.\]
Therefore
\[\langle dV_{u},\widetilde{Z}_{W}\rangle=\langle d\widehat{V}_{u},X_{J}\rangle =0\mod\phi,\square_{P},\quad W,J>J_{P},\]
which implies
\[\Delta_{U}^{\ J}=\Delta_{u}^{\ J}=0\mod\phi,\quad U,J>J_{P}\]
and
\[\widetilde{\Omega}_{u}^{\ W}=\Omega_{u}^{\ J}=0\mod\phi,\square_{P},\quad W,J> J_{P}.\]
Furthermore, since
\[\langle V_{u},\widetilde{Z}_{W}\rangle=\langle V_{u},X_{J}\rangle=0,\quad W,J> J_{P}\]
on \(\mathscr{L}_{P}\), by complexification and (5.6), we obtain
\[\langle dV_{u},\widetilde{Z}_{W}\rangle=\langle dV_{u},X_{J}\rangle=0,\quad W,J >J_{P}\]
on \(T_{x}^{1,0}S_{1}(\Omega)\), i.e.,
\[\langle dV_{u},\widetilde{Z}_{W}\rangle=\langle d\widehat{V}_{u},X_{J}\rangle =0\mod\phi,\bar{\theta},\quad W,J>J_{P},\]
which implies
\[\widetilde{\Omega}_{u}^{\ W}=\Omega_{u}^{\ J}=0\mod\phi,\bar{\theta},\quad W,J>J_ {P}.\]
Since \(\square_{P}\) and \(\bar{\theta}\) are linearly independent, together with (5.18), we obtain
\[\widetilde{\Omega}_{u}^{\ W}=\Omega_{u}^{\ J}=0\mod\phi,\quad W,J>J_{P}. \tag{5.21}\]
Moreover, since \(L_{P}\) is the smallest subspace that satisfies (5.20), by the property of degree two curves, we obtain that for each \(W\leq J_{P},\) there exists \(u\) such that
\[\widetilde{\Omega}_{u}^{\ W}-\overline{\Delta}_{u}^{\ W}\neq 0\mod\phi,\square _{P},\]
implying that
\[\widetilde{\Omega}_{u}^{\ W}\neq 0\mod\phi,\bar{\theta}.\]
This condition depends only on \(\widetilde{\Omega}_{u}^{\ W}\) and is independent of the choice of \(P.\) Hence we can choose an integer \(J_{2}=J_{P}\) and a further reduction of \(S_{r}(\Omega^{\prime})\)-frame such that
\[V_{U}=\widetilde{Z}_{U}\mod X,\quad U=J_{1}+1,\ldots,J_{2}\]
with the nondegeneracy condition on \(\widetilde{\Omega}_{u}^{\ W}\) modulo \(\phi,\bar{\theta}\) stated above. That is, if we let
\[\widetilde{\Omega}_{u}^{\ W}=h_{u}^{\ W,a}\theta_{a}\mod\phi,\bar{\theta}\]
and
\[h_{u}^{\ a}:=\sum_{W=J_{1}+1,\ldots,J_{2}}h_{u}^{\ W,a}V_{W},\]
then
\[\operatorname{span}\{V_{u},\ u=1,\ldots,J_{1}\}+\operatorname{span}\{h_{u}^{ \ a}:a=1,\ldots,q-1,\,u=1,\ldots,J_{1}\}=L_{P}\mod Z,\]
where \(\theta_{a},\ a=1,\ldots,q-1\) are \((1,0)\)-forms of \(S_{1}(\Omega)\) corresponding to \(\Theta_{\alpha}^{\ J}.\)
Suppose that after rotation of \(X_{W},\ J_{1}<W\leq J_{2},\) there exists \(W_{0}\leq J_{2}\) such that
\[\Omega_{u}^{\ W_{0}}-\Delta_{u}^{\ W_{0}}=0\mod\phi,\square_{P}\quad\forall u.\]
Then by (5.18),
\[\widetilde{\Omega}_{u}^{\ W_{0}}=0\mod\phi,\bar{\theta},\quad\forall u,\]
which contradicts the assumption on \(J_{2}.\) Therefore for each \(W=J_{1}+1,\ldots,J_{2},\) there exists \(u\) such that
\[\Omega_{u}^{\ W}-\Delta_{u}^{\ W}\neq 0\mod\phi,\square_{P},\]
which implies
\[\Omega_{u}^{\ W}\neq 0\mod\phi,\theta.\]
Thus the choice of \(\widehat{L}_{P}\) is independent of \(P.\)
Summing up, we have a reduction of \(S_{r}(\Omega^{\prime})\)-frame such that
\[\widetilde{\Theta}_{U}^{\ \beta}=\Theta_{\alpha}^{\ J}=0\mod\phi,\quad U,J>J_ {1},\]
\[\widetilde{\Omega}_{u}^{\ W}=\Delta_{U}^{\ J}=\Delta_{u}^{\ J}=\Omega_{k}^{\ J}=0\mod\phi,\quad U,W,J>J_{2}\]
and
\[T_{f(P)}M_{P}=\sum_{U,J>J_{2}}\Delta_{U}^{J}X_{J}\]
for all \(P\in S_{0}(\Omega)\) such that \(x\in\mathscr{S}_{P}\). Here we regard \(T_{f(P)}M_{P}\) as a subspace in \(T_{f(x)}^{1,0}S_{r}(\Omega^{\prime})\) by parallel translation in Harish-Chandra coordinates. By using the reduction of \(S_{r}(\Omega^{\prime})\)-frame, we will prove the following main technical lemma of the paper.
**Lemma 5.4**.: _Let \(\Omega\) be of tube type and let \(P\in S_{0}(\Omega)\) be a general point. Suppose there exists a nontrivial subgrassmannian \(M\) such that_
\[f_{1}^{\sharp}(\mathcal{S}_{P}\cap Dom(f_{1}^{\sharp}))\subset\mathcal{S}_{M}^ {\prime}.\]
_Then there exists a unique maximal characteristic subspace \(M_{P}\subset X^{\prime}\) of the form_
\[M_{P}=[L_{P},L_{P}^{*}]_{q^{\prime}}\]
_for some null space \(L_{P}\) such that_
\[f(\mathscr{S}_{P})\subset\mathscr{S}_{M_{P}}^{\prime}. \tag{5.22}\]
_Furthermore, \(M_{P}\) is parallel with \(M_{\widetilde{P}}\) for general \(P,\widetilde{P}\in S_{0}(\Omega)\)._
Proof.: It is enough to show that the subspaces \(M_{P}\) are parallel to one another for general \(P\in S_{0}(\Omega)\). Let \(x\in S_{1}(\Omega)\) be a general point. Then for all \(P\in S_{0}(\Omega)\) such that \(x\in\mathscr{S}_{P}^{1}\) or equivalently,
\[P\in[Z_{0}(x),Z_{0}(x)^{*}]_{q}\cap S_{0}(\Omega),\]
the subspaces \(M_{P}\) are parallel to one another. Since any two points in \(S_{0}(\Omega)\) are connected by a chain of \([Z_{0}(x),Z_{0}(x)^{*}]_{q}\cap S_{0}(\Omega),\ x\in S_{1}(\Omega)\), we can complete the proof.
**Corollary 5.5**.: _Let \(\Omega^{\prime}_{1}\) and \(\Omega^{\prime}_{2}\) be totally geodesic subdomains of \(\Omega^{\prime}\) such that \(\Omega^{\prime}_{1}\times\Omega^{\prime}_{2}\) is a totally geodesic subspace of \(\Omega^{\prime}\) of maximal rank passing through \(f(0)\) and \(T_{f(0)}\Omega^{\prime}_{2}\) is parallel with \(T_{f(P)}M_{P}\) for \(P\in S_{0}(\Omega)\). Then_
\[f(\Omega)\subset\Omega^{\prime}_{1}\times\Omega^{\prime}_{2}. \tag{5.23}\]
_Moreover, if we decompose \(f=f_{1}\times f_{2}:\Omega\to\Omega^{\prime}_{1}\times\Omega^{\prime}_{2}\), then \(f_{1}\) is a proper rational map._
Proof.: Since
\[S_{0}(\Omega^{\prime}_{1}\times\Omega^{\prime}_{2})=S_{0}(\Omega^{\prime}_{1} )\times S_{0}(\Omega^{\prime}_{2}),\]
to show (5.23), it is enough to show that
\[f(S_{0}(\Omega))\subset S_{0}(\Omega^{\prime}_{1})\times S_{0}(\Omega^{\prime }_{2}).\]
Let \(P\in S_{0}(\Omega)\) be a general point and let \(M_{P}\) be the maximal characteristic subspace in Lemma 5.4. Since \(T_{f(P)}M_{P}\) is parallel with \(T_{f(0)}\Omega^{\prime}_{2}\), \(M_{P}\) is of the form
\[M_{P}=\{A_{P}\}\times M,\]
where \(M\) is the compact dual of \(\Omega^{\prime}_{2}\) and \(A_{P}\) is a point in \(S_{0}(\Omega^{\prime}_{1})\). Since \(P\) is arbitrary, we obtain
\[f(P)\in S_{0}(\Omega^{\prime}_{1})\times S_{0}(\Omega^{\prime}_{2}),\quad \forall P\in S_{0}(\Omega).\]
Now it is enough to show that \(f_{1}\) is proper and that the characteristic subspace \(M_{P}\) in Lemma 5.4 for \(f_{1}\) is a point for some \(P\in S_{0}(\Omega)\). Suppose \(f_{1}\) is proper. As shown above, \(M_{P}\) for \(f\) is of the form \(\{A_{P}\}\times M\) for general \(P\in S_{0}(\Omega)\). Hence by definition, \(M_{P}\) in Lemma 5.4 for \(f_{1}\) is \(\{A_{P}\}\), implying that \(f_{1}\) is rational. To show that \(f_{1}\) is proper, it is enough to show that \(f_{1}(S_{q-1}(\Omega))\subset\partial\Omega_{1}^{\prime}\). Suppose otherwise. Since \(f\) is proper and
\[\partial(\Omega_{1}^{\prime}\times\Omega_{2}^{\prime})=(\partial\Omega_{1}^{ \prime}\times\Omega_{2}^{\prime})\cup(\Omega_{1}^{\prime}\times\partial\Omega_ {2}^{\prime})\cup(\partial\Omega_{1}^{\prime}\times\partial\Omega_{2}^{\prime }),\]
we obtain
\[f_{2}(S_{q-1}(\Omega))\subset\partial\Omega_{2}^{\prime},\]
i.e., \(f_{2}:\Omega\to\Omega_{2}^{\prime}\) is a proper holomorphic map. Since \(f_{2}\) is proper, by Corollary 5.5 there exists a further decomposition \(f_{2}=g\times h:\Omega\to\Omega_{3}^{\prime}\times\Omega_{4}^{\prime}\subset\Omega_{2}^{\prime}\) and \(M_{P}\) in Lemma 5.4 for \(f_{2}\) is of the form \(\{B_{P}\}\times N\) for some \(B_{P}\in S_{0}(\Omega_{3}^{\prime})\). Since \(\Omega_{1}^{\prime}\times\Omega_{2}^{\prime}\) is totally geodesic, this implies that \(M_{P}\) for \(f\) should be of the form \(\{A_{P}\times B_{P}\}\times N\), contradicting the uniqueness of \(M_{P}=\{A_{P}\}\times M\) for \(f\).
## 6. Proof of Theorem 1.2 and Corollary 1.3
First assume that \(\Omega\) is of tube type. Then Corollary 5.5 completes the proof. Now assume that \(\Omega\) is of non-tube type. Then the following lemma completes the proof.
**Lemma 6.1**.: _Let \(P\in S_{0}(\Omega)\) and let \(M_{P}\) be a characteristic subspace passing through \(f(P)\) such that_
\[f(\mathscr{S}_{P}^{1})\subset\mathscr{S}_{M_{P}}^{\prime}. \tag{6.1}\]
_Then \(M_{P}\) is a point for general \(P\in S_{0}(\Omega)\)._
Proof.: Let \(P\in S_{0}(\Omega)\) be a general point. Suppose \(M_{P}\) is not a point. Then \(M_{P}\) is a positive dimensional characteristic subspace. Choose a rank one boundary component \(\Omega_{\sigma}\) such that \(P\in\partial\Omega_{\sigma}\). Since \(\Omega\) is of non-tube type, \(\Omega_{\sigma}\) is a ball of dimension at least \(2\). Let \(x\in\Omega_{\sigma}\). Choose a totally geodesic maximal tube type subdomain \(\Omega_{x}\subset\Omega\) such that \(x\in S_{1}(\Omega_{x})\) and \(P\in S_{0}(\Omega_{x})\). Since \(M_{P}\) is nontrivial, by Corollary 5.5, there exist a nontrivial tube type subdomain \(\Omega_{1}^{\prime}(x)\) and a characteristic subdomain \(\Omega_{2}^{\prime}(x)\) such that
\[f(\Omega_{x})\subset\Omega_{1}^{\prime}(x)\times\Omega_{2}^{\prime}(x).\]
Moreover, in view of (5.19), \(\Omega_{2}^{\prime}(x)\) is completely determined by \(Z_{0}(f(\mathscr{S}_{P}^{1}(\Omega_{x})))\). Since \(Z_{0}(f)\) is constant on each boundary component, \(\Omega_{2}^{\prime}(x)\) is parallel for all \(x\in\Omega_{\sigma}\). Since \(\Omega_{1}^{\prime}(x)\times\Omega_{2}^{\prime}(x)\) is totally geodesic, \(\Omega_{1}^{\prime}(x)\) is also parallel for all \(x\in\Omega_{\sigma}\). Hence there exists a totally geodesic subspace of the form \(\Omega_{1}^{\prime}(\sigma)\times\Omega_{2}^{\prime}(\sigma)\subset\Omega^{\prime}\) of maximal rank such that \(\Omega_{1}^{\prime}(\sigma)\) is of tube type, \(\Omega_{2}^{\prime}(\sigma)\) is a characteristic subspace and
\[f(\Omega_{\sigma})\subset\partial(\Omega_{1}^{\prime}(\sigma)\times\Omega_{2}^ {\prime}(\sigma)).\]
Since \(\Omega_{\sigma}\) is a ball of dimension at least \(2\), \(f\) maps Shilov boundary to Shilov boundary and \(\Omega_{1}^{\prime}\) is of tube type, it follows that
\[f(\partial\Omega_{\sigma})\subset\{A_{P}\}\times\partial\Omega_{2}^{\prime}(\sigma)\]
for some \(A_{P}\in S_{0}(\Omega_{1}^{\prime})\) depending only on \(f(P)\). Since \(\Omega_{\sigma}\) is arbitrary in \(\mathscr{S}_{P}^{1}\) and \(\Omega_{2}^{\prime}(y)\) is parallel with \(\Omega_{2}^{\prime}(x)\) for \(x,y\in\mathscr{S}_{P}^{1}(\Omega_{x})\), \(\Omega_{2}^{\prime}(\sigma)\) is parallel with \(\Omega_{2}^{\prime}(\widetilde{\sigma})\) for \(\Omega_{\sigma},\Omega_{\widetilde{\sigma}}\subset\mathscr{S}_{P}^{1}\) and
\[f(\mathscr{S}_{P})\subset\{A_{P}\}\times\Omega_{2}^{\prime}.\]
Since \(P\) is general, we obtain
\[f(S_{1}(\Omega))=f_{1}(\Omega)\times f_{2}(\Omega)\subset\Omega_{1}^{\prime} \times\Omega_{2}^{\prime}.\]
In particular, \(f_{1}\) is constant. Since \(f(S_{0}(\Omega))\subset S_{0}(\Omega^{\prime})\), we obtain \(f_{1}(\Omega)\) is a point in \(S_{0}(\Omega_{1}^{\prime})\), contradicting the assumption that \(f\) is proper.
_Proof of Corollary 1.3:_ Let \(f(S_{0}(\Omega)\cap U)\subset S_{m}(\Omega^{\prime})\) for some open set \(U\). If \(m=0\), then by Theorem 1.2, \(f\) is of the form
\[f=f_{1}\times f_{2}:\Omega\to\Omega_{1}^{\prime}\times\Omega_{2}^{\prime} \subset\Omega^{\prime}\]
whose factor \(f_{1}:\Omega\to\Omega_{1}^{\prime}\) is a rational proper map. If \(\Omega_{2}^{\prime}\) is trivial, then \(f\) is rational. If \(\Omega_{2}^{\prime}\) is nontrivial, then \(\text{rank}(\Omega_{1}^{\prime})<2q-1\). Hence by Corollary 1.2 of [13], \(f_{1}\) has a standard embedding factor. Suppose \(m\geq 1\). Then by the properness of \(f\), after shrinking \(U\) if necessary, \(f(S_{1}(\Omega)\cap U)\) is contained in \(S_{r}(\Omega^{\prime})\) for some \(r>m\geq 1\), implying that
\[q^{\prime}-r<2(q-1)\]
and on an open set of \(S_{1}(\Omega)\), \(f\) is transversal, i.e.,
\[f_{*}(\nu)\notin T_{f(x)}^{1,0}S_{r}(\Omega^{\prime})+T_{f(x)}^{0,1}S_{r}( \Omega^{\prime})\]
for all real vector \(\nu\in T_{x}S_{1}(\Omega)\) transversal to \(T_{x}^{1,0}S_{1}(\Omega)+T_{x}^{0,1}S_{1}(\Omega)\). Then by Corollary 1.2 of [13], \(f\) has a standard embedding factor, which completes the proof.
|
2306.06492 | Study of the nonleptonic charmless $B$ ${\to}$ $SS$ decays with the QCD
factorization approach | Inspired by the brilliant prospects of the ongoing $B$ meson experiments, the
hadronic charmless $B$ ${\to}$ $SS$ decays are studied by considering the
next-to-leading (NLO) contributions with the QCD factorization approach, where
$S$ denotes the scalar mesons $K_{0}^{\ast}(1430)$ and $a_{0}(1450)$. Branching
ratios and $CP$ violating asymmetries are estimated with the updated values of
hadronic parameters obtained from a covariant light-front quark model, for two
scenarios where the scalar mesons are the $1^{3}P_{0}$ and $2^{3}P_{0}$ states.
It is found that the NLO contributions are very important for the $B$ ${\to}$
$SS$ decays; For the $B$ ${\to}$ $a_{0}(1450)K_{0}^{\ast}(1430)$ and $B_{s}$
${\to}$ $K_{0}^{\ast}(1430)\overline{K}_{0}^{\ast}(1430)$ decays, branching
ratios can reach up to the order of ${\cal O}(10^{-5})$ by assuming that the
scalar mesons are the $1P$ states, and should first be investigated in the
future experiments. | Lili Chen, Mengfei Zhao, Liting Wang, Yueyang Kang, Qin Chang, Junfeng Sun | 2023-06-10T17:16:29Z | http://arxiv.org/abs/2306.06492v1 | # Study of the nonleptonic charmless \(B\to SS\) decays
###### Abstract
Inspired by the brilliant prospects of the ongoing \(B\) meson experiments, the hadronic charmless \(B\to SS\) decays are studied by considering the next-to-leading (NLO) contributions with the QCD factorization approach, where \(S\) denotes the scalar mesons \(K_{0}^{*}(1430)\) and \(a_{0}(1450)\). Branching ratios and \(CP\) violating asymmetries are estimated with the updated values of hadronic parameters obtained from a covariant light-front quark model, for two scenarios where the scalar mesons are the \(1^{3}P_{0}\) and \(2^{3}P_{0}\) states. It is found that the NLO contributions are very important for the \(B\to SS\) decays; For the \(B\to a_{0}(1450)K_{0}^{*}(1430)\) and \(B_{s}\to K_{0}^{*}(1430)\overline{K}_{0}^{*}(1430)\) decays, branching ratios can reach up to the order of \(\mathcal{O}(10^{-5})\) by assuming that the scalar mesons are the \(1P\) states, and should first be investigated in the future experiments.
## I Introduction
According to the traditional quark model, the \(P\)-wave spin-triplet quark-antiquark states with total angular momentum \(J=0\) have the quantum number \(J^{P}=0^{+}\), and are called the scalar mesons. The scalar mesons mostly appear as hadronic resonances with large decay widths, and several resonances and decay channels can lie within a narrow mass interval. The overlaps between resonances and background make it considerably difficult to resolve the scalar mesons. In addition, the di-boson combinations can also have the quantum number \(J^{P}=0^{+}\). In contrast to the ground pseudoscalar and vector mesons, the identification of the scalar mesons is a long-standing puzzle. Understanding the internal structure of the scalar mesons is one of the most interesting topics in hadron physics. Generally, the scalar mesons have been identified as the ordinary quark-antiquark \(q\bar{q}\) states, tetraquark \(q\bar{q}q\bar{q}\) states, meson-meson molecular states or even those supplemented with a scalar glueball. There are many candidates with \(J^{PC}=0^{++}\) below 2 GeV, which cannot be accommodated in one \(SU(3)\) flavor nonet satisfactorily. From the mass spectrum of those scalar mesons and their chromatic as well as electromagnetic decays, a prospective picture (scenario 2, hereafter abbreviated as S2) suggests that the isovector \(a_{0}(1450)\), isodoublet \(K_{0}^{*}(1430)\), isoscalar \(f_{0}(1710)\) and \(f_{0}(1370)\) above 1 GeV can be assigned to a conventional \(SU(3)\)\(q\bar{q}\) scalar nonet with the spectroscopy symbol of \(1^{3}P_{0}\)[1], while the scalar mesons \(a_{0}(980)\), \(K_{0}^{*}(700)\) (or \(\kappa\)), \(f_{0}(980)\) and \(f_{0}(500)\) (or \(\sigma\)) below 1 GeV form an unconventional \(q\bar{q}q\bar{q}\) exotic nonet [1; 2; 3]. Of course, the above assignments are tentative. In alternative schemes, the scalar mesons with mass below 1 GeV are interpreted as the lowest lying \(q\bar{q}\) states, while the scalars \(a_{0}(1450)\), \(K_{0}^{*}(1430)\), \(f_{0}(1710)\) and \(f_{0}(1370)\) are regarded as the radial excited states with the spectroscopy symbol of \(2^{3}P_{0}\) (scenario 1, namely S1).
It is widely known that the \(B\) mesons have rich decay modes, and the light scalar mesons can be produced in \(B\) meson decays. The hadronic \(B\) meson decays involving final-state scalar mesons therefore provide another efficient way to investigate the features and the possible inner structures of the scalar mesons. Experimentally, some of the \(B\to SP\), \(SV\), \(SS\), \(SX\) decays (where the symbols \(S\), \(P\), \(V\) and \(X\) denote the light scalar mesons, pseudoscalar mesons, vector mesons and other particles, respectively), such as the \(B\to K_{0}^{*}(1430)^{+}\pi^{-}\), \(K_{0}^{*}(1430)^{+}\omega\), \(K_{0}^{*}(1430)^{0}\overline{K}_{0}^{*}(1430)^{0}\), \(K_{0}^{*}(1430)^{0}\pi^{+}\gamma\) decays, have been measured by the Belle, BaBar and LHCb groups [1]. With the running of the high-luminosity Belle-II and LHCb
experiments and the coming CEPC, FCC-ee and HL-LHC experiments, more and more data on the \(B\) meson decays will be available, more and more \(B\to SP\), \(SV\), \(SS\), \(SX\) decays can be discovered and investigated, and the measurement precision will become higher and higher, which lays a solid experimental foundation for carefully studying the scalar mesons and distinguishing between theoretical models. Phenomenologically, many of the \(B\to SP\), \(SV\), \(SS\), \(SX\) decays have been studied extensively with various theoretical models, for example, the \(B\to SP\), \(SV\) decays with the QCD factorization (QCDF) approach [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15], the \(B\to SP\), \(SV\), \(SS\) decays with the perturbative QCD (PQCD) approach [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38], the semileptonic \(B\to SX\) decays with the sum rules and other approaches [39; 40; 41; 42; 43; 44; 45; 46; 47], and so on. Various scenarios, such as S1 and S2, will inevitably give different theoretical predictions for the \(B\) meson decays. It is noteworthy that some studies have shown that branching ratios for the charmless \(B\to SS\) decays with the PQCD approach can be very large, for example, \(\mathcal{B}(B_{s}{\rightarrow}K^{*}_{0}(1430)K^{*}_{0}(1430))\sim\mathcal{O}(10^{-4})\)[33], \(\mathcal{B}(B{\rightarrow}K^{*}_{0}(1430)\sigma)\sim\mathcal{O}(10^{-4})\)[35], \(\mathcal{B}(B_{s}{\rightarrow}\sigma\sigma)\sim\mathcal{O}(10^{-4})\), \(\mathcal{B}(B_{s}{\rightarrow}\sigma f_{0}(980))\sim\mathcal{O}(10^{-4})\), \(\mathcal{B}(B_{s}{\rightarrow}f_{0}(980)f_{0}(980))\sim\mathcal{O}(10^{-4})\), \(\mathcal{B}(B{\rightarrow}\sigma\sigma)\sim\mathcal{O}(10^{-5})\)[38], \(\mathcal{B}(B{\rightarrow}a_{0}(980)a_{0}(980))\sim\mathcal{O}(10^{-5})\)[33]. Even more strikingly, branching ratios for the pure annihilation \(B\to SS\) decays, which one might intuitively expect to be very small, are predicted to be large with the PQCD approach, for example, \(\mathcal{B}(B_{s}{\rightarrow}a_{0}(980)a_{0}(980))\sim\mathcal{O}(10^{-5})\), \(\mathcal{B}(B_{s}{\rightarrow}a_{0}(1450)a_{0}(1450))\sim\mathcal{O}(10^{-5})\), \(\mathcal{B}(B_{d}{\rightarrow}\kappa^{+}\kappa^{-})\sim\mathcal{O}(10^{-6})\), \(\mathcal{B}(B_{d}{\rightarrow}K^{*}_{0}(1430)^{+}K^{*}_{0}(1430)^{-})\sim\mathcal{O}(10^{-6})\)[37]. So the study of the \(B\to SS\) decays is very promising both theoretically and experimentally. In order to deepen our understanding of the properties of the light scalar mesons and to provide the ongoing and coming experimental analyses with additional theoretical references, in this paper we study the nonleptonic charmless \(B\to SS\) decays with the QCDF approach, considering scenarios S1 and S2 for the scalar mesons, where \(S=K^{*}_{0}(1430)\) and \(a_{0}(1450)\).
This paper is organized as follows. In Section II, the theoretical framework is briefly reviewed, and the next-to-leading order effective coefficients and the weak annihilation amplitudes for the \(B\to SS\) decays are given with the QCDF approach. In Section III, the values of the nonperturbative input parameters are fixed. The numerical results and our comments are presented in Section IV. Finally, we conclude with a summary in Section V. The decay amplitudes are displayed in the Appendix.
## II Theoretical Framework
### The effective Hamiltonian
The low-energy effective Hamiltonian for the charmless nonleptonic \(B\to SS\) decays is written as [48],
\[\mathcal{H}_{\rm eff} = \frac{G_{F}}{\sqrt{2}}\sum_{q=d,s}\Big{\{}V_{ub}V_{uq}^{*}\Big{[}C_ {1}(\mu)O_{1}(\mu)+C_{2}(\mu)O_{2}(\mu)\Big{]} \tag{1}\] \[- V_{tb}V_{tq}^{*}\Big{[}\sum_{i=3}^{10}C_{i}(\mu)O_{i}(\mu)+C_{7 \gamma}(\mu)O_{7\gamma}(\mu)+C_{8g}(\mu)O_{8g}(\mu)\Big{]}\Big{\}}+{\rm h.c.},\]
where the Fermi constant \(G_{F}\) and the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements \(V_{ij}\) have been well determined experimentally [1]. The Wilson coefficients \(C_{i}\), which summarize the short-distance physical contributions, are in principle computable in perturbation theory order by order at the scale \(\mu=m_{W}\), and can be evolved down to the energy scale of the \(B\) meson decays, \(\mu\sim m_{b}\), with the renormalization group equation (RGE) [48], where \(m_{W}\) and \(m_{b}\) are the mass of the gauge boson \(W\) of the weak interactions and the heavy \(b\) quark mass, respectively. The remaining theoretical work is to calculate the hadronic matrix elements (HMEs), \(\langle S_{1}S_{2}|O_{i}|B\rangle\), where the local four-quark effective operators \(O_{i}\) are sandwiched between the initial \(B\) meson and the final scalar mesons.
In order to generate the strong phases essential for \(CP\) violation in hadronic \(B\) meson decays, and to cancel the unphysical \(\mu\)-dependence of the decay amplitude \(\mathcal{A}=\langle S_{1}S_{2}|\mathcal{H}_{\rm eff}|B\rangle\) originating from the Wilson coefficients, the higher order radiative corrections to HMEs are necessary and should be taken into consideration. However, the perturbative contributions embedded in HMEs become entangled with the nonperturbative contributions, which makes the theoretical calculation extremely complicated. How to properly and reasonably evaluate the HMEs of hadronic \(B\) meson decays has been an academic focus.
### The QCDF decay amplitudes
The QCDF approach [49; 50; 51; 52; 53; 54] is one of many QCD-inspired phenomenological remedies for dealing with HMEs. Based on the power counting rules in the heavy quark limit and an expansion in the strong coupling \(\alpha_{s}\) assisted by the collinear approximation, the long- and short-distance contributions are factorized. The nonperturbative contributions in
HMEs are either power suppressed by \(1/m_{b}\) or incorporated into the hadronic transition form factors and mesonic distribution amplitudes (DAs). Up to the leading power corrections of order \(1/m_{b}\), the QCDF factorization formula for HMEs concerned is written as [50],
\[\langle S_{1}S_{2}|O_{i}(\mu)|B\rangle = \sum_{j}F_{j}^{B\to S_{1}}\,f_{S_{2}}{\int}dy\,{\cal T}_{ij}^{I}(y)\, \phi_{S_{2}}(y)+(S_{1}{\leftrightarrow}S_{2}) \tag{2}\] \[+ f_{B}\,f_{S_{1}}\,f_{S_{2}}{\int}dx\,dy\,dz\,{\cal T}_{i}^{II}(x,y,z)\,\phi_{S_{1}}(x)\,\phi_{S_{2}}(y)\,\phi_{B}(z),\]
where \(x\), \(y\) and \(z\) are the longitudinal momentum fractions of the valence quarks. The form factors \(F_{j}^{B\to S}\), the decay constants \(f_{B}\) and \(f_{S}\), and the mesonic light cone DAs \(\phi_{B}\) and \(\phi_{S}\) are all nonperturbative parameters. These parameters are regarded as universal and process-independent, and can be obtained from the experimental data, lattice QCD simulation, QCD sum rules, or by comparison with other exclusive processes. \({\cal T}^{I}\) and \({\cal T}^{II}\) are the hard-scattering functions describing the local interactions among quarks and gluons at the \(B\) meson decay scale. They are, in principle, perturbatively calculable to all orders in \(\alpha_{s}\) at the leading power order of \(1/m_{b}\). At the leading order (LO) \(\alpha_{s}^{0}\), \({\cal T}^{I}\) = 1 and \({\cal T}^{II}\) = 0. The convolution integral of \({\cal T}^{I}\) and \(\phi_{S}\) then results in the decay constant of the emitted scalar meson. One can return from the QCDF formula Eq.(2) to the naive factorization (NF) approximation [55; 56], _i.e._, the four-quark HMEs can be written as the product of two diquark HMEs, and the diquark HMEs can be replaced by HMEs of the corresponding hadronic currents and then further parameterized by hadronic transition form factors and decay constants. Beyond the order \(\alpha_{s}^{0}\), the radiative corrections to HMEs make \({\cal T}^{I,II}\) no longer trivial, and some information about the \(CP\)-violating strong phases and the \(\mu\)-dependence of HMEs can be retrieved naturally.
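As a rough numerical illustration of the first convolution in Eq.(2) (a sketch added here for orientation, not part of the original derivation), the Python fragment below convolutes the trivial LO kernel \({\cal T}^{I}=1\) with a twist-2 scalar-meson DA of the standard Gegenbauer form \(\phi_{S}(y)\propto 6y(1-y)\sum_{m}b_{m}C_{m}^{3/2}(2y-1)\); the moments \(b_{m}\) are hypothetical placeholder values, not fitted numbers from this paper.

```python
# Sketch: LO convolution of T^I(y) = 1 with a truncated twist-2 scalar DA,
# phi_S(y) = 6 y (1-y) sum_m b_m C_m^{3/2}(2y-1).
# The Gegenbauer moments b_0..b_3 below are illustrative placeholders.
from scipy.integrate import quad
from scipy.special import gegenbauer

b = [1.0, -0.3, 0.0, 0.1]   # hypothetical b_0..b_3

def phi_S(y):
    u = 2.0 * y - 1.0
    return 6.0 * y * (1.0 - y) * sum(bm * gegenbauer(m, 1.5)(u)
                                     for m, bm in enumerate(b))

def T_I(y):
    return 1.0               # LO hard kernel

conv, _ = quad(lambda y: T_I(y) * phi_S(y), 0.0, 1.0)
print(conv)  # -> b_0: by Gegenbauer orthogonality only b_0 survives at LO
```

By orthogonality of the Gegenbauer polynomials, only \(b_{0}\) survives at LO, which is just the statement above that the LO convolution reduces to the normalization carried by the decay constant.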
With the QCDF approach, the amplitudes for the concerned \(B\to SS\) decays can be generally written as,
\[{\cal A}\,=\,\langle S_{1}S_{2}|{\cal H}_{\rm eff}|B\rangle\,=\,\frac{G_{F}}{ \sqrt{2}}\sum_{i}\lambda_{i}\sum_{j=1}^{10}a_{j}\langle S_{1}S_{2}|O_{j}|B \rangle_{\rm NF}, \tag{3}\]
where the parameter \(\lambda_{i}\) is the product of the CKM elements; the coefficient \(a_{j}\), which includes the nonfactorizable contributions beyond the leading order in \(\alpha_{s}\), is a combination of the Wilson coefficients; the HMEs \(\langle S_{1}S_{2}|O_{j}|B\rangle_{\rm NF}\) are defined and evaluated with the NF approximation.
### The QCDF coefficients
To simplify the expressions of the decay amplitudes, we will use the notation of Ref. [57] and write the QCDF coefficients as follows.
\[\alpha_{1}(S_{1}\,S_{2}) = a_{1}(S_{1}\,S_{2}), \tag{4}\] \[\alpha_{2}(S_{1}\,S_{2}) = a_{2}(S_{1}\,S_{2}),\] (5) \[\alpha_{3}^{p}(S_{1}\,S_{2}) = a_{3}^{p}(S_{1}\,S_{2})+a_{5}^{p}(S_{1}\,S_{2}),\] (6) \[\alpha_{4}^{p}(S_{1}\,S_{2}) = a_{4}^{p}(S_{1}\,S_{2})+\bar{\gamma}_{\chi}^{S_{2}}\,a_{6}^{p}(S _{1}\,S_{2}),\] (7) \[\alpha_{3,EW}^{p}(S_{1}\,S_{2}) = a_{9}^{p}(S_{1}\,S_{2})+a_{7}^{p}(S_{1}\,S_{2}),\] (8) \[\alpha_{4,EW}^{p}(S_{1}\,S_{2}) = a_{10}^{p}(S_{1}\,S_{2})+\bar{\gamma}_{\chi}^{S_{2}}\,a_{8}^{p}( S_{1}\,S_{2}), \tag{9}\]
where \(S_{1}\) denotes the recoiled scalar meson which absorbs the light spectator quark of the initial \(B\) mesons, and \(S_{2}\) denotes the emitted scalar meson. The ratio \(\bar{\gamma}_{\chi}^{S}\) is defined as
\[\bar{\gamma}_{\chi}^{S}(\mu) = \gamma_{\chi}^{S}(\mu)\,\bar{\mu}_{S}^{-1}(\mu)\,=\,\frac{2\,m_{S }}{\overline{m}_{b}(\mu)}, \tag{10}\] \[\gamma_{\chi}^{S}(\mu) = \frac{2\,m_{S}^{2}}{\overline{m}_{b}(\mu)\,[\overline{m}_{1}(\mu) -\overline{m}_{2}(\mu)]},\] (11) \[\bar{\mu}_{S}(\mu) = \frac{m_{S}}{\overline{m}_{1}(\mu)-\overline{m}_{2}(\mu)}, \tag{12}\]
where \(m_{S}\) is the mass of the emitted scalar meson, and the \(\mu\)-dependent \(\overline{m}_{i}\) is the \(\overline{\rm MS}\) running quark mass, which can be evaluated with the RGE. \(\overline{m}_{1}\) and \(\overline{m}_{2}\) correspond to the two valence quarks in the scalar meson.
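For orientation, the typical size of the factor \(\bar{\gamma}_{\chi}^{S}\) in Eq.(10) can be estimated with one-loop running (a numerical sketch we add here; the inputs \(\alpha_{s}(m_{Z})=0.118\), \(\overline{m}_{b}(\overline{m}_{b})=4.18\) GeV and \(n_{f}=5\) are standard reference values rather than parameters fixed by this paper):

```python
# Sketch: gamma_bar_chi^S = 2 m_S / m_b(mu), Eq.(10), with one-loop running
# of alpha_s and of the MS-bar b-quark mass for nf = 5 flavors.
import math

ALPHAS_MZ, MZ = 0.118, 91.19   # assumed reference inputs (MZ in GeV)
MB_MB, NF = 4.18, 5            # m_b(m_b) in GeV (assumed input)
BETA0 = 11.0 - 2.0 * NF / 3.0  # one-loop beta-function coefficient, 23/3

def alpha_s(mu):
    """One-loop evolution of alpha_s from mu = MZ."""
    return ALPHAS_MZ / (1.0 + ALPHAS_MZ * BETA0 / (2.0 * math.pi)
                        * math.log(mu / MZ))

def mb_bar(mu):
    """One-loop MS-bar mass: m(mu) = m(m_b) [alpha_s(mu)/alpha_s(m_b)]^(12/23)."""
    return MB_MB * (alpha_s(mu) / alpha_s(MB_MB)) ** (12.0 / 23.0)

m_S = 1.425  # m_{K0*(1430)} in GeV, for illustration
for mu in (2.1, 4.2):  # typical scales ~ m_b/2 and ~ m_b
    print(f"mu = {mu} GeV: gamma_bar_chi = {2.0 * m_S / mb_bar(mu):.3f}")
```

The output, of order one for the heavy scalar mesons considered here, makes explicit why the chirally enhanced terms cannot be neglected.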
Up to the next-to-leading order (NLO) in the coupling \(\alpha_{s}\), the general form of the QCDF coefficients \(a_{i}^{p}\) is expressed as,
\[a_{i}^{p}(S_{1}\,S_{2}) = \Big{(}C_{i}+\frac{C_{i\pm 1}}{N_{c}}\Big{)}\,N_{i}(S_{2})+P_{i}^{p} (S_{2}) \tag{13}\] \[+ \frac{C_{i\pm 1}}{N_{c}}\,\frac{C_{F}\,\alpha_{s}}{4\pi}\Big{[}V_{i}(S _{2})+\frac{4\pi^{2}}{N_{c}}\,H_{i}(S_{1}\,S_{2})\Big{]},\]
where the superscript \(p\) is to be omitted for \(i\) = 1 and 2, and the upper (lower) signs apply when \(i\) is odd (even). The \(C_{i}\) are the Wilson coefficients, the color factor is \(C_{F}=(N_{c}^{2}-1)/(2\,N_{c})\), and the color number is \(N_{c}=3\). Due to the relations between the scalar and vector decay constants of the scalar meson, the factor \(N_{i}(S_{2})\) is
\[N_{i}(S_{2}) = \left\{\begin{array}{ll}1&\quad\mbox{for}\ \ i\,=\,6,8;\\ \bar{\mu}_{S}^{-1}&\quad\mbox{others}.\end{array}\right. \tag{14}\]
In Eq.(13), the terms proportional to \(N_{i}(S_{2})\) are the LO contributions. Except for the coefficients \(a_{6,8}\), the LO contributions are proportional to the mass difference \(\Delta\overline{m}=\overline{m}_{1}\,-\,\overline{m}_{2}\). For scalar mesons consisting of light quarks, the mass difference \(\Delta\overline{m}\) is usually very small. Hence the LO contributions are chirally suppressed, and the NLO contributions are expected to be necessary and important for the \(B\to SS\) decays. The terms proportional to \(\alpha_{s}\) are the NLO contributions, including the vertex corrections \(V_{i}(S_{2})\), penguin contributions \(P_{i}^{p}(S_{2})\), and hard spectator scattering amplitudes \(H_{i}(S_{1}\,S_{2})\). When the emitted \(S_{2}\) meson can be decoupled from the \(B\)-\(S_{1}\) system, corresponding to the first line in Eq.(2), \(V_{i}(S_{2})\) and \(P_{i}^{p}(S_{2})\) are written as the convolution integrals of the hard scattering kernels \(T^{I}(y)\) and the mesonic DAs \(\phi_{S_{2}}(y)\). When the initial \(B\) meson is entangled with the final states by the hard spectator scattering interactions, \(H_{i}(S_{1}\,S_{2})\) are written as the convolution integrals of the hard scattering kernels \(T^{II}\) and all participating mesonic DAs, corresponding to the second line in Eq.(2). For the \(B\to SS\) decays, the explicit expressions of \(V_{i}(S_{2})\), \(P_{i}^{p}(S_{2})\) and \(H_{i}(S_{1}\,S_{2})\) have been given in our previous paper [15] via the replacements of the Gegenbauer moments \(a_{i}^{M_{j}}\to b_{i}^{S_{j}}\), the chiral factor \(\gamma_{\chi}^{M_{i}}\to\bar{\gamma}_{\chi}^{S_{i}}\), and the DAs \(\phi_{M_{i}}\to\phi_{S_{i}}\). For example, by integrating out the momentum fractions, \(H_{i}(S_{1}\,S_{2})\) can be expressed as functions of the Gegenbauer moments embedded in the mesonic DAs.
\[H_{i}(S_{1}\,S_{2})=\left\{\begin{array}{ll}0,&\mbox{ for $i\,=\,6,8$};\\ -\frac{B_{S_{1}\,S_{2}}}{A_{S_{1}\,S_{2}}}\frac{m_{B}}{\lambda_{B}} \Big{[}9\sum_{m=0}^{3}b_{m}^{S_{1}}\sum_{j=0}^{3}(-1)^{j}b_{j}^{S_{2}}-3\, \bar{\gamma}_{\chi}^{S_{1}}X_{H}\sum_{k=0}^{3}b_{k}^{S_{2}}\Big{]},&\mbox{ for $i\,=\,5,7$};\\ \frac{B_{S_{1}\,S_{2}}}{A_{S_{1}\,S_{2}}}\frac{m_{B}}{\lambda_{B}} \Big{[}9\sum_{m=0}^{3}b_{m}^{S_{1}}\sum_{j=0}^{3}b_{j}^{S_{2}}-3\,\bar{\gamma }_{\chi}^{S_{1}}X_{H}\sum_{k=0}^{3}(-1)^{k}b_{k}^{S_{2}}\Big{]},&\mbox{ others}\end{array}\right. \tag{15}\]
where the common factors are
\[A_{S_{1}\,S_{2}}\,=\,i\,\frac{G_{F}}{\sqrt{2}}\,U_{0}^{B\,S_{1}}(m_{S_{2}}^{2} )\,\bar{f}_{S_{2}}\,(m_{B}^{2}-m_{S_{1}}^{2}), \tag{16}\]
\[B_{S_{1}\,S_{2}}\,=\,i\,\frac{G_{F}}{\sqrt{2}}\,f_{B}\,\bar{f}_{S_{1}}\,\bar{ f}_{S_{2}}, \tag{17}\]
\[\frac{m_{B}}{\lambda_{B}}\,=\,\int_{\,0}^{1}dz\,\frac{\phi_{B}(z)}{z}, \tag{18}\]
\[X_{H}\,=\,\int_{\,0}^{1}\frac{dx}{1-x}, \tag{19}\]
where \(U_{0}^{B\,S_{1}}\) is the form factor, \(f_{B}\) is the decay constant of the \(B\) meson, \(\bar{f}_{S_{i}}\) is the scalar decay constant of the scalar mesons, the quantity \(\lambda_{B}\) parameterizes our ignorance about the \(B\) mesonic DAs, and the phenomenological parameter \(X_{H}\) is introduced to regularize the end point singularities.
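Schematically, Eq.(13) assembles each coefficient from a LO piece plus the NLO vertex, hard-spectator and penguin pieces. The Python sketch below (ours, for illustration only) shows how the pieces combine; the Wilson coefficients are indicative \(\mu\sim m_{b}\) values, and the numbers used for \(N_{i}\), \(V_{i}\), \(H_{i}\) and \(P_{i}^{p}\) are placeholders rather than results of this paper.

```python
# Sketch: assembling a QCDF coefficient a_i^p according to Eq.(13).
import math

NC = 3                           # number of colors
CF = (NC**2 - 1) / (2.0 * NC)    # color factor

def a_i(C_i, C_ipm1, N_i, alpha_s, V_i, H_i, P_i=0.0):
    lo = (C_i + C_ipm1 / NC) * N_i                     # LO, chirally suppressed via N_i
    nlo = (C_ipm1 / NC) * (CF * alpha_s / (4.0 * math.pi)) \
          * (V_i + (4.0 * math.pi**2 / NC) * H_i)      # vertex + hard spectator
    return lo + nlo + P_i                              # P_i^p is itself O(alpha_s)

# Toy numbers: C_1 ~ 1.08, C_2 ~ -0.18 at mu ~ m_b (indicative values); the
# rest are placeholders standing in for the convolution results.
print(a_i(C_i=1.08, C_ipm1=-0.18, N_i=0.2, alpha_s=0.22,
          V_i=-18.0 + 6.0j, H_i=1.5 + 0.0j))
```

The complex values of \(V_{i}\) and \(H_{i}\) are what feed the \(CP\)-violating strong phases mentioned above.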
In addition, according to many practical applications of the QCDF approach to two-body hadronic \(B\) decays, such as Refs. [4; 50; 51; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72], the weak annihilation (WA) contributions are important and worthy of consideration, although they are formally power suppressed relative to the LO contributions according to the QCDF power counting rules in the heavy quark limit. The QCDF coefficients of the WA amplitudes for the \(B\to SS\) decays have the same expressions as those in Eq.(55) of Ref. [57], _i.e._,
\[\beta_{i}^{p}\,=\,-\frac{B_{S_{1}\,S_{2}}}{A_{S_{1}\,S_{2}}}\,b_{i}^{p}, \tag{20}\]
\[b_{1} = \frac{C_{F}}{N_{c}^{2}}\,C_{1}\,A_{1}^{i},\qquad b_{2}\,=\,\frac{ C_{F}}{N_{c}^{2}}\,C_{2}\,A_{1}^{i}, \tag{21}\] \[b_{3}^{p} = \frac{C_{F}}{N_{c}^{2}}\,\big{[}C_{3}\,A_{1}^{i}+C_{5}\,(A_{3}^{i }+A_{3}^{f})+N_{c}\,C_{6}\,A_{3}^{f}\big{]},\] (22) \[b_{4}^{p} = \frac{C_{F}}{N_{c}^{2}}\,\big{[}C_{4}\,A_{1}^{i}+C_{6}\,A_{2}^{i} \big{]},\] (23) \[b_{3,EW}^{p} = \frac{C_{F}}{N_{c}^{2}}\,\big{[}C_{9}\,A_{1}^{i}+C_{7}\,(A_{3}^{i }+A_{3}^{f})+N_{c}\,C_{8}\,A_{3}^{f}\big{]},\] (24) \[b_{4,EW}^{p} = \frac{C_{F}}{N_{c}^{2}}\,\big{[}C_{10}\,A_{1}^{i}+C_{8}\,A_{2}^{i }\big{]}, \tag{25}\]
and the building blocks are respectively written as the functions of the Gegenbauer moments.
\[A_{1}^{i} \approx 2\,\pi\,\alpha_{s}\Big{\{}9\,\Big{[}b_{0}^{S_{1}}\Big{(}b_{0}^{ S_{2}}\,(X_{A}-4+\frac{\pi^{2}}{3})+b_{2}^{S_{2}}\,(6\,X_{A}-\frac{107}{3}+2\, \pi^{2})\] \[+b_{1}^{S_{2}}\,(3\,X_{A}+4-\pi^{2})+b_{3}^{S_{2}}\,(10\,X_{A}+ \frac{23}{18}-\frac{10}{3}\,\pi^{2})\Big{)}\] \[-b_{1}^{S_{1}}\,\Big{(}b_{0}^{S_{2}}\,(X_{A}+29-3\,\pi^{2})+b_{2} ^{S_{2}}\,(6\,X_{A}+754-78\,\pi^{2})\] \[+b_{1}^{S_{2}}\,(3\,X_{A}-213+21\,\pi^{2})+b_{3}^{S_{2}}\,(10\,X_ {A}-\frac{12625}{6}+210\,\pi^{2})\Big{)}\] \[+b_{2}^{S_{1}}\,\Big{(}b_{0}^{S_{2}}\,(X_{A}-119+12\,\pi^{2})+b_{ 2}^{S_{2}}\,(6\,X_{A}-9609+972\,\pi^{2})\] \[+b_{1}^{S_{2}}\,(3\,X_{A}+1534-156\,\pi^{2})+b_{3}^{S_{2}}\,(10\,X _{A}+\frac{118933}{3}-4020\,\pi^{2})\Big{)}\] \[-b_{3}^{S_{1}}\,\Big{(}b_{0}^{S_{2}}\,(X_{A}+\frac{2956}{9}-\frac {100}{3}\,\pi^{2})+b_{2}^{S_{2}}\,(6\,X_{A}+\frac{198332}{3}-6700\,\pi^{2})\] \[+b_{1}^{S_{2}}\,(3\,X_{A}-\frac{20743}{3}+700\,\pi^{2})+b_{3}^{S_{2 }}\,(10\,X_{A}-\frac{3585910}{9}+\frac{121100}{3}\,\pi^{2})\Big{)}\Big{]}\]
\[-\bar{\gamma}_{\chi}^{S_{1}}\,\bar{\gamma}_{\chi}^{S_{2}}\,X_{A}^{2}\Big{\}}, \tag{26}\]
\[A_{2}^{i}\,\approx\,2\,\pi\,\alpha_{s}\Big{\{}9\,\Big{[}b_{0}^{S_ {2}}\Big{(}b_{0}^{S_{1}}\,(X_{A}-4+\frac{\pi^{2}}{3})+b_{2}^{S_{1}}\,(6\,X_{A} -\frac{107}{3}+2\,\pi^{2})\] \[-b_{1}^{S_{1}}\,(3\,X_{A}+4-\pi^{2})-b_{3}^{S_{1}}\,(10\,X_{A}+ \frac{23}{18}-\frac{10}{3}\,\pi^{2})\Big{)}\] \[+b_{1}^{S_{2}}\,\Big{(}b_{0}^{S_{1}}\,(X_{A}+29-3\,\pi^{2})+b_{2} ^{S_{1}}\,(6\,X_{A}+754-78\,\pi^{2})\] \[-b_{1}^{S_{1}}\,(3\,X_{A}-213+21\,\pi^{2})-b_{3}^{S_{1}}\,(10\,X_ {A}-\frac{12625}{6}+210\,\pi^{2})\Big{)}\] \[+b_{2}^{S_{2}}\,\Big{(}b_{0}^{S_{1}}\,(X_{A}-119+12\,\pi^{2})+b_{ 2}^{S_{1}}\,(6\,X_{A}-9609+972\,\pi^{2})\] \[-b_{1}^{S_{1}}\,(3\,X_{A}+1534-156\,\pi^{2})-b_{3}^{S_{1}}\,(10\,X _{A}+\frac{118933}{3}-4020\,\pi^{2})\Big{)}\] \[+b_{3}^{S_{2}}\,\Big{(}b_{0}^{S_{1}}\,(X_{A}+\frac{2956}{9}-\frac {100}{3}\,\pi^{2})+b_{2}^{S_{1}}\,(6\,X_{A}+\frac{198332}{3}-6700\,\pi^{2})\] \[-b_{1}^{S_{1}}\,(3\,X_{A}-\frac{20743}{3}+700\,\pi^{2})-b_{3}^{S_ {1}}\,(10\,X_{A}-\frac{3585910}{9}+\frac{121100}{3}\,\pi^{2})\Big{)}\Big{]}\] \[-\bar{\gamma}_{\chi}^{S_{1}}\,\bar{\gamma}_{\chi}^{S_{2}}\,X_{A}^ {2}\Big{\}}, \tag{27}\]
\[A_{3}^{i}\,\approx\,-6\,\pi\,\alpha_{s}\Big{\{}\bar{\gamma}_{\chi }^{S_{1}}\,\Big{[}b_{0}^{S_{2}}\,(X_{A}^{2}-2\,X_{A}+\frac{\pi^{2}}{3})+6\,b_{2 }^{S_{2}}\,(X_{A}^{2}-\frac{16}{3}\,X_{A}+\frac{15}{2}+\frac{\pi^{2}}{3})\] \[+3\,b_{1}^{S_{2}}\,(X_{A}^{2}-4\,X_{A}+4+\frac{\pi^{2}}{3})+10\,b_ {3}^{S_{2}}\,(X_{A}^{2}-\frac{13}{9}\,X_{A}+\frac{191}{18}+\frac{\pi^{2}}{3}) \Big{]}\] \[+\bar{\gamma}_{\chi}^{S_{2}}\,\Big{[}b_{0}^{S_{1}}\,(X_{A}^{2}-2\, X_{A}+\frac{\pi^{2}}{3})+6\,b_{2}^{S_{1}}\,(X_{A}^{2}-\frac{16}{3}\,X_{A}+ \frac{15}{2}+\frac{\pi^{2}}{3})\] \[-3\,b_{1}^{S_{1}}\,(X_{A}^{2}-4\,X_{A}+4+\frac{\pi^{2}}{3})-10\,b_ {3}^{S_{1}}\,(X_{A}^{2}-\frac{13}{9}\,X_{A}+\frac{191}{18}+\frac{\pi^{2}}{3}) \Big{]}\Big{\}}, \tag{28}\]
\[A_{1}^{f}\,=\,A_{2}^{f}\,=\,0, \tag{29}\]
\[A_{3}^{f}\,\approx\,-6\,\pi\,\alpha_{s}\,X_{A}\,\Big{\{}\bar{ \gamma}_{\chi}^{S_{1}}\,\Big{[}b_{0}^{S_{2}}\,(2\,X_{A}-1)+b_{2}^{S_{2}}\,(12\,X_ {A}-31)\] \[+b_{1}^{S_{2}}\,(6\,X_{A}+11)+b_{3}^{S_{2}}\,(20\,X_{A}-\frac{187}{ 3})\Big{]}\] \[-\bar{\gamma}_{\chi}^{S_{2}}\,\Big{[}b_{0}^{S_{1}}\,(2\,X_{A}-1)+b_ {2}^{S_{1}}\,(12\,X_{A}-31)\] \[-b_{1}^{S_{1}}\,(6\,X_{A}+11)-b_{3}^{S_{1}}\,(20\,X_{A}-\frac{187}{ 3})\Big{]}\Big{\}}, \tag{30}\]
where \(X_{A}\) has a definition and role similar to those of the parameter \(X_{H}\) in Eq.(19), regularizing the end-point divergences appearing in the weak annihilation topologies. Within the QCDF approach, \(X_{H}\) and \(X_{A}\) are usually parameterized as
\[X_{H}\,=\,\ln\Bigl{(}\frac{m_{B}}{\Lambda_{h}}\Bigr{)}\,(1+\rho_{H}\,e^{i\,\phi _{H}}), \tag{31}\]
\[X_{A}\,=\,\ln\Bigl{(}\frac{m_{B}}{\Lambda_{h}}\Bigr{)}\,(1+\rho_{A}\,e^{i\,\phi_{A} }), \tag{32}\]
with \(\Lambda_{h}\) = 0.5 GeV [57], where \(\rho_{H,A}\) and \(\phi_{H,A}\) are undetermined parameters. Theoretically, \(X_{H}\) and \(X_{A}\) are related to the contributions from hard spectator scattering and weak annihilation, respectively, so their physical implications are intrinsically different. Moreover, these parameters should depend on the specific process and hadrons, because they actually originate from the convolution integrals of hard scattering functions with hadronic DAs. In practical applications of the QCDF approach, however, \(X_{H}\) and \(X_{A}\) are usually treated as approximately universal quantities in order to reduce the number of phenomenological model parameters. Here, we consider two special cases. The first case (C1) uses as few parameters as possible, for example, \(\rho_{H}\) = \(\rho_{A}\) = 1 and \(\phi_{H}\) = \(\phi_{A}\) = \(-55^{\circ}\) [57]. In the second case (C2), the factorizable and nonfactorizable WA contributions are treated independently, and two quantities \(X_{A}^{f}\) and \(X_{A}^{i}\) are introduced to replace \(X_{A}\). A global fit to the \(B\to PP\) decays with the approximation \(X_{H}\approx X_{A}^{i}\) gives (\(\rho_{A}^{i}\), \(\phi_{A}^{i}\)) = (2.98, \(-105^{\circ}\)) and (\(\rho_{A}^{f}\), \(\phi_{A}^{f}\)) = (1.18, \(-40^{\circ}\)) [68].
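For orientation, the following minimal Python sketch evaluates the parameterizations of Eq.(31) and Eq.(32) for the two cases considered here; the numerical \(B\)-meson mass and the helper-function name are ours, introduced only for illustration.

```python
import cmath
import math

def end_point_parameter(m_B, Lambda_h, rho, phi_deg):
    """X = ln(m_B / Lambda_h) * (1 + rho * exp(i*phi)), as in Eqs.(31)-(32)."""
    return math.log(m_B / Lambda_h) * (1.0 + rho * cmath.exp(1j * math.radians(phi_deg)))

m_B, Lambda_h = 5.279, 0.5  # GeV; the B-meson mass used here is illustrative

# Case C1: universal parameters rho_H = rho_A = 1, phi_H = phi_A = -55 deg
X_C1 = end_point_parameter(m_B, Lambda_h, 1.0, -55.0)
# Case C2: topology-dependent annihilation parameters from the B -> PP fit
X_A_i = end_point_parameter(m_B, Lambda_h, 2.98, -105.0)
X_A_f = end_point_parameter(m_B, Lambda_h, 1.18, -40.0)

print(f"C1: X_H = X_A = {X_C1:.3f}")
print(f"C2: X_A^i = {X_A_i:.3f}, X_A^f = {X_A_f:.3f}")
```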
## III Input parameters
There are many input parameters in the numerical calculations, and they generally fall into two categories. The first category has been well determined experimentally or theoretically and is listed explicitly in Ref. [1], such as the Fermi coupling constant \(G_{F}\), the Wilson coefficients, the CKM elements, and the hadron masses. Their central values in Ref. [1] will be regarded as the default inputs unless otherwise specified. The second category consists of the nonperturbative parameters, such as the decay constants, mesonic transition form factors, and hadronic DAs, which produce the main theoretical uncertainties. The choice of these parameters requires some caution.
### The CKM elements
The Wolfenstein parameterization is traditionally and commonly used for the unitary CKM matrix, owing to the transparent power hierarchy in the Wolfenstein parameter \(\lambda\) among the CKM elements. The values of the four Wolfenstein parameters are [1],
\[A=0.790^{+0.017}_{-0.012},\quad\lambda=0.22650\pm 0.00048,\quad\bar{\rho}=0.14 1^{+0.016}_{-0.017},\quad\bar{\eta}=0.357\pm 0.011. \tag{33}\]
### The decay constants
The isospin-averaged \(B\) meson decay constants from lattice QCD are [1],
\[f_{B_{u,d}} = 190.0\pm 1.3\,\mbox{MeV}, \tag{34}\] \[f_{B_{s}} = 230.3\pm 1.3\,\mbox{MeV}. \tag{35}\]
Two kinds of decay constants are defined for the scalar mesons, _i.e._,
\[\langle S(p)\,|\bar{q}_{1}\,\gamma^{\mu}\,q_{2}\,|\,0\rangle\,=\,f_{S}\,p^{\mu}, \tag{36}\]
\[\langle S(p)\,|\bar{q}_{1}\,q_{2}\,|\,0\rangle\,=\,m_{S}\,\bar{f}_{S}(\mu). \tag{37}\]
The scale-dependent scalar decay constant \(\bar{f}_{S}(\mu)\) and the vector decay constant \(f_{S}\) are related by the equation of motion,
\[f_{S}\,=\,\bar{f}_{S}(\mu)\,\bar{\mu}_{S}^{-1}(\mu). \tag{38}\]
Clearly, the vector decay constant \(f_{S}\) is proportional to the running mass difference, \(\Delta\overline{m}\), between the two valence quarks residing in the scalar meson. For the light scalars, \(f_{S}\) is therefore strongly suppressed by the small \(\Delta\overline{m}\), especially for the electrically neutral scalar mesons, owing to charge conjugation invariance or conservation of the vector current; for example, \(f_{S}\) vanishes for the \(a_{0}^{0}\) meson. At the same time, the scalar decay constants \(\bar{f}_{S}\) remain finite. A preferable scheme is therefore to work with the scalar decay constants \(\bar{f}_{S}\), which is one of the main reasons for the factors in Eq.(14). In addition, a scalar meson and its antiparticle have the same scalar decay constant, \(\bar{f}_{S}\) = \(\bar{f}_{\bar{S}}\). This implies that the vector decay constants satisfy \(f_{S}\) = \(-f_{\bar{S}}\) from Eq.(12) and Eq.(38), which again results in \(f_{S}\) = 0 for the \(a_{0}^{0}\) meson.
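To make the suppression by \(\Delta\overline{m}\) explicit, the sketch below evaluates Eq.(38) assuming the common definition \(\bar{\mu}_{S}(\mu)=m_{S}/[m_{2}(\mu)-m_{1}(\mu)]\); the quark running-mass values used are illustrative assumptions, not fitted inputs of this work.

```python
def vector_decay_constant(fbar_S, m_S, m2, m1):
    """Eq.(38) with the assumed definition mu_S = m_S / (m2 - m1), i.e.
    f_S = fbar_S * (m2 - m1) / m_S, proportional to the quark mass difference."""
    return fbar_S * (m2 - m1) / m_S

# Illustrative running masses at mu = 1 GeV (assumed values, in GeV)
m_s, m_u = 0.12, 0.003
print(vector_decay_constant(0.542, 1.43, m_s, m_u))  # K0*(1430), fbar from Table 1 (S2)
print(vector_decay_constant(0.456, 1.45, m_u, m_u))  # neutral a0(1450): Delta m = 0 -> f_S = 0
```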
\begin{table}
\begin{tabular}{l c c c c c c} & \multicolumn{2}{c}{this work} & \multicolumn{2}{c}{Refs. [5]} & Ref. [73] & Ref. [74] \\ \cline{2-7} scenarios & S1 & S2 & S1 & S2 & S2 & S2 \\ \hline \(\bar{f}_{K^{*}_{0}(1430)}\) & \(234^{+85}_{-87}\) & \(542^{+180}_{-190}\) & \(-300\pm 30\) & \(445\pm 50\) & \(427\pm 85\) & \(358\pm 1\) \\ \(\bar{f}_{a_{0}(1450)}\) & \(256^{+56}_{-54}\) & \(456^{+57}_{-56}\) & \(-280\pm 30\) & \(460\pm 50\) & & \(375\pm 2\) \\ \end{tabular}
\end{table}
Table 1: The scalar decay constants \(\bar{f}_{S}\) (in units of MeV) at the scale of \(\mu\) = 1 GeV. Here the theoretical errors come mainly from the Gaussian parameter \(\beta\) responsible for mesonic wave functions.
Experimentally, these decay constants can be extracted from the purely leptonic decays of the scalar mesons. However, the scalar mesons usually appear as resonances and decay dominantly through the strong interactions, so the probability of their leptonic decays should in principle be very small. Such leptonic decays have not been observed to date, and experimental data on the decay constants of the scalar mesons are still unavailable. The theoretical values of the scalar decay constants \(\bar{f}_{S}\) corresponding to the S1 and S2 scenarios are listed in Table 1. For the S2 scenario, the central values of \(\bar{f}_{S}\) obtained with the covariant light-front quark model (CLFQM) in this paper generally agree, within errors, with those from QCD sum rules [5; 73] and light-cone sum rules [74]. Of course, the errors arising from the Gaussian parameter \(\beta\) responsible for the mesonic wave functions are still very large, due to the inadequate data and our currently insufficient understanding of the scalar mesons, especially for \(\bar{f}_{K^{*}_{0}(1430)}\) in the S2 scenario. Moreover, the values of \(\bar{f}_{S}\) in the S2 scenario are about twice as large as those in the S1 scenario, which will inevitably induce a clear hierarchy between the branching ratios obtained in the two scenarios, because the decay amplitudes are directly proportional to the decay constants. Such a significant difference between branching ratios might be used to distinguish whether these scalar mesons are the \(1P\) or \(2P\) states.
### Hadronic transition form factors
The form factors of \(B\to S\) transitions are defined as [4; 5; 6],
\[\langle S(k)\,|\bar{q}\,\gamma_{\mu}\,\gamma_{5}\,b\,|B(p)\rangle = -i\,\Big{[}\Big{(}P_{\mu}-\frac{m_{B}^{2}-m_{S}^{2}}{q^{2}}\,q_{ \mu}\Big{)}\,U_{1}(q^{2})+\frac{m_{B}^{2}-m_{S}^{2}}{q^{2}}\,q_{\mu}\,U_{0}(q^ {2})\Big{]}, \tag{39}\]
where \(P_{\mu}=p_{\mu}+k_{\mu}\) and \(q_{\mu}=p_{\mu}-k_{\mu}\). \(U_{0}(q^{2})\) and \(U_{1}(q^{2})\) denote the longitudinal and transverse form factors, respectively. To regulate the singularity at the pole \(q^{2}=0\), the relation \(U_{0}(0)=U_{1}(0)\) is required. The values of \(U_{0,1}(0)\) can be obtained by fitting the \(q^{2}\) dependence of the form factors with the 3-parameter formula [5; 6],
\[U_{i}(q^{2})\,=\,\frac{U_{i}(0)}{1-a\,(q^{2}/m_{B}^{2})+b\,(q^{2}/m_{B}^{2})^ {2}}. \tag{40}\]
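As an illustration of Eq.(40), the sketch below evaluates the transverse \(B\to K_{0}^{*}(1430)\) form factor using the central S2 values from Table 2; the numerical \(B\)-meson mass is an assumption introduced only for this example.

```python
def U(q2, U0, a, b, m_B=5.279):
    """Eq.(40): U(q^2) = U(0) / (1 - a*(q^2/m_B^2) + b*(q^2/m_B^2)^2); m_B in GeV."""
    r = q2 / m_B**2
    return U0 / (1.0 - a * r + b * r**2)

# B -> K0*(1430) transverse form factor, S2 scenario, central values from Table 2
for q2 in (0.0, 1.0, 5.0):  # q^2 in GeV^2
    print(q2, round(U(q2, U0=0.29, a=1.27, b=0.33), 4))
```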
The form factors obtained from CLFQM are listed in Table 2. It is clearly seen that (1) the central values of \(U_{0,1}(0)\) in this work are very close to those of Ref. [5]; they are slightly larger (smaller) than those given in Ref. [5] for the S2 (S1) scenario, with the differences coming mainly from the quark running masses and the Gaussian parameter \(\beta\). (2) For the S2 scenario, the \(SU(3)\) flavor symmetry among the central values of \(U_{0,1}(0)\) seems to hold well. (3) For the \(B\to K_{0}^{*}(1430)\), \(a_{0}(1450)\) transition form factors, the differences between the S1 and S2 scenarios given in Ref. [5] are less pronounced than those obtained in this work. Here the ratio of \(U_{0,1}^{B\to K_{0}^{*},a_{0}}(0)\) between the S1 and S2 scenarios is approximately 2/3, so the corresponding ratio of branching ratios, which is proportional to the square of the form factors, is approximately 1/2. The bigger the difference in this ratio, the easier the measurement becomes, and the more helpful it is for distinguishing whether these scalar mesons are the \(1P\) or \(2P\) states from the semileptonic \(B\to S\ell\nu\) decays in future experiments, and for checking the different theoretical predictions.
### Mesonic light cone DAs
The definition of mesonic light cone DAs is [4; 5],
\[\left\langle S(p)\,\right|q_{2\beta}(z_{2})\,q_{1\alpha}(z_{1})\left|0\right\rangle\]
\begin{table}
\begin{tabular}{c|c|c|c c c|c c c} & & transition & \(U_{1}(0)\) & \(a\) & \(b\) & \(U_{0}(0)\) & \(a\) & \(b\) \\ \hline \multirow{8}{*}{S1} & \multirow{4}{*}{this} & \(B\to K_{0}^{*}(1430)\) & 0.18\(\pm\)0.01 & 1.03 & 0.15 & 0.18\(\pm\)0.01 & \(-\)0.23 & 0.29 \\ & & \(B\to a_{0}(1450)\) & 0.19\(\pm\)0.01 & 1.01 & 0.16 & 0.19\(\pm\)0.01 & \(-\)0.17 & 0.30 \\ & work & \(B_{s}\to K_{0}^{*}(1430)\) & 0.23\(\pm\)0.02 & 0.92 & 0.29 & 0.23\(\pm\)0.02 & \(-\)0.23 & 0.36 \\ \cline{2-10} & \multirow{2}{*}{Ref. [5]} & \(B\to K_{0}^{*}(1430)\) & 0.21 & 1.59 & 0.91 & 0.21 & 0.59 & 0.09 \\ & & \(B\to a_{0}(1450)\) & 0.21 & 1.66 & 1.00 & 0.21 & 0.73 & 0.09 \\ \hline \multirow{8}{*}{S2} & \multirow{4}{*}{this} & \(B\to K_{0}^{*}(1430)\) & 0.29\(\pm\)0.02 & 1.27 & 0.33 & 0.29\(\pm\)0.02 & 0.16 & 0.11 \\ & & \(B\to a_{0}(1450)\) & 0.29\(\pm\)0.02 & 1.33 & 0.38 & 0.29\(\pm\)0.02 & 0.32 & 0.06 \\ \cline{1-1} & work & \(B_{s}\to K_{0}^{*}(1430)\) & 0.28\(\pm\)0.02 & 1.58 & 0.84 & 0.28\(\pm\)0.02 & 0.55 & 0.20 \\ \cline{1-1} & \multirow{2}{*}{Ref. [5]} & \(B\to K_{0}^{*}(1430)\) & 0.26 & 1.52 & 0.64 & 0.26 & 0.44 & 0.05 \\ \cline{1-1} & & \(B\to a_{0}(1450)\) & 0.26 & 1.57 & 0.70 & 0.26 & 0.55 & 0.03 \\ \end{tabular}
\end{table}
Table 2: Form factors for the \(B\to S\) transitions obtained from CLFQM, considering the S1 and S2 scenarios for the scalar mesons.
\[=\,\frac{1}{4}\,\bar{f}_{S}\int_{0}^{1}\!dx\,e^{i\,(x\,p\cdot z_{2}+\bar{x}\,p\cdot z_{1})}\,\Big{\{}\,\not\!p\,\Phi_{S}(x)+m_{S}\,\Big{[}\Phi_{S}^{s}(x)-\sigma_{\mu\nu}\,p^{\mu}\,z^{\nu}\frac{\Phi_{S}^{\sigma}(x)}{6}\Big{]}\Big{\}}_{\alpha\beta}, \tag{41}\]
where the arguments are \(\bar{x}\) = 1 \(-\) \(x\) and \(z\) = \(z_{2}\) \(-\) \(z_{1}\). \(\Phi_{S}\) is the twist-2 light-cone DA, while the two twist-3 light-cone DAs \(\Phi_{S}^{s,\sigma}\) are related to each other by the equation of motion [5],
\[\xi\,\Phi_{S}^{s}(x)+\frac{1}{6}\,\frac{d\,\Phi_{S}^{\sigma}(x)}{d\,x}\,=\,0, \tag{42}\]
where \(\xi\) = \(x\)\(-\)\(\bar{x}\) = 2 \(x\)\(-\) 1. The twist-2 DAs are written as [4; 5]
\[\Phi_{S}(x,\,\mu)\,=\,6\,x\,\bar{x}\,\Big{\{}b_{0}^{S}+\sum_{n=1}^{\infty}b_{n }^{S}(\mu)\,C_{n}^{3/2}(\xi)\Big{\}}, \tag{43}\]
where the Gegenbauer moments \(b_{i}^{S}\), corresponding to the expansion coefficients of Gegenbauer polynomials \(C_{i}^{3/2}(\xi)\), are hadronic parameters. The asymptotic forms of the twist-3 DAs are respectively written as [5],
\[\Phi_{S}^{s}(x,\,\mu)\,=\,1, \tag{44}\]
\[\Phi_{S}^{\sigma}(x,\,\mu)\,=\,6\,x\,\bar{x}. \tag{45}\]
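For concreteness, the sketch below builds the truncated twist-2 DA of Eq.(43) from the Gegenbauer moments of Table 3 (the central S1 values for \(K_{0}^{*}(1430)\) are used); the final comment records the elementary check that the asymptotic twist-3 forms of Eqs.(44)-(45) satisfy the equation of motion, Eq.(42).

```python
import numpy as np
from scipy.special import gegenbauer

def twist2_DA(x, b):
    """Truncated Eq.(43): Phi_S(x) = 6 x (1-x) [ b0 + sum_n b_n C_n^{3/2}(2x-1) ]."""
    xi = 2.0 * x - 1.0
    series = b[0] + sum(bn * gegenbauer(n, 1.5)(xi) for n, bn in enumerate(b[1:], 1))
    return 6.0 * x * (1.0 - x) * series

# K0*(1430), S1 scenario: central values of (b0, b1, b2, b3) from Table 3
x = np.linspace(0.0, 1.0, 5)
print(twist2_DA(x, [0.08, -0.15, 0.06, -0.09]))

# Equation-of-motion check, Eq.(42), with the asymptotic twist-3 DAs:
# xi * Phi^s + (1/6) d(Phi^sigma)/dx = (2x - 1) + (1/6)(6 - 12x) = 0.
```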
The Gegenbauer moments \(b_{n}^{S}\) in the twist-2 DAs \(\Phi_{S}\) are listed in Table 3. Our comments are as follows. (1) For either the S1 or S2 scenario, the orbital angular momentum is \(L=1\) between the
\begin{table}
\begin{tabular}{c|c|c|c c c c} & & mesons & \(b_{0}^{S}\) & \(b_{1}^{S}\) & \(b_{2}^{S}\) & \(b_{3}^{S}\) \\ \hline \multirow{4}{*}{S1} & this & \(K_{0}^{*}(1430)\) & 0.08\(\pm\)0.01 & \(-\)0.15\(\pm\)0.05 & 0.06\(\pm\)0.01 & \(-\)0.09\(\pm\)0.05 \\ & work & \(a_{0}(1450)\) & 0 & \(-\)0.17\(\pm\)0.06 & 0 & \(-\)0.19\(\pm\)0.03 \\ \cline{2-7} & Ref. [6] & \(K_{0}^{*}(1430)\) & 0 & 0.58\(\pm\)0.07 & 0 & \(-\)1.20\(\pm\)0.08 \\ & & \(a_{0}(1450)\) & 0 & 0.89\(\pm\)0.20 & 0 & \(-\)1.38\(\pm\)0.18 \\ \hline \multirow{4}{*}{S2} & this & \(K_{0}^{*}(1430)\) & 0.08\(\pm\)0.01 & \(-\)0.13\(\pm\)0.05 & \(-\)0.03\(\pm\)0.00 & \(-\)0.01\(\pm\)0.00 \\ & work & \(a_{0}(1450)\) & 0 & \(-\)0.17\(\pm\)0.03 & 0 & \(-\)0.03\(\pm\)0.01 \\ \cline{1-1} \cline{2-7} & Ref. [6] & \(K_{0}^{*}(1430)\) & 0 & \(-\)0.57\(\pm\)0.13 & 0 & \(-\)0.42\(\pm\)0.22 \\ \cline{1-1} & & \(a_{0}(1450)\) & 0 & \(-\)0.58\(\pm\)0.12 & 0 & \(-\)0.49\(\pm\)0.15 \\ \end{tabular}
\end{table}
Table 3: The values of the Gegenbauer moments at the scale of \(\mu\) = 1 GeV, considering the S1 and S2 scenarios for the scalar mesons. The results in this work are obtained from CLFQM, and those of Ref. [6] from QCD sum rules.
two components of the scalar mesons. Using the parity of the Gegenbauer polynomials and isospin symmetry, the wave function should in principle be antisymmetric, i.e., acquire a factor \((-1)^{L}\), under the exchange of the longitudinal momentum fractions of the two valence quarks, \(x\leftrightarrow\bar{x}\); that is, the Gegenbauer moments \(b_{n}^{S}\) with even \(n\) should vanish. This feature is clearly demonstrated for the \(a_{0}(1450)\) mesons in Table 3 and Fig. 1 (a). (2) For the \(K_{0}^{*}(1430)\) mesons, the flavor \(SU(3)\) symmetry breaking effects should be given due consideration. The DAs of the \(K_{0}^{*}(1430)\) mesons should be asymmetric under \(x\leftrightarrow\bar{x}\), _i.e._, the Gegenbauer moments \(b_{n}^{S}\) with both even and odd \(n\) are nonzero. This property is properly illustrated by our results in Table 3 and Fig. 1 (b). (3) According to the definition of hadronic matrix elements given by Eq.(3.18) and Eq.(3.20) in Ref. [5], the Gegenbauer moments \(b_{1}^{S}\) in the DAs and the decay constant \(\bar{f}_{S}\) are directly interrelated. The positive values of \(b_{1}^{S}\) correspond to the negative values of \(\bar{f}_{S}\) listed in Table 1 for the S1 scenario, and vice versa for the S2 scenario. In this sense, the signs of \(b_{1}^{S}\) obtained from CLFQM in this work and from QCD sum rules in Ref. [6] are self-consistent.
## IV Numerical results and discussions
In the rest frame of the \(B\) meson, the \(CP\)-averaged branching ratio is defined as,
\[\mathcal{B}\,=\,\frac{\tau_{B}}{16\pi}\,\frac{p_{\rm cm}}{m_{B}^{2}}\,\big{\{} |\mathcal{A}(B{\rightarrow}f)|^{2}+|\mathcal{A}(\overline{B}{\rightarrow} \overline{f})|^{2}\big{\}}, \tag{46}\]
where \(\tau_{B}\) is the \(B\) meson lifetime and \(p_{\rm cm}\) is the common momentum of the final-state mesons in the \(B\) rest frame.
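A minimal sketch of Eq.(46) in natural units is given below; the expression for \(p_{\rm cm}\) is the standard two-body kinematic formula, and all symbols are illustrative placeholders rather than fitted inputs of this work.

```python
import math

def p_cm(m_B, m1, m2):
    """Final-state momentum in the B rest frame (standard two-body kinematics)."""
    s = (m_B**2 - (m1 + m2)**2) * (m_B**2 - (m1 - m2)**2)
    return math.sqrt(s) / (2.0 * m_B)

def branching_ratio(amp_B, amp_Bbar, tau_B, m_B, m1, m2):
    """CP-averaged branching ratio of Eq.(46), in natural units
    (amplitudes in GeV, tau_B in GeV^-1, masses in GeV)."""
    return tau_B / (16.0 * math.pi) * p_cm(m_B, m1, m2) / m_B**2 \
        * (abs(amp_B)**2 + abs(amp_Bbar)**2)
```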
Figure 1: The twist-2 DAs for the \(a_{0}(1450)\) and \(K_{0}^{*}(1430)\). The dashed and solid lines correspond to the truncations up to \(n\) = 1 and 3, respectively.
The direct \(CP\) asymmetry is defined as,
\[A_{CP}\,=\,\frac{\Gamma(\overline{B}{\rightarrow}f)-\Gamma(B{\rightarrow} \overline{f})}{\Gamma(\overline{B}{\rightarrow}f)+\Gamma(B{\rightarrow} \overline{f})}. \tag{47}\]
When the final states are common to the neutral \(B^{0}_{d,s}\) and \(\overline{B}^{0}_{d,s}\) decays, the \(CP\) violating asymmetry is defined as,
\[A_{CP}\,=\,A^{\rm mix}_{CP}\sin(x\,\Delta m\,t)-A^{\rm dir}_{CP}\cos(x\, \Delta m\,t), \tag{48}\]
\[A^{\rm mix}_{CP}\,=\,\frac{2\,{\cal I}m(\lambda_{f})}{1+|\lambda_{f}|^{2}}, \tag{49}\]
\[A^{\rm dir}_{CP}\,=\,\frac{1-|\lambda_{f}|^{2}}{1+|\lambda_{f}|^{2}}, \tag{50}\]
\[\lambda_{f}\,=\,\left\{\begin{array}{l}\frac{V^{*}_{tb}\,V_{td}}{V_{tb}\,V^{*}_{td}}\,\frac{{\cal A}(\overline{B}^{0}_{d}{\rightarrow}f)}{{\cal A}(B^{0}_{d}{\rightarrow}f)},\quad\mbox{for the $B^{0}_{d}$-$\overline{B}^{0}_{d}$ system},\\ \\ \frac{V^{*}_{tb}\,V_{ts}}{V_{tb}\,V^{*}_{ts}}\,\frac{{\cal A}(\overline{B}^{0}_{s}{\rightarrow}f)}{{\cal A}(B^{0}_{s}{\rightarrow}f)},\quad\mbox{for the $B^{0}_{s}$-$\overline{B}^{0}_{s}$ system}.\end{array}\right. \tag{51}\]
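The observables of Eqs.(49)-(50) follow directly from \(\lambda_{f}\), as in the sketch below; the numerical value of \(\lambda_{f}\) is illustrative only and is not a prediction of this work.

```python
def cp_asymmetries(lam):
    """Eqs.(49)-(50): mixing-induced and direct CP asymmetries from lambda_f."""
    denom = 1.0 + abs(lam)**2
    return 2.0 * lam.imag / denom, (1.0 - abs(lam)**2) / denom

print(cp_asymmetries(0.8 + 0.3j))  # illustrative value, not a prediction of this work
```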
The numerical results for the \(CP\)-averaged branching ratios and \(CP\) asymmetries of the \(B\to SS\) decays are listed in Tables 4 and 5. Here we use the symbols T for the color-favored tree processes, C for the color-suppressed tree processes, P for the penguin-dominated processes, and A for the pure annihilation processes. Our comments are as follows.
(1) As discussed earlier, the LO contributions are suppressed by the factor \(N_{i}(S_{2})\) in Eq.(13), so the NLO contributions are very important for the \(B\to SS\) decays. Table 4 clearly shows that, for both the S1 and S2 scenarios, the NLO contributions to the branching ratios are generally significant, even amounting to a few-fold change relative to the LO contributions (the numbers in the "NF" columns) for some processes, such as the C-class \(B\) decays, where the NLO contributions are proportional to the large Wilson coefficient \(C_{1}\).
(2) The hard spectator scattering amplitudes \(H_{i}(S_{1}\,S_{2})\) belong to the nonfactorizable NLO contributions in Eq.(13) within the QCDF approach, so the NLO contributions should be sensitive to the parameter \(X_{H}\), which is closely related to the WA parameter \(X_{A}\) in this work. In Table 4, the differences between the branching ratios of the C1 and C2 cases are still evident, for example for the \(B\to a_{0}a_{0}\) and \(B_{s}\to K^{*}_{0}\overline{K}^{*}_{0}\) decays, as well as for the A-class decays. Additionally, the parameters \(X_{H,A}\) are always accompanied by the Gegenbauer moments in Eq.(15) and Eqs.(26)--(30). The smaller uncertainties of the Gegenbauer moments yield branching ratios with smaller theoretical uncertainties, compared with those in Refs. [5; 6].
(3) It is well known that the T-class \(B\) decays are induced by the external \(W\) boson emission interactions in the factorization approach, and their amplitudes are proportional to the large Wilson coefficient \(C_{1}\) or \(\alpha_{1}\), so these processes should theoretically have relatively large branching ratios. It might therefore seem curious that in Table 4 the branching ratios for the T-class \(B\to SS\) decays are very small, \(\sim\mathcal{O}(10^{-7})\); some are even smaller than the branching ratios of the purely WA decays. One of the main reasons is that the LO contributions to the T-class decay amplitudes are seriously suppressed by the factor \(N_{i}(S_{2})\) in Eq.(13), with \(N_{i}(a_{0})\sim 0.002\), while their NLO contributions are suppressed by both the factor \(\alpha_{s}/N_{c}\) and the small coefficient \(C_{2}\).
(4) It is obvious in Table 4 that, for both the C1 and C2 cases, the branching ratios of the S2 scenario are larger than the corresponding ones of the S1 scenario, because the decay amplitudes are proportional to the product of the scalar meson decay constants and the form factors, and the numerical values of both the decay constants (see Table 1) and the form factors (see Table 2) are larger in the S2 scenario than in the S1 scenario. In particular, for the P-class \(B\to a_{0}\overline{K}^{*}_{0}\) and \(B_{s}\to K^{*}_{0}\overline{K}^{*}_{0}\) decays, where the penguin contributions are largely enhanced by the CKM elements \(V_{tb}V^{*}_{ts}\sim\mathcal{O}(\lambda^{2})\) relative to the possible tree contributions associated with \(V_{ub}V^{*}_{us}\sim\mathcal{O}(\lambda^{4})\), the branching ratios can reach up to \(\mathcal{O}(10^{-5})\) in the S2 scenario. These flagship decay modes should be given priority in future experimental searches for the \(B\to SS\) decays.
(5) Experimentally, more than ten years ago, a hint of the \(B^{0}\to K^{*0}_{0}\overline{K}^{*0}_{0}\) decay with a significance of 0.8 \(\sigma\) was reported by the Belle Collaboration in the \(K^{+}K^{-}\pi^{+}\pi^{-}\) final states [75], with a branching ratio \(\mathcal{B}=(3.21^{+2.89+2.31}_{-2.85-2.32})\times 10^{-6}\) and an upper limit \(\mathcal{B}<\) 8.4\(\times 10^{-6}\) at the 90% confidence level. Our results are marginally consistent with the data, considering the large experimental errors. Theoretically, besides the small Wilson coefficients and the CKM elements \(V_{tb}V^{*}_{td}\sim\mathcal{O}(\lambda^{3})\), the relatively small branching ratios might arise from the Gegenbauer moments, which result in a flatter shape of the scalar mesonic DAs; this leads to a milder overlap among the participating mesonic DAs and finally gives more modest decay amplitudes. Experimentally, it is entirely necessary and desirable to improve the accuracy of the measurements and to investigate more and more \(B\to SS\) decays in the future, in order to test the various theoretical models and explore the properties of the scalar mesons.
(6) The weak annihilation amplitudes are thought to be power suppressed in the QCDF approach [50; 51], so the purely WA \(B\) decays should in principle have very small branching ratios. Evidence for this has been demonstrated in the \(B\to K^{\pm}K^{*\mp}\) and \(B_{s}\to\pi\pi\) decays, both theoretically [57; 58; 59; 60; 66; 67; 68; 69; 70] and experimentally [1]. Similar patterns also appear in Table 4 for the A-class \(B\to SS\) decays, with branching ratios of \({\cal B}\sim{\cal O}(10^{-7})\). Remarkably, in some cases the branching ratios of the A-class \(B\to SS\) decays with appropriate parameters can catch up with or even overtake those of the T-class decays, which is very unlike the hadronic \(B\to PP\), \(PV\) decays. These characteristics of the \(B\to SS\) decays are closely related to the properties of the scalar mesons, such as the decay constants and DAs. Additionally, the branching ratios of the A-class \(B\to SS\) decays are very sensitive to the parameter \(X_{A}\), in both the S1 and S2 scenarios. It is clear that with the topology-dependent parameters of the C2 case, _i.e._, \(X_{A}^{i}\neq X_{A}^{f}\), the corresponding branching ratios are relatively larger, due to the larger value of \(\rho_{A}^{i}\). Our understanding of the WA contributions to the nonleptonic \(B\) decays within the QCDF approach is not yet comprehensive. Albeit very challenging, experimental measurements of the A-class \(B\to SS\) decays would be interesting and helpful for exploring the underlying dynamical mechanism and the higher power corrections to HMEs.
(7) In Table 4, the theoretical uncertainties are very large, especially those from the hadronic parameters. Ratios of branching ratios are usually defined, on the one hand, to reduce the theoretical uncertainties and, on the other hand, to check potential symmetries or conserved quantities (for example, the observables \(R_{K,D}\) test the universality of the electroweak couplings to all charged leptons). Here, we give some ratios of branching ratios obtained with the universal parameter \(X_{A}\) for the C1 case, for example,
\[R_{1} = \frac{{\cal B}(B^{-}{\rightarrow}a_{0}^{-}\overline{K}_{0}^{*0})} {2\,{\cal B}(B^{-}{\rightarrow}a_{0}^{0}K_{0}^{*-})}\approx 1.04^{+0.00+0.16}_{-0.00-0.14}\ \mbox{(S1)},\ 1.06^{+0.00+0.11}_{-0.00-0.11}\ \mbox{(S2)}; \tag{52}\] \[R_{2} = \frac{{\cal B}(\overline{B}^{0}{\rightarrow}a_{0}^{+}K_{0}^{*-})} {2\,{\cal B}(\overline{B}^{0}{\rightarrow}a_{0}^{0}\overline{K}_{0}^{*0})} \approx 0.96^{+0.00+0.15}_{-0.00-0.14}\ \mbox{(S1)},\ 0.94^{+0.00+0.11}_{-0.00-0.10}\ \mbox{(S2)};\] (53) \[R_{3} = \frac{{\cal B}(\overline{B}_{s}^{0}{\rightarrow}K_{0}^{*0} \overline{K}_{0}^{*0})}{{\cal B}(\overline{B}_{s}^{0}{\rightarrow}K_{0}^{*+}K _{0}^{*-})}\approx 1.08^{+0.00+0.02}_{-0.01-0.02}\ \mbox{(S1)},\ 1.07^{+0.00+0.01}_{-0.00-0.01}\ \mbox{(S2)}. \tag{54}\]
All these ratios are expected to satisfy \(R_{1,2,3}\) = 1 in the \(SU(3)\) flavor symmetry limit.
(8) It is clear in Table 5 that the \(CP\) violating asymmetries depend on the parameter \(X_{A}\), which contains strong phases. It is known that within the QCDF approach the strong phases necessary for direct \(CP\) violation arise from the NLO contributions, which are of order \(\alpha_{s}\) or \(\Lambda_{\rm QCD}/m_{b}\) and thus suppressed compared with the LO contributions. However, as noted earlier, the LO contributions are seriously suppressed by the factor \(N_{i}(S_{2})\) in Eq.(13), which indirectly results in larger strong phases from the NLO contributions. These effects influence the direct \(CP\) violating asymmetries of the T- and C-class \(B\to SS\) decays more than those of the P-class ones, because of the larger Wilson coefficients for the T- and C-class decays. Larger direct \(CP\) asymmetries are expected for the \(B_{d}\to a_{0}a_{0}\) and \(B_{s}\to a_{0}K_{0}^{*}\) decays in both the S1 and S2 scenarios. By comparison, the absolute values of the direct \(CP\) violating asymmetries in the T- and C-class \(B\to SS\) decays are generally larger than those in the corresponding \(B\to PP\), \(PV\) decays [57; 58; 59; 60; 66; 67; 68; 69; 70]. In addition, in analogy with the so-called \(\pi K\) puzzle, the difference between the direct \(CP\) asymmetries of the \(B^{-}\to a_{0}^{-}\overline{K}_{0}^{*0}\) and \(\overline{B}^{0}\to a_{0}^{+}K_{0}^{*-}\) decays is estimated to be,
\[\Delta A_{CP} = A_{CP}(B^{-}{\rightarrow}a_{0}^{-}\overline{K}_{0}^{*0})-A_{CP}(\overline{B}^{0}{\rightarrow}a_{0}^{+}K_{0}^{*-}) \tag{55}\] \[= (5.95^{+0.18+1.69}_{-0.18-1.36})\%\ \mbox{(S1)},\ (5.48^{+0.17+1.21}_{-0.17-0.90})\%\ \mbox{(S2)},\]
with the universal parameter \(X_{A}=X_{H}\) for the C1 case, and
\[\Delta A_{CP} = (3.71^{+0.11+2.46}_{-0.11-1.95})\%\ \mbox{(S1)},\ (5.05^{+0.15+1.66}_{-0.15-1.40})\% \ \mbox{(S2)}, \tag{56}\]
with the topologically dependent parameters \(X_{A}^{i}\neq X_{A}^{f}\) for the C2 case. Unfortunately, no data are available on the \(CP\) asymmetries for the \(B\to SS\) decays at the moment.
## V Summary
To prepare for the coming high-precision measurements of \(B\) meson decays based on huge data samples, and to provide a ready and helpful reference for clarifying the open questions related to the scalar mesons, the hadronic charmless \(B\to SS\) decays are studied with the QCDF approach, where the symbol \(S\) denotes the scalar mesons \(K_{0}^{*}(1430)\) and \(a_{0}(1450)\). It is found that the LO contributions are proportional to the mass difference of the two valence quarks embedded in the scalar mesons, and are thereby seriously suppressed. This has two consequences. (1) The branching ratios for the \(B\to a_{0}a_{0}\) and \(a_{0}K_{0}^{*}\) decays belonging to the T- and C-classes are very small, of order \({\cal O}(10^{-7})\). (2) The NLO contributions become necessary and predominant for the \(B\to SS\) decays. With the updated values of the hadronic parameters obtained from CLFQM, including the transition form factors, the decay constants, and the Gegenbauer moments of the mesonic DAs for the two scenarios in which the scalar mesons in question are the \(1P\) or \(2P\) triplet states, the \(CP\)-averaged branching ratios and \(CP\) violating asymmetries are given with the universal end-point parameter \(X_{A}\) and with the topology-dependent parameters \(X_{A}^{i}\neq X_{A}^{f}\). The numerical results show the following. (1) The theoretical uncertainties of both the branching ratios and the direct \(CP\) asymmetries come mainly from the hadronic parameters. (2) The branching ratios for the \(B_{s}\to K_{0}^{*}\overline{K}_{0}^{*}\) decays and the purely weak annihilation decays \(B_{s}\to a_{0}a_{0}\) and \(B_{d}\to K_{0}^{*+}K_{0}^{*-}\), as well as the direct \(CP\) asymmetries for the \(B_{d}\to a_{0}a_{0}\) decays, are very sensitive to the parameter \(X_{A}\). (3) For the \(B\to a_{0}K_{0}^{*}\) and \(B_{s}\to K_{0}^{*}\overline{K}_{0}^{*}\) decays, the branching ratios in the S2 scenario are about one order of magnitude larger than those in the S1 scenario, and can reach \({\cal O}(10^{-5})\); these decays should be searched for and investigated first experimentally. (4) More focus and effort are needed to improve the theoretical precision, since the uncertainties come mainly from the hadronic parameters. Some ratios of branching ratios are given based on the \(SU(3)\) flavor symmetry. In addition, there are too little data available to draw any conclusions on whether the scalar mesons \(K_{0}^{*}(1430)\) and \(a_{0}(1450)\) are the \(1P\) or \(2P\) states. We hope that more and more \(B\to SS\) decays can be measured with ever higher precision at the high-luminosity colliders in the future.
## Acknowledgements
This work is supported by the National Natural Science Foundation of China (Grant Nos. 12275067, 12275068, 12135006, 12105078), Natural Science Foundation of Henan Province (Grant No. 222300420479), and Excellent Youth Foundation of Henan Province (Grant No. 212300410010).
## Appendix A The decay amplitudes for the \(B\to SS\) decays
Here, the following shorthand is used to simplify the decay amplitudes:
\[\lambda_{q}\left(\cdots\right)\,=\,\sum_{p=u,c}V_{pb}\,V_{pq}^{*}\left(\cdots\right) \tag{105}\]
\[\sqrt{2}\,{\cal A}(B^{-}{\rightarrow}a_{0}^{-}a_{0}^{0}) = \lambda_{d}\,\big{\{}A_{a_{0}^{-}a_{0}^{0}}\,\big{[}\delta_{u}^{p}\,\alpha_{2}-\alpha_{4}^{p}+\frac{3}{2}\,\alpha_{3,EW}^{p}+\frac{1}{2}\,\alpha_{4,EW}^{p}-\delta_{u}^{p}\,\beta_{2}-\beta_{3}^{p}-\beta_{3,EW}^{p}\big{]} \tag{106}\] \[\qquad+A_{a_{0}^{0}a_{0}^{-}}\,\big{[}\delta_{u}^{p}\,\alpha_{1}+\alpha_{4}^{p}+\alpha_{4,EW}^{p}+\delta_{u}^{p}\,\beta_{2}+\beta_{3}^{p}+\beta_{3,EW}^{p}\big{]}\big{\}},\] \[{\cal A}(B^{-}{\rightarrow}a_{0}^{-}\overline{K}_{0}^{*0}) = \lambda_{s}\,A_{a_{0}\,\overline{K}_{0}^{*}}\,\big{[}\alpha_{4}^{p}-\frac{1}{2}\,\alpha_{4,EW}^{p}+\delta_{u}^{p}\,\beta_{2}+\beta_{3}^{p}+\beta_{3,EW}^{p}\big{]}, \tag{107}\]
\[\sqrt{2}\,{\cal A}(B^{-}{\rightarrow}a_{0}^{0}K_{0}^{*-}) = \lambda_{s}\,\big{\{}A_{a_{0}\,K_{0}^{*}}\,\big{[}\delta_{u}^{p}\,\alpha_{1}+\alpha_{4}^{p}+\alpha_{4,EW}^{p}+\delta_{u}^{p}\,\beta_{2}+\beta_{3}^{p}+\beta_{3,EW}^{p}\big{]} \tag{108}\] \[\qquad+A_{K_{0}^{*}a_{0}}\,\big{[}\delta_{u}^{p}\,\alpha_{2}+\frac{3}{2}\,\alpha_{3,EW}^{p}\big{]}\big{\}},\] \[{\cal A}(B^{-}{\rightarrow}K_{0}^{*-}K_{0}^{*0}) = \lambda_{d}\,A_{K_{0}^{*-}\,K_{0}^{*0}}\,\big{[}\alpha_{4}^{p}-\frac{1}{2}\,\alpha_{4,EW}^{p}+\delta_{u}^{p}\,\beta_{2}+\beta_{3}^{p}+\beta_{3,EW}^{p}\big{]}, \tag{109}\]
\[{\cal A}(\overline{B}^{0}{\rightarrow}a_{0}^{+}a_{0}^{-}) = \lambda_{d}\,\big{\{}A_{a_{0}^{+}a_{0}^{-}}\,\big{[}\delta_{u}^{p} \,\alpha_{1}+\alpha_{4}^{p}+\alpha_{4,EW}^{p}+\beta_{3}^{p}+\beta_{4}^{p}- \frac{1}{2}\,\beta_{3,EW}^{p} \tag{110}\] \[\qquad-\frac{1}{2}\,\beta_{4,EW}^{p}\big{]}+A_{a_{0}^{-}a_{0}^{+} }\,\big{[}\delta_{u}^{p}\,\beta_{1}+\beta_{4}^{p}+\beta_{4,EW}^{p}\big{]}\big{\}},\]
\[{\cal A}(\overline{B}^{0}{\rightarrow}a_{0}^{0}a_{0}^{0}) = -\lambda_{d}\,A_{a_{0}\,a_{0}}\,\big{[}\delta_{u}^{p}\,\alpha_{2}-\alpha_{4}^{p}+\frac{3}{2}\,\alpha_{3,EW}^{p}+\frac{1}{2}\,\alpha_{4,EW}^{p} \tag{111}\] \[\qquad-\delta_{u}^{p}\,\beta_{1}-\beta_{3}^{p}-2\,\beta_{4}^{p}+\frac{1}{2}\,\beta_{3,EW}^{p}-\frac{1}{2}\,\beta_{4,EW}^{p}\big{]},\] \[{\cal A}(\overline{B}^{0}{\rightarrow}a_{0}^{+}K_{0}^{*-}) = \lambda_{s}\,A_{a_{0}\,K_{0}^{*}}\,\big{[}\delta_{u}^{p}\,\alpha_{1}+\alpha_{4}^{p}+\alpha_{4,EW}^{p}+\beta_{3}^{p}-\frac{1}{2}\,\beta_{3,EW}^{p}\big{]}, \tag{112}\]
\[{\cal A}(\overline{B}^{0}{\rightarrow}a_{0}^{0}\overline{K}_{0}^{*0}) = \lambda_{s}\,\big{\{}A_{a_{0}\,\bar{K}_{0}^{*}}\,\big{[}-\alpha_{4} ^{p}+\frac{1}{2}\,\alpha_{4,EW}^{p}-\beta_{3}^{p}+\frac{1}{2}\,\beta_{3,EW}^{ p}\big{]} \tag{113}\] \[\qquad+A_{\bar{K}_{0}^{*}a_{0}}\,\big{[}\delta_{u}^{p}\,\alpha_{ 2}+\frac{3}{2}\,\alpha_{3,EW}^{p}\big{]}\big{\}},\]
\[{\cal A}(\overline{B}^{0}{\rightarrow}K_{0}^{*+}K_{0}^{*-}) = \lambda_{d}\,\big{\{}A_{K_{0}^{*-}\,K_{0}^{*+}}\,\big{[}\delta_{u }^{p}\,\beta_{1}+\beta_{4}^{p}+\beta_{4,EW}^{p}\big{]}+B_{K_{0}^{*+}\,K_{0}^{ *-}}\,\big{[}b_{4}^{p}-\frac{1}{2}\,b_{4,EW}^{p}\big{]}\big{\}}, \tag{114}\] \[{\cal A}(\overline{B}^{0}{\rightarrow}K_{0}^{*0}\overline{K}_{0}^{* 0}) = \lambda_{d}\,\big{\{}A_{\bar{K}_{0}^{*}\,K_{0}^{*}}\,\big{[}\alpha_{4} ^{p}-\frac{1}{2}\,\alpha_{4,EW}^{p}+\beta_{3}^{p}+\beta_{4}^{p}-\frac{1}{2}\, \beta_{3,EW}^{p}\] (115) \[\qquad-\frac{1}{2}\,\beta_{4,EW}^{p}\big{]}+B_{K_{0}^{*}\,\bar{K}_ {0}^{*}}\,\big{[}b_{4}^{p}-\frac{1}{2}\,b_{4,EW}^{p}\big{]}\big{\}},\]
\[\mathcal{A}(\overline{B}^{0}_{s}{\rightarrow}a^{+}_{0}a^{-}_{0}) = \lambda_{s}\,\big{\{}B_{a^{+}_{0}\,a^{-}_{0}}\,\big{[}b^{p}_{4}-\frac{1}{2}\,b^{p}_{4,EW}\big{]}+B_{a^{-}_{0}\,a^{+}_{0}}\,\big{[}\delta^{p}_{u}\,b_{1}+b^{p}_{4}+b^{p}_{4,EW}\big{]}\big{\}}, \tag{116}\] \[\mathcal{A}(\overline{B}^{0}_{s}{\rightarrow}a^{0}_{0}a^{0}_{0}) = \lambda_{s}\,B_{a_{0}\,a_{0}}\,\big{[}\delta^{p}_{u}\,b_{1}+2\,b^{p}_{4}+\frac{1}{2}\,b^{p}_{4,EW}\big{]}, \tag{117}\] \[\mathcal{A}(\overline{B}^{0}_{s}{\rightarrow}K^{*+}_{0}a^{-}_{0}) = \lambda_{d}\,A_{K^{*}_{0}\,a_{0}}\,\big{[}\delta^{p}_{u}\,\alpha_{1}+\alpha^{p}_{4}+\alpha^{p}_{4,EW}+\beta^{p}_{3}-\frac{1}{2}\,\beta^{p}_{3,EW}\big{]}, \tag{118}\] \[\sqrt{2}\,\mathcal{A}(\overline{B}^{0}_{s}{\rightarrow}K^{*0}_{0}a^{0}_{0}) = \lambda_{d}\,A_{K^{*}_{0}\,a_{0}}\,\big{[}\delta^{p}_{u}\,\alpha_{2}-\alpha^{p}_{4}+\frac{3}{2}\,\alpha^{p}_{3,EW}+\frac{1}{2}\,\alpha^{p}_{4,EW}-\beta^{p}_{3}+\frac{1}{2}\,\beta^{p}_{3,EW}\big{]}, \tag{119}\]
\[\mathcal{A}(\overline{B}^{0}_{s}{\rightarrow}K^{*0}_{0}\overline{K}^{*0}_{0}) = \lambda_{s}\,\big{\{}A_{K^{*}_{0}\,\bar{K}^{*}_{0}}\,\big{[}\alpha^{p}_{4}-\frac{1}{2}\,\alpha^{p}_{4,EW}+\beta^{p}_{3}+\beta^{p}_{4}-\frac{1}{2}\,\beta^{p}_{3,EW}-\frac{1}{2}\,\beta^{p}_{4,EW}\big{]}+B_{\bar{K}^{*}_{0}\,K^{*}_{0}}\,\big{[}b^{p}_{4}-\frac{1}{2}\,b^{p}_{4,EW}\big{]}\big{\}}, \tag{120}\]
\[\mathcal{A}(\overline{B}^{0}_{s}{\rightarrow}K^{*+}_{0}K^{*-}_{0}) = \lambda_{s}\,\big{\{}A_{K^{*}_{0}\,\bar{K}^{*}_{0}}\,\big{[}\delta^{p}_{u}\,\alpha_{1}+\alpha^{p}_{4}+\alpha^{p}_{4,EW}+\beta^{p}_{3}+\beta^{p}_{4}-\frac{1}{2}\,\beta^{p}_{3,EW}-\frac{1}{2}\,\beta^{p}_{4,EW}\big{]}+B_{\bar{K}^{*}_{0}\,K^{*}_{0}}\,\big{[}\delta^{p}_{u}\,b_{1}+b^{p}_{4}+b^{p}_{4,EW}\big{]}\big{\}}. \tag{121}\]
|
2307.09488 | PLiNIO: A User-Friendly Library of Gradient-based Methods for
Complexity-aware DNN Optimization | Accurate yet efficient Deep Neural Networks (DNNs) are in high demand,
especially for applications that require their execution on constrained edge
devices. Finding such DNNs in a reasonable time for new applications requires
automated optimization pipelines since the huge space of hyper-parameter
combinations is impossible to explore extensively by hand. In this work, we
propose PLiNIO, an open-source library implementing a comprehensive set of
state-of-the-art DNN design automation techniques, all based on lightweight
gradient-based optimization, under a unified and user-friendly interface. With
experiments on several edge-relevant tasks, we show that combining the various
optimizations available in PLiNIO leads to rich sets of solutions that
Pareto-dominate the considered baselines in terms of accuracy vs model size.
Notably, PLiNIO achieves up to 94.34% memory reduction for a <1% accuracy
drop compared to a baseline architecture. | Daniele Jahier Pagliari, Matteo Risso, Beatrice Alessandra Motetti, Alessio Burrello | 2023-07-18T07:11:14Z | http://arxiv.org/abs/2307.09488v1 | # PLINIO: A User-Friendly Library of Gradient-based Methods for Complexity-aware DNN Optimization
###### Abstract
Accurate yet efficient Deep Neural Networks (DNNs) are in high demand, especially for applications that require their execution on constrained edge devices. Finding such DNNs in a reasonable time for new applications requires automated optimization pipelines since the huge space of hyper-parameter combinations is impossible to explore extensively by hand. In this work, we propose PLINIO, an open-source library implementing a comprehensive set of state-of-the-art DNN design automation techniques, all based on lightweight gradient-based optimization, under a unified and user-friendly interface. With experiments on several edge-relevant tasks, we show that combining the various optimizations available in PLINIO leads to rich sets of solutions that Pareto-dominate the considered baselines in terms of accuracy vs model size. Noteworthy, PLINIO achieves up to 94.34% memory reduction for a <1% accuracy drop compared to a baseline architecture.
NAS, Pruning, Quantization, Deep Learning, PyTorch, Design Space Exploration, Domain-specific Computing
## I Introduction
Deep Neural Networks (DNNs) reach state-of-the-art performance in many applications, ranging from computer vision to bio-signals processing, but are extremely expensive in terms of computation and memory [1, 2, 3]. This is currently considered somewhat of a secondary issue for cloud-hosted models, whose accuracy has improved in each new generation as an effect of sheer model upscaling, also thanks to the availability of gargantuan amounts of data. However, for tasks that require the execution of DNNs on mobile or edge devices, limiting computational complexity and memory footprint is fundamental [2], and even in the cloud, hardware/energy costs and sustainability issues will eventually mandate a careful consideration of complexity [4].
Unfortunately, DNNs have a very large set of hyper-parameters, i.e., configurations that are not (traditionally) trained by gradient descent together with the model weights, yet greatly influence results. At a high level, we can distinguish _training hyper-parameters_ (e.g., the optimizer used for training, the initial learning rate and its schedule, etc.) and _architectural hyper-parameters_ (e.g., the number and type of layers, their configuration, the weight and activation bit-widths, etc.). The former only affect the training process and, therefore, the accuracy of the resulting model. The latter, instead, strongly impact both predictive performance and inference complexity. Furthermore, they can be set in virtually infinite combinations, creating an immense optimization space [5]. Exploring such a space by hand is prone to following conventional rules of thumb and results in suboptimal outcomes [6].
Accordingly, design exploration and automated optimization tools, generally referred to as AutoML [7], are becoming popular for designing accurate yet compact and efficient DNNs for new applications, especially when targeting constrained edge hardware. More specifically, Neural Architecture Search (NAS) methods automate the search for optimal combinations of layers and their configurations [6], whereas Mixed-Precision Search (MPS) solutions look for the optimal data representation for each model tensor [8]. In both cases, early approaches resorted to time-consuming black-box optimization methods such as Reinforcement Learning (RL) and Evolutionary Algorithms (EA), which required tens of GPU-days for a single optimization [6]. More recently, gradient-based NAS and MPS have been proposed as lightweight alternatives to these solutions. These so-called _One-shot_ or _Differentiable_ methods utilize gradient descent to simultaneously train a DNN and optimize its architecture, thus obtaining an optimized model in a time comparable to a single training [9].
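To make the one-shot idea concrete, the toy PyTorch sketch below mixes candidate layers through a softmax over trainable architecture parameters and adds a differentiable, cost-weighted model-size term to the task loss. This is a generic illustration of gradient-based NAS under our own naming choices, not PLiNIO's actual interface.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SearchableConv(nn.Module):
    """Toy one-shot NAS cell: a softmax over architecture parameters (theta)
    mixes candidate convolutions, so layer weights and architecture are trained
    together by gradient descent. A generic sketch, not PLiNIO's actual API."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.choices = nn.ModuleList(
            [nn.Conv2d(c_in, c_out, k, padding=k // 2) for k in (1, 3, 5)])
        self.theta = nn.Parameter(torch.zeros(len(self.choices)))
        # Constant per-choice cost (here: parameter count) for the size regularizer
        self.register_buffer("cost", torch.tensor(
            [sum(p.numel() for p in m.parameters()) for m in self.choices],
            dtype=torch.float))

    def forward(self, x):
        w = F.softmax(self.theta, dim=0)
        return sum(wi * m(x) for wi, m in zip(w, self.choices))

    def expected_size(self):
        # Differentiable proxy of model size: softmax-weighted cost
        return (F.softmax(self.theta, dim=0) * self.cost).sum()

# One training step: task loss plus a strength-weighted differentiable size cost
model, strength = SearchableConv(3, 16), 1e-6
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, target = torch.randn(8, 3, 32, 32), torch.randn(8, 16, 32, 32)
opt.zero_grad()
loss = F.mse_loss(model(x), target) + strength * model.expected_size()
loss.backward()
opt.step()
```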
One key limitation of gradient-based approaches, however, is the lack of user-friendly libraries that can be employed by ML practitioners without experience in NAS or MPS to optimize a DNN for their applications while ignoring implementation details. Such a library should also combine optimizations targeting multiple architectural hyper-parameters, at different granularity levels, in order to fully explore the design space. Similar tools have been recently released both commercially [10] and open-source [11], but mostly for resource-hungry iterative (i.e., RL, EA, etc.) AutoML methods.
In this paper, we present **PLiNIO**, a library for **P**lug-and-play **L**ightweight **N**eural **I**nference **O**ptimization, which tries to bridge this gap by providing a unified and user-friendly domain-specific language for a diverse set of gradient-based AutoML procedures. Namely, PLiNIO currently supports: i) a _coarse-grained NAS_ for selecting among alternative layers [9]; ii) a _fine-grained NAS_ for optimizing each layer's internal hyper-parameters (e.g., the number of channels in a Convolutional layer) [12]; iii) a _differentiable MPS_ method for selecting both weight and activation bit-widths and quantization parameters, supporting common quantization formats [13, 14, 15]. Since the fine-grained NAS in ii) is analogous to structured pruning [12], PLiNIO supports three of the most common complexity-driven DNN optimizations in the state-of-the-art, i.e., **Quantization, Pruning and NAS** [2],
2308.10984 | Debiasing Counterfactuals In the Presence of Spurious Correlations | Deep learning models can perform well in complex medical imaging
classification tasks, even when basing their conclusions on spurious
correlations (i.e. confounders), should they be prevalent in the training
dataset, rather than on the causal image markers of interest. This would
thereby limit their ability to generalize across the population. Explainability
based on counterfactual image generation can be used to expose the confounders
but does not provide a strategy to mitigate the bias. In this work, we
introduce the first end-to-end training framework that integrates both (i)
popular debiasing classifiers (e.g. distributionally robust optimization (DRO))
to avoid latching onto the spurious correlations and (ii) counterfactual image
generation to unveil generalizable imaging markers of relevance to the task.
Additionally, we propose a novel metric, Spurious Correlation Latching Score
(SCLS), to quantify the extent of the classifier reliance on the spurious
correlation as exposed by the counterfactual images. Through comprehensive
experiments on two public datasets (with the simulated and real visual
artifacts), we demonstrate that the debiasing method: (i) learns generalizable
markers across the population, and (ii) successfully ignores spurious
correlations and focuses on the underlying disease pathology. | Amar Kumar, Nima Fathi, Raghav Mehta, Brennan Nichyporuk, Jean-Pierre R. Falet, Sotirios Tsaftaris, Tal Arbel | 2023-08-21T19:01:45Z | http://arxiv.org/abs/2308.10984v1 | # Debiasing Counterfactuals In the Presence of Spurious Correlations
###### Abstract
Deep learning models can perform well in complex medical imaging classification tasks, even when basing their conclusions on spurious correlations (i.e. confounders), should they be prevalent in the training dataset, rather than on the causal image markers of interest. This would thereby limit their ability to generalize across the population. Explainability based on counterfactual image generation can be used to expose the confounders but does not provide a strategy to mitigate the bias. In this work, we introduce the first end-to-end training framework that integrates both (i) popular debiasing classifiers (e.g. distributionally robust optimization (DRO)) to avoid latching onto the spurious correlations and (ii) counterfactual image generation to unveil generalizable imaging markers of relevance to the task. Additionally, we propose a novel metric, _Spurious Correlation Latching Score (SCLS)_, to quantify the extent of the classifier reliance on the spurious correlation as exposed by the counterfactual images. Through comprehensive experiments on two public datasets (with the simulated and real visual artifacts), we demonstrate that the debiasing method: (i) learns generalizable markers across the population, and (ii) successfully ignores spurious correlations and focuses on the underlying disease pathology.
Keywords: Biomarkers · Counterfactuals · Debiasing · Explainability
## 1 Introduction
Deep learning models have shown tremendous success in disease classification based on medical images, given their ability to learn complex imaging markers across a wide population of subjects. These models can show good performance and still be _biased_, as they may focus on spurious correlations in the image that are not causally related to the disease but arise due to confounding factors, should these be common across the majority of samples in the training dataset. As a result, the confounding predictive image markers may not generalize across
the population. For example, a deep learning model was able to accurately detect COVID-19 from chest radiographs, but rather than relying on pathological evidence, the model latched on to spurious correlations such as medical devices or lettering in the image [3]. As a result, these image markers did not generalize across the population.
In order to safely deploy black-box deep learning models in real clinical applications, explainability should be integrated into the framework so as to expose the spurious correlations on which the classifier based its conclusions. Popular post-hoc explainability strategies, such as Grad-CAM [16, 6, 19], SHAP [10], LIME [11], are not designed to expose the precise predictive markers driving a classifier. Models that integrate counterfactual image generation, along with black-box classifiers [21, 2, 23], permit exposing the predictive markers used by the classifier. However, should these methods discover that the markers are indeed simply visual artifacts, there is no strategy to mitigate the resulting biases. Furthermore, although several debiasing methods have been successfully implemented to account for generalizability [1, 26, 17, 8, 27], they do not integrate explainability into the framework in order to provide reasons for improved performance.
Therefore, the important question to be answered is: _Can a model be trained to disregard spurious correlations and identify generalizable predictive disease markers?_
In this paper, we propose the first end-to-end training framework for classifier explainability and debiasing via counterfactual image generation. We seek to discover imaging markers that reflect underlying disease pathology and that generalize across subgroups. Extensive experiments are performed on two different publicly available datasets: (i) the _RSNA Pneumonia Detection Challenge_ and (ii) _CheXpert_ [5]. To illustrate the goal, Figure 1 shows an example from the contrived CheXpert dataset, where most of the sick subjects have medical device(s) (e.g. a pacemaker) in their images while most of the healthy subjects do not. As such, there exists a spurious correlation between a confounding visual artifact (the medical devices) and the disease. A classifier based on a standard
Figure 1: Counterfactual (CF) image indicating that the classifier latched onto spurious correlations (medical devices) when correctly predicting that subject is sick (class: Pleural Effusion), due to their prevalence in the training dataset for this class. (a) Chest radiograph of a sick subject with several medical devices shown (cyan boxes), (b) Generated (CF) image, (c) Difference heat map shows maximum change around the medical devices, rather than indicating the correct markers for the disease.
optimization technique, empirical risk minimization (ERM), incorrectly indicates the medical device as a disease marker, as depicted by the counterfactual (CF). In this work, we propose replacing ERM with a popular debiasing method, Group-DRO (distributionally robust optimization). This permits the classifier to focus on the pathological image markers of the disease rather than on spurious correlation(s). Additionally, we show that Group-DRO ignores the visual artifact when making its decision, and generalizes across subgroups without the spurious correlation. Since standard metrics to evaluate counterfactuals do not indicate the region where the classifier focuses, we also propose a novel metric, the Spurious Correlation Latching Score (SCLS), to measure the degree to which the classifier latches onto spurious correlations. Our experiments indicate an improvement (in terms of differences in classifier outputs) of 0.68 and 0.54 in the SCLS when using the Group-DRO classifier over the ERM classifier on the two datasets, respectively.
## 2 Methodology
We propose an end-to-end training strategy to explain the output of a classifier. Here, we consider a scenario where the majority of the training data exhibits a spurious correlation with the target label. However, there is also a minority subgroup in the dataset that does not have any spurious correlation with the target label, i.e., if the classifier were to rely on the spurious correlation, then the performance on these minority subgroups would be poor. Also, the terms 'majority' and 'minority' refer to the number of samples in these groups. An overview of our approach is shown in Figure 2.
### Classifier Explainability and Debiasing Via Counterfactual Image Generation
#### 2.1.1 Disease Classification
Binary (e.g. "sick" or "healthy") classification of the images is performed using either a standard classifier (ERM [24]) or a classifier that mitigates biases across subgroups (Group-DRO [18]). The ERM classifier (\(f_{ERM}\)) is expected to be affected by the spurious correlation present in the training dataset, as it minimizes the loss over the entire training dataset, and latching onto the spurious correlation is a shortcut to minimizing this loss. Thus, it would not generalize across the minority subgroups of the dataset [12, 20]. On the contrary, the DRO classifier (\(f_{DRO}\)) is not expected to learn the spurious correlation, as it considers the majority and minority subgroups separately when optimizing the loss. Thus, it would generalize well across all subgroups.
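For reference, a minimal sketch of the online Group-DRO objective is given below, following the usual exponentiated-gradient update of the per-group weights; the function name, the step size, and the batch construction are our illustrative assumptions, not the exact training code used here.

```python
import torch
import torch.nn.functional as F

def group_dro_loss(logits, labels, group_ids, q, eta=0.01):
    """One step of online Group-DRO: up-weight the currently worst groups,
    then return the q-weighted average of per-group losses (a sketch)."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    group_losses = torch.stack([
        per_sample[group_ids == g].mean() if (group_ids == g).any()
        else per_sample.new_tensor(0.0)
        for g in range(q.numel())
    ])
    # Exponentiated-gradient update of the group weights (no gradient through q)
    q = q * torch.exp(eta * group_losses.detach())
    q = q / q.sum()
    return (q * group_losses).sum(), q

# Usage with four subgroups, e.g. majority_S, majority_H, minority_S, minority_H
q = torch.ones(4) / 4
logits = torch.randn(8, 2, requires_grad=True)
labels, group_ids = torch.randint(0, 2, (8,)), torch.randint(0, 4, (8,))
loss, q = group_dro_loss(logits, labels, group_ids, q)
loss.backward()
```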
#### 2.1.2 Generative model for synthesizing counterfactuals
We develop an explainability framework that integrates counterfactual image generation together with a classifier during training. We adapted Cycle-GAN [25] as the generative model for counterfactual image generation, chosen for its strong performance across a variety of domains [13, 25]. A pre-trained, frozen binary classifier (\(f_{ERM}\)
or \(f_{DRO}\)) provides supervision to the generator. The proposed architecture and optimization objectives (see Figure 2) are designed to generate counterfactual images that adhere to the following common constraints [14, 15, 7]: (i) _Identity preservation_: The counterfactual images resemble the input images with minimal change; (ii) _Classifier consistency_: Counterfactual images belong to the target class; (iii) _Cycle consistency_: When counterfactual images are fed through the opposing generator, the output reverts to the original image (see Figure 2).
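As a rough sketch of how constraints (i)-(iii) can enter the generator objective, consider the following; the loss weights and function names are assumptions made for illustration, and the adversarial CycleGAN terms are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def generator_loss(x, x_cf, x_rec, p_target, w_id=1.0, w_cls=1.0, w_cyc=10.0):
    """Combines the three counterfactual constraints (a sketch):
      x        -- factual input image
      x_cf     -- counterfactual produced by the forward generator
      x_rec    -- reconstruction of x by the opposing generator
      p_target -- frozen classifier's probability of the target class for x_cf"""
    identity = F.l1_loss(x_cf, x)                    # (i) identity preservation
    classifier = -torch.log(p_target + 1e-8).mean()  # (ii) classifier consistency
    cycle = F.l1_loss(x_rec, x)                      # (iii) cycle consistency
    return w_id * identity + w_cls * classifier + w_cyc * cycle
```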
During inference, based on the classifier's decision (i.e., \(f_{ERM}\) or \(f_{DRO}\)) for the input image, we generate counterfactual images and analyze the difference heatmap between the factual (input) and counterfactual (synthesized) images. This interpretable heatmap indicates the image markers that contribute the most to changing the classifier's decision.
### Metrics for Evaluating Counterfactuals: Accounting for Spurious Correlations
Standard counterfactual evaluation metrics are structured so as to ensure that the generated images (a) preserve the subject identity and thus penalize generated counterfactual images that are significantly different from the factual (original) images and (b) result in a maximal change in the class label (e.g. from healthy to sick). Identity preservation is typically measured by _structural similarity index_ (SSIM) [4] and _Actionability_[15, 14], defined as \(\mathbb{E}\left[\left\|x-x_{cf}\right\|_{L_{1}}\right]\) between factual (\(x\)) and counterfactual (\(x_{cf}\)) images. Here, a higher value for SSIM and a lower value for Actionability would indicate better counterfactuals. The _counterfactual prediction gain_ (CPG) [15], defined as \(|f(x)-f(x_{cf})|\), indi
Figure 2: Training procedure overview: The black-box classifier can be \(f_{ERM}\) or \(f_{DRO}\) and provides supervision to maintain the correct target class, \(y_{t}\). Two U-Net generators, \(G_{\textit{SH}}\) and \(G_{\textit{HS}}\), are employed to synthesize counterfactual images, namely \(x_{\mathcal{S}_{cf}}\) and \(x_{\mathcal{H}_{cf}}\). The discriminators \(D_{\mathcal{H}}\) and \(D_{\mathcal{S}}\) compare the counterfactual images with the domains of healthy \(\mathcal{H}\) and sick \(\mathcal{S}\) subjects, respectively. Note that training a cycle-GAN requires the simultaneous use of two input images from the two distributions.
cates the degree of change in the classifier's decision such that a higher value of CPG indicates better counterfactuals.
While such metrics are required to measure the validity of the generated counterfactuals, they do not assess whether the classifier latched onto spurious correlations. For example, consider an image of a sick subject in the presence of a spurious correlation. If the disease classifier, \(f_{ERM}\), latched onto the spurious correlation when identifying the subject as sick, the corresponding counterfactual image (i.e., depicting a healthy subject) would show changes in the area of the spurious correlation. In this case, all three evaluation metrics mentioned above would determine that this is a valid counterfactual image, based on high SSIM and low Actionability (shows minimal changes made compared to the factual image) and high CPG (due to the classifier decision changing from sick to healthy). However, the counterfactual image shows changes in the area of the spurious correlation rather than depicting the correct predictive image markers for the disease as desired.
In order to indicate that the classifier is correct but for the wrong reasons, we introduce a novel metric called Spurious Correlation Latching Score (SCLS) defined as follows:
\[\text{SCLS}=|d(x)-d(x_{cf})|. \tag{1}\]
Here, \(d(\cdot)\) is a separate classifier trained to identify the presence of the spurious correlation in the image. In cases where the counterfactual image makes changes in the area of the spurious correlation, the SCLS will be high, as \(d(\cdot)\) will show a maximal change in its prediction between the factual and counterfactual images. On the other hand, if the counterfactual image does not make changes in the area of the spurious correlation, the SCLS will be low. As such, this evaluation strategy validates how well the counterfactuals can help determine whether the classifier latched onto spurious correlations.
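In code, the metrics above are straightforward; the sketch below assumes \(x\) and \(x_{cf}\) are image tensors and that \(f\) and \(d\) return per-image probabilities (the function names are ours).

```python
def actionability(x, x_cf):
    """E[ ||x - x_cf||_1 ]: lower means the counterfactual changed less."""
    return (x - x_cf).abs().mean()

def cpg(f, x, x_cf):
    """Counterfactual prediction gain |f(x) - f(x_cf)|: higher is better."""
    return (f(x) - f(x_cf)).abs().mean()

def scls(d, x, x_cf):
    """Spurious Correlation Latching Score, Eq.(1): d is the artifact detector,
    so a low score means the counterfactual left the spurious region untouched."""
    return (d(x) - d(x_cf)).abs().mean()
```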
## 3 Experiments and Results
### Dataset and Implementation Details
We perform experiments on two publicly available datasets. The absence of ground truth makes the validation of counterfactual images particularly challenging. Therefore, to directly evaluate the quality of the generated counterfactual images in the presence of spurious correlations, we modify a publicly available dataset (_RSNA Pneumonia Detection Challenge_) by adding a synthetic artifact to the majority (90%) of the sick images. The majority of the sick and a few of the healthy subjects have an artifact in the image, whereas the majority of the healthy and a few of the sick subjects do not. The spurious correlation (artifact) is a black dot of radius 9 pixels at the center of the image. Thus, there are a total of four subgroups (\(majority_{S}\), \(majority_{H}\), \(minority_{S}\) and \(minority_{H}\)) in the dataset, with varying numbers of images: \(majority_{S}\) and \(majority_{H}\) are majority subgroups (sick with artifact and healthy without artifact), while \(minority_{S}\) and \(minority_{H}\) are minority subgroups (sick without artifact and healthy with artifact). Henceforth, this dataset will be referred to as Dataset 1.
We also show experiments on a subset of a publicly available dataset (_CheXpert_ [5]) with medical devices (visual artifacts) spuriously correlated with the disease. Specifically, we extract the subset of images labeled "healthy" or "pleural effusion" (subjects with other diseases are removed from the dataset). This dataset will be referred to as Dataset 2. More details about both datasets are provided in Table 1. Note that both datasets are divided into training/validation/testing with a random 70/10/20 split. Example images for both datasets and all four subgroups are shown in Figure 3.
### Results
**Classifier Evaluation** For both datasets (Figure 4), the DRO-based classifier (\(f_{DRO}\)) performs better for the minority subgroups (\(minority_{S}\) and \(minority_{H}\)), indicating that it can better generalize to sub-populations that do not have the same visual artifact as the majority subgroups. Both classifiers perform similarly for the majority subgroups (\(majority_{S}\) and \(majority_{H}\)).

| | **Disease** | **Image size** | **Classifier** | **[\(majority_{S}\), \(minority_{S}\), \(minority_{H}\), \(majority_{H}\)]** |
| --- | --- | --- | --- | --- |
| **Dataset 1** | Pneumonia | 512 x 512 | AlexNet [13] | 5413, 1526, 883, 7968 |
| **Dataset 2** | Pleural Effusion | 224 x 224 | Resnet-50 [22] (pre-trained) | 2600, 260, 350, 3456 |

Table 1: Implementation details for the two datasets

Figure 3: Datasets 1 and 2 group division: The majority of the sick subjects [\(majority_{S}\)] and the minority of healthy subjects [\(minority_{H}\)] have visual artifacts (shown in cyan boxes). The majority of healthy subjects [\(majority_{H}\)] and the minority of sick subjects [\(minority_{S}\)] do not have visual artifacts. Top row: Simulated artifacts (black dots); Bottom row: Real artifacts (medical devices).
**Qualitative Counterfactual Evaluation** Pneumonia in chest radiographs manifests as increased brightness in some regions of the lungs. In Dataset 1, when examining the majority subgroup of sick subjects, the ERM-based classifier latches onto the spurious correlation, as seen in the difference maps. On the other hand, a DRO-based classifier focuses on the pathology of the disease, indicated by darker intensity regions over the lungs, as shown in Figure 5. The behavior of \(f_{ERM}\) is also evident in the minority subgroup, where the counterfactual for a healthy subject exhibits an enlarged artifact, wrongly suggesting that the visual artifact serves as a disease marker. Pleural effusion is characterized by the rounding of the costophrenic angle, augmented lung opacity, and reduced clarity of the diaphragm and lung fissures [9]. For the majority subgroup of sick subjects in Dataset 2, the counterfactual images based on ERM remove the medical device rather than focusing on the disease. In addition, for healthy subjects from the minority subgroup, maximum changes are observed around the medical device. On the other hand, for the majority subgroup, the DRO-based counterfactuals show changes around the expected areas while preserving the medical device.
**Quantitative Counterfactual Evaluation** In Table 2, counterfactual images generated by ERM and DRO show similar scores according to the standard metrics: SSIM, Actionability and CPG. As these metrics are not designed to quantify whether the generated counterfactuals are affected by spurious correlations (see Section 2.2), the quality of the counterfactuals is now examined based on the proposed SCLS metric. The AUC of the classifier, \(d\), trained to detect the presence of artifacts is 1.0 and 0.82 for Dataset 1 and Dataset 2, respectively. As indicated by the last row of Table 2, the ERM-based classifier shows a high value (poor performance) for SCLS on both datasets. On the other hand, the DRO-based classifier has a low value (good performance) for SCLS on both datasets. These results corroborate the findings made by visual comparison of the counterfactual images generated by the ERM and DRO classifiers. Overall, both qualitative and quantitative evaluations indicate that an ERM-optimized classifier latches on to the spurious correlation prevalent in the dataset, while a DRO-optimized classifier can be trained to successfully ignore the spurious correlation.

Figure 4: Performance of the ERM (\(f_{ERM}\)) and DRO (\(f_{DRO}\)) based classifiers on a held-out test set across all subgroups for both datasets. Notice that DRO has improved performance on the minority subgroups [\(minority_{S}\) and \(minority_{H}\)], showing improved generalizability across all subgroups.
## 4 Conclusion
Safe deployment of black-box models requires explainability to disclose when the classifier is basing its predictions on spurious correlations and is therefore not generalizable. In this paper, we presented the first integrated end-to-end training strategy for generating unbiased counterfactual images, capitalizing on a DRO classifier to enhance generalization. Our experiments based on two datasets demonstrate that, unlike standard ERM classifiers, which are susceptible to latching onto spurious correlations, the unbiased DRO classifier performs significantly better for minority subgroups in terms of (a) the classifier performance and (b) the novel SCLS metric, which quantifies the degree to which the classifier latches on to the spurious correlation as depicted by the generated counterfactual images.

Figure 5: Qualitative comparison of counterfactual (CF) images generated with ERM and DRO classifiers for both majority (top row) and minority (bottom row) subgroups. The ERM-based CFs show significant changes in the areas of spurious correlation (cyan boxes), whereas the DRO-based CFs show almost no changes in the same areas. In contrast, significant changes can be seen in the expected area of disease pathology (magenta boxes) in DRO-based CFs, while the ERM-based CFs show little to no changes in these areas.
Current datasets typically do not provide the ground truth predictive markers of interest. Future work will require localizing the predictive markers (e.g. with bounding boxes) and determining the degree of overlap with the discovered markers. Further, we intend to explore the power of alternative debiasing techniques and their potential contribution to discovering generalizable image markers.
#### Acknowledgements.
The authors are grateful for funding provided by the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute for Advanced Research (CIFAR) Artificial Intelligence Chairs program, the Mila - Quebec AI Institute technology transfer program, Microsoft Research, Calcul Quebec, and the Digital Research Alliance of Canada. S.A. Tsaftaris acknowledges the support of Canon Medical and the Royal Academy of Engineering and the Research Chairs and Senior Research Fellowships scheme (grant RCSRF1819 / 8 / 25), and the UK's Engineering and Physical Sciences Research Council (EPSRC) support via grant EP/X017680/1.
| Metric | Dataset 1 (ERM) | Dataset 1 (DRO) | Dataset 2 (ERM) | Dataset 2 (DRO) |
| --- | --- | --- | --- | --- |
| Actionability \(\downarrow\) | 7.68 \(\pm\) 0.01 | 7.86 \(\pm\) 0.01 | 4.93 \(\pm\) 0.01 | 5.68 \(\pm\) 0.04 |
| SSIM \(\uparrow\) | 98.03 \(\pm\) 0.00 | 98.44 \(\pm\) 0.01 | 98.21 \(\pm\) 0.01 | 98.36 \(\pm\) 0.01 |
| CPG \(\uparrow\) | 0.91 \(\pm\) 0.04 | 0.96 \(\pm\) 0.03 | 0.88 \(\pm\) 0.07 | 0.89 \(\pm\) 0.04 |
| **SCLS \(\downarrow\)** | 0.80 \(\pm\) 0.08 | \(\mathbf{0.12\pm 0.07}\) | 0.76 \(\pm\) 0.09 | \(\mathbf{0.22\pm 0.06}\) |

Table 2: Quantitative results to compare counterfactual images generated for both datasets. A low SCLS value implies that the model (\(f_{DRO}\) in this case) did not latch onto the spurious correlation. |
2303.03334 | Multi-User Entanglement Distribution in Quantum Networks Using Multipath
Routing | Quantum networks facilitate numerous applications such as secure
communication and distributed quantum computation by performing entanglement
distribution. Multi-user quantum applications where quantum information is
shared between multiple users require access to a shared multipartite state
between the users. We consider the problem of designing protocols for
distributing such states, at an increased entanglement rate.
We propose three protocols that increase the entanglement rate of multi-user
applications by leveraging multipath routing. The protocols are evaluated on
quantum networks with NISQ constraints, including limited quantum memories and
probabilistic entanglement generation. Monte Carlo simulation results show that
the developed protocols achieve an exponential speedup of entanglement rate
compared to single-path routing techniques, with a maximum speedup of four
orders of magnitude for the cases studied. The speedup was also found to
improve for larger sets of users. When the protocols were tested in scaled-down
real-world topologies, it was found that topology can have a significant effect
on the achievable entanglement rates, with one order of magnitude difference
between topologies. Finally, we find that the benefits of multipath routing are
a maximum for short quantum memory decoherence times, and intermediate values
of entanglement generation probability. Hence the protocols developed can
benefit NISQ quantum network control and design. | Evan Sutcliffe, Alejandra Beghelli | 2023-03-06T18:06:00Z | http://arxiv.org/abs/2303.03334v2 | # Multi-User Entanglement Distribution in Quantum Networks Using Multipath Routing
###### Abstract
Quantum networks facilitate numerous applications such as secure communication and distributed quantum computation by performing entanglement distribution. Multi-user quantum applications where quantum information is shared between multiple users require access to a shared multipartite state between the users. We consider the problem of designing protocols for distributing such states, at an increased entanglement rate.
We propose three protocols that increase the entanglement rate of multi-user applications by leveraging multipath routing. The protocols are evaluated on quantum networks with NISQ constraints, including limited quantum memories and probabilistic entanglement generation. Monte Carlo simulation results show that the developed protocols achieve an exponential speedup of entanglement rate compared to single-path routing techniques, with a maximum speedup of four orders of magnitude for the cases studied. The speedup was also found to improve for larger sets of users. When the protocols were tested in scaled-down real-world topologies, it was found that topology can have a significant effect on the achievable entanglement rates, with one order of magnitude difference between topologies. Finally, we find that the benefits of multipath routing are a maximum for short quantum memory decoherence times, and intermediate values of entanglement generation probability. Hence the protocols developed can benefit NISQ quantum network control and design.
## I Introduction
A quantum network is a collection of devices which exchange quantum information over quantum channels. This can be achieved by first distributing shared entangled states, and then performing quantum teleportation [1, 2]. To communicate between two users, a two-qubit (bipartite) entangled state is required. For multiple users to have shared access to entanglement, a multi-qubit (multipartite) state is required. Applications which use shared multipartite states include clock synchronisation [3], distributed quantum sensing [4], and secret sharing [5, 6]. A further key motivation for quantum communication comes from the recent advances in quantum computation, and the benefits of running quantum algorithms as distributed quantum computations [7, 8]. In such cases, multipartite states distributed over quantum networks can facilitate multi-qubit operations in computation and quantum error correction [9, 10].
To share multipartite states between distant users requires the design of multi-user entanglement distribution protocols. Most currently proposed protocols first generate entanglement between each user and a central device, and this entanglement is then transformed into a multipartite state. Each user-centre device entanglement can be generated by performing entanglement swapping along a pre-computed route of quantum repeaters [11, 12, 13]. The required multipartite state is then generated by performing a fusion operation in the central device. A key drawback of such approaches is that, in quantum networks, single-path routing can have low rates of success. This issue is further compounded for multiple users. A secondary drawback is that the number of quantum memories at the central device can constrain the number of users between whom a multipartite state is shared.
We, therefore, propose three multi-user entanglement distribution protocols that eliminate the limitations of using a single pre-computed routing solution by allowing routing to dynamically select a path, using knowledge of the successfully distributed entanglement states. Current quantum computers are described as being Noisy Intermediate Scale Quantum (NISQ) devices. NISQ devices are characterised by having a limited number of qubits and noisy operations. Therefore, we consider multipartite state distribution protocols for networks constrained by their available quantum resources.
The remainder of this paper is organised as follows: Section II describes the network model whilst Section III discusses previous work. Section IV presents the protocols here proposed and the performance evaluation of those in terms of entanglement rate is reported in Sections V and VI. Analytical upper bounds and approximations for the entanglement rate are derived in Section VII. Section VIII closes the paper with final remarks and areas for further work.
## II quantum network model and operations
### _Quantum network model_
A quantum network can be represented as a graph \(G(V,E)\), with a set of nodes \(V\) and edges \(E\). An example of a grid topology is shown in Figure 1\(a\). Nodes represent quantum devices which can perform local (qubit) operations and classical communication (LOCC). This allows them to act as _quantum repeaters_, performing entanglement swapping to generate long-distance entanglement. As we assume the devices can freely select any two qubits to perform entanglement swapping, they would also sometimes be defined as _quantum switches_[14]. We model nodes as being allocated one quantum memory per connected edge for communication. The decoherence of a qubit stored in a quantum memory is modelled using the cut-off model with a decoherence time \(T_{c}\)[15, 16].
The edges represent quantum channels, over which two-qubit entangled states can be distributed between adjacent nodes. If the entangled state is distributed successfully, it is stored in the allocated quantum memories of the adjacent nodes. We refer to the two-qubit \(|\sigma^{+}\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\) state as an _entanglement link_ when shared between adjacent devices (as in Figure 1), and as a _Bell pair_ when shared between distant users by entanglement swapping.
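As an illustration of this model (our own sketch, not the authors' simulator), the following `networkx` code builds a grid network and simulates one timeslot of heralded link generation with the memory cut-off \(Q_{c}\); the attribute names `p_e` and `age` are ours:

```
import random
import networkx as nx

def make_grid_network(width=6, p=0.75):
    """Quantum network G(V, E): nodes are repeater/switch devices, edges are
    quantum channels with entanglement link generation probability p_e."""
    G = nx.grid_2d_graph(width, width)
    nx.set_edge_attributes(G, p, "p_e")
    return G

def timeslot_link_generation(G, G_prime, Q_c=1):
    """One timeslot: age the stored links, discard any stored for Q_c slots
    (the decoherence cut-off), then attempt generation on every free edge."""
    for u, v in list(G_prime.edges):
        G_prime.edges[u, v]["age"] += 1
        if G_prime.edges[u, v]["age"] >= Q_c:
            G_prime.remove_edge(u, v)           # qubits decohered; memory freed
    for u, v in G.edges:
        if not G_prime.has_edge(u, v) and random.random() < G.edges[u, v]["p_e"]:
            G_prime.add_edge(u, v, age=0)       # heralded success
    return G_prime

G = make_grid_network()
G_prime = nx.Graph()
G_prime.add_nodes_from(G)                       # link-state subgraph G'(V, E')
G_prime = timeslot_link_generation(G, G_prime)
```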
### _Entanglement swapping and entanglement fusion_
Over a quantum network, long-distance Bell pairs can be shared between remote devices by performing entanglement swapping at the intermediate nodes along a route connecting such remote devices. Therefore a path of entanglement links (as shown between Nodes A and B in Figure 2) can be converted into an entangled state shared between distant devices [17, 18, 19].
Entanglement fusion is an operation which takes two states of qubit sizes \(n_{1}\) and \(n_{2}\) as inputs and generates a single state of size \(n_{1}+n_{2}-1\). By performing a joint measurement on qubits from each state, a larger entangled state is produced. This is a deterministic operation which can be performed iteratively to generate a \(N\)-qubit multipartite state from \(N-1\) Bell pairs [20, 21]. A schematic of a fusion operation is shown in node A in Figure 2, where two iterations of the fusion operation are used to generate a \(|GHZ_{4}\rangle\) state shared between four separate nodes. As entanglement fusion is a local operation, a constraint is that a qubit from each state must be located in the same physical device.
In this paper, we focus on the distribution of the maximally entangled Greenberger-Horne-Zeilinger (GHZ) state, as it has many uses in quantum computation and secret-sharing applications [22]. Further, GHZ states can be transformed into graph states via LOCC [21]. The \(N\)-qubit GHZ state is given as \(|GHZ\rangle_{N}=\frac{1}{\sqrt{2}}(|0\rangle^{\otimes N}+|1\rangle^{\otimes N})\). The \(|\sigma^{+}\rangle\) state is equivalent to an \(N=2\) GHZ state. Further, qubits can be removed from a GHZ state by a Pauli X-basis measurement, without destroying the multipartite entanglement between the remaining qubits [23].
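These properties are easy to verify numerically; below is a small `numpy` sketch (our own construction) of the \(N\)-qubit GHZ state and the X-basis qubit removal:

```
import numpy as np

def ghz(n):
    """State vector of |GHZ_n> = (|0...0> + |1...1>) / sqrt(2)."""
    psi = np.zeros(2 ** n)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

def remove_qubit_x_basis(psi, n, k):
    """Project qubit k onto |+> (one X-basis measurement outcome) and
    renormalise; for a GHZ state this leaves |GHZ_{n-1}> on the rest."""
    psi = psi.reshape([2] * n)
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    out = np.tensordot(plus, psi, axes=([0], [k])).reshape(-1)
    return out / np.linalg.norm(out)

# removing any qubit of |GHZ_4> leaves |GHZ_3> on the remaining qubits
assert np.allclose(remove_qubit_x_basis(ghz(4), 4, 2), ghz(3))
```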
## III Previous Work
In quantum networks, a routing protocol selects a path of edges \(e\in E\), such that a quantum state can be distributed with the fewest number of channel uses, or equivalently, at the highest rate [24]. This requirement is similar to that for routing in classical networks. For a NISQ quantum network, with lossy quantum channels, routing could select the path of edges with the highest rate of entanglement link generation. This route selection could also be performed in conjunction with secondary parameters such as state fidelity or time delay [11]. Beyond routing, a full-stack approach to the operation of quantum networks can also be considered [25, 26].
Fig. 1: a) Graph of a grid quantum network, with a random set of entanglement links. b) Subgraph \(G^{\prime}(V,E^{\prime})\) with edges \(E^{\prime}\) corresponding to the entanglement links in _a)_
Fig. 2: Entanglement swapping and entanglement fusion operations required to generate a GHZ state. Entanglement swapping (1) allows long-distance Bell pairs to be generated from a path of entanglement links (as between Nodes A and B). These Bell pairs can then be transformed into a (4-qubit) GHZ state by entanglement fusion (2).
### _Shortest path (SP) routing_
For entanglement distribution between two users (sharing a bipartite state) in quantum networks, routing conventionally selects the shortest (single) path of edges \(e\in E\) between the users [15, 27]. This shortest path is found to maximise the end-to-end entanglement rate. For distributing multipartite entanglement, routing is more complex as entanglement must be generated between multiple parties. This requires paths of edges to be found between multiple users. Hence, some developed multipartite routing protocols utilise a central device [11, 12, 13]. These protocols operate by generating a Bell pair for each user with a central device. Once all Bell pairs have been generated, the GHZ state is generated by entanglement fusion at the central device. Meignant _et al._ also show that the use of a central node can be relaxed to allow routing along the Steiner tree connecting users, at the cost of additional classical communication [11].
### _Multipath routing in quantum networks_
Multipath routing in quantum networks refers to the allowed use of many possible routes through the network. Rather than attempting entanglement link generation over a pre-computed shortest path, multipath routing attempts entanglement link generation along multiple (or all) of the network edges. A path is then selected from the successfully generated entanglement links. This post-selection of paths allows for a higher end-to-end entanglement rate in networks with probabilistic entanglement link generation. Further, if there is sufficient entanglement link generation redundancy, multipath routing can achieve an entanglement rate which is constant regardless of the distance between users [28, 29]. This is a significant improvement on the exponentially decaying rate-distance relationship achieved by SP routing [27].
The improved entanglement rate can be explained in terms of the bond percolation problem. For certain graphs where edges are created probabilistically, a giant connected component of size \(O(|V|)\) nodes emerges above a critical threshold probability \(p_{c}\)[30]. In quantum networks, the probabilistic edges represent entanglement links and any two users being in the same connected component means a path of entanglement links exists between them. This is a sufficient condition for generating a shared Bell pair. Therefore, for a graph with entanglement links generated with a probability above the critical threshold probability, the likelihood of a path existing is independent of the distance between the nodes.
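This percolation picture can be checked with a short simulation; the sketch below (ours, assuming a uniform link probability \(p\) on a grid) estimates the probability that a set of users shares a connected component of \(G^{\prime}\):

```
import random
import networkx as nx

def connected_fraction(width, p, users, trials=2000):
    """Fraction of sampled link-state graphs G' in which all users lie in one
    connected component -- a sufficient condition for sharing a state."""
    G = nx.grid_2d_graph(width, width)
    hits = 0
    for _ in range(trials):
        Gp = nx.Graph()
        Gp.add_nodes_from(G)
        Gp.add_edges_from(e for e in G.edges if random.random() < p)
        component = nx.node_connected_component(Gp, users[0])
        hits += all(u in component for u in users)
    return hits / trials

corners = [(0, 0), (0, 9), (9, 0), (9, 9)]
for p in (0.4, 0.5, 0.6, 0.7):  # bond percolation threshold on the square lattice: 0.5
    print(p, connected_fraction(10, p, corners))
```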
For bipartite entanglement distribution, multipath routing can achieve higher entanglement rates than if a comparable network capacity is used for shortest path routing [27]. Pant _et al._ developed a network protocol for bipartite entanglement distribution using multipath routing. The work has since been extended by other authors, for example to networks with imperfect repeaters, simple multi-timestep models, or sharing a GHZ state between two users [31, 32, 33].
All multipath protocols previously proposed, excluding preliminary results by the authors, have been for routing between only two users. Further, except for work by Patil _et al._, only bipartite (two-qubit) entanglement distribution has been considered [31]. Patil _et al._ developed a protocol for distributing a GHZ state between two users. However, sharing a GHZ state between two users does not facilitate the multi-user quantum applications which require multipartite states.
### _Global vs. local network information_
The majority of the multipath entanglement distribution protocols discussed use global link-state information for routing. However, in [28, 31] protocol variants are developed that only require local link-state information. Using local-link state information reduces the classical communication required, but these protocols generate large intermediate entangled states, which could reduce final state fidelity.
### _Open research questions_
Multipath routing has not been applied to distributing multipartite states between multiple users, apart from preliminary results by the authors. Multipartite state distribution is a use case which can benefit from multipath routing, as probabilistic entanglement link generation means larger states are generated at an exponentially lower rate. Further, as NISQ-era quantum networks will be constrained by limited-size quantum devices and low entanglement rates, leveraging more network capacity might particularly benefit early quantum networks. Hence we developed protocols for generating GHZ states shared between multiple users over a quantum network, which utilise multipath routing. These protocols were evaluated on network models which consider the constraints of NISQ quantum devices.
## IV Multipartite distribution protocols
We propose three protocols, which are novel examples of multipath routing applied to distributing a multipartite state between multiple users. The proposed multipath (MP) protocols are the Greedy Plus (MP-G+), Cooperative (MP-C), and Packing (MP-P) protocols, which are an extension of preliminary work by the authors in [34, 35]. The protocols assume a network temporal evolution in discrete time slots of duration \(T_{\text{slot}}\). In each timeslot, the protocols perform three distinct operations: entanglement link generation, routing and, if a valid route exists, GHZ state generation.
For entanglement link generation we assume nodes attempt to generate entanglement links with all adjacent nodes over quantum edges. Next, the protocol computes a routing solution using the global link-state information. The global link-state information can be represented graphically, as in Figure 0(b), where the subgraph \(G^{\prime}(V,E^{\prime})\) shows the set of edges \(E^{\prime}\) which represent successful entanglement links only. The
use of link-state knowledge for routing is made possible by heralded entanglement distribution where success is flagged by a classical signal [36]. Collecting the global link-state knowledge will have associated classical communication and time delay costs. Evaluating such costs is out of the scope of this work.
Once a routing solution has been found, a \(N\)-qubit GHZ state is generated from the selected entanglement links. This is done such that the qubits of the GHZ state are shared between a subset of users (nodes) \(S\in V\), with \(|S|=N\) and each user holds a single qubit. We assume that after link generation, all nodes can perform ideal LOCC operations, such as those required for entanglement swapping and fusion. Combined with the assumption of perfect entanglement link fidelity, we, therefore, do not consider the fidelity of the GHZ state generated. In this work, we assume each timeslot consists of these three operations.
#### III-B1 Multipath Greedy (MP-G+)
The MP-G+ protocol, described in Algorithm 1, is a multipath improvement of standard SP routing techniques for generating a multipartite state shared between multiple users. Our protocol instructs the network to generate a Bell pair between each user \(s\in S\) and a central node \(V_{c}\). A GHZ state is then generated by performing entanglement fusion on the qubits at \(V_{c}\), projecting the entanglement of the separate Bell pairs onto a GHZ state shared between qubits held by the users.
First, the algorithm selects a centre node, which was chosen to improve the expected entanglement rate (line 2). The protocol runs for multiple timeslots (lines 4-17) until a GHZ state is generated. At the start of each timeslot, the entanglement link generation and qubit decoherence of the network model are simulated (line 5) and the link state information of \(G^{\prime}\) is updated (line 6). The protocol then finds paths between the centre node and required users (lines 7-8). These users \(S^{\prime}\), are the subset of users which do not currently share a Bell pair with the centre node. The paths are selected for each timeslot such that the maximum number of users receive a shared Bell pair, using the fewest entanglement links. Each path of entanglement links is used to generate a shared Bell pair between the user and the centre node (line 10). These operations are run for multiple timeslots until all users share a Bell pair with the central node. Once this condition has been met, a GHZ state is generated (line 14).
An example routing solution of the MP-G+ protocol is shown in Figure 3, which shows how the shortest paths in \(G^{\prime}\) are not necessarily the shortest paths in the underlying network topology \(G\). The paths are found using a single-source (centre node) multi-sink (users) maxflow algorithm where each user can receive a maximum of unit flow capacity [37]. Routing using a maxflow approach is an improvement to the initial Multipath Greedy (MP-G) protocol proposed by the authors, where paths were found independently, using a greedy routing approach [34]. In practice, the entanglement rate improvement of the MP-G+ was found to be minimal compared to the original MP-G protocol.
The centre node is selected to achieve the highest expected entanglement rate. However, the protocol could equally use other selection criteria such as quantum memory availability or state generation fidelity, without significant changes to the routing process. The protocol selects the centre node by finding the valid routing solution, \(L\) (_paths_) which has the highest expected entanglement rate. That is, the centre node is then taken from the instance of \(L\) for which the following expression is maximised:
\[\prod_{e\in L}p_{e} \tag{1}\]
The expression maximised is the product of the entanglement link generation probabilities \(p_{e}\), for edges \(e\in L\). This approach selects the optimal centre node selection for SP routing and was therefore thought to be a good candidate for the centre node of the MP-G+ protocol.
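A minimal sketch of this selection rule (ours; it scores candidates using plain shortest paths, ignores the edge-disjointness and nodal-degree constraints discussed below, and assumes edges carry the `p_e` attribute from the earlier network sketch):

```
import networkx as nx

def centre_score(G, centre, users):
    """Eq. (1): product of link probabilities p_e along the shortest path from
    a candidate centre to each user (edge-disjointness ignored for brevity)."""
    score = 1.0
    for s in users:
        if s == centre:
            continue
        path = nx.shortest_path(G, centre, s)
        for u, v in zip(path, path[1:]):
            score *= G.edges[u, v]["p_e"]
    return score

def select_centre_node(G, users):
    """Pick the node maximising the expected per-timeslot success probability."""
    return max(G.nodes, key=lambda v: centre_score(G, v, users))
```

With \(Q_{c}=1\) this same product is the per-timeslot success probability of a fixed routing solution, which is why maximising it matches the optimal centre choice for SP routing.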
As we constrain our network to only being able to store a single entanglement link per edge and entanglement links are consumed when entanglement swapping is performed, paths for each centre-user pair must be edge-disjoint. Therefore the edge degree of the centre node is an additional constraint. In our model, the centre node must have at least \(|S|\) edges, or \(|S|-1\) if the centre node is also a user, to store the required qubits for GHZ state generation.
```
1:  function MP-G+(\(G,S\))
2:      \(V_{c}\) = selectCentreNode(\(G,S\))
3:      hasGHZ = False
4:      while not hasGHZ do
5:          simulateEntanglementLinks(\(G\))
6:          \(G^{\prime}\) = updateLinkSubgraph(\(G\))
7:          \(S^{\prime}\) = \(S\setminus\) hasSharedBellPair(\(G,V_{c},S\))
8:          \(paths\) = getShortestPaths(\(G^{\prime},V_{c},S^{\prime}\))
9:          for \(path,s\in paths\) do
10:             entanglementSwapping(\(G,path,V_{c},s\))
11:             \(G^{\prime}\) = updateLinkSubgraph(\(G\))
12:         end for
13:         if hasSharedBellPair(\(G,V_{c},S\)) == \(S\) then
14:             entanglementFusion(\(G,V_{c},S\))
15:             hasGHZ = True
16:         end if
17:     end while
18: end function
```
**Algorithm 1** MP-G+ protocol
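The max-flow path selection itself can be sketched compactly (our own code, using `networkx`; it returns a maximal set of edge-disjoint paths in \(G^{\prime}\) rather than reproducing the exact minimum-length selection used by getShortestPaths above):

```
import networkx as nx

def mp_g_plus_routing(G_prime, centre, pending_users):
    """Find edge-disjoint paths in the link-state graph G' from the centre
    node to users still awaiting a Bell pair, via a virtual super-sink so a
    single max-flow serves all users (one unit of capacity per user)."""
    H = G_prime.copy()
    sink = "SUPER_SINK"                            # virtual node, not a device
    H.add_edges_from((s, sink) for s in pending_users)
    try:
        paths = list(nx.edge_disjoint_paths(H, centre, sink))
    except nx.NetworkXNoPath:
        return []
    return [path[:-1] for path in paths]           # strip the virtual sink
```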
#### III-B2 Multipath Cooperative (MP-C)
The multipath Cooperative (MP-C) protocol is a multipartite entanglement distribution protocol, first proposed by the authors in Sutcliffe _et al._[34]. A GHZ state can be generated from entanglement links as long as entanglement links form a connecting tree of the users. This is achieved using the fewest entanglement links by routing along the minimum Steiner tree in \(G^{\prime}\) connecting \(S\). Figure 3 illustrates the operation of the MP-C protocol where users are connected with the minimum Steiner
tree. The MP-C protocol runs for multiple timeslots (lines 3-12) until a valid tree is found (line 6). By performing entanglement swapping at intermediate nodes (line 8), the entanglement links of the tree are converted into Bell pairs shared between the users. The GHZ state is generated by performing entanglement fusion at the nodes which hold multiple qubits from the set of Bell pairs (line 9).
The MP-C protocol does not constrain routing to a central node, but this means that multiple entanglement fusion operations might be required to generate the GHZ state. The LOCC operations are therefore more complex compared to the MP-G+ and SP protocols, where entanglement fusion is performed only at the centre node. For the MP-G+ protocol, the use of a known centre node means that the protocol can use Bell pairs generated over multiple timeslots if it is not possible to generate Bell pairs for all users in a single timeslot. This asynchronous behaviour frees up edges to reattempt entanglement link generation for other users. As the MP-C protocol does not have a known centre node this is not feasible. Therefore all entanglement links of the tree are required to be present during a single timeslot. However, this constraint does not prevent the protocol from using entanglement links which have been generated over multiple timeslots.
```
1:  function MP-C(\(G,S\))
2:      hasGHZ = False
3:      while not hasGHZ do
4:          simulateEntanglementLinks(\(G\))
5:          \(G^{\prime}\) = updateLinkSubgraph(\(G\))
6:          if hasConnectingTree(\(G^{\prime},S\)) then
7:              \(tree\) = minimumSteinerTree(\(G^{\prime},S\))
8:              entanglementSwapping(\(tree,S\))
9:              entanglementFusion(\(tree,S\))
10:             hasGHZ = True
11:         end if
12:     end while
13: end function
```
**Algorithm 2** MP-C protocol
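The per-timeslot logic of MP-C can be sketched using `networkx`'s Steiner tree approximation (our own code; the 2-approximation stands in for minimumSteinerTree above, and \(G^{\prime}\) is assumed to contain every network node):

```
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

def mp_c_timeslot(G_prime, users):
    """One MP-C timeslot on the link-state graph G': if all users are in one
    connected component, return the edges of an (approximate) minimum Steiner
    tree connecting them; otherwise return None and wait for the next slot."""
    component = nx.node_connected_component(G_prime, users[0])
    if not all(s in component for s in users):
        return None
    sub = G_prime.subgraph(component)
    tree = steiner_tree(sub, users)     # 2-approximation of the minimum tree
    return list(tree.edges)             # swapping + fusion along these links
                                        # yields the shared GHZ state
```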
#### III-C3 Multipath Packing (MP-P)
The previously considered protocols have been developed to distribute a single GHZ state. Under certain conditions, multiple GHZ states between the same set of users can be generated during a single timeslot. This can improve the average entanglement rate (ER) and benefit applications which require multiple copies of a multipartite state, such as QKD [38] or entanglement distillation, where multiple copies of a state can be combined to improve state fidelity [20].
If multiple edge-disjoint trees exist in the link-state graph \(G^{\prime}\), then a separate GHZ state can be generated from each tree. The MP-P protocol, described in Algorithm 3, is an improvement of the MP-C protocol, where instead of terminating after generating a single GHZ state, the GHZ generation operations (lines 6-11) are repeated until a connecting tree can no longer be found. An example of the MP-P protocol is shown in Figure 3, where two Steiner trees are feasible. This operation is a simple method for finding the Steiner Tree packing of \(G^{\prime}\). An optimal packing algorithm was not used, due to the hardness of calculation [39].
```
1:  function MP-P(\(G,S\))
2:      hasGHZ = False
3:      while not hasGHZ do
4:          simulateEntanglementLinks(\(G\))
5:          \(G^{\prime}\) = updateLinkSubgraph(\(G\))
6:          while hasConnectingTree(\(G^{\prime},S\)) do
7:              \(tree\) = minimumSteinerTree(\(G^{\prime},S\))
8:              entanglementSwapping(\(G,tree,S\))
9:              entanglementFusion(\(G,tree,S\))
10:             \(G^{\prime}\) = updateLinkSubgraph(\(G\))
11:             hasGHZ = True
12:         end while
13:     end while
14: end function
```
**Algorithm 3** MP-P protocol
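MP-P then simply repeats this extraction until no connecting tree remains, one GHZ state per edge-disjoint tree; a sketch reusing `mp_c_timeslot` from the listing above:

```
def mp_p_timeslot(G_prime, users):
    """One MP-P timeslot: greedily extract edge-disjoint Steiner trees from G';
    each extracted tree yields one GHZ state (a simple, non-optimal packing)."""
    Gp = G_prime.copy()
    trees = []
    while (tree_edges := mp_c_timeslot(Gp, users)) is not None:
        trees.append(tree_edges)
        Gp.remove_edges_from(tree_edges)   # these entanglement links are consumed
    return trees
```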
#### III-C4 Shortest Path (SP)
For comparison, a generalised version of the multipartite shortest path (SP) protocols from the literature is described [11, 27]. From a centre node (line 2), a set of edge-disjoint shortest paths between the central node and each user (line 3) is selected. This selection process is the same as for the MP-G+ protocol. Next, the protocol iterates (lines 5-19) until a GHZ state is generated. To do so, the protocol operates to generate a set of Bell pairs, by performing entanglement swapping on the entanglement links along each path between a user and the central node (lines 9-14). Once all Bell pairs have been generated, entanglement fusion is performed at the centre node to generate the desired GHZ state (line 16).

Fig. 3: The routing solutions of the MP-G+, MP-C and MP-P protocols are shown for three users in an example for \(G^{\prime}\). The MP-G+ protocol finds the minimum distance edge-disjoint paths in \(G^{\prime}\) to connect the users to the centre node. For this example, this requires 7 entanglement links. The MP-C and MP-P both find the minimum distance Steiner tree (using 6 links). However, the MP-P protocol selects an additional Steiner tree, allowing a second GHZ to be generated (Tree A & B).
A comparison of the main features of the proposed protocols and the SP protocol is shown in Table I. In the table, \(|V|\) and \(|E|\) represent the number of network nodes and edges respectively. The value \(|L|\) represents the number of edges in a route \(L\), which will be some function of the number of users \(|S|\). The MP-G+ and SP protocols utilise a central node, and hence the size of the GHZ state is limited by the number of quantum memories at the centre node. These protocols, therefore, do not freely scale with the number of users. In terms of computational complexity, the multipath protocols have higher per-timeslot computational requirements than the SP protocol, as the route used is not pre-computed. In terms of classical communication complexity (number of messages exchanged), the multipath protocols require additional classical communications to obtain the state of \(G^{\prime}\).
## V Comparative performance evaluation
The protocols were evaluated using a Monte Carlo simulation run on the quantum network model described in Section II. We compare the protocols in terms of the achieved entanglement rate (ER), which is the average number of GHZ states generated per timeslot \((\text{GHZ}/T_{\text{slot}})\).
A key constraint in quantum networks is the rate of entanglement link generation, which is the limit of quantum information transmission [27]. The proposed protocols were assessed against the probability \(p_{e}\) of an entanglement link being generated during a timeslot. For entanglement link generation which is lossy and hence probabilistic, the probability of successfully generating an entanglement link over an edge \(e\in E\) can be modelled as:
\[p_{e}=p_{\text{op}}(1-p_{\text{loss}}) \tag{2}\]
which is composed of the probability of imperfect operations in entanglement link generation (\(p_{\text{op}}\)) and the probability of photonic qubit loss in the channel (\(p_{\text{loss}}\)). For an optical fibre edge of length \(L\) km with attenuation \(0.2\) dB/km, the transmittance is \(10^{-0.2L/10}\), so this loss can be expressed as \(p_{\text{loss}}=1-10^{-0.2L/10}\), such that \(p_{e}=p_{\text{op}}\) at \(L=0\). The operation probability \(p_{\text{op}}\) is the entanglement rate for two back-to-back devices (e.g. at \(L=0\) km). This represents a lumped probability of generating an entanglement link, considering factors other than photon transmission in the fibre. Factors that can affect the probability \(p_{op}\) include failure in photon generation, imperfect qubit-photon entanglement or photon frequency conversion [18, 42].
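In code, with the loss term written so that \(p_{e}=p_{\text{op}}\) for back-to-back devices (a small sketch of Eq. (2), ours):

```
def link_probability(p_op, length_km, attenuation_db_per_km=0.2):
    """Eq. (2): p_e = p_op * (1 - p_loss), where the fibre transmittance is
    10^(-attenuation * L / 10) and p_loss = 1 - transmittance."""
    transmittance = 10 ** (-attenuation_db_per_km * length_km / 10)
    return p_op * transmittance

assert link_probability(0.75, 0.0) == 0.75   # at L = 0 km, p_e = p_op
```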
A further parameter considered was memory decoherence. In our discrete-time network model, entangled qubits are stored with perfect fidelity for \(Q_{c}\) timeslots (\(Q_{c}=\lfloor T_{c}/T_{\text{slot}}\rfloor\)), after which they are assumed to have undergone decoherence (\(T_{c}\) is the memory decoherence time, described in Section II-A). After generation, entanglement links are stored for \(Q_{c}\) timeslots after which they must be utilised or discarded. This allows us to consider only ideal entanglement links. Further, a stored entanglement link blocks entanglement link generation to be re-attempted over an edge, while the quantum memories are occupied.
Throughout Section V we evaluated the protocols on a single test network model, varying one parameter at a time to generate each result. The test network model was defined as a \(6\times 6\) grid topology, with \(|S|=4\) randomly located users \(S\in V\). The entanglement link generation probability \(p_{e}=p\) was uniform for all edges and fixed at \(p=0.75\). The quantum memory decoherence was assumed to be sufficient to only store entanglement for a single timeslot (i.e. \(Q_{c}=1\)).
### _Effect of entanglement link generation on multipartite state generation_
The entanglement rate (ER) of the protocols was found for sharing a GHZ state over the test grid topologies. Figure 4 shows the ER of the multipartite protocols, with the
performance of the shortest path (SP) protocol also shown for comparison. Datapoints represent \(1000\) random locations for \(|S|=4\), and each GHZ state is attempted for up to \(5000\) timeslots. If more than 5% of the protocol runs fail to generate a GHZ state within the maximum timeslots, the datapoint was not plotted.
Figure 4 shows that the proposed multipath protocols achieved a higher ER than the SP protocol, due to the flexibility added by the multipath strategy. Additionally, the MP-P and MP-C protocols outperform the MP-G+ protocol. This is because the routing requirements of the MP-P and MP-C protocols are looser than for the MP-G+ protocol. The MP-P and MP-C protocols can use any connecting tree in \(G^{\prime}\), but the MP-G+ protocol requires a routing solution with \(|S|\) separate edge-disjoint paths of entanglement links between the centre node and each user. As a result, there are more possible combinations of entanglement links which can be used to generate a GHZ state for the MP-P and MP-C protocols. Further, all instances of \(G^{\prime}\) which lead to a GHZ for the MP-G+ protocol can also be used by the MP-C and MP-P protocols, so these protocols will always achieve an ER greater than or equal to that of the MP-G+ protocol.
It is also observed that the MP-P protocol outperforms the MP-C protocol for \(p\gtrapprox 0.7\), where the MP-P protocol achieves an ER\(>1\). The MP-P protocol achieves a higher ER as it can generate multiple GHZ states in a single timeslot when there are sufficient entanglement links in \(G^{\prime}\) to connect users via multiple edge-disjoint Steiner trees. This occurs when percolation is observed, for networks where edges are generated above the percolation threshold (\(p>0.5\) in grid topologies) [30], but the exact improvement in ER will depend on the network hardware parameters and user distribution. As multiple trees are unlikely to exist below the percolation threshold, the MP-P performs comparably to the MP-C protocol for \(p<0.5\). To speed up the generation of results, the ER of the SP protocol was found analytically, as when \(Q_{c}=1\) a direct analytical expression exists. For a routing solution of edges \(L\) with entanglement link generation probabilities \(p_{e}\):
\[\text{ER}=\prod_{e\in L}p_{e} \tag{3}\]
### _Distance independent entanglement rate_
A key benefit of multipath routing shown in the literature is the ability to generate long-distance entanglement at a rate which is independent of the distance between the two users [28, 29]. This result also requires that entanglement links are generated above the percolation threshold probability. We show that the multipartite routing protocols developed can achieve a distance-independent ER for multiple users, regardless of their proximity. We initially demonstrate this by showing the ER achieved by the protocols for users at the four corner nodes of a grid network (\(|S|=4\)). The distance between the corner users is increased by increasing the number of nodes (\(M\times M\)) of the grid topology. The shortest routing solution connecting the users is the minimum Steiner tree, which has length \(3\times M\) and hence grows linearly with grid size.
Figure 5 shows the ER achieved, plotted against the grid width \(M\) of the topology. The Figure shows that the MP protocols achieve a constant ER with distance, while the rate achieved by the SP protocol decreases exponentially. Further, consistent with the results shown in Figure 4, the ER achieved by the MP-P and MP-C protocols are higher than the MP-G+ protocol.
The observed speedup of the MP protocols over the SP protocol is of order \(O((1/p)^{|S|})\) (\(0\leq p\leq 1\), \(|S|\geq 2\)): the MP protocols achieve an ER which scales as \(O(1)\) with the distance between users, while the SP protocol achieves a best-case scaling of ER\(\sim p^{|S|}\). Hence, the entanglement rate speedup of the developed multipath protocols is exponential, of order \(O((1/p)^{|S|})\).
These results show that the developed protocols can utilise multipath routing to achieve distance-independent ER, even for multipartite states shared between multiple users. However, the results of Figure 5 require that \(p\) is above the percolation threshold for the given topology. For the multipath protocols operating below the percolation threshold, the ER will decrease with increased distance between users, but with a significantly improved ER-distance scaling compared to the SP protocol.
### _Entanglement rate with varied users_
The protocols were tested to assess the effect of the number of users on the entanglement rate. The protocols attempted to distribute GHZ states between 3-25 randomly distributed users. Figure 6 shows the ER achieved by the protocols. While the standard SP protocol exhibits an exponential decrease in ER with the number of users, the MP-P and MP-C protocols were not found to be constrained by this limit. The size of the GHZ state generated by the MP-G+ and SP protocols is limited by the number of quantum memories available at the centre node. In grid networks with the assumptions used, the largest state that can be generated is a \(|\text{GHZ}_{4}\rangle\) state between \(|S|=4\) users, with \(|S|=5\) feasible only if the centre node is also a user.
Fig. 4: Entanglement rate against entanglement link generation probability, for distributing a \(|\text{GHZ}_{4}\rangle\) state between \(|S|=4\) users.
As the MP-P and MP-C protocols do not require a central node, they can freely scale with the number of users. Further, in contrast to SP routing, they exhibit a minimal penalty to the ER for each additional user. For the MP-C protocol, a \(\text{GHZ}_{25}\) state was generated with an \(\text{ER}\approx 0.5\). This result is only valid above the critical probability \(p>p_{\text{crit}}\), with distributing entanglement between large numbers of users being more challenging below the percolation threshold. A further result was that the benefit of MP-P was found to be more significant for fewer users, with minimal benefit from using the MP-P protocol over MP-C beyond 5 users in the grid topology.
### _Quantum memory decoherence effect on entanglement rate_
We have demonstrated that our developed multipath protocols can achieve an exponential speedup, with respect to the SP protocol, for the distribution of multipartite states between multiple users. However, for significant benefit, these protocols require entanglement link generation which succeeds with a probability close to the percolation threshold for a given topology. Most practical topologies will therefore require an entanglement link probability above what is currently feasible experimentally [43]. In this section, we further demonstrate how the benefits of multipath routing can be observed, even below the percolation threshold. This is achievable when the nodes are equipped with quantum memories sufficient to store entanglement for multiple timeslots.
We have so far only considered networks with quantum memories sufficient for storing qubits for a single timeslot (\(Q_{c}=1\)) which is a common assumption in the literature. However, this limits the possible complexity of the protocols. Patil _et al._ consider a network model in which entanglement can be reattempted for multiple timeslots, but also assume each edge has an additional quantum memory for every timeslot [32]. We use a multi-timestep model where entangled links can be stored for \(Q_{c}\) timeslots after generation, before being discarded. Given the lower performance of MP-G+, we do not consider this protocol for further analysis.
To study the impact of quantum memory decoherence on entanglement rate, the MP-C protocol was run on networks equipped with a range of quantum memory decoherence times. Results are shown in Figure 7, where networks equipped with better quantum memories (i.e. higher \(Q_{c}\)) achieved a higher ER. However, for certain conditions (e.g. \(Q_{c}=5,10,\infty\) and \(p>0.2\)), a minimal benefit was observed when using quantum memories with longer decoherence times. This was because, on average, the protocols were able to generate a GHZ state within \(Q_{c}\) attempts, and the additional storage time was not required.
For the MP-C protocol, it was observed that the ER of the protocol approached an upper bound of \(\text{ER}=p\) for values of \(p<0.5\). Further, below values of approximately \(p\times Q_{c}<0.5\), the achieved ER starts to decrease significantly with reduced \(p\). This can be explained by considering the probability of an entanglement link being present in an instance of \(G^{\prime}\). Let \(X\) be the event of an entanglement link existing over an edge during a specific timeslot (and hence instance of \(G^{\prime}\)). For \(Q_{c}=1\), the expected value of \(X\) is equal to the entanglement generation probability, \(E[X]=p\). For \(Q_{c}>1\), the expected value \(E[X]\) is given by:
\[E[X]=\frac{pQ_{c}}{1+p(Q_{c}-1)} \tag{4}\]
for a steady state. This expression comes from the fact that entanglement links can be stored for \(Q_{c}\) timeslots after being generated, but also that entanglement link generation is not reattempted while an entanglement link is already present. A giant connected component (of size \(O(|V|)\)) will exist in \(G^{\prime}\) when the expected value \(E[X]\) of an entanglement link existing is above the threshold probability for percolation (\(0.5\) for lattice grids). Therefore the maximum speedup obtained by multipath routing can be achieved even when the entanglement link generation probability \(p_{e}\) is below the critical probability. This result demonstrates how improved quantum memories can also improve the entanglement rates of the developed multipath routing protocols.

Fig. 5: Entanglement rate for distributing a GHZ state between the four corner nodes (\(|S|=4\)) in a grid network topology. The protocols were run on grid topologies of an increasing number of nodes (\(M\times M\)). The x-axis shows the number of edges required by the SP protocol to connect the required nodes.

Fig. 6: ER achieved against the number of users sharing the multipartite state. Similar results were obtained in grid topologies of other sizes.
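Equation (4) is straightforward to check with a single-edge Monte Carlo simulation (our sketch; the blocking of re-attempts while the memory is occupied matches the model above):

```
import random

def occupancy_fraction(p, Q_c, slots=200_000):
    """Monte Carlo estimate of E[X]: the steady-state fraction of timeslots in
    which one edge holds a link, when links live for Q_c slots and generation
    is re-attempted as soon as the memory frees up."""
    age, occupied = None, 0
    for _ in range(slots):
        if age is not None:
            age += 1
            if age >= Q_c:
                age = None                  # link decohered and discarded
        if age is None and random.random() < p:
            age = 0                         # fresh link generated this slot
        if age is not None:
            occupied += 1
    return occupied / slots

p, Q_c = 0.3, 3
print(occupancy_fraction(p, Q_c), p * Q_c / (1 + p * (Q_c - 1)))  # both ~0.56
```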
A wider parameter sweep of \(p\) and \(Q_{c}\) was performed to quantify the entanglement rate speedup of the MP-P over the SP protocol. Figure 8 shows the ratio of the ER achieved by the MP-P and SP protocols (\(ER_{MP-P}/ER_{SP}\)). This was tested with the same network parameters as in Figure 7, and the areas in white are where more than 5% of runs failed to generate a GHZ state. Using the MP-P protocol increased the entanglement rate with respect to the SP protocol for all values of \(Q_{c}\) and \(0<p\leq 1\), with the largest improvement being for network parameters close to the percolation threshold; for networks with quantum memory decoherence \(Q_{c}>1\), the relevant quantity is the probability \(E[X]\) of an entanglement link existing over an edge during a given timeslot (i.e. \(E[X]\approx 0.5\)). A maximum speedup was observed for \(p=0.47\), \(Q_{c}=1\), with a \(4\times 10^{4}\) improvement in ER. Below \(p=0.5\), the maximum speedup would be observed for \(p,Q_{c}\) values complying with \(E[X]\approx 0.5\), but there was not a sufficient number of successful runs of the simulation to plot the results accurately. Similar results were obtained for different-sized grid topologies and for randomly located users. For these other conditions, a similar speedup was observed, but with a different magnitude. As shown in Figure 5, the improvement in ER depends on the distance between the users. Hence, the size of the grid and the location of the users can both affect the magnitude of the entanglement rate speedup.
## VI Varied Topologies
While the protocols designed are general to network topology, different topologies might have different achievable entanglement rates. We simulated the protocols on topologies taken from several real-world optical networks, which are described in Table II [44]. The edge lengths of the topologies used were scaled down by a factor of 100, to more closely match the size of current experimental entanglement setups [43]. The effect of entanglement link generation probability on ER was found by varying \(p_{\text{op}}\), such that \(p_{e}\) for edges \(e\in E\) is given by Equation (2).
### _Entanglement rate speedup in mesh topologies_
For the topologies, the value of \(p_{\text{op}}\) was varied and \(p_{e}\) was calculated for each edge using Eq. (2). The ER of the MP-P protocol, \(ER_{MP-P}\), was then plotted against average edge probabilities \(\bar{p_{e}}\) for each network, to account for the varied distribution of edge lengths, and therefore values of \(p_{e}\). We use randomly located users with \(|S|=5\) and \(Q_{c}=1\). As with previous results, data points represent \(1000\) random locations attempted for up to \(5000\) timeslots.
Figure 9 shows that the ER followed a similar trend to that in grid topologies with uniform \(p\). However, the absolute ER achieved by the protocols varied between topologies, especially at low \(\bar{p_{e}}\). Further, the topologies in which the highest ER was achieved (Eurocore, EON and UKNet) had a wide range of average edge lengths. This suggested that network topology affects the ER of the multipath protocols.
The speedup of the MP-P protocol over the SP protocol on the varied topologies is shown in Figure 10. The colour identifying the result for each topology in Figure 10 is the same as in Figure 9. For the mesh topologies, the tree-variant of SP routing was used [21]. In this approach routing still selects the single shortest path, but instead of connecting to a centre node routing is performed along the shortest Steiner tree.
Fig. 8: \(ER_{MP-P}\) over \(ER_{SP}\) for a parameter sweep of \(p\) and \(Q_{c}\). Entanglement rates were found for generating a GHZ state between the four corner nodes (\(|S|=4\)) of the topology.

Fig. 7: Entanglement rate for MP-C against link generation probability \(p\) with selected values of \(Q_{c}\). Also plotted is the line \(\text{ER}=p\).
For all topologies, a significant speedup occurred for all values of \(\bar{p_{e}}\), with a maximum at an intermediate value of \(\bar{p_{e}}\) depending on the topology. For the topologies tested, the maximum speedup occurred between \(0.28<\bar{p_{e}}<0.52\). The benefit of using the MP-P protocol was found to be reduced for both high and low \(\bar{p_{e}}\). For high \(\bar{p_{e}}\), the SP protocol was sufficient to obtain a relatively high ER, so the possible speedup of the MP-P protocol was relatively reduced. Similarly, for very low \(\bar{p_{e}}\), we suggest that routing will predominantly succeed along the minimum distance path in \(G^{\prime}\), as longer paths will exist with a significantly lower likelihood. This also reduces the benefit of the proposed multipath protocols. However, the results suggest a significant ER improvement will still be achieved for low \(\bar{p_{e}}\). At intermediate values of \(\bar{p_{e}}\), there was sufficient redundancy in the edge distribution of graph \(G^{\prime}\) that a route was likely to be found by the multipath protocol. In contrast, the SP protocol has a much reduced ER, since any entanglement link failure along its route prevents a GHZ state from being generated. Further, it was found that the magnitude of the speedup was also less than for the grid topology in Section V-D. This is due to the choice of users rather than the effect of topology. As routing was performed between randomly selected users, rather than between corner nodes, the shorter average path length means the SP protocol achieves a higher ER, but as shown, path length does not significantly affect the ER of the MP protocols.
### _Scaling with users in varied topologies_
The MP-P protocol was studied for the mesh topologies to assess the impact of the number of users on the ER. This was found for two values of operational probability \(p_{\text{op}}\) as shown in Figure 11, a) \(p_{op}=0.75\) and b) \(p_{op}=0.4\). As seen in Figure 6 with the grid topology, the ER decreases for additional users. However, we observe significant variations in the ER scaling between topologies. Certain topologies such as the Eurocore, UKnet and \(6\times 6\) Grid networks allowed higher entanglement rates. This was thought to be primarily due to their higher nodal degree. For the multipath protocols, being able to utilise many possible paths means that a high nodal degree improves ER.
However, the ordering of ER achieved by the protocols in different topologies was not consistent for all values of \(p_{\text{op}}\). For example, Figure 11a shows the grid topology (black line) performed better at \(p_{op}=0.75\) relative to the other topologies than at \(p_{op}=0.4\) (Figure 11b). This behaviour might be explained by considering the size of the largest connected component of \(G^{\prime}\), which will be a function of \(p_{\text{op}}\) and topology. Figure 11c shows the proportion of network nodes belonging to the largest connected component with varied \(p_{\text{op}}\). The difference in the size of the largest connected component at \(p_{op}=0.4\) and \(p_{op}=0.75\) correlates with variation in relative performance between the topologies.
## VII Analytical results
The Monte Carlo simulation model allows varied protocols and network models to be tested accurately. However, this approach can require a significant number of iterations to generate accurate results, and hence can be slow. Analytical expressions were therefore developed, with significantly reduced computational costs, to allow fast assessment of how network conditions affect the protocols.
### _Upper bound of entanglement rate_
The upper bound of the ER is the maximum number of GHZ states which can be distributed per timeslot. In reference to the network graph \(G\), this upper bound is the maximum number of edge-disjoint Steiner trees which can be found to connect all users. Equivalently, the number of edge-disjoint trees in \(G^{\prime}\) gives the number of GHZ states which can actually be distributed in a given timeslot. Finding the number of edge-disjoint trees in a graph is the Steiner tree packing problem [39]. However, due to the hardness of solving the Steiner tree packing problem directly, an equivalent bound was instead considered: the minimum cut required to separate any user from the set of other users. This also gives an upper bound on the number of Steiner trees which can be distributed over a graph. For the grid topology, this min-cut is the minimum nodal degree of any user, giving an upper bound of \(\text{ER}\leq 4p\). The protocols developed, specifically MP-P, were not found to approach this upper bound for any values of \(p\). This suggests that routing multiple GHZ states in \(G^{\prime}\) is challenging, even for networks above the critical threshold for percolation. As we consider finite quantum memories, the probability of a complete path existing in any timeslot is lower than the upper bound suggests.

Fig. 10: The speedup achieved by the MP-P protocol over the SP protocol for varied \(p_{\text{op}}\) (\(|S|=5,Q_{c}=1\)).

Fig. 9: \(ER_{MP-P}\) for varied \(p_{\text{op}}\) (\(|S|=5,Q_{c}=1\)). ER is shown against average entanglement link generation probability \(\bar{p_{e}}\). The legend used also applies to later figures in Section VI.
### _Analytical approximation_
An analytical approximation for the ER of the protocols was found as a function of the state of \(G^{\prime}\). For randomly located users in the network, the probability that they are all in the same connected component can be expressed combinatorially. The function \(M(V,S,C)\) gives the probability of all \(|S|\) users being in a connected component of size \(C\).
\[M(V,S,C)=\begin{cases}\binom{|V|-|S|}{C-|S|}\Big/\binom{|V|}{C}&\text{if }|S|\leq C\\ 0&\text{otherwise}\end{cases} \tag{5}\]
The numerator \(\binom{|V|-|S|}{C-|S|}\) counts the ways of choosing the non-user nodes of a connected component that contains all \(|S|\) users, while \(\binom{|V|}{C}\) counts all possible node sets of size \(C\); the ratio is equivalent to \(\binom{C}{|S|}/\binom{|V|}{|S|}\), the fraction of random user placements that fall entirely within the component. As \(G^{\prime}\) is a graph of probabilistically generated edges, the size \(C\) will follow some probability distribution dependent on the entanglement link generation and storage properties of the network. The ER can then be estimated as a likelihood-weighted sum over the possible sizes of the largest connected component, using the probability distribution \(P(C=c)\) of the largest connected component having size \(c\).
\[ER\approx\sum_{c=1}^{|V|}M(V,S,c)\,P(C=c) \tag{6}\]
This expression assumes Steiner tree routing and is hence only valid for the MP-C protocol (and the first GHZ for the MP-P protocol). Due to its inferior performance, the MP-G+ protocol was not assessed analytically.
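A minimal sketch of Equations (5) and (6) follows; the size distribution used here is a placeholder, since in practice it is estimated from simulated network instances as described next:

```python
from math import comb

def M(n_nodes, n_users, c):
    """Eq. (5): probability that all users lie in a component of size c."""
    if n_users > c:
        return 0.0
    return comb(n_nodes - n_users, c - n_users) / comb(n_nodes, c)

def er_estimate(n_nodes, n_users, size_dist):
    """Eq. (6): likelihood-weighted sum; size_dist maps c -> P(C = c)."""
    return sum(M(n_nodes, n_users, c) * p for c, p in size_dist.items())

size_dist = {36: 0.5, 30: 0.3, 20: 0.15, 10: 0.05}  # placeholder P(C = c)
print(er_estimate(36, 5, size_dist))                # estimated ER for |S| = 5
```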
Figure 12 shows the closeness of fit between the ER calculated for the MP-C protocol from the Monte Carlo simulation and Equation (6), for randomly located users and \(Q_{c}=1\). The close fit allows the analytical expression to be used to find entanglement rates for which the Monte Carlo approach is not computationally efficient. The values plotted in Figure 12 use simulated instances of the network model, without running any entanglement distribution protocol, to estimate the distribution of the size of the largest connected component of \(G^{\prime}\). This operation is significantly faster than simulations in which the entanglement distribution protocols are also run.
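A sketch of this estimation step is given below; it samples only entanglement link outcomes with an assumed link probability \(p_e\), and omits node operational probability and memory effects for brevity:

```python
import random
from collections import Counter
import networkx as nx

def largest_component_dist(G, p_e, trials=10_000, seed=0):
    """Estimate P(C = c) for the largest connected component of G'."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        # Keep each edge independently with probability p_e to form G'.
        Gp = nx.Graph([e for e in G.edges if rng.random() < p_e])
        Gp.add_nodes_from(G.nodes)
        counts[max(len(c) for c in nx.connected_components(Gp))] += 1
    return {c: n / trials for c, n in counts.items()}

dist = largest_component_dist(nx.grid_2d_graph(6, 6), p_e=0.6)
# dist can be fed directly into er_estimate() from the previous sketch.
```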
To avoid relying on computationally generated distributions for the connected component, it was hoped to find a closed-form expression for the ER as a function of topology and the expected entanglement link generation probability. The largest-connected-component size
Fig. 11: ER of the MP-P protocol against the number of users \(|S|\) at a) \(p_{op}=0.75\) and b) \(p_{op}=0.4\); c) the proportion of nodes which are part of the largest connected component in \(G^{\prime}\).
Fig. 12: ER for the MP-C protocol against operational probability \(p_{op}\). The MP-C protocol was tested for different topologies and different numbers of users. The solid lines represent the Monte Carlo simulation and the dashed lines represent ER predicted by Equation (6).
distribution can be calculated analytically for certain topologies, such as Erdos-Renyi graphs [45, 46]. We therefore approximated the subgraph \(G^{\prime}\) of the mesh topologies as an Erdos-Renyi graph, to model the size of the connected component generated from probabilistic entanglement links. While this approach successfully modelled the ER in Erdos-Renyi topologies, the approximation was not generally valid for the mesh topologies. Of the topologies considered, only Eurocore was well approximated by this expression.
## VIII Conclusions and further work
From a literature review, no existing protocols for multi-user entanglement distribution were found to utilise multipath routing. We proposed three such protocols, MP-G+, MP-C and MP-P, intended to increase the rate at which shared multipartite states are generated. The protocols were simulated on quantum network models which consider qubit decoherence and probabilistic entanglement, as would be observed in a network of NISQ devices.
Simulation results showed that multipath routing provided an exponential entanglement rate speedup with the distance between users, compared to multipartite entanglement distribution protocols using shortest single path routing. Further, the benefits of multipath routing increased with additional users. It was found that the multipath protocols provide the highest benefit for entanglement link generation which succeeds with a probability close to the percolation threshold for the given topology. The use of quantum memories with improved decoherence times was considered and found to enhance multipartite entanglement rates, especially for multipath routing. As the multipath protocols provide speedup for intermediate values of entanglement generation and short decoherence times, this research will have possible applications for NISQ quantum networks. It was further found that the MP-P and MP-C protocols outperform equivalent methods for which a central node is required.
We further considered how different topologies affect the entanglement rate achieved by the developed multipath protocols, and how this compares to shortest-path routing. Significant variations were found between the topologies considered, but these can be characterised using the connective properties of the network topologies.
|
2304.13180 | Sebis at SemEval-2023 Task 7: A Joint System for Natural Language
Inference and Evidence Retrieval from Clinical Trial Reports | With the increasing number of clinical trial reports generated every day, it
is becoming hard to keep up with novel discoveries that inform evidence-based
healthcare recommendations. To help automate this process and assist medical
experts, NLP solutions are being developed. This motivated the SemEval-2023
Task 7, where the goal was to develop an NLP system for two tasks: evidence
retrieval and natural language inference from clinical trial data. In this
paper, we describe our two developed systems. The first one is a pipeline
system that models the two tasks separately, while the second one is a joint
system that learns the two tasks simultaneously with a shared representation
and a multi-task learning approach. The final system combines their outputs in
an ensemble system. We formalize the models, present their characteristics and
challenges, and provide an analysis of achieved results. Our system ranked 3rd
out of 40 participants with a final submission. | Juraj Vladika, Florian Matthes | 2023-04-25T22:22:42Z | http://arxiv.org/abs/2304.13180v2 | Sebis at SemEval-2023 Task 7: A Joint System for Natural Language Inference and Evidence Retrieval from Clinical Trial Reports
###### Abstract
With the increasing number of clinical trial reports generated every day, it is becoming hard to keep up with novel discoveries that inform evidence-based healthcare recommendations. To help automate this process and assist medical experts, NLP solutions are being developed. This motivated the SemEval-2023 Task 7, where the goal was to develop an NLP system for two tasks: evidence retrieval and natural language inference from clinical trial data. In this paper, we describe our two developed systems. The first one is a pipeline system that models the two tasks separately, while the second one is a joint system that learns the two tasks simultaneously with a shared representation and a multi-task learning approach. The final system combines their outputs in an ensemble system. We formalize the models, present their characteristics and challenges, and provide an analysis of achieved results. Our system ranked 3rd out of 40 participants with a final submission.
## 1 Introduction
Clinical trials are research studies carried out in human subjects to assess the effectiveness and safety of medical, surgical, or behavioral interventions (Friedman et al., 2015). These investigations constitute the main approach that medical researchers use to determine whether a novel treatment such as a new medication, procedure, diet, or medical device is safe and efficient in humans. Clinical trials are often used to compare the effectiveness of a new treatment against the standard treatment or placebo treatment and to assess its adverse effects. When performed rigorously, clinical trials present the most valuable resource for informing evidence-based healthcare decisions.
On average, more than 100 clinical trial reports are published every day (Zarin et al., 2019). Keeping up with all their results and novel discoveries is a time-consuming and often practically impossible endeavor. This has brought to light dedicated organizations, such as Cochrane,1 that manually synthesize these clinical outcomes (Higgins et al., 2019). Nevertheless, they struggle to keep up with the ever-increasing amount of literature and new studies. For this purpose, automated approaches based on Machine Learning (ML) and Natural Language Processing (NLP) can be developed to facilitate the process of inferring new knowledge and finding evidence from clinical trial reports.
Footnote 1: [https://www.cochranelibrary.com](https://www.cochranelibrary.com)
The SemEval-2023 Task 7, titled _Multi-evidence Natural Language Inference for Clinical Trial Data (NLI4CT)_, focused on developing NLP systems for making conclusions and finding evidence in clinical trial reports (CTRs) (Jullien et al., 2023). It featured two subtasks that were strongly coupled together. The first subtask was, given a clinical trial document and a claim, to develop a model that infers a logical relation between them:
Figure 1: The task consists of predicting whether a given claim entails or contradicts the clinical trial report based on the evidence found in it.
Entailment or Contradiction. The second subtask was, given all the sentences in the clinical trial document, to develop a model that selects a subset of those sentences that serve as evidence for making a decision on the logical entailment relation between the claim and the document. The task is illustrated in Figure 1. It shows two different claims, each coupled with an excerpt from a clinical trial report and a gold final label. The gold evidence sentences are highlighted for emphasis.
For this task, we developed two competitive NLP systems. The first one is a pipeline system, which learns to perform the tasks of evidence retrieval and textual entailment separately and considers one sentence at a time during evidence selection. The second is a joint system, which learns the two subtasks jointly with a unified representation of a claim and the whole document and a shared training procedure. The joint system performed better on the test set, but the final system is an ensemble that combines the outputs of both systems with an averaging function. Our final system achieved 3rd place out of 40 submitting participants. We provide the code of our system in a GitHub repository.2
Footnote 2: [https://github.com/jvladika/NLI4CT](https://github.com/jvladika/NLI4CT)
## 2 Related Work
The task of Natural Language Inference (NLI) consists of inferring whether there is logical entailment or contradiction between two pieces of text - a premise and a hypothesis. It has been researched since the early days of NLP when it was mostly tackled with rule-based and linguistically informed approaches MacCartney (2009). A big surge of interest in the task occurred after the release of the SNLI dataset Bowman et al. (2015), as well as follow-up datasets like MultiNLI (MNLI) Williams et al. (2018) and Adversarial-NLI (ANLI) Nie et al. (2020). The task has also been researched in the clinical domain - the dataset MedNLI Romanov and Shivade (2018) features more than 14,000 pairs of claims from clinical notes and patient records, extracted from the MIMIC-III database Johnson et al. (2016).
The task of evidence retrieval, or more precisely evidence sentence selection, has been studied as a step in common NLP tasks like Machine Reading Comprehension (MRC) Wang et al. (2019) and Question Answering (QA) Thayaparan et al. (2021). Its main purpose is to improve the explainability and interpretability of models' predictions but also to aid its reasoning process for making decisions. Combining the steps of evidence retrieval and textual entailment recognition comprises the task of automated fact-checking or claim verification, where the goal is to assess a claim's veracity based on the evidence related to it Guo et al. (2022). This task has mostly been concerned with verifying claims related to news events, society, and politics, but has been increasingly so researched in scientific, biomedical, and clinical domains Wadden et al. (2020); Sarrouti et al. (2021).
## 3 Dataset
The clinical trials used for constructing the dataset originate from _ClinicalTrials.gov_,3 a database of privately and publicly funded clinical studies conducted around the world, run by the U.S. National Library of Medicine. All the clinical trial reports in the dataset are related to breast cancer and are written in English. There is a total of 1,200 clinical trials and 2,400 corresponding claims related to these CTRs, which were written and annotated by clinical domain experts, clinical trial organizers, and research oncologists.
Footnote 3: [https://clinicaltrials.gov](https://clinicaltrials.gov)
For the task, each CTR may contain 1-2 patient groups, called cohorts or arms, and these groups may receive different treatments, or have different baseline characteristics. The dataset consists of a total of 2,400 claims and was split into a training dataset of 1,700 claims, a development dataset of 200 claims, and a hidden test dataset of 500 claims. There are two types of claims, those related only to a single CTR and other ones related to two CTRs, which usually make some form of comparison between the two reports. Each of the claims is related to only one of the following four sections:
* **Eligibility Criteria**. A set of conditions for patients to be allowed to take part in the clinical trial, such as age, gender, medical history.
* **Intervention**: Information concerning the type, dosage, frequency, and duration of treatments being studied in the clinical trial.
* **Results**: Reports the outcome of the trial with data like the number of participants, outcome measures, units, and results.
* **Adverse Events**: These are any unwanted side effects, signs, or symptoms observed in patients during the clinical trial.
## 4 System Description
In this section, we describe the architecture of the developed systems and the choice of base models comprising them. Even though the task features two standalone subtasks and participating in each of them was optional, we found that the best synergy is achieved by combining them. The selection of appropriate evidence sentences in subtask 2 is an important prerequisite for recognizing textual entailment in subtask 1. This is because most textual entailment datasets used for training NLI systems are modeled to recognize entailment between two sentences only, so performance between a sentence and a whole document is still underwhelming Schuster et al. (2022). Therefore, it was important to narrow down the whole document to a shorter span of text containing only the relevant evidence sentences.
### Pipeline Systems
In the pipeline system, the evidence sentences selected by the first model are used as the input to the next model for veracity prediction; using standard terminology from computer science, we call these systems pipeline systems. It is also common to use the same underlying base model in both steps and fine-tune it for the two different tasks DeYoung et al. (2020).
To formalize the approach, we will define it mathematically. Given a claim \(c\) and \(n\) sentences \(s_{1},s_{2},...,s_{n}\) that constitute the clinical trial report, the goal is to train a model that predicts
\[z_{i}=\mathbbm{1}[s_{i}\text{ is an evidence sentence}].\]
This is modeled as a binary sequence classification task, where candidate sequences are a concatenation of a candidate sentence \(s_{i}\) and the claim \(c\) in the form of \(a_{i}=[s_{i};SEP;c]\). Each of these sequences is encoded with the base language model to obtain their dense representation \(h_{i}=BERT(a_{i})\). This representation is then fed to a classifier model, Multi-Layer Perceptron (MLP), and its output is passed to a softmax function that assigns the probabilities on whether the candidate sentence is or is not the evidence:
\[p_{i},\,\bar{p}_{i}=softmax(MLP(h_{i})).\]
Finally, a selector function labels those sentences with a probability over a threshold to be evidence sentences - this threshold is a hyperparameter that can be learned, such as \(z_{i}=p_{i}>0.5\). In the end, the model has selected \(k\) final evidence sentences \(e_{1},e_{2},...,e_{k}\) that are used as input for the next step.
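A minimal sketch of this selection step is shown below, assuming a HuggingFace-style interface; the base model name and the fixed \(0.5\) threshold are placeholders for the fine-tuned models and the learned hyperparameter described above:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")   # placeholder base model
clf = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)                     # evidence / not evidence

def select_evidence(claim, sentences, threshold=0.5):
    """Score each [s_i; SEP; c] pair and keep sentences above the threshold."""
    evidence = []
    for s in sentences:
        enc = tok(s, claim, return_tensors="pt", truncation=True)  # adds [SEP]
        with torch.no_grad():
            probs = torch.softmax(clf(**enc).logits, dim=-1)
        if probs[0, 1].item() > threshold:                 # P(evidence)
            evidence.append(s)
    return evidence
```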
The task of textual entailment is again a binary classification task that for a given claim \(c\) and \(k\) evidence sentences \(e_{1},e_{2},...,e_{k}\) predicts the logical relation of either Entailment or Contradiction. Using the standard terminology of Natural Language Inference, the claim \(c\) is the hypothesis, and the concatenation of evidence sentences \(e=[e_{1};e_{2};...;e_{k}]\) is the premise. These two are concatenated as \(x=[c;SEP;e]\) and once again passed to the base language model to obtain the dense embedding \(w=BERT(x)\). The model has to learn the function
\[\hat{y}(c;e)=softmax(MLP(w)),\]
which is the probability distribution over the inference labels for the claim \(c\) given evidence \(e\). The class with the highest probability score is selected as the final verdict \(v(c;e)=\arg\max(\hat{y})\).
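The entailment step can be sketched in the same style, reusing the tokenizer above; the two-label head stands in for the fine-tuned entailment/contradiction classifier:

```python
nli = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)             # placeholder; see Sec. 4.4

def predict_verdict(claim, evidence_sentences):
    premise = " ".join(evidence_sentences)         # e = [e_1; ...; e_k]
    enc = tok(claim, premise, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(nli(**enc).logits, dim=-1)
    return ["Entailment", "Contradiction"][probs.argmax().item()]
```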
### Joint Systems
The second system we developed jointly learns both the tasks of evidence retrieval and textual entailment. This leverages the machine-learning technique of multi-task learning (MTL), which was shown to be data efficient and improve the performance of each individual task it learns on through shared representations Crawshaw (2020). Intuitively, the model improves the performance on both tasks simultaneously since selecting high-quality evidence is important for recognizing entailment and conversely, the final entailment/contradiction label influences the specific evidence to be selected.
Unlike in the pipeline system, where each sequence consisted only of _one_ candidate sentence and the claim, here the claim \(c\) is concatenated together with _all_ of the sentences \(s_{1},s_{2},...,s_{n}\) in the clinical trial document to obtain a claim-document sequence \(seq=[c;SEP;s_{1};SEP;s_{2};...;SEP;s_{n}]\).4 This approach makes the representation of each candidate sentence aware of the context it appears in with regard to the rest
of the document, as well as aware of the claim itself. This sequence is fed to the base language model to obtain a dense representation \(h=BERT(seq)=[h_{c};SEP;h_{s_{1}};...;SEP;h_{s_{n}}]\). The representation of each candidate sentence \(h_{s_{i}}=[h_{w_{1}},h_{w_{2}},...,h_{w_{m}}]\) is singled out from the initial representation and passed to a binary linear classifier that, similarly to the one in the pipeline system, calculates the probabilities of the sentence being an evidence sentence:
\[p_{i},\,\bar{p}_{i}=softmax(MLP(h_{s_{i}})).\]
Those sentences that are above the \(0.5\) threshold are then selected as evidence sentences and concatenated together to form the final evidence representation \(h_{e}=[h_{e_{1}},h_{e_{2}},...,h_{e_{k}}]\). This representation is given to a final ternary linear classifier that, same as in the pipeline system, predicts the verdict
\[v=argmax(softmax(MLP(h_{e}))).\]
Note that the representation of the claim \(h_{c}\) is not passed anywhere because the idea is for evidence sentences to already be aware of the semantics of the claim from the joint claim-document representation.
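A condensed sketch of the joint architecture follows. It is a simplification under stated assumptions: sentence spans are given as token index ranges, sentence vectors are mean-pooled, and the variable-length evidence set is pooled rather than concatenated before the verdict head:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class JointModel(nn.Module):
    def __init__(self, base="bert-base-uncased", hidden=768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base)
        self.evidence_head = nn.Linear(hidden, 2)   # evidence / not evidence
        self.verdict_head = nn.Linear(hidden, 3)    # ternary classifier (cf. text)

    def forward(self, input_ids, attention_mask, sentence_spans):
        # One encoder pass over the whole [c; SEP; s_1; ...; s_n] sequence.
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        # Mean-pool the token representations of each candidate sentence
        # (assumes batch size 1 for readability).
        sent = torch.stack([h[0, a:b].mean(dim=0) for a, b in sentence_spans])
        ev_probs = torch.softmax(self.evidence_head(sent), dim=-1)
        mask = ev_probs[:, 1] > 0.5                 # selected evidence sentences
        h_e = sent[mask].mean(dim=0) if mask.any() else sent.mean(dim=0)
        verdict = torch.softmax(self.verdict_head(h_e), dim=-1)
        return ev_probs, verdict
```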
### Ensemble System
After developing the pipeline system and the joint system, we selected the best-performing system on the two tasks in each category. Their outputs were combined for the final system. This type of approach is usually called a stacked Pavlyshenko (2018) or an ensemble system Ganaie et al. (2021) and has been proven to perform well in machine-learning shared tasks and competitions.
Considering that the outputs of the two systems are different in certain predictions because of different output probabilities of classes, we decided to average the probabilities of each of the systems with appropriate weights. After experimenting with different weights, the final function was:
\[p_{final}=0.4\cdot p_{pipeline}+0.6\cdot p_{joint}.\]
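A sketch of the combination, with class probabilities as lists and the weights reported above:

```python
def ensemble(p_pipeline, p_joint, w=(0.4, 0.6)):
    """Weighted average of class probabilities from the two systems."""
    return [w[0] * a + w[1] * b for a, b in zip(p_pipeline, p_joint)]

# e.g. ensemble([0.7, 0.3], [0.4, 0.6]) -> [0.52, 0.48]
```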
### Base Models
Both of the components constituting the system use an underlying base model. We opt for large pre-trained language models (PLMs) since they represent the state of the art in virtually all NLP tasks. We experimented with a number of different base models. BERT Devlin et al. (2019) is used as the representative vanilla PLM, which gives good initial insight into the performance of PLMs on the task. Over the years, there have been multiple domain-specific variations of the BERT model, specialized for text in the scientific, biomedical, or clinical domains. BioBERT Lee et al. (2020) was fine-tuned on abstracts of biomedical scientific publications. ClinicalBERT Alsentzer et al. (2019) is a model fine-tuned on MIMIC-III database, a database of electronic health records and clinical notes of patients admitted to critical care units. UmlsBERT Michalopoulos et al. (2020) moves further away from unstructured text and injects structured domain knowledge, from UMLS Bodenreider (2004) - a large knowledge base of biomedical concepts and semantic relations between them, into the model training process.
DeBERTa He et al. (2021) is an improved extension of the BERT model that introduced disentangled attention and enhanced masked decoder, which both amplify the importance of positional embedding of tokens in a sequence. The DeBERTa-v3 He et al. (2021) is a novel version of the model that uses the task of replaced token detection (RTD) instead of predicting masked tokens. It was the first model to beat the human performance on the
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**Evidence Retrieval**} & \multicolumn{3}{c}{**Textual Entailment**} \\ \hline
**Base Model** & **Precision** & **Recall** & **F1** & **Precision** & **Recall** & **F1** \\ \hline BERT & \(77.5\) & \(80.8\) & \(80.2\) & \(58.2\) & \(64.0\) & \(61.0\) \\ \hline BioBERT & \(78.0\) & \(81.9\) & \(80.8\) & \(64.3\) & \(65.0\) & \(64.5\) \\ ClinicalBERT & \(81.0\) & \(81.2\) & \(81.1\) & \(52.7\) & \(87.0\) & \(65.7\) \\ UmlsBERT & \(78.6\) & \(88.5\) & \(83.9\) & \(56.1\) & \(83.0\) & \(67.0\) \\ \hline StructBERT & \(77.1\) & \(86.5\) & \(82.4\) & \(50.3\) & \(91.0\) & \(64.8\) \\ ERNIE & \(84.5\) & \(81.2\) & \(84.8\) & \(53.0\) & \(88.0\) & \(66.2\) \\ DeBERTa-v3 & \(81.8\) & \(89.1\) & **86.2** & \(78.5\) & \(84.0\) & **80.5** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of different base models for the two tasks on the development set.
GLUE benchmark Wang et al. (2018) for natural language understanding tasks. For comparison, we include other models that excel in NLU tasks, namely StructBERT Wang et al. (2020) and ERNIE Zhang et al. (2019). In the end, we would like to test whether the models highly specialized for inference and textual entailment tasks beat the BERT extension models fine-tuned to biomedical language and knowledge.
The hyperparameters were the same for all models and setups: learning rate \(10^{-5}\), warmup ratio \(0.06\), weight decay \(0.01\), \(5\)-\(7\) epochs, and mixed precision training enabled. For all of the models, their _Large_ version was used, imported from the HuggingFace repository.5 The models were additionally fine-tuned on the previously mentioned NLI datasets like MNLI, ANLI, and MedNLI, following the approach of Laurer et al. (2022).
Footnote 5: For example, DeBERTa-v3-Large is at: [https://huggingface.co/microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large)
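These settings translate, for instance, into the following HuggingFace training configuration; the output directory and batch size are assumptions not stated above:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",               # assumption
    learning_rate=1e-5,
    warmup_ratio=0.06,
    weight_decay=0.01,
    num_train_epochs=6,             # 5-7 epochs in our runs
    per_device_train_batch_size=8,  # assumption
    fp16=True,                      # mixed precision training
)
```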
## 5 Results & Analysis
In this section, we report on results achieved by our systems and provide a qualitative error analysis with challenging examples from the dataset.
### Final Results
The results of different base models for the two tasks with the pipeline system are presented in Table 1. We report results on the dev set, since its gold labels are available and it allows an unlimited number of evaluation runs. The results for evidence retrieval are binary classification metrics, where each candidate sentence is of the positive class if it constitutes evidence and of the negative class otherwise. The results for the textual entailment task are again binary classification metrics, over the two classes of entailment and contradiction, using the gold evidence sentences. We found that the model trained on gold evidence performs better on the final test set than the model trained on internally selected evidence sentences.
As visible from Table 1, the NLI task of textual entailment recognition was more challenging than the evidence sentence selection task. The domain-specific biomedical and clinical BERT models outperformed the vanilla BERT model on both tasks, which shows the benefit of pre-training large language models on specialized text for NLP tasks in specific domains. ClinicalBERT outperforms BioBERT, an expected outcome since the task works with clinical trial reports, while UmlsBERT outperforms both, showing the effectiveness of injecting structured knowledge into language models. Nevertheless, by far the best-performing base model turned out to be DeBERTa-v3, and it especially excelled in the textual entailment task. Although this was expected considering its proven efficiency on NLU tasks, the sheer margin of difference came as a surprise.
The joint systems turned out to perform slightly better on the evidence retrieval task. This was expected, since the joint system considers the full document when performing evidence retrieval, so each candidate sentence is contextualized with its appropriate surroundings; this was already shown to perform well in evidence selection for fact-verification datasets Stammbach (2021). Considering the long training times of the joint system, which uses dense representations of full documents of up to \(1024\) tokens, we narrowed it down to using only DeBERTa as the underlying base model for embedding generation. The scores of the best-performing pipeline and best-performing joint system on the hidden test set are shown in Table 2. The outputs of these two systems were averaged with appropriate weights to obtain the final ensemble submission. We performed some additional post-processing, such as truncating the selected evidence to a maximum of \(20\) sentences. This was done due to the tendency of the model to select all the candidate sentences as evidence for claims with quantifiers like _at least one, exactly one, none of the patients_, etc. Incorrectly selecting all the sentences diminishes precision while increasing recall, but achieving high precision contributed more to the final score and was more challenging in general.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**System** & **Evidence F1** & **Entailment F1** \\ \hline \hline Best pipeline & 79.8 & 77.2 \\ Best joint & 80.4 & 78.3 \\ Best ensemble & **81.8** & **79.8** \\ \hline Place & \(4^{\text{th}}\) & \(3^{\text{rd}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Best performing model results on the test set
### Error Analysis
To better understand the strengths and weaknesses of our system, we conducted a manual analysis of predictions on the development set to find out where the final model incorrectly labeled the claims as the opposite class. Table 3 provides five such examples from the dataset. Each example in the table consists of a claim, accompanying evidence sentences, and a gold label assigned to them by the expert annotators. We also identify five different challenges that the model has to tackle in order to be able to correctly classify these claims. These challenges are:
* **Commonsense Reasoning.** The model has to understand concepts and make judgments about everyday matters, that are common and innate to humans.
* **Numerical Reasoning.** The model has to be capable of applying basic mathematical operations and grasping comparisons, orders, or quantities.
* **Multi-hop Reasoning.** In some examples, the model needs to combine multiple pieces of evidence (make multiple "hops") to come to the final conclusion.
* **Medical Knowledge.** Certain claims rely on expert knowledge of medical concepts and terminology related to the human body, diseases, drugs, or treatments.
* **World Knowledge.** This type of knowledge refers to the non-linguistic information about the outside world contained in claims.
## 6 Conclusion
In this paper, we describe our solution for SemEval-2023 Task 7, dealing with natural language inference and evidence retrieval from clinical trial reports. We motivate the task, discuss related work, provide formal definitions of the developed systems, present the results, analyze the performance of the models, and discuss some challenges in the process. We developed two types of systems: a pipeline system, which learns evidence retrieval and textual entailment sequentially, and a joint system, which learns the two tasks simultaneously. The final system combines them into an ensemble, which achieved 3rd place in the competition out of 40 teams with its final submission.
We anticipate this system will be useful for facilitating the work of medical experts in synthesizing the results and outcomes of the ever-increasing number of clinical trial reports, and that it can be used by the NLP community for the related tasks of biomedical question answering, automated claim verification, and recognizing textual entailment. The system could be improved in the future by overcoming the challenges related to commonsense, numerical, and multi-hop reasoning, or by injecting additional medical and world knowledge into it.
\begin{table}
\begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline
**Challenge** & **Claim** & **Evidence** & **Label** \\ \hline
**Commonsense Reasoning** & In order to participate in the trial, participants must be aware of where they are, and what day it is. & Inclusion Criteria: Cognitively oriented to time, place, and person (determined by nurse) & **Entailment** \\ \hline
**Numerical Reasoning** & Neutropenia affected the majority of patients in cohort 1 of the primary trial. & Adverse Events: Total 26/69 (37.68\%) [...] Neutropenia 4/69 (5.80\%) & **Contradiction** \\ \hline
**Multi-Hop Reasoning** & The primary trial and the secondary trial do not use the same route of administration for interventions. & _(primary)_ Intervention: Vaccine Therapy / _(secondary)_ Intervention: PET Guided Biopsy & **Entailment** \\ \hline
**Medical Knowledge** & Patients with cancer that has spread from a breast tumor to their CNS are able to take part in the trial. & No histologically proven bone marrow metastasis. & **Contradiction** \\ \hline
**World Knowledge** & A minimum bodyweight of 50kg is required to participate in the secondary trial. & Patient Characteristics: Total body weight > 110 lbs (without clothes) & **Entailment** \\ \hline \end{tabular}
\end{table}
Table 3: Five common challenges found in the dataset with a representative example for each challenge that was classified incorrectly by our system. |
2310.17279 | Automatic Logical Forms improve fidelity in Table-to-Text generation | Table-to-text systems generate natural language statements from structured
data like tables. While end-to-end techniques suffer from low factual
correctness (fidelity), a previous study reported gains when using manual
logical forms (LF) that represent the selected content and the semantics of the
target text. Given the manual step, it was not clear whether automatic LFs
would be effective, or whether the improvement came from content selection
alone. We present TlT which, given a table and a selection of the content,
first produces LFs and then the textual statement. We show for the first time
that automatic LFs improve quality, with an increase in fidelity of 30 points
over a comparable system not using LFs. Our experiments allow to quantify the
remaining challenges for high factual correctness, with automatic selection of
content coming first, followed by better Logic-to-Text generation and, to a
lesser extent, better Table-to-Logic parsing. | Iñigo Alonso, Eneko Agirre | 2023-10-26T10:00:24Z | http://arxiv.org/abs/2310.17279v2 | # Automatic Logical Forms improve fidelity in Table-to-Text generation
###### Abstract
Table-to-text systems generate natural language statements from structured data like tables. While end-to-end techniques suffer from low factual correctness (fidelity), a previous study reported gains when using manual logical forms (LF) that represent the selected content and the semantics of the target text. Given the manual step, it was not clear whether automatic LFs would be effective, or whether the improvement came from content selection alone. We present \(TlT\) which, given a table and a selection of the content, first produces LFs and then the textual statement. We show for the first time that automatic LFs improve quality, with an increase in fidelity of 30 points over a comparable system not using LFs. Our experiments allow to quantify the remaining challenges for high factual correctness, with automatic selection of content coming first, followed by better Logic-to-Text generation and, to a lesser extent, better Table-to-Logic parsing.
## 1 Introduction
Data-to-text generation is the task of taking non-linguistic structured input such as tables, knowledge bases, tuples, or graphs, and automatically producing factually correct1 textual descriptions of the contents of the input (Reiter & Dale, 1997; Covington, 2001; Gatt & Krahmer, 2018). Note that the task is somewhat underspecified: for the same table many textual descriptions are correct, each one focusing on a selection of the contents. This makes the use of manual evaluation like fidelity key to measure quality.
Footnote 1: We use factual correctness and fidelity indistinctly.
Recent Data-to-Text techniques (Chen et al., 2020; Acosta et al., 2022; Aghajanyan et al., 2022; Kasner & Dusek, 2022) leverage the performance of large-scale pre-trained models (Devlin et al., 2019), with significant performance gains.
However, end-to-end systems struggle to produce high-fidelity statements. As a result, Chen et al. (2020) propose to reformulate Data-to-Text as a Logic-to-Text problem focusing on tables, although the technique can be applied to other structured inputs. The input to the language realization module is a logical representation of the semantics of the target text along with the table information. The authors report an increase in factual correctness from 20% to 82%, compared to a system not using LFs. Note that the manually produced LFs include, implicitly, a selection of the contents to be used in the description. The authors left two open problems: Firstly, the improvement could come from the implicit content selection alone, casting doubts about the actual contribution of LFs. Secondly, it is not clear whether a system using automatic LFs would be as effective.
In this work, we present \(TlT\) (short from Table-to-Logic-to-Text), a two-step model that produces descriptions by automatically generating LFs and then producing the text from those LFs. Our model allows Table-to-Text generation systems to leverage the advantages of using LFs without requiring
manually written LFs. We separate the content selection process from the logical form generation step, allowing to answer positively to the open questions mentioned above with experiments on the Logic2Text dataset (Chen et al., 2020c). Although content selection alone improves results, the best results are obtained using automatic LFs, with noteworthy gains in fidelity compared to a system not using LFs. Our results allow to estimate the impact in fidelity of the remaining challenges, with automatic content selection coming first, followed by better Logic-to-Text and to a lesser extent Table-to-Logic. We also provide qualitative analysis of each step.
All code, models and derived data are public 2.
Footnote 2: [https://github.com/AlonsoApp/tlt](https://github.com/AlonsoApp/tlt)
## 2 Logical Forms
The LFs used in this work are tree-structured logical representations of the semantics of a table-related statement, similar to AMR graphs (Banarescu et al., 2012), and follow the grammar rules defined by (Chen et al., 2020c). Each rule can be executed against a database, a table in this case, yielding a result based on the operation it represents. As these graphs represent factual statements, the root is a boolean operation that should return True. Figure 1 shows an example of a table with its caption and logical form.
### Dataset
We use the Logic2Text dataset (Chen et al., 2020c). As mentioned in the introduction, Table-to-Text tasks are underspecified, as there are multiple descriptions about the table that could be factually correct and relevant. Logic2Text contains 4992 open-domain tables with an average of 2 manually constructed LFs and textual descriptions per table, making a total of 10753 samples (8566 train, 1092 dev. and 1095 test).
### Logical Form grammar
The grammar contains several non-terminals (nodes in the graph, some of which are illustrated in Fig. 1), as follows:
**Stat** represents boolean comparative statements such as greater than, less than, equals (shown as _eq_ in the figure), not equals, most equals or all equals, among others. This is the root of the LF graph.
**C** refers to an specific column in the input table (_attendance_ and _result_ in the figure).
**V** is used for specific values, which can be either values explicitly stated in the table (_w_ in the figure) or arbitrary values used in comparisons or filters (_52500_ in the figure).
**View** refers to a set of rows, which are selected according to a filter over all rows. The filters refer to specific conditions for the values in a specific column, e.g. _greater_. The figure shows _all_rows_, which returns all rows, and also _filter_str_eq_ which returns the rows that contain the substring "_w_" in the _result_ column.
**N** is used for operations that return a numeric value given a view and column as input, such as sums, averages (shown as _avg_ in the figure), maximum or minimum values, and also counters.
**Row** is used to select a single row according to maximum or minimum values in a column.
**Obj** is used for operations that extract values in columns from rows (either views or specific rows). The most common operations are _hop_ extractors that extract a unique value, for instance _str_hop_first_ extracts a string from the first row of a given _View_.
**I** is used to select values from ordinal enumerations in \(N\) and _Row_ rules; for instance, to select "the 2nd highest", \(I\) would equal 2.
Please refer to the Appendix C for full details. Note that _Stat_, _View_, \(N\), _Row_ and _Obj_ are internal nodes that constitute the structure of the LF (shown in blue in the figure), while column \(C\), value \(V\) and index \(I\) nodes are always leaf nodes.
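To make the executable semantics concrete, the following is an illustrative executor for a small subset of the grammar, covering the LF of Figure 1. The table encoding (a list of row dictionaries) is our own simplification, not the dataset's actual execution engine:

```python
def all_rows(table):
    return table

def filter_str_eq(view, col, val):
    return [r for r in view if val in str(r[col])]   # substring match

def avg(view, col):
    return sum(float(r[col]) for r in view) / len(view)

def eq(a, b):
    return abs(float(a) - float(b)) < 1e-6

table = [
    {"result": "w 41-14", "attendance": 55000},
    {"result": "l 10-21", "attendance": 48000},
    {"result": "w 27-20", "attendance": 50000},
]
# eq { avg { filter_str_eq { all_rows ; result ; "w" } ; attendance } ; 52500 }
wins = filter_str_eq(all_rows(table), "result", "w")
print(eq(avg(wins, "attendance"), 52500))            # True
```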
We detected several ambiguities in the original grammar formulation that prevented training a semantic parser that outputs LFs.
The first one affects all functions that deal with strings. In the LF execution engine of Chen et al. (2020c), the implementation of these functions is divided in two: one version that deals with normalization of numeric and date-like strings, and a strict version for other string values. We thus have two different sets of functions in the grammar: one for numerical and date-like values and another for other string values, represented with the suffix ".str". The second one concerns an inconsistency in the _hop_ function, which, given a row, returns the value associated with one of its columns. While the grammar states that these functions are only performed over _Row_ objects, in 25% of the examples in the dataset the function is used over a _View_ object, which can contain multiple rows. We defined a new function _hop_first_ for these latter cases.
The grammar in Appendix C contains the new rules that fix the ambiguity issues. We also converted automatically each LF in the dataset to conform to the unambiguous grammar. The conversion script is publicly available.
### Content Selection
In order to separate the effect of content selection and full LFs, we extracted the values in the LF, so we can test the performance of all models with and without content selection. The extracted values include values that are explicitly mentioned in table cells, but also other values present in the LF that are not explicitly found in the table. The set of these values constitute the additional input to the systems when using content selection (CS for short), classified as follows:
**TAB**: Values present in a table cell, verbatim or as a substring of the cell values.
Figure 1 shows an example, where "w" is a substring in several cells. 72.2% of the values are of this type.
**INF**: Values not in the table that are inferred, e.g. as a result of an arithmetic operation over values in the table. For instance 52500 in Figure 1 corresponds to the average over attendance values. 20.8% of _Value_ nodes are INF.
**AUX**: Auxiliary values not in the table nor INF that are used in operations, e.g. to be compared to actual values in cells, as in "_All scores are bigger than 5._". Only 7.1% are of type AUX.
In principle, one could train a separate model to select and produce all necessary content selection values to be fed into any Table-to-Text model, as follows: 1) Choose some values from table cells, either full or substring (TAB); 2) Infer some values via operations like average, count or max (INF); 3) Induce values to be used in comparisons (AUX). In order to separate the contribution of content
Figure 1: Example of a table with its caption, a logical form (in linearized and graph forms), its corresponding content selection values and the target statement. Note that \(w\) in the table stands for _win_. More details in the text.
selection and the generation of LFs, we decided to focus on the use of content selection, and not yet on producing the actual values. We thus derive these values from the manual gold LFs and feed them to the models. The experiments will show that this content selection step is very important, and that current models fail without it. We leave automatic content selection for further research.
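For illustration, a sketch of how the V nodes of a gold LF can be classified into the three types; the INF test (membership in the set of values produced by arithmetic nodes) is a simplification, and the example values are hypothetical:

```python
def classify_value(value, table_cells, inferred_values):
    v = str(value)
    if any(v == str(c) or v in str(c) for c in table_cells):
        return "TAB"   # verbatim cell value or substring of one
    if value in inferred_values:
        return "INF"   # result of an operation such as avg/count/max
    return "AUX"       # free value used only in comparisons or filters

cells = ["w 41-14", 55000, "w 27-20", 50000]
inferred = {52500.0}                              # e.g. avg over filtered rows
print(classify_value("w", cells, inferred))       # TAB
print(classify_value(52500.0, cells, inferred))   # INF
print(classify_value(9, cells, inferred))         # AUX
```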
## 3 Generating text via logical forms
Our Text-to-Logic-to-Text (\(TlT\)) system has two main modules in a pipeline.
Given a table, its caption and, optionally, selected content, **Table2Logic** generates an LF. With the same table information, plus the generated LF, **Logic2Text** produces the statement text.
### Table2Logic
We frame this model as semantic parsing, adapting the IRNet grammar-based decoder by (Guo et al., 2019) to LFs. Given a table and a corresponding LF in the dataset, the parser needs to produce the sequence of grammar derivations that leads to the given LF. More specifically, we follow the implementation of Valuenet by Brunner & Stockinger (2021), which is a more up-to-date revision of IRNet. Both models are NL-to-SQL semantic parsers that generate grammatically correct SQL sentences from their natural language descriptions. We adapted the system to produce logical forms instead of SQL.
The architecture of Table2Logic is presented in Figure 2.
We first feed a pre-trained BERT encoder (Devlin et al., 2019) with the concatenation of the following table data: the caption text, the table content in linearized form, the column names, and, in some of our model configurations, a set of content selection values manually extracted from the associated gold reference LF.
The output embeddings of the _CLS_ token, the caption tokens and the linearized values in the table are fed into an LSTM decoder (Hochreiter & Schmidhuber, 1997).
At each decoding step, the attention vector of the LSTM is used by four pointer networks (PN) (Vinyals et al., 2015) that select the next grammar-related actions to be taken. Each of the PNs accesses the attention vector of the LSTM plus additional information: the grammar PN has access to grammar information; the value PN uses output embeddings of table cells and other values; the index PN uses a separate set of embeddings for possible ordinal index values; the column PN uses column output embeddings.
Figure 2: Table2Logic architecture, with input in the top and output in the bottom. See text for details.
Following (Guo et al., 2019), Table2Logic performs two decoding iterations. In a first iteration, a **sketch** LF is generated using the grammar pointer network. The sketch LF consists only of grammar-related nodes (e.g. those in blue in Fig. 1), where _Value_, _Column_ and _Index_ nodes are represented by placeholders that are filled in a second decoding iteration by the corresponding PN.
We follow teacher-based training to calculate the loss for each decoding iteration. In the first iteration the loss is calculated by accumulating the cross entropy loss for each generated grammar node given the previous gold reference nodes. The sketch is then used to calculate the cross entropy loss of generating _Value_, _Column_ and _Index_ nodes. The weights of the network are updated using the sum of both loss values.
During inference, we use beam search to produce a set of candidates. In addition, we explore a False Candidate Rejection (FCR) policy that filters out all LFs in the beam that execute to _False_, as they would be factually incorrect. The highest-probability candidate LF in the beam that executes to _True_ is thus selected. Section 4 reports experiments with FCR.
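The FCR policy itself reduces to a simple filter over the beam; `execute_lf` below stands in for the grammar's execution engine:

```python
def apply_fcr(beam, execute_lf):
    """beam: list of (logical_form, log_prob) candidates."""
    surviving = [(lf, lp) for lf, lp in beam if execute_lf(lf) is True]
    if not surviving:
        return None                      # no candidate evaluates to True
    return max(surviving, key=lambda x: x[1])[0]
```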
### Logic2Text
For the language realization model we use the top performer in (Chen et al., 2020c), which fine-tunes GPT-2 Radford et al. (2019) to produce text from tables. Their implementation allows to produce text from table information alone (caption, linearized table, list of column names) or both table information and a linearized logical form. See original publication for details.
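Generation can be sketched as follows; the prompt format, caption string and decoding settings are assumptions for illustration, since the actual system fine-tunes GPT-2 on such caption/table/LF sequences:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")      # fine-tuned in practice

prompt = ("caption: example season | logic: eq { avg { filter_str_eq { "
          "all_rows ; result ; w } ; attendance } ; 52500 } | text:")
ids = tok(prompt, return_tensors="pt").input_ids
out = lm.generate(ids, max_new_tokens=40, do_sample=False,
                  pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```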
## 4 Development of Table2Logic
In order to develop Table2Logic, we checked the effect of content selection, as well as the impact of rejecting LFs that evaluate to _False_ (FCR), on development data. Accuracy was computed using strict equality with respect to any of the manual gold LFs. Both sketch accuracy (using placeholders for non-grammar nodes) and full accuracy are reported. As mentioned in the introduction, this task is underspecified, in that multiple LFs which are very different from the gold LFs could also be correct. Still, accuracy is a good proxy of quality to discriminate between better and worse models. The results correspond to the checkpoints, out of 50 epochs, with the best full accuracy on development.
\begin{table}
\begin{tabular}{l c c} \hline Model & Sketch & Full \\ \hline No content selection (\(TIT_{noCS}\)) & 15.0 & 4.9 \\ \hline TAB & 42.6 & 27.3 \\ INF & 28.7 & 11.0 \\ AUX & 14.0 & 6.2 \\ TAB, INF & 56.5 & 39.3 \\ TAB, AUX & 44.3 & 28.6 \\ TAB, INF, AUX & **58.5** & 38.9 \\ \hline TAB, INF, AUX + FCR (\(TIT\)) & 56.0 & **46.5** \\ \hline \end{tabular}
\end{table}
Table 1: Table2Logic: Accuracy (% on dev.) over sketch and full LFs using different subsets of content selection (CS) and FCR in development. First row for \(TlT_{noCS}\), last row for \(TlT\), as introduced in Sect. 5.
Figure 3: Model configurations used in main experiments.
We tuned some hyperparameters on development and used default values for the rest (see Appendix B for details).
Table 1 shows the results for different subsets of content selection values, with the last row reporting results when FCR is used. Without FCR, the most important set of values are those explicit in the table (TAB), and the best results correspond to the use of all values, although AUX values do not seem to help much (in fact, the best non-FCR full results are obtained without using AUX, by a very small margin). The last row reports a sizeable improvement in accuracy for full LFs when using FCR, showing that FCR is useful to reject faulty LFs that do not evaluate to True.
Overall, the full accuracy of \(TlT\) might seem low, but given that the gold LFs only cover a fraction of the possible correct LFs, the generated LFs are actually of good quality, as we will see in the next sections.
We also performed an additional ablation experiment where we removed the table information from the system in the last row (\(TlT\)). The sketch and full accuracies dropped (\(50.3\) and \(42.7\) respectively), showing that access to table information is useful even when content selection is available.
## 5 Experiments
In this section we report the results on text generation using the test split of the Logic2Text dataset. We first introduce the different models, the automatic evaluation and the manual evaluation.
### Model configurations
The configurations of the different models are shown in Figure 3. All models take as input the table information, including the table caption, linearized table and column headers. In the top row, we include the upperbound system \(TlT_{gold}\), which takes the table plus the manually produced gold LF as input. In the middle row we include our system \(TlT\), which is composed of the Table2Logic module and the Logic2Text module. Both \(TlT\) and \(TlT_{gold}\) use the same Logic2Text module, but while the first uses automatically produced LFs, the second uses manual LFs. \(TlT\) is evaluated in two variants, with and without content selection (\(TlT\) and \(TlT_{noCS}\), respectively). Logic2Text uses default hyperparameters (Chen et al., 2020c).
The bottom row shows our baselines (T2T, short for Table2Text), which generate the text directly from table information, with and without content selection data. As Logic2Text is based on state-of-the-art generation (Chen et al., 2020c), and for the sake of comparability, both T2T and T2T\({}_{noCS}\) have the same codebase. That is, T2T uses the same GPT-2 model architecture as in Chen et al. (2020c) but trained without LFs: T2T\({}_{noCS}\) receives only the linearized table, while T2T additionally receives the same list of manual CS values as \(TlT\).
### Automatic evaluation
The automatic metrics compare the produced description with the reference descriptions in the test split. As shown in Table 2, we report the same automatic metrics as in (Chen et al., 2020c), BLEU-4 (B-4), ROUGE-1, 2, and L (R-1, R-2, and R-L for short), along with two additional metrics BERTscore (BERTs) (Zhang et al., 2019) and BARTscore (BARTs) (Yuan et al., 2021) which can capture the semantic similarity between the ground truth and generation results. The results show that generation without content selection is poor for both the baseline system and our system (T2T\({}_{noCS}\) and \(TlT_{noCS}\), respectively). Content selection is key for good results in both kinds of systems, which improve around 10 points in all metrics when incorporating content selection (T2T and \(TlT\)). Automatic generation of LFs (\(TlT\)) allows to improve over the system not using them (T2T) in at least one point. If \(TlT\) had access to correct LFs it would improve 4 points further, as shown by the \(TlT_{gold}\) results. Note that our results for \(TlT_{gold}\) are very similar to those reported in (Chen et al., 2020c), as shown in the last row. We attribute the difference to minor variations in the model released by the authors.
### Human Fidelity evaluation
Given the cost of human evaluation, we selected three models for manually judging the fidelity of the produced descriptions: the baseline T2T model, our \(TlT\) model and the upperbound with manual
LFs, \(TlT_{gold}\). For this, we randomly selected 90 tables from the test set and generated a statement with each of the three models. In order to have two human judgements per example, we provided each evaluator with 30 sentences, along with the corresponding table and caption. The evaluators were asked to select whether the description is true, false or nonsense according to the caption and the table. This group of evaluators comprised eighteen volunteer researchers unrelated to this project. The evaluation concluded with a strong inter-evaluator agreement of 0.84 Fleiss' kappa coefficient (Fleiss, 1971). We discarded examples where there was disagreement.
Table 3 shows the fidelity figures for the three models. After the evaluation, we noticed that the faithfulness results for \(TlT_{gold}\) in our experiment matched the figure reported by Chen et al. (2020c), so we decided, for completeness, to include in the table their figure for T2T\({}_{noCS}\), which should be roughly comparable to the other results in the table.
In general, the differences in human fidelity evaluation are much higher than for automatic metrics, which we attribute to widely recognised issues of automatic metrics when evaluating text generation. From low to high, the results allow us to estimate the **separate contributions** of each component:
* **Manual content selection** improves fidelity in 24 points (T2T\({}_{noCS}\) vs. T2T) ;
* **Automatic LFs** improve an additional 30 points (T2T vs. \(TlT\));
* **Manual LFs** give 7 points (\(TlT\) vs. \(TlT_{gold}\));
* **Perfect Logic2Text** generation would yield 18 points (\(TlT_{gold}\) vs. 100%).
The figures confirm our contribution: it is possible to produce logical forms automatically, and they greatly improve fidelity, with the largest fidelity improvement in the table, 30 points. Note that the other improvements are actually gaps which allow us to prioritize the areas for further research: automatic content selection (24 pt.), better Logic2Text (18 pt.) and better Table2Logic (7 pt.). In the following section we analyse the errors in the two latter modules.
## 6 Qualitative analysis
We performed a qualitative analysis of failure cases in both Table2Logic and Logic2Text, as well as examples of factually correct descriptions generated from LFs different from gold LFs.
\begin{table}
\begin{tabular}{l c c c c c c} \hline Model & B-4 & R-1 & R-2 & R-L & BERTs & BARTs \\ \hline T2T\({}_{noCS}\) & 16.8 & 37.7 & 19.3 & 31.6 & 88.8 & -4.04 \\ \(TlT_{noCS}\) & 15.6 & 39.0 & 18.9 & 32.2 & 87.9 & -4.03 \\ \hline T2T & 26.8 & 55.2 & 31.5 & 45.7 & 91.9 & **-2.98** \\ \(TlT\) (ours) & **27.2** & **56.0** & **33.1** & **47.7** & **92.0** & -2.99 \\ \hline \(TlT_{gold}\) & 31.7 & 62.4 & 38.7 & 52.8 & 93.1 & -2.65 \\ \(TlT_{gold}\)* & 31.4* & 64.2* & 39.5* & 54.0* & - & - \\ \hline \end{tabular}
\end{table}
Table 2: Automated metrics for textual descriptions (test). The bottom two rows are upperbounds, as they use manual LFs. See text for system descriptions. * for results reported in Chen et al. (2020c). Both BERTs and BARTs correspond to the F1 score; in the case of BARTscore, higher is better.
\begin{table}
\begin{tabular}{l c c c} \hline Model & Faithful & Unfaithful & Nonsense \\ \hline T2T\({}_{noCS}\)* & 20.2* & 79.8* & - \\ T2T & 44.9 & 49.3 & 5.8 \\ \(TIT\) (ours) & **75.0** & **20.3** & **4.7** \\ \hline \(TIT_{gold}\) & 82.4 & 13.51 & 4.1 \\ \hline \end{tabular}
\end{table}
Table 3: Manual fidelity results. * for results reported in (Chen et al., 2020c).
### Table2Logic
We automatically compared the LFs generated by \(TlT\) on the development set against their corresponding gold LFs, for the cases where they did not match. Note that the produced LFs can be correct even if they do not match the gold LF. We traverse the LF from left to right and record the first node that differs. Table 4 shows, in decreasing order of frequency, each grammar node type (cf. Section 2.2) with the most frequent confusions.
The most frequent differences concern _Stat_ nodes, where a different comparison is often generated. The next two most frequent node types are column and row selections, where \(TlT\) selects different columns and rows, even though it has access to the values in the content selection. The frequency of differences for these three node types is well above their distribution in gold LFs. The remaining differences are less frequent, and also concern generating different comparison or arithmetic operations.
### Logic2Text
The faithfulness score of descriptions generated from gold LFs (\(TlT_{gold}\)) is 82%, so we analysed a sample of the examples in the remaining 18%. For the sake of space, full examples are included in Appendix D, with table, caption, gold LF and generated description. We summarize the errors in three types:
**Comparative arithmetic**: Logic2Text misrepresented comparative arithmetic action rules in the LF in 40% of the cases. This resulted in output sentences declaring that a given value was _smaller_ than another when the LF stated it was _larger_. Logic2Text also seems to ignore the _round_ and _most_ modifiers of comparison operations, producing sentences that assert strict equality and omit qualifiers like "roughly" or "most". The absence of these qualifiers made the produced sentences factually incorrect.
**LF omission**: Logic2Text disregarded part of the LF (33% of errors), resulting in omissions that led to false sentences. Many of these errors involved omitting an entire branch of the LF, leading, for instance, to sentences wrongly referring to all the instances in the data instead of the subset described in the LF.
| Node | Fr. | Total | Confusions |
| --- | --- | --- | --- |
| Stat | 0.38 | 0.13 | greater \(\rightarrow\) less; all equals \(\rightarrow\) most equals; equals \(\rightarrow\) and |
| C | 0.25 | 0.19 | column 3 \(\rightarrow\) column 0; column 1 \(\rightarrow\) column 0 |
| Row | 0.16 | 0.02 | row 0 \(\rightarrow\) row 2; row 2 \(\rightarrow\) row 0 |
| View | 0.11 | 0.20 | filter\_greater \(\rightarrow\) filter\_less; filter\_greater \(\rightarrow\) filter\_eq; filter\_eq \(\rightarrow\) all\_rows |
| N | 0.05 | 0.03 | sum \(\rightarrow\) avg; avg \(\rightarrow\) sum |
| Obj | 0.03 | 0.26 | str\_hop \(\rightarrow\) num\_hop; num\_hop \(\rightarrow\) str\_hop |
| V | 0.01 | 0.16 | value 72 \(\rightarrow\) value 73; value 70 \(\rightarrow\) value 71 |
| I | 0.01 | 0.01 | 1 \(\rightarrow\) 0 |

Table 4: Table2Logic: distribution of differing node types (\(TlT\) vs. gold LFs). Fr. is the frequency of the node type in differing LFs; Total is its overall frequency in gold LFs. The rightmost column shows the most frequent confusions (\(TlT\)\(\rightarrow\) gold).
**Verbalization**: Logic2Text produced wrong verbalizations and misspellings (27% of cases), for instance generating a similar but not identical name, such as _foulisco_ instead of _francisco_.
We attribute these errors to the fact that the generator is based on a general-purpose language model, GPT-2. While such language models excel at producing fluent text, it seems that, even after fine-tuning, they tend to produce sentences that do not fully reflect the data in the input logical form. The errors might also be explained by the lower frequency of some operations. The 18% gap, even if it is much lower than the gap for systems that do not use LFs, together with this analysis, shows that there is still room for improvement.
### Can an incorrect LF produce a faithful description?
The results in Table 1 show that our Table2Logic system has low accuracy when evaluated against gold logical forms (46%). In contrast, the fidelity of the text generated from those automatically produced logical forms is very high, 75%, only 7 points lower than the result obtained with gold logical forms. This high fidelity for automatic LFs might seem counter-intuitive, but note that it is possible to generate a correct and faithful LF that is completely different from the gold logical form, i.e. the system may produce a correct LF that focuses on a different aspect of the information in the table than the gold LF does.
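The distinction can be made concrete with a toy example. The sketch below contrasts exact-match accuracy with an execution-based check; the tiny LF encoding and executor are illustrative assumptions in the spirit of the executable LFs of Chen et al. (2020c), not our actual implementation:

```python
def execute_lf(lf, table):
    """Evaluate a toy LF of the form ('greater', column, row_a, row_b)."""
    op, col, a, b = lf
    if op == "greater":
        return table[a][col] > table[b][col]
    raise ValueError(f"unsupported op: {op}")

table = [{"laps": 200, "rank": 5}, {"laps": 150, "rank": 12}]
gold = ("greater", "laps", 0, 1)  # row 0 completed more laps than row 1
pred = ("greater", "rank", 1, 0)  # a different LF: row 1 has a higher rank

print(pred == gold)             # False: the LF fails exact match ...
print(execute_lf(pred, table))  # True: ... yet it states a true fact
```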
In order to check whether this is actually the case, we manually examined the automatic LFs from \(TlT\) that resulted in faithful sentences in the manual evaluation while being "erroneous", that is, different from their gold LF references. In all cases, such \(TlT\) LFs are well formed and faithful, i.e. even though these LFs were "wrong" according to the strict definition of accuracy, the semantics they represent are informative and faithful to the source data. Table 5 shows a sample of the output sentences, with full details including tables and LFs in Appendix E.
We categorized the samples as follows. 69% of them share a similar LF structure with their corresponding gold references, but with changes in key _Value_ or _Column_ nodes that make them semantically different. In 15% of the cases the LF had a similar structure and, despite some differences, was semantically equivalent to the gold LF. The remaining \(TlT\) LFs (16%) had a different structure and were semantically different from their reference counterparts, while still being correct and faithful to the table.
All in all, the quality of the LFs and corresponding text produced by \(TlT\) for this sample is comparable to that of the gold LFs, and in some cases more concise and informative. This analysis confirms that the quality of Table2Logic is well above the 46% accuracy estimate, and that it can still be improved, as the produced text lags 7 points behind gold LFs.
| LF difference | System | Sentence |
| --- | --- | --- |
| Similar structure, semantically equivalent | \(TlT\) | In the list of Appalachian regional commission counties, Schoharie has the highest unemployment rate. |
| | Human | The appalachian county that has the highest unemployment rate is Schoharie. |
| Similar structure, semantically different | \(TlT\) | Dick Rathmann had a lower rank in 1956 than he did in 1959. |
| | Human | Dick Rathmann completed more laps in the Indianapolis 500 in 1956 than in 1959. |
| Different structure, semantically different | \(TlT\) | Most of the games of the 2005 Houston Astros’ season were played in the location of arlington. |
| | Human | Arlington was the first location used in the 2005 Houston Astros season. |
| Simpler structure, more informative | \(TlT\) | Aus won 7 events in the 2006 asp world tour. |
| | Human | Seven of the individuals that were the runner up were from aus. |

Table 5: Examples of faithful sentences produced by \(TlT\) from intermediate LFs that do not match the gold LF.
## 7 Related work
Natural Language Generation from structured data is a long-established research line. Over time, multiple techniques have been developed to address this task, such as leveraging the structural information of the input data (Wiseman et al., 2017; Liu et al., 2018; Puduppully et al., 2019a; Rebuffel et al., 2020; Chen et al., 2020b), using neural templates (Wiseman et al., 2018; Li and Wan, 2018) or focusing on content ordering (Sha et al., 2018; Puduppully et al., 2019b; Su et al., 2021). More recent techniques (Chen et al., 2020a; Aghajanyan et al., 2022; Kasner and Dusek, 2022) leverage pre-trained language models (Devlin et al., 2019; Radford et al., 2019).
The use of pre-trained language models has allowed for highly fluent outputs, but fidelity remains a major challenge and the focus of recent work. Matsumaru et al. (2020) remove factually incorrect instances from the training data. Others take control of the decoder by making it attend to the source (Tian et al., 2019), applying re-ranking techniques (Harkous et al., 2020) or imposing constraints that incorporate heuristic estimates of future cost (Lu et al., 2021). Other work relies on heuristics such as surface matching of source and target to control generation (Wang et al., 2020; Shen et al., 2020; Li and Rush, 2020).
In a complementary approach, Chen et al. (2020c) focus on improving the fidelity of the generated texts by reformulating Table-to-Text as a Logic-to-Text problem. They incorporate a tree-structured logical representation of the semantics of the target text, the logical form (LF), alongside the table information. This logical form strongly conditions the language realization module to produce the statement it represents, greatly improving fidelity. However, the logical forms in that work are manually produced by humans, which severely limits the applicability of the approach in real-world scenarios. Solving this challenge would allow data-to-text models to leverage the benefits of this approach, which motivated our research.
Automatically generating LFs requires techniques capable of producing outputs that follow a set of pre-defined grammar rules. This challenge is commonly addressed in semantic parsing tasks (Yin and Neubig, 2017; Radhakrishnan et al., 2020). Guo et al. (2019) present IRNet, an NL-to-SQL semantic parser that generates grammatically correct SQL sentences from their natural language descriptions. ValueNet (Brunner and Stockinger, 2021) introduced a BERT-based encoder (Devlin et al., 2019). In this work we adapted the grammar-based decoder of ValueNet to produce LFs, which allowed us to show that we can produce high-quality LFs. More recent advances in semantic parsing, e.g. the use of larger language models (Raffel et al., 2020; BigScience Workshop, 2022; Zhang et al., 2022), could easily be folded into our system and would further increase the contribution of LFs.
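As a minimal sketch of the grammar-based decoding idea (in the style of IRNet and ValueNet, though with a toy grammar and toy scores of our own invention): at each step, candidate actions that are invalid under the current grammar state are masked out before the best action is chosen.

```python
import math

GRAMMAR = {  # nonterminal -> productions allowed at this step (toy example)
    "Stat": ["greater", "less", "eq"],
    "View": ["filter_greater", "filter_less", "all_rows"],
}

def constrained_step(nonterminal, scores):
    """Pick the highest-scoring action among those the grammar allows."""
    allowed = set(GRAMMAR[nonterminal])
    masked = {a: (s if a in allowed else -math.inf) for a, s in scores.items()}
    return max(masked, key=masked.get)

# 'filter_less' scores highest overall, but it is not a valid Stat
# production, so the decoder emits 'greater' instead.
scores = {"greater": 1.2, "filter_less": 2.0, "eq": 0.3}
print(constrained_step("Stat", scores))  # greater
```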
## 8 Conclusions and future work
We have presented \(TlT\) which, given a table and a selection of its content, first produces a logical form and then the textual statement. We show for the first time that automatic LFs improve results according to automatic metrics and, especially, manually estimated factual correctness. In addition, we separately study the contributions of content selection and of formalizing the output as an LF, showing a higher impact on fidelity for the latter. In this paper we focus on tables, but our findings and software can easily be ported to other structured inputs.
Our analysis allowed us to quantify that automatic content selection would provide the largest boost in performance, followed, to a lesser extent, by improved logic-to-text generation and, finally, improved table-to-logic generation. We thus plan to focus on automatic content selection, which we think can largely be learned from the user preference patterns found in the training data. We also plan to leverage our qualitative analysis to study complementary approaches to improving factual correctness in logic-to-text.
## Acknowledgements
This work is partially funded by MCIN/AEI 10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, as well as by the Basque Government IT1570-22.